With a long track record of groundbreaking research, outspoken opinions, and grand visions, Jürgen Schmidhuber is a unique figure in the artificial intelligence world — equal parts scientist and provocateur.
Schmidhuber is widely recognized as one of the early trailblazers of deep learning and neural network research. Much of his career has focused on developing AI architectures such as long short-term memory (LSTM) networks, which are fundamental to language modeling, speech recognition, and AI-driven applications. Those advances, made by Schmidhuber and his colleagues, are incorporated in everything we do today on our devices — think personal assistants like Apple’s Siri or Amazon’s Alexa.
Schmidhuber served as the scientific director of IDSIA, the AI lab based in the Swiss city of Lugano, where he spent a significant part of his career; he also taught AI at the University of Lugano. In 2014, he teamed up with his former students to co-found a startup called Nnaisense to work on general-purpose artificial intelligence. He has authored more than 350 peer-reviewed papers and has been widely recognized for his work, including with the Neural Networks Pioneer Award from the IEEE Computational Intelligence Society.
But he is not without controversy. Schmidhuber has openly accused other prominent AI figures of plagiarizing his team’s work. Last year, he wrote a long post on X accusing the Nobel committees of rewarding plagiarism and misattribution.
He also often breaks ranks with the tech community when it comes to speaking about the dangers of AI: He argues that claims of AI’s existential threat to humanity are overblown. Two years ago, he refused to sign the open letter “Pause Giant AI Experiments,” which called for a six-month moratorium on training the most powerful AI systems and was endorsed by leading scientists and tech founders, including Elon Musk.
Schmidhuber pointed to two issues with the letter: “If you ask 10 different people, you will get 10 different opinions about what is ‘good’ and ‘ethically acceptable,’” he said. “Some of these opinions will be completely irreconcilable. I predicted that this letter would have no significant impact because many AI researchers, companies, and governments would ignore it completely. And I was right.” He’s also consistently said that “many of those who publicly warn of the dangers of AI are just looking for publicity.”
In 2021, Schmidhuber was appointed director of artificial intelligence at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, where he is currently based. In addition to leading the computer science program, he conducts research on AI applications across several sectors, including health care, chemistry, speech recognition, robotics, and automation. His arrival coincided with Saudi Arabia’s push to become a global tech and AI hub, even as China and the U.S. jostle for global influence. Since launching Vision 2030, a blueprint to modernize its economy and society, Saudi Arabia has committed to and launched ambitious infrastructure and futuristic projects adding up to more than a trillion dollars. “Yes, it will cost money, but there’s a lot of money in this country,” Schmidhuber told The New York Times last year.
Schmidhuber and I conducted this interview over email, at his request. He asked that I email him only one question at a time, so we could get “a continually growing interview with a natural and logical flow.” The following conversation, which has been lightly edited and condensed, is from our exchange.
It has been said that AI is the new oil or the new electricity. And the once nimble and highly profitable software companies, like Microsoft and Google, have become almost like utilities, investing in nuclear energy, among other things, to run AI data centers. Open-source AI and DeepSeek have once again shown that such companies have no moat in the field of AI.
In many talks over the past decade, I have mentioned a rich guy I knew when I was young. He had a Porsche with something incredible: a mobile phone. He could use it to call, via satellite, other people who also had such a Porsche. But 40 years later, everyone had a cheap smartphone in their pocket, which was in many ways better than what he had in his Porsche.
It’s the same with AI. Every five years, computing becomes 10 times cheaper — a trend that has continued since the first general-purpose computer was completed in 1941. The basic techniques of modern AI were developed in the previous millennium when computing was still very expensive, but this trend has made them so cheap that AI has been on your smartphone since the 2010s.
The trend will not break in the coming decades. An AI data center that is worth $500 billion today will be worth only $50 billion in five years, and $5 billion in 10 years. Soon, smaller, cheaper computers with increasingly efficient open-source AI software will do what the large data centers do today. So AI will not be controlled by a few big AI utilities. Everyone will own cheap but powerful and transparent AI that improves their lives in many ways. See the old motto of our company Nnaisense, founded in 2014: “AI∀” or “AI For All.”
Well, almost all the basic techniques of modern AI, based on deep-learning artificial neural networks, were developed in Europe in the last millennium. Some of them were developed in Japan. But commercial applications are now dominated by the U.S. and China. However, anyone in the rest of the world can download or re-implement the old algorithms, improve them, and adapt them to country-specific applications. AI for All! As always in business, those who are good at balancing costs and profits will thrive.
I don’t think the average person is fearful of AI. Most people can name beneficial applications of AI. They just prefer to watch a silly dystopian [Arnold] Schwarzenegger AI movie rather than a documentary on AI that improves health care. It’s about entertainment. Since E.T.A. Hoffmann’s 1816 story about a beautiful humanoid robot, science fiction stories about AI have focused on dystopias. And some of today’s AI researchers seeking publicity also exploit the clickbait appeal of doomsday scenarios.
Many are talking about AGI [artificial general intelligence] now. It will come one day, but not yet. The only AI that works well today is AI in the virtual world behind the screen — for example, for automatically summarizing documents, creating images, programs, and PowerPoint slides. AI in the physical world is much harder than that. Passing the Turing test behind a screen, or solving certain complicated math problems, is easy compared to Real AI for real robots in the real world. There is no robot that can do what a plumber can do. So much remains to be done. And few are talking about that. In 2014, we founded a research company to develop real AI in the physical world, but even today, there aren’t many such companies. Real AI is hard. I think it is going to be the next big step.
Another important point missing from the debate is that we don’t just have supervised AI that slavishly imitates human teachers or pursues human-defined goals. Since 1990, my lab has also had artificial neural networks that set their own goals, ask their own questions, and invent their own experiments to find out how the world works, building world models, just like babies do. I’ve called this artificial curiosity and creativity, and with continued hardware acceleration, it will scale, just as the conceptually simpler and less general large language models have scaled. Without the freedom to set their own goals and invent their own problems, AIs won’t become very intelligent. But how do you keep them focused on human goals? Few are talking about this.
The pragmatic answer is: We have to raise such systems as we raise our children. We can never prove that our children won’t go mad. But we can greatly increase the likelihood that they will become useful members of our society by occasionally giving them externally imposed tasks, and by punishing (rewarding) them for bad (good) behavior.
On the other hand, curious systems are very difficult to predict, and it is difficult or impossible to prove that curiosity won’t kill the cat or others. That’s why I’ve spent a lot of time trying to spread the word about such systems, to make people aware that they already exist, and to think about their possible consequences.
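To make the idea of artificial curiosity a little more concrete, here is a minimal toy sketch in Python: an agent whose only reward is its own world model’s uncertainty, so it gravitates toward the transitions it cannot yet predict. The tabular environment, the model, and every constant here are hypothetical stand-ins chosen for illustration, not Schmidhuber’s original 1990 formulation.

```python
# Minimal sketch: an agent intrinsically rewarded by its own uncertainty.
# The environment, model, and constants are all hypothetical toy choices.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 10, 4

# Hidden deterministic dynamics the agent must discover: T[s, a] -> next state.
T = rng.integers(0, N_STATES, size=(N_STATES, N_ACTIONS))

# The agent's world model: P(next state | state, action), initially uniform.
model = np.full((N_STATES, N_ACTIONS, N_STATES), 1.0 / N_STATES)

state = 0
for _ in range(2000):
    # "Curiosity": pick the action whose predicted outcome is most uncertain,
    # i.e. has the highest entropy under the agent's own world model.
    probs = model[state]                                   # (N_ACTIONS, N_STATES)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    action = int(np.argmax(entropy))

    next_state = T[state, action]                          # observe the world

    # Move the model's prediction toward what actually happened.
    target = np.zeros(N_STATES)
    target[next_state] = 1.0
    model[state, action] += 0.1 * (target - model[state, action])

    state = next_state

# Confidence the model assigns to the true next state, averaged over all
# (state, action) pairs; it rises as curiosity drives the agent around.
confidence = model[np.arange(N_STATES)[:, None], np.arange(N_ACTIONS), T]
print(f"mean confidence in true dynamics: {confidence.mean():.2f}")
```

The design point is the reward signal: nothing in the loop references an external goal. The agent explores simply because unexplained transitions score higher than well-modeled ones, which is what makes such systems powerful and, as Schmidhuber notes above, hard to predict.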
I was lazy as a teenager. I tried to do homework and other problems with minimal effort. If something was inexplicable to me, I looked for the simplest possible explanation. So it was only natural that in the 1970s I developed the ambition to find the simplest explanation for the entire universe and also to build an AI that could solve all solvable problems within my lifetime so that I could retire. This actually explains all my later works, for example, the fastest algorithm to compute all computable universes, theoretically optimal AGI like the Gödel machine, and all those deep-learning neural networks for practical AI, for example, the 1991 foundations for the G and the P and the T in ChatGPT.
A single person cannot build an AI from scratch. You need a whole civilization. You need people to create the algorithms, the basic learning methods, and so on. But then you also need the computers to run those learning algorithms on. And then you need the people who design and speed up the computers. My survey, “Annotated History of Modern AI and Deep Learning,” mentions some of the pioneers on whose shoulders we stand. And then you need the people who take the ore out of the ground to build these computers, and then you need the farmers who produce the food to feed all these people who are doing this. And then, of course, you need the customers, like video game players, because otherwise there wouldn’t be a market for these computers, which means there wouldn’t be an incentive to build them. Again, you need a whole civilization to build an AI.
Saudi Arabia has come a long way in just a few years. Hardly any other country has such a high first derivative of change, which makes it an extremely interesting place. In particular, the country has six times more people than it did 50 years ago. Most of them are young and optimistic; many believe that AI will change their lives for the better. Remarkably, there are suddenly huge opportunities for women, and the percentage of startups with female founders is higher than in Europe or California.
My own university on the shores of the Red Sea is not far from the origins of human civilization, and it is full of brilliant academics from all over the world: More than 100 nationalities are represented, and they enjoy excellent working conditions and a high quality of life. At KAUST, we can pursue the AI-related research that we consider most important.
As academics, we usually publish our results for all to see and open-source our AI code so that the whole world can benefit from Saudi funding, not just the locals. We hope that our work on AI will contribute to a new golden age for science, analogous to the Islamic Golden Age that began over a millennium ago, when the Middle East led the world in science and technology — especially in automatic information processing.
Vision 2030 is bold. My own main goal in life is still the old one from my teenage years: to create an AI that’s truly smarter than me in every significant way. We are not there yet. Because it would have such far-reaching consequences, it’s compatible with pretty much any vision of unlimited future prosperity through AI.
The kingdom and its Public Investment Fund (PIF) are willing to invest enormous sums into AI. There is a major Saudi program called Economies of the Future, which includes the development of smart cities, space exploration, quantum computing, and AI. The focus of our own KAUST Generative AI Center is, as you might have guessed, generative AI. My team’s AI research is of a fundamental nature, applicable to virtually all fields.
KAUST researchers are currently developing an AI chemist that understands chemistry by “reading” chemistry papers, patents, and other documents, and by computationally predicting the properties of molecules and reactions. This AI chemist is being fine-tuned to handle real-world chemical reactions by integrating it into a robotic chemistry lab that fully automates the chemical reaction process — no humans needed to conduct the reaction. As such, the AI chemist can discover new reactions and create new materials with particular desirable properties that were not available before. This includes discovering new chemical reactions that maximize throughput or minimize waste, as well as discovering new materials that perform direct air carbon capture. Enabling such artificial scientists to manage their own labs will revolutionize the process of scientific discovery.
Simple AI chemists are already being used in many labs around the world to design novel drugs.
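For readers curious what such a closed loop looks like in the abstract, here is a toy sketch of the “propose, test, learn” cycle these systems automate: a surrogate model picks the most promising candidate, a simulated “robotic lab” returns a noisy measurement, and the model is refit on everything seen so far. All names, numbers, and the linear surrogate are hypothetical simplifications, not KAUST’s actual system.

```python
# Toy sketch of a closed-loop "AI chemist": propose, test in a (simulated)
# robotic lab, learn, repeat. Every component is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(1)

# 200 candidate reactions, each a 5-number recipe (e.g. temperature, ratios).
candidates = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)  # ground truth known to the "lab", not the model

def run_robot_lab(x):
    """Simulated experiment: a noisy 'yield' measurement for recipe x."""
    return float(x @ true_w + rng.normal(scale=0.1))

X_seen, y_seen = [], []
untested = list(range(len(candidates)))

for _ in range(20):
    if len(X_seen) < 5:
        # Bootstrap: run a few random experiments first.
        idx = untested[int(rng.integers(len(untested)))]
    else:
        # Fit a linear surrogate to all results so far, then test the
        # candidate it predicts will perform best.
        w, *_ = np.linalg.lstsq(np.array(X_seen), np.array(y_seen), rcond=None)
        preds = candidates[untested] @ w
        idx = untested[int(np.argmax(preds))]
    untested.remove(idx)
    X_seen.append(candidates[idx])
    y_seen.append(run_robot_lab(candidates[idx]))

print(f"best observed yield after 20 experiments: {max(y_seen):.2f}")
```

The point of the loop is that the expensive step, the experiment, is chosen by the model rather than by a human: each result immediately sharpens the next proposal, which is what lets an automated lab search a reaction space far faster than exhaustive testing would.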
At present, some humans are still more creative than even the most creative machines. However, as computing becomes 10 times cheaper every five years, the artificial scientists and artists will scale, and in the not-so-distant future, the most creative and influential scientists will no longer be human.
This question reminds me of what happened half a millennium ago. After Gutenberg’s printing press, people began to learn less from physical experience in the real world and instead relied on these suddenly widely available printed texts for almost every aspect of their lives. In particular, [Martin] Luther used the new technology to launch the only major religious movement of the last 1,000 years: the Reformation. Then the Catholics started using the printing press to claim that Luther was spreading fake news. Then scientists started using it to promote other kinds of ideas. Today, most people would agree that printing and reading are essential to modern society. It will be the same for AI.
I am not sure exactly what “ethical AI” is. Ethical standards seem to depend a lot on circumstances, place, and time. Insiders have told me that the artificial neural networks invented in my lab are not only improving health care, but also controlling military drones. Is that unethical? Many Ukrainians would argue that defense against the Russian war machine justifies AI-controlled drones.
And here’s my old trivial example from the 1980s, when self-driving cars appeared in my hometown of Munich: Similar machines can also be used by the military, for example, as self-driving land mine detectors, and many would argue that this is an ethically acceptable military use of AI. I have signed letters warning against AI weapons. But I didn’t sign “Pause Giant AI Experiments: An Open Letter.” It uses the word “we” a lot and refers to “us,” the humans. But as I have pointed out many times in the past, there is no “we” that everyone can identify with. Ask 10 different people and you will get 10 different opinions about what is “good” and “ethically acceptable.” I predicted that this letter would have no significant impact because many AI researchers, companies, and governments would ignore it completely. And I was right.
KAUST is actually modeled on U.S. universities. A former Caltech president was also president of KAUST, and the university fully complies with U.S. export controls. And the most valuable U.S. companies seem to be very happy with the work of my team in the past four decades or so, because they have used it a lot to serve the AI needs of billions of users all over the world.
For many years, I have been predicting that our AI will continue to make people’s lives longer, healthier, easier, and happier.