
Why one of the world’s major AI pioneers is betting big on Saudi Arabia

2025-02-24

With a long track record of groundbreaking research, outspoken opinions, and grand visions, Jürgen Schmidhuber is a unique figure in the artificial intelligence world — equal parts scientist and provocateur.

Schmidhuber is widely recognized as one of the early trailblazers of deep learning and neural network research. Most of his career has focused on the development of AI architectures such as long short-term memory (LSTM) networks, which are fundamental in language modeling, speech recognition, and AI-driven applications. Those advances, made by Schmidhuber and his colleagues, are incorporated in much of what we do today on our devices — think personal assistants like Apple’s Siri or Amazon’s Alexa.

Schmidhuber served as the scientific director of IDSIA, the AI lab based in the Swiss city of Lugano, where he spent a significant part of his career and also taught AI at the University of Lugano. In 2014, he teamed up with his former students to co-found a startup called Nnaisense, which works on general-purpose artificial intelligence. He has authored more than 350 peer-reviewed papers and has been recognized for his work with honors including the Neural Networks Pioneer Award from the IEEE Computational Intelligence Society.

But he is not without controversy. Schmidhuber has openly accused other prominent AI figures of plagiarizing his team’s work. Last year, he wrote a long post on X accusing the Nobel committee of rewarding plagiarism and misattribution.

He also often breaks ranks with the tech community when it comes to speaking about the dangers of AI: He argues that claims of AI’s existential threat to humanity are overblown. Two years ago, he refused to sign the open letter “Pause Giant AI Experiments,” which argued for ethics in AI and was endorsed by leading scientists and tech founders, including Elon Musk.

Schmidhuber pointed to two issues with the letter: “If you ask 10 different people, you will get 10 different opinions about what is ‘good’ and ‘ethically acceptable,’” he said. “Some of these opinions will be completely irreconcilable. I predicted that this letter would have no significant impact because many AI researchers, companies, and governments would ignore it completely. And I was right.” He has also consistently said that “many of those who publicly warn of the dangers of AI are just looking for publicity.”


In 2021, Schmidhuber was appointed director of artificial intelligence at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, where he is currently based. In addition to leading the computer science program, he focuses on research into AI applications across several sectors, including health care, chemistry, speech recognition, robotics, and automation. His arrival coincides with Saudi Arabia’s push to become a global tech and AI hub, even as China and the U.S. jostle for global influence. Since launching Vision 2030, a blueprint to modernize its economy and society, Saudi Arabia has committed to and launched ambitious infrastructure and futuristic projects adding up to more than a trillion dollars. “Yes, it will cost money, but there’s a lot of money in this country,” Schmidhuber told The New York Times last year.

Schmidhuber and I conducted this interview over email, at his request. He asked that I only email him one question at a time, so we could get “a continually growing interview with a natural and logical flow.” The following conversation, which has been lightly edited and condensed, is from our exchange.

The whole world is captivated by DeepSeek’s R1 model and its potential to disrupt the West’s competitive edge on AI. Everyone is trying to interpret what it means — for the West, for China, and for the balance between collaboration and competition. How are you analyzing all the reactions at the moment?

It has been said that AI is the new oil or the new electricity. And the once nimble and highly profitable software companies, like Microsoft and Google, have become almost like utilities, investing in nuclear energy, among other things, to run AI data centers. Open-source software and DeepSeek have once again shown that such companies have no moat in the field of AI.

In many talks over the last decade, I have mentioned a rich guy I knew when I was young. He had a Porsche with something incredible: a mobile phone. He could call other people via satellite who also had such a Porsche. But 40 years later, everyone has a cheap smartphone in their pocket that is in many ways better than what he had in his Porsche.

It’s the same with AI. Every five years, computing becomes 10 times cheaper — a trend that has continued since the first general-purpose computer was completed in 1941. The basic techniques of modern AI were developed in the previous millennium when computing was still very expensive, but this trend has made them so cheap that AI has been on your smartphone since the 2010s.

The trend will not break in the coming decades. An AI data center that is worth $500 billion today will only be worth $50 billion in five years, and $5 billion in 10 years. Soon, smaller, cheaper computers with increasingly efficient open-source AI software will do what the large data centers do today. So AI will not be controlled by a few big AI utilities. Everyone will own cheap but powerful and transparent AI that improves their lives in many ways. See the old motto of our company Nnaisense, founded in 2014: “AI∀” or “AI For All.”
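(To make the arithmetic concrete, here is a minimal sketch in Python, illustrative only and not from the interview, of the trend Schmidhuber describes: a tenfold drop in the cost of computing every five years, which implies a tenfold drop in the replacement value of a fixed amount of computing over the same period.)

```python
# Minimal sketch (not from the interview) of the cost trend described
# above: computing gets 10x cheaper every five years, so the value of a
# fixed amount of computing capacity falls by 10x per five-year period.
# All numbers are illustrative.

def compute_value(initial_value: float, years: float,
                  tenfold_period: float = 5.0) -> float:
    """Value of fixed computing capacity after `years`, assuming a
    10x price drop every `tenfold_period` years."""
    return initial_value * 10 ** (-years / tenfold_period)

if __name__ == "__main__":
    for years in (0, 5, 10):
        value = compute_value(500e9, years)  # a $500 billion data center today
        print(f"after {years:2d} years: ${value / 1e9:,.2f}B")
    # Output:
    # after  0 years: $500.00B
    # after  5 years: $50.00B
    # after 10 years: $5.00B
```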

Is it fair then to say that the big, open question from an AI race perspective is not necessarily who will develop the best AI models, but who can apply them best to real-world tasks? What does “AI for All” really look like on the ground — especially in emerging economies like Saudi Arabia, India, and Indonesia?

Well, almost all the basic techniques of modern AI, based on deep-learning artificial neural networks, were developed in Europe in the last millennium. Some of them were developed in Japan. But commercial applications are now dominated by the U.S. and China. However, anyone in the rest of the world can download or re-implement the old algorithms, improve them, and adapt them to country-specific applications. AI for All! As always in business, those who are good at balancing costs and profits will thrive.

I don’t think the average person is fearful of AI … They just prefer to watch a silly dystopian Schwarzenegger AI movie rather than a documentary on AI that improves health care.

The latest AI Index report from Stanford University said AI is creating high-quality work, making people more productive, and advancing scientific discovery. You have always been optimistic about AI, and have said that the majority of AI research and development is focused on solving the biggest challenges facing human lives. So why is there so much nervousness and paranoia about what AI will do? What do you say to an average person who is fearful of AI’s capabilities?

I don’t think the average person is fearful of AI. Most people can name beneficial applications of AI. They just prefer to watch a silly dystopian [Arnold] Schwarzenegger AI movie rather than a documentary on AI that improves health care. It’s about entertainment. Since E.T.A. Hoffmann’s 1816 tale about a beautiful humanoid robot, science fiction stories about AI have focused on dystopias. And some of today’s AI researchers seeking publicity also exploit the clickbait appeal of doomsday scenarios.

What do you think is missing from the AI debate, especially in the Western media?

Many are talking about AGI [artificial general intelligence] now. It will come one day, but not yet. The only AI that works well today is AI in the virtual world behind the screen — for example, for automatically summarizing documents, creating images, programs, and PowerPoint slides. AI in the physical world is much harder than that. Passing the Turing test behind a screen, or solving certain complicated math problems, is easy compared to Real AI for real robots in the real world. There is no robot that can do what a plumber can do. So much remains to be done. And few are talking about that. In 2014, we founded a research company to develop real AI in the physical world, but even today, there aren’t many such companies. Real AI is hard. I think it is going to be the next big step.

Another important point missing from the debate is that we don’t just have supervised AI that slavishly imitates human teachers or pursues human-defined goals. Since 1990 in my lab, we have also had artificial neural networks that set their own goals, ask their own questions, and invent their own experiments to find out how the world works, building world models, just like babies do. I’ve called this artificial curiosity and creativity, and with continued hardware acceleration, it will scale, just as the conceptually simpler and less general large language models have scaled. Without the freedom to set their own goals and invent their own problems, AIs won’t become very intelligent. But how do you keep them focused on human goals? Few are talking about this.
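(A minimal sketch, my illustration rather than Schmidhuber’s code, of the curiosity principle he describes: the agent trains a world model on its own experience and pays itself an intrinsic reward equal to the resulting improvement in prediction error, so it is drawn toward experiments it does not yet understand but can still learn from. The toy linear model and all names here are assumptions for illustration.)

```python
import numpy as np

# Sketch of "artificial curiosity": intrinsic reward = learning progress
# of a self-trained world model, i.e., the reduction in prediction error
# produced by one model update. Everything here is a toy illustration.

rng = np.random.default_rng(0)
W = np.zeros((4, 4))   # toy linear world model: predicts s' ~ W @ s
LR = 0.1               # learning rate for the world model

def curiosity_step(state: np.ndarray, next_state: np.ndarray) -> float:
    """Update the world model on one observed transition and return the
    intrinsic reward: the drop in squared prediction error."""
    global W
    err_before = np.sum((next_state - W @ state) ** 2)
    grad = -2 * np.outer(next_state - W @ state, state)  # dE/dW
    W -= LR * grad                                       # one gradient step
    err_after = np.sum((next_state - W @ state) ** 2)
    return err_before - err_after  # positive while the model still learns

# On a predictable process the reward shrinks as the model converges;
# on pure noise it averages zero. A curious agent prefers states where
# this reward is still high, i.e., where something remains to be learned.
A = 0.5 * rng.normal(size=(4, 4))
s = rng.normal(size=4)
for t in range(5):
    s_next = A @ s + 0.01 * rng.normal(size=4)
    print(f"step {t}: intrinsic reward = {curiosity_step(s, s_next):.4f}")
    s = s_next
```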

That’s really interesting. How do you balance artificial curiosity with human-centered goals? Is it possible to set up guardrails, programmatically speaking, that ensure the AI programs stay aligned with our values while still having the liberty to explore and learn autonomously?

The pragmatic answer is: We have to raise such systems as we raise our children. We can’t prove that our children will never go mad. But we can greatly increase the likelihood that they will become useful members of our society by occasionally giving them externally imposed tasks, and by punishing (rewarding) them for bad (good) behavior.

On the other hand, curious systems are very difficult to predict, and it is difficult or impossible to prove that curiosity won’t kill the cat or others. That’s why I’ve spent a lot of time trying to spread the word about such systems, to make people aware that they already exist, and to think about their possible consequences.

I want to take a step back for a moment. How have your experiences and your values inspired you to take on the development of AI?

I was lazy as a teenager. I tried to do homework and other problems with minimal effort. If something was inexplicable to me, I looked for the simplest possible explanation. So it was only natural that in the 1970s I developed the ambition to find the simplest explanation for the entire universe and also to build an AI that could solve all solvable problems within my lifetime so that I could retire. This actually explains all my later works, for example, the fastest algorithm to compute all computable universes, theoretically optimal AGI like the Gödel machine, and all those deep-learning neural networks for practical AI, for example, the 1991 foundations for the G and the P and the T in ChatGPT.

What does it feel like to be called the “Father of Artificial Intelligence”?

A single person cannot build an AI from scratch. You need a whole civilization. You need people to create the algorithms, the basic learning methods, and so on. But then you also need the computers to run those learning algorithms on. And then you need the people who design and speed up the computers. My survey, “Annotated History of Modern AI and Deep Learning,” mentions some of the pioneers on whose shoulders we stand. And then you need the people who take the ore out of the ground to build these computers, and then you need the farmers who produce the food to feed all these people who are doing this. And then, of course, you need the customers, like video game players, because otherwise there wouldn’t be a market for these computers, which means there wouldn’t be an incentive to build them. Again, you need a whole civilization to build an AI.

You’ve been thinking about people and humanity throughout your career. I’m curious: How do Saudi Arabia’s ambitions in, and vision for, AI align with your life’s work and commitment to the future of AI?

Saudi Arabia has come a long way in just a few years. Hardly any other country has such a high first derivative of change, which makes it an extremely interesting place. In particular, the country has six times more people than it did 50 years ago. Most of them are young and optimistic; many believe that AI will change their lives for the better. Remarkably, there are suddenly huge opportunities for women, and the percentage of startups with female founders is higher than in Europe or California. 

My own university on the shores of the Red Sea is not far from the origins of human civilization, and is full of brilliant academics from all over the world: There are more than 100 nationalities, who enjoy excellent working conditions and a high quality of life. At KAUST, we can pursue the AI-related research that we consider most important.

As academics, we usually publish our results for all to see and open-source our AI code so that the whole world can benefit from Saudi funding, not just the locals. We hope that our work on AI will contribute to a new golden age for science, analogous to the Islamic Golden Age that began over a millennium ago, when the Middle East led the world in science and technology — especially in automatic information processing. 


How much of that Saudi desire to lead the new golden age of science and technology is a consequence of the kingdom’s Vision 2030? And how much of your own AI research now is influenced by Vision 2030? 

Vision 2030 is bold. My own main goal in life is still the old one from my teenage years: to create an AI that’s truly smarter than me in every significant way. We are not there yet. Because such an AI would have far-reaching consequences, that goal is compatible with pretty much any vision of unlimited future prosperity through AI.

What role do you see Saudi Arabia playing in the global AI ecosystem in the immediate future, say the next five years? Do you have a specific mandate in terms of sectors to focus your AI research on, and are there particular initiatives the country is primed to lead? 

The kingdom and its Public Investment Fund (PIF) are willing to invest enormous sums into AI. There is a major Saudi program called Economies of the Future, which includes the development of smart cities, space exploration, quantum computing, and AI. The focus of our own KAUST Generative AI Center is, as you might have guessed, generative AI. My team’s AI research is of a fundamental nature, applicable to virtually all fields.

Can you give us a specific example of one of the applications of the generative AI models your team is most excited about at the moment? 

KAUST researchers are currently developing an AI chemist that understands chemistry by “reading” chemistry papers, patents, and other documents, and by computationally predicting the properties of molecules and reactions. This AI chemist is being fine-tuned to handle real-world chemical reactions by integrating it into a robotic chemistry lab that fully automates the chemical reaction process — no humans needed to conduct the reaction. As such, the AI chemist can discover new reactions and create new materials with desirable properties that were not available before. This includes discovering new chemical reactions that maximize throughput or minimize waste, as well as discovering new materials that perform direct air carbon capture. By enabling such artificial scientists to manage their own labs, the process of scientific discovery will be revolutionized.

How long before we see something like this implemented in mainstream science more regularly?

Simple AI chemists are already being used in many labs around the world to design novel drugs.

Most people would agree that printing and reading are essential to modern society. It will be the same for AI.

How do you envision the interplay between human creativity and AI innovation in the coming decades? Given the pace of change, what does that relationship look like to you? 

At present, some humans are still more creative than even the most creative machines. However, as computing becomes 10 times cheaper every five years, the artificial scientists and artists will scale, and in the not-so-distant future, the most creative and influential scientists will no longer be human.

Is that a reason to be worried? You said in a previous interview that “Laziness and efficiency is a hallmark of intelligence” and that “any intelligent being wants to minimize its efforts to achieve things.” These days, one of the trends on social platforms is to teach people life hacks by framing the right kind of questions to various AI models, particularly ChatGPT. Should we worry that, at some point, as machines become more creative and more influential, people stop learning from, and start leaning on, these AI models for every aspect of their lives? 

This question reminds me of what happened half a millennium ago. After Gutenberg’s printing press, people began to learn less from physical experience in the real world and instead relied on these suddenly widely available printed texts for almost every aspect of their lives. In particular, [Martin] Luther used the new technology to launch the only major religious movement of the last 1,000 years: the Reformation. Then the Catholics started using the printing press to claim that Luther was spreading fake news. Then scientists started using it to promote other kinds of ideas. Today, most people would agree that printing and reading are essential to modern society. It will be the same for AI.

It sounds like the world should expect AI to cause similar disruption before it becomes fully integrated into our daily lives. In the ongoing debate about how AI should be implemented in our lives and our work, we hear the word “ethical AI” a lot. What does that term mean to you?

I am not sure exactly what “ethical AI” is. Ethical standards seem to depend a lot on circumstances, place, and time. Insiders have told me that the artificial neural networks invented in my lab are not only improving health care, but also controlling military drones. Is that unethical? Many Ukrainians would argue that defense against the Russian war machine justifies AI-controlled drones. 

And here’s my old trivial example from the 1980s, when self-driving cars appeared in my hometown of Munich: Similar machines can also be used by the military, for example, as self-driving land mine detectors, and many would argue that this is an ethically acceptable military use of AI. I have signed letters warning against AI weapons. But I didn’t sign “Pause Giant AI Experiments: An Open Letter.” It uses the word “we” a lot and refers to “us,” the humans. But as I have pointed out many times in the past, there is no “we” that everyone can identify with. Ask 10 different people and you will get 10 different opinions about what is “good” and “ethically acceptable.” I predicted that this letter would have no significant impact because many AI researchers, companies, and governments would ignore it completely. And I was right.

In the West, KAUST’s AI research program is sometimes seen as a battleground between the U.S. and China, in large part because your work attracts research students and professors from China. How much of your work involves worrying about alarming the Americans, or do you have a free hand?

KAUST is actually modeled on U.S. universities. A former Caltech president was also president of KAUST, and the university fully complies with U.S. export controls. And the most valuable U.S. companies seem to be very happy with the work of my team in the past four decades or so, because they have used it a lot to serve the AI needs of billions of users all over the world.

What are your ultimate hopes for humanity’s relationship with AI?

For many years, I have been predicting that our AI will continue to make people’s lives longer, healthier, easier, and happier.
