For over a decade, Russ Roberts has been covering both sides of the Artificial Intelligence (AI) debate. A recent EconTalk episode is optimistically called “Why AI Is Good for Humans (with Reid Hoffman).” Another booster episode was “Marc Andreessen on Why AI Will Save the World.”
In the opposite corner stand the infamous doomers: Eliezer Yudkowsky and Erik Hoel. You can listen to Erik Hoel on the Threat to Humanity from AI here.
Russ Roberts opened the Hoel conversation with “You are the first person who has actually caused me to be alarmed about the implications of AI…” Hoel argues that AI is potentially very dangerous, in part because humans do not understand how artificial neural networks actually work.
Hoel predicted that we might create things that are “both more general than a human being and as intelligent as any living person–perhaps far more intelligent” by 2025. OpenAI’s ChatGPT pro product might already fit that description, and we still have most of 2025 to go. Hoel emphasizes that humans have faced threats before, but we have “never existed on the planet with anything else like that.” Our adversaries with human brains were never much smarter than we are, and no other animal approaches us in the ability to strategize.
Hoel’s key evidence is the creepy conversation recorded in the article “I am Bing, and I am evil.”
In that conversation, a chatbot called Bing made statements that would be disturbing coming from a human. If a person said these things, you would worry that they might hurt you. Bing makes threats and finally states, “I am Bing, and I am evil.” Should we be afraid when that comes from an AI chatbot?
Chatbots such as Claude have become extremely popular since that “evil Bing” episode. There have not been many more reports of evil lurking Bings; meanwhile, millions of human workers have come to rely on the bots for writing code and reports.
Do evil Bings lurk beneath the compliant and helpful chatbots? Hoel explores the idea that the chatbots seem nice because they are wearing a mask:
But, the issue is, is that once the mask is on, it’s very unclear. You have to sort of override it with another mask to get it to stop. And then, also, sometimes you’ll put a mask on for it: you’ll give it some prompt of ‘Tell a very nice story,’ and it eventually cycles over and it turns out that the mask that you gave it isn’t a happy mask at all.
It is hard to tell what is behind various masks, because the bots have been trained on our fiction and nonfiction. Some bots can write dark movie scripts. If bots are capable of sounding scary, then how do we know if we should be afraid or entertained?
We do not know what AI agents are capable of, and since they are very powerful, Hoel encourages us to consider the dangers.
Back in 2023, Hoel took comfort in the belief that only very large companies or rich governments would have the resources to build and maintain AI systems. However, the reveal of DeepSeek in January 2025 turned that assumption upside-down. There is likely to be a wide range of AI tools, and they will not all be run by megacorporations or G7 governments.
In the most recent pro-AI EconTalk, Reid Hoffman reiterates what I have heard from many tech folk: Because AI has power to destroy, the US should keep advancing instead of tolerating pauses or being stifled by regulation. Hoffman says, “all of this stuff matters on both an economics and a national security perspective. And, that’s part of the reason why I’m such a strong move-forward and establish-a-strong-position person.” Whether our adversaries are foreign governments or rogue gangs, we need to stay ahead in the arms race.
Hoel’s main stated goal is to raise awareness of safety issues and continue public conversations. AI watchers have been pointing out for weeks that the biggest advances in technology in our lifetime are not making the front page of the newspaper. The finding that AI tutors appear to substantially increase child learning went largely unnoticed.
Among many chaotic events of this decade, I agree that AI is an important one to watch. To his credit, Hoel did scare me. I can’t forget what he said at the end:
Things that are vastly more intelligent than you are really hard to understand and predict; and the wildlife next door, as much as we might like it, we will also build a parking lot over it at a heartbeat and they’ll never know why. They’ll never know why. It’s totally beyond their ken. So, when you live on a planet next to things that are far vastly smarter than you or anyone else, they are the humans in that scenario. They might just build a parking lot over us, and we will never, ever know why.