Bay Area researchers argue that tech industry is 'careening toward disaster'

2025-09-23 21:56:50

By Stephen Council

FILE: San Francisco’s skyline shows in October 2022 behind Interstate 80, where various companies now peddle their AI products on large billboards.

Justin Sullivan/Getty Images

Drive on Interstate 80 in San Francisco and you’re bound to see them: billboards of various colors and sizes peddling the products of artificial intelligence’s hype wave. But this month, a contrarian viewpoint popped up on the skyline: an ad for a book that, instead of lauding AI, casts it as an “extinction-level threat” to humanity.

The billboard shows the book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” and the book’s Sept. 16 publication date. It’s a declarative, doom-mongering text by computer scientists Eliezer Yudkowsky and Nate Soares of Berkeley’s Machine Intelligence Research Institute, or MIRI. They make the divisive argument that the AI industry’s current path will end with humanity in peril.


The book’s introduction states its thesis: “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.”

The billboard for “If Anyone Builds It, Everyone Dies” stands beside I-80 in San Francisco on Sept. 19, 2025.

Stephen Council/SFGATE

These days, Yudkowsky is the most prominent voice in the “AI doom” movement, with his new book joining a long line of attempts to wake the public up to his view on AI’s threats. But he’s also tipped some key dominoes in the past few decades of the technology’s development. He founded the precursor to MIRI in 2000, helping to propagate the idea that superintelligent AI could soon exist. And Wall Street Journal reporter Keach Hagey reported that in 2010, Yudkowsky introduced key AI researcher Demis Hassabis to financier Peter Thiel, leading to Thiel’s $2.25 million investment in Hassabis’ nascent DeepMind startup — a team now at the core of Google’s AI development.

OpenAI CEO Sam Altman pointed to that DeepMind anecdote in a 2023 tweet where he opined that Yudkowsky had done more than anyone else to drive the push toward artificial general intelligence — a term for an AI with human-level capabilities in learning and acting. Altman wrote of Yudkowsky: “certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.”


But at this point, Altman is one of the executives that Yudkowsky is afraid of. OpenAI, Meta, Anthropic, xAI and other companies are racing to make more and more advanced AI systems, pouring unprecedented sums of money into the massive data centers needed to train such models. Mark Zuckerberg wrote in July that progress was “undeniable,” and “developing superintelligence is now in sight.”

Executives have painted AI progress as a rush toward better tools that will make people healthier and more prosperous. In “If Anyone Builds It, Everyone Dies,” Yudkowsky and Soares write that this race looks “to us like a race to the bottom,” and that the industry is “careening toward disaster.” 

They argue that AI researchers are “growing,” not “crafting,” cutting-edge AI models, ignorant of precisely how the inscrutable technology works, and thus unable to truly steer it in safe directions. The authors posit that a superintelligent AI would seek a world where matter and energy go toward its own ends, not toward human flourishing. And they suggest that an AI with enough power could, for example, kick off a deadly plague to prevent a competitor’s AI model from being built.

The book says that the “problem is out of humanity’s league,” and paints more AI development as a nakedly unsafe risk: “It is not even worth taking extra steps into the AI minefield, guessing that each step might not kill us, until finally one step does.”


Next to the book’s cover on their San Francisco billboard, Yudkowsky and Soares allude to its bleak thesis in all caps: “We wish we were exaggerating.”

Work at a Bay Area tech company and want to talk? Contact tech reporter Stephen Council securely at stephen.council@sfgate.com or on Signal at 628-204-5452.


Summary

Billboards along San Francisco's Interstate 80 feature a book titled "If Anyone Builds It, Everyone Dies," authored by computer scientists Eliezer Yudkowsky and Nate Soares, which warns of AI as an existential threat to humanity. The book argues that any company developing superhuman AI using current techniques would lead to global disaster. This contrarian viewpoint contrasts with the prevalent optimism in the tech industry about the benefits of advanced AI systems.