
Colleagues or overlords? The debate over AI bots has been raging but needn’t

2025-06-23 09:30:09 | English original

By Catherine Thorbecke

There’s the Terminator school of perceiving artificial intelligence (AI) risks, in which we’ll all be killed by our robot overlords. And then there’s one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership. 

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage risks. 

Then hundreds of AI leaders—including Sam Altman of OpenAI and Demis Hassabis of Alphabet’s DeepMind—sent shockwaves with a statement that warned: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”


Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by AI ‘accelerationists’ largely drowning out AI doomers. Companies and countries have raced towards being the first to achieve superhuman AI, brushing off the early calls to prioritise safety. And it has all left the public very confused.

But maybe we’ve been viewing this all wrong. Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades studying the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs “live in symbiosis and flourish together, protecting each other’s well-being and averting catastrophic risks.”

Yamakawa hit a nerve because while he recognizes the threats noted in 2023, he argues for a working path toward coexistence with super-intelligent machines—especially at a time when nobody is halting development over fears of falling behind. In other words, if we can’t stop AI from becoming smarter than us, we’re better off joining it as an equal partner.

‘Equality’ is the sensitive part. Humans want to keep believing they are superior, not equal to machines.


His statement has generated a lot of buzz in Japanese academic circles, attracting dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make Asia more likely to see machines as peers rather than adversaries. While the United States has produced AI-inspired characters like the Terminator from the eponymous Hollywood movie, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.

Beyond pop culture, there’s some truth to this cultural embrace. According to a global Ipsos survey last June, Japan had the lowest share of respondents—just 25%—who said that products using AI make them nervous, compared with 64% of Americans.

It’s likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur ‘civilization extinction’ remains an extremely heated debate. 

It’s fair to say that some of the industry’s focus on far-off, science-fiction scenarios is meant to distract from the more immediate harm that the technology could bring—whether that’s job displacement, allegations of copyright infringement or reneging on climate change goals.

Still, Yamakawa’s proposal is a timely re-up on an AI safety debate that has languished in recent years. These discussions can’t just rely on eyebrow-raising warnings and the absence of governance. 


With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can’t afford to turn a blind eye until it’s too late.

It also shows the need for more safety research beyond just the companies trying to create and sell these products. As in the social-media era, these firms are obviously less incentivized to share their findings with the public. Governments and universities must prioritise independent analysis of large-scale AI risks.

Meanwhile, as the global tech industry has been caught up in a race to create computer systems that are smarter than humans, it’s yet to be determined whether we’ll ever get there. But setting godlike AI as the goalpost has created a lot of counter-productive fear-mongering. There might be merit in seeing these machines as colleagues and not overlords.   ©Bloomberg

The author is a Bloomberg Opinion columnist covering Asia tech.

