
“AI Is Not Intelligent at All” – Expert Warns of Worldwide Threat to Human Dignity

2025-09-01
Robot Thinking Artificial Intelligence Technology
Credit: Shutterstock

Opaque AI systems risk undermining human rights and dignity. Global cooperation is needed to ensure protection.

The rise of artificial intelligence (AI) has changed how people interact, but it also poses a global risk to human dignity, according to new research from Charles Darwin University (CDU).

Lead author Dr. Maria Randazzo, from CDU’s School of Law, explained that AI is rapidly reshaping Western legal and ethical systems, yet this transformation is eroding democratic principles and reinforcing existing social inequalities.

She noted that current regulatory frameworks often overlook basic human rights and freedoms, including privacy, protection from discrimination, individual autonomy, and intellectual property. This shortfall is largely due to the opaque nature of many algorithmic models, which makes their operations difficult to trace.

The black box problem

Dr. Randazzo described this lack of transparency as the “black box problem,” noting that the decisions produced by deep-learning and machine-learning systems cannot be traced by humans. This opacity makes it challenging for individuals to understand whether and how an AI model has infringed on their rights or dignity, and it prevents them from effectively pursuing justice when such violations occur.

Dr. Maria Randazzo has found AI has reshaped Western legal and ethical landscapes at unprecedented speed. Credit: Charles Darwin University

“This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo said.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.

“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

Global approaches to AI governance

Currently, the world’s three dominant digital powers – the United States, China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.

Dr. Randazzo said the EU’s human-centric approach is the preferred path to protect human dignity, but without a global commitment to this goal, even that approach falls short.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, with empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” she said.

“Humankind must not be treated as a means to an end.”

Reference: “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes” by Maria Salvatrice Randazzo and Guzyal Hill, 23 April 2025, Australian Journal of Human Rights.
DOI: 10.1080/1323238X.2025.2483822

The paper is the first in a trilogy Dr. Randazzo will produce on the topic.


Summary

Opaque AI systems pose a global risk to human dignity and rights, according to research from Charles Darwin University. The opaque nature of many algorithmic models makes it difficult to trace their operations, leading to a "black box problem" where decisions made by AI cannot be traced or understood by humans. This lack of transparency hinders the protection of basic human rights such as privacy and freedom from discrimination. Dr. Maria Randazzo emphasizes the need for global cooperation and regulation to ensure that AI development respects human dignity and does not undermine democratic principles or reinforce social inequalities. The EU's human-centric approach is highlighted as a preferred path, but a universal commitment is necessary for effective protection against the risks posed by opaque AI systems.
