
United States, China or Russia: Who writes the moral code for artificial intelligence?

2025-08-04 17:02:12 (English original)

Author: Nicholas Morieson

Who decides what values are embedded in artificial intelligence? This question may soon matter more than whether AI takes your job. Generative AI systems such as ChatGPT, Claude, Gemini and Grok are becoming the default gateway to knowledge. As people turn to them first (and often last) for information, these systems will shape what can be said and, over time, what can be thought. The answers they give reflect embedded values that determine which perspectives are amplified, which are silenced, and how political events are framed.

In the United States and Europe, these values are contested but broadly shaped by traditions of individual rights, pluralism and free expression. Companies differ in terms of the values they embed in AI. For example, OpenAI’s ChatGPT is cautious and rights‑oriented, whereas Elon Musk’s Grok takes a more libertarian, free‑speech‑maximalist approach. All, however, operate within America’s political culture.

At the same time, Washington increasingly frames AI leadership as a front line in a geopolitical and civilisational contest. Success is tied to safeguarding “American values”, including free expression and human rights, and successive administrations have held a firm belief that American private companies will out‑innovate authoritarian nations and their state‑centric AI development models.

America’s National Security Commission on Artificial Intelligence (NSCAI), formed in 2018 to address “national security and defence needs”, frames AI rivalry as a global “values competition” to be “embraced”, explicitly naming China. In 2021, bipartisan Senate bills followed, with Senators Martin Heinrich and Rob Portman telling the National Science Foundation: “AI leadership by the United States is only possible if AI research, innovation, and use is rooted in American values … ethics and safety”.

The European Union broadly shares America’s liberal-democratic values, but is wary of US dominance in AI. Brussels argues American AI systems reflect distinctly US priorities, especially Silicon Valley’s commercial culture and engagement with America’s “culture wars”. Through the AI Act, the EU aims to encode European values, including dignity, privacy, transparency, and the precautionary principle, into AI systems. The EU’s proposed AI sovereignty would ensure that technology used in Europe reflects its interpretation of liberalism, with stronger safeguards against harm than US models impose.

China: AI as a civilisational duty

Beyond Europe, there is stronger resistance to American-built AI systems. Authoritarian governments are already making the case that their societies should not have to accept Western values embedded in AI systems. For example, the Chinese Communist Party insists AI must express “socialist core values” to preserve harmony, stability and national security. This is presented as necessary to protect China’s “5000-year-old civilization” from “digital colonisation” and “Western ideological trends”.

Companies differ in terms of the values they embed in AI. For example, OpenAI’s ChatGPT is cautious and rights‑oriented (Jonathan Kemper/Unsplash)

China’s AI governance is deeply integrated into its broader system of information control and national security law. Developers must register algorithms with authorities, pass security reviews, and filter politically sensitive content. For example, Chinese generative AI services reject user queries about Tiananmen Square, Taiwan’s independence, or Party leaders. Some models are programmed to steer conversations back to “positive” topics including economic growth, technological progress, or traditional culture. Domestically, this is framed as moral responsibility, ensuring AI reflects Chinese cultural and political values rather than alien Western norms. Internationally, Beijing argues every nation should develop AI aligned to its own values, rejecting the idea of universal values.

Russia: AI as civilisational defence

Russia likewise rejects liberal norms as universal, framing itself as a “state-civilisation” rooted in Orthodoxy, traditional values and centralised authority. Foreign AI systems are portrayed as potential vectors of Western ideological influence, threatening Russia’s cultural integrity and political stability.

Russian AI governance aligns closely with its “sovereign internet” doctrine: mandatory domestic data storage, algorithmic filtering to block “harmful” content, and integration with state surveillance systems. These rules form part of wider laws restricting free expression and curbing debate on sensitive issues such as LGBT rights, political freedoms and Covid‑19. Other measures erode privacy and online security, leaving no digital communication in Russia safe from state interference. By framing these controls as essential to defending Russian civilisation from ideological subversion, the Kremlin recasts digital authoritarianism as patriotic duty.

A diverging AI future

AI governance has moved beyond technical debates, and is increasingly a contest over which set of values will define the boundaries of speech and political imagination.

It is true that American values are not universal. Many nations reject the idea that American norms should be hardwired into AI systems. Their desire to prevent Americanisation and to see their own norms reflected in the technology is understandable. However, authoritarian governments often frame this desire as a civilisational necessity in order to justify censorship. By presenting AI as an extension of their civilisation, they can close off foreign influence, legitimise censorship, and present domestic information control as patriotic or morally essential. In China and Russia, this rhetoric entrenches surveillance, censorship and state-aligned AI development.

As AI governance becomes locked in this framing, we enter a world of fragmented digital spheres, each with its own boundaries of acceptable speech enforced by the politically powerful. The danger, then, is that the “clash of civilisations” framing becomes the architecture itself, turning political rhetoric into the hard‑coded reality of the global digital order.

