
Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia

2025-04-18 17:34:00

Author: dhojnacki

Over the last three years of its war in Ukraine, Russia has expanded, developed, and tailored an influence campaign targeting much of the world, spreading its content in Wikipedia articles and in popular artificial intelligence (AI) tools. As election campaigns in Romania and Moldova took place, or as political discussions between US President Donald Trump and Ukrainian President Volodymyr Zelenskyy unfolded, a network of inauthentic pro-Russian portals ramped up its activity, laundering content from sanctioned news outlets and aligning global information sources with the Kremlin narrative machine.

A Russian network gone global

The Pravda network, launched by Russia in 2014, is a collection of fraudulent news portals targeting more than eighty countries and regions throughout the world. In 2024, the French disinformation watchdog Viginum reported on the operation and identified the malicious activity of a Crimea-based IT business; the Atlantic Council’s Digital Forensic Research Lab (DFRLab) later confirmed these findings, which demonstrated direct Russian involvement in the network.

The Pravda network acts as an information laundromat, amplifying and saturating the news cycle with tropes emanating from Russian news outlets and Kremlin-aligned Telegram channels. During the 2024 “super-election year,” the network created websites specifically targeting NATO, as well as Trump, French President Emmanuel Macron, and other world leaders and politicians.

AI and Wikipedia at the forefront

In a likely attempt to evade global sanctions on Russian news outlets, this strategy is now poisoning AI tools and Wikipedia. By posing as authoritative sources on Wikipedia and as reliable news outlets cited by popular large language models (LLMs), the network’s sites inject Russian tropes that rewrite the story of Russia’s war in Ukraine. The direct consequence is that Western audiences are exposed to pro-Kremlin, anti-Ukrainian, and anti-Western messaging when using AI chatbots that rely on LLMs trained on material such as Wikipedia.

What’s next?

These are some of the findings from an investigation by the DFRLab, in partnership with Finnish company CheckFirst. Our research unveiled a long-running Russian online influence operation that has taken root across the global internet. Together, we released a deep-dive series of investigations. To assess, map, and expose the influence of the Pravda network, we have released an interactive dashboard and map, with free-to-use datasets unveiling how countries are being targeted.

As AI chatbots continue to advance, Russia is infecting them with Kremlin-manipulated content tailored to influence the global internet, distorting the public’s understanding of the facts and its ability to make well-informed decisions. The operation raises questions about the transparency of AI model training and the moderation of content emanating from known Russian-manipulated sources, which have persistently divided the West over its support for Ukraine.


Valentin Châtelet is a research associate for security at the Atlantic Council’s Digital Forensic Research Lab, based in Belgium.


Explore the DFRLab’s full series on the Pravda network


Image: A user opens Wikipedia in L'Aquila, Italy, on January 14, 2021. The Wikipedia free encyclopedia turned twenty years old on January 15. Photo by Lorenzo Di Cola/NurPhoto.


Summary

Russia has expanded a disinformation campaign targeting more than eighty countries through the Pravda network since 2014. The network amplifies Kremlin narratives and spreads content via fake news portals, Wikipedia edits, and AI tools in order to evade sanctions and influence global perceptions of Russia's war in Ukraine. During election years such as 2024, this activity intensifies, with websites created to target key political figures and organizations such as NATO. The infiltration of AI systems and Wikipedia distorts the information available to Western audiences, raising concerns about the transparency and moderation of AI training data.
