Chris Kremidas-Courtney is a senior visiting fellow at the European Policy Centre, associate fellow at the Geneva Centre for Security Policy, and author of 'The Rest of Your Life: Five Stories of Your Future.'
Since June 2024, the Kremlin-driven Operation Overload has become Europe’s most sustained disinformation blitz against the fact-checking community. According to a newly released report by CheckFirst and Reset Tech, Operation Overload’s AI-driven narrative variants are popping up faster than fact-checkers can knock them down, turning Europe’s information space into a perpetual game of disinformation whack-a-mole.
If Brussels doesn’t harness the full force of the Digital Services Act now, demanding real-time platform accountability and enabling cross-border threat-sharing, the next wave of state-sponsored fakes could reshape our public discourse before we even spot the first lie. The question is, can Brussels and the platforms it seeks to regulate keep pace with a campaign that adapts as quickly as the neural networks powering it?
Since September 2024, the Russian-backed operation (also known as Matryoshka) has more than doubled its email attacks, overwhelming media and fact-checking communities with an average of 2.6 fabricated pitches per day. In the same way that waves of drones or missiles can overwhelm air defences, this operation seeks to do the same to journalists and fact-checkers. Fake emails are only the tip of the iceberg for this coordinated propaganda machine: according to the Overload report, the campaign also runs 11,000 crypto-themed ‘reposter’ bots on Twitter/X and has produced thousands of deepfake videos.
AI-driven content creation has become the operation’s backbone. Deepfake audio, AI-generated images and “magazine cover” forgeries are now churned out at scale, each twisted around a “kernel of truth”. To evade detection, they build on isolated reports, such as Ukrainian call-centre irregularities, or decontextualise verifiable details of real events.
This perpetual “whack-a-mole” exhausts journalistic resources and fragments the fact-checking ecosystem. For example, CheckFirst logged 180 debunks, yet fewer than half were framed within the larger Operation Overload context.
But all these numbers still undersell the operation’s enormity. In February 2025, the American Sunlight Project found that Kremlin-aligned networks were already producing over three million AI-forged articles per year – a tsunami of disinformation now poisoning AIs like ChatGPT and Gemini, eating away at our digital discourse from within.
For example, across France, Poland and Moldova, Overload adapted its four pillars of anti-Ukrainian vitriol, election scares, identity smears and calls to violence to local flashpoints (e.g. Macron, historical grievances, Sandu’s legitimacy). Such targeted campaigns require equally tailored counter-messaging since one-size-fits-all rebuttals leave gaps for the next hostile narrative.
Most revealing is who amplifies these lies. High-profile Kremlin-aligned ‘amplifier’ accounts on Twitter/X lend Operation Overload both mainstream reach and an aura of credibility. While the direct link between these influencers and Russian state agencies remains opaque, their synchronised behaviour and consistent prioritisation by platform algorithms indicate an operation that transcends mere grassroots trolling.
Under the EU’s Digital Services Act, Very Large Online Platforms (VLOPs) must swiftly mitigate systemic risks such as election interference and incitement to violence. Yet over 70% of flagged content lingered online for months, and platforms missed reactivated accounts and paid-for authentication abuse.
If the EU allows this to persist by eschewing public audits, fines or mandated transparency, the DSA risks becoming little more than window dressing, ill-suited to protect against state-sponsored disinformation.
Europe can’t treat each Overload hit as an oddity. Instead, it must tackle AI-enabled disinformation with four coordinated efforts:
Real-time, multi-platform threat sharing: Set up a shared dashboard with encrypted feeds so that the moment one fact-checking group or platform spots a new fake image, bot network or edited video, it automatically alerts everyone else so they can all block it before it spreads.
Scalable AI-detection investment: Invest in AI systems that can automatically scan millions of videos, images and posts every hour, flagging deepfakes and bulk-generated disinformation so platforms and fact-checkers can remove them before they go viral.
Give the DSA teeth: Publicly name and sanction non-compliant VLOPs, demand rapid takedowns under Articles 34–35, and require quarterly transparency reports on coordinated inauthentic behaviour.
Narrative literacy campaigns: Launch public-awareness campaigns that go beyond debunking individual lies to teaching people how to spot when a misleading story is built around a ‘kernel of truth’ or artificially bulk-produced, so everyone can challenge and report fakes, not just fact-checkers.
Operation Overload is an AI-fuelled, multi-vector threat crafted by Kremlin-aligned actors.
The Overload 2 report maps this danger. It’s now up to national capitals to forge a robust cognitive defence for Europe’s democracy.