By Henk van Ess • September 1, 2025
We’re approaching a point where the signal-to-noise ratio is getting close to one — meaning that as the pace of misinformation approaches that of factual information, it’s becoming nearly impossible to tell what’s real. This guide teaches journalists how to try to identify AI-generated content under deadline pressure, offering seven advanced detection categories that every reporter needs to master.
As someone who helps newsrooms fight misinformation, here’s what keeps me up at night: Traditional fact-checking takes hours or days. AI misinformation generation takes minutes.
Video misinformation is so old that it predates modern AI technology by decades. Even the basic technical limitations of early recording equipment could create devastating false impressions. In 2003, nanny Claudia Muro spent 29 months in jail because a low-frame-rate security camera made gentle motions look violent — and nobody thought to verify the footage. In January 2025, UK teacher Cheryl Bennett was driven into hiding after a deepfake video falsely showed her making racist remarks.
AI-generated image purporting to show Pope Francis wearing a Balenciaga puffer jacket. Image: Midjourney, Pablo Xavier
This viral image of Pope Francis in a white Balenciaga puffer coat fooled millions on social media before being revealed as AI-generated, created with a Midjourney text-to-image prompt. Key detection clues included the crucifix at his chest hanging inexplicably aloft, with only the white puffer jacket where the other half of its chain should be. The image’s creator, Pablo Xavier, told BuzzFeed News: “I just thought it was funny to see the Pope in a funny jacket.”
Sometimes the most effective fakes require no AI at all. In May 2019, a video of House Speaker Nancy Pelosi was slowed to 75% speed and pitch-altered to make her appear intoxicated. In November 2018, the White House shared a sped-up version of CNN correspondent Jim Acosta’s interaction with a White House intern, making his arm movement appear more aggressive than in reality.
I recently created an entire fake political scandal — complete with news anchors, outraged citizens, protest footage, and the fictional mayor himself — in just 28 minutes during my lunch break. Total cost? Eight dollars. Twenty-eight minutes. One completely fabricated political crisis that could fool busy editors under deadline pressure.
Not long ago, I watched a seasoned fact-checker confidently declare an AI-generated image “authentic” because it showed a perfect five-finger hand instead of six. But now, that solution is almost gone.
This is the brutal reality of AI detection: the methods that made us feel secure are evaporating before our eyes. In the early development of AI image generators, poorly drawn hands — extra fingers, fused digits — were common and often used to spot AI-generated images. Viral fakes, such as the “Trump arrest” images from 2023, were partly exposed by these obvious hand errors. By 2025, however, major AI models like Midjourney and DALL-E have become far better at rendering anatomically correct hands. As a result, hands are no longer a reliable tell, and anyone trying to identify AI imagery must look for other, subtler signs.
The text rendering revolution happened even faster. Where AI protest signs once displayed garbled messages like “STTPO THE MADNESSS” and “FREEE PALESTIME,” some of the current models produce flawless typography. OpenAI specifically trained DALL-E 3 on text accuracy, while Midjourney V6 added “accurate text” as a marketable feature. What was once a reliable detection method now rarely works.
The misaligned ears, unnaturally asymmetrical eyes, and painted-on teeth that once distinguished AI faces are becoming rare. Portrait images generated in January 2023 routinely showed detectable failures; the same prompts today produce believable faces.
This represents a fundamental danger for newsrooms. A journalist trained on 2023 detection methods might develop false confidence, declaring obvious AI content as authentic simply because it passes outdated tests. This misplaced certainty is more dangerous than honest uncertainty.
Analysis of an AI-generated image purporting to show a girl being rescued after flooding in the US. Image: Henk van Ess
I began wondering if I could build a verification assistant for AI content as a bonus for this article. I started to email experts. Scientists took me deep into physics territory I never expected: Fourier transforms, quantum mechanics of neural networks, mathematical signatures invisible to the human eye. One physicist explained how AI artifacts aren’t just visual glitches — they’re frequency domain fingerprints.
But then came the reality check: “Don’t build a tool yourself,” one expert warned. “You’ll need massive computing power and Ph.D.-level teams. Without that infrastructure, you’ll fail miserably.”
That’s when it hit me. Why not fight AI with AI, but differently? Instead of recreating billion-dollar detection systems, I’d harness existing AI infrastructure to do the heavy lifting.
Analysis of an AI-generated image purporting to show the Belarusian president holding a cone of fries. Image: Henk van Ess
Image Whisperer (initially named Detectai.live) was born from this insight. The tool runs parallel large language model analysis alongside Google Vision processing, applying the physics principles these experts taught me while leveraging the computational power already available. Most importantly, unlike most AI tools, it tells you when it doesn’t know something instead of guessing.
It’s not trying to be the best system out there — it’s trying to be the most honest.
The arms race between AI creators and detectors continues, and creators currently hold the speed advantage. Identifying what is or isn’t a deepfake has become a cat-and-mouse game in which the generation technology keeps improving. Successfully identifying deepfakes requires combining multiple detection methods, maintaining constant vigilance, and accepting that perfect detection may be impossible. For journalists looking for precise answers, the goal has shifted from definitive identification to probability assessment and informed editorial judgment.
But journalism has always adapted to changing technology. We learned to verify sources when anyone had the tools to create a website. We developed social media verification protocols when everyone became a potential reporter. Now we must develop standards for an era when anyone can create convincing audiovisual evidence.
30-Second Red Flag Check (Breaking News): When time is critical and you need an instant assessment of suspicious perfection, focus on the gut feeling that something looks “too good to be true.” Look for magazine-quality aesthetics in contexts where that level of grooming would be impossible or inappropriate. A protest leader with flawless makeup, a disaster victim with perfect hair, or a candid political moment where everyone looks professionally styled should trigger immediate suspicion.
Five-Minute Technical Verification (Standard Stories): This deeper examination focuses on the technical details that betray artificial generation. Modern AI creates anatomically correct images, but they often exhibit an uncanny perfection not found in real photography. Real faces have subtle asymmetries, natural wear patterns, and environmental effects that AI struggles to authentically replicate.
Deep Investigation (High-Stakes Reporting): For stories where accuracy is paramount, this comprehensive analysis treats the image as evidence requiring forensic scrutiny. The goal is building a probability assessment based on multiple verification points, and understanding that while definitive proof may be impossible, informed judgment is achievable.
AI-generated image of train tracks receding into the distance, next to a real photograph with correct perspective. Images: Henk van Ess
The Narrative: AI assembles images like a collage artist, not a photographer. It understands visual elements but not the geometric and physical rules that govern how light, perspective, and shadows actually work in the real world. These fundamental physics failures are harder for AI to fix because they require an understanding of 3D space and light behavior.
Real-World Physics Problems in AI Images: Although we are still early in the age of generative AI, today’s AI-generated images still struggle to render physically correct shadows and reflections. A typical example generated with OpenAI’s DALL-E 2 shows shadows that are inconsistent, reflections that are impossibly mismatched or missing, and shadows in the reflection oriented in exactly the wrong direction.
Vanishing Point Analysis: Real buildings follow the laws of perspective — parallel lines converge toward a single point on the horizon. AI often creates buildings where rooflines point left while window lines point right, a physical impossibility that reveals algorithmic assembly rather than photographic capture. Vanishing points are fundamental to capturing the essence of perspective in real images, and generated images often exhibit inconsistencies where lines do not meet at the correct vanishing point.
Shadow Consistency Check: Where there is light, there are shadows. The relationship between an object, its shadow, and the illuminating light source(s) is geometrically simple, and yet it’s deceptively difficult to get just right in a manipulated or synthesized image. In single-light source scenes (like sunlight), all shadows must point away from that source. AI frequently shows people casting shadows in multiple directions despite one sun, violating the basic laws of physics.
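For reporters who want to go beyond eyeballing, the shadow check can be made semi-quantitative. Below is a minimal Python sketch — not part of any published tool — that assumes you have manually clicked a few object-base and shadow-tip points in the image and typed in their pixel coordinates (the values shown are hypothetical); it simply measures how far the shadow directions diverge.

```python
# Minimal sketch of a shadow-consistency check.
# Assumption: the (object_base, shadow_tip) pixel coordinates below were
# annotated by hand; the numbers here are placeholders.
import numpy as np

annotations = [
    ((420, 610), (470, 700)),
    ((812, 590), (860, 685)),
    ((150, 640), (205, 735)),
]

angles = []
for base, tip in annotations:
    dx, dy = tip[0] - base[0], tip[1] - base[1]
    angles.append(np.degrees(np.arctan2(dy, dx)))

spread = max(angles) - min(angles)
print(f"shadow directions (deg): {[round(a, 1) for a in angles]}")
print(f"angular spread: {spread:.1f} deg")

# In a single-sun scene, shadows on flat ground should point in roughly the
# same direction (or converge smoothly toward one point). A spread of many
# tens of degrees is a red flag, not proof.
```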
Research Validation: Academic research has confirmed these geometric flaws. Studies using GradCam analysis on outdoor images reveal varied shadow directions of vehicles, and structural distortions near vanishing points, while indoor scenes show object-shadow mismatches and misaligned lines in room geometry.
This type of subtle detection is not for the faint-hearted. Get used to the idea of staring at lines first.
30-Second Red Flag Check:
Now you have the visual template for what a correct perspective looks like.
Five-Minute Technical Verification (Standard Stories):
Perspective Test:
Shadow Analysis:
Deep Investigation (High-Stakes Reporting):
Reflection Verification: When objects are reflected on a planar surface, lines connecting a point on the object to the corresponding point in the reflection should converge to a single vanishing point.
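The same convergence test applies to both the perspective lines described earlier (rooflines, window lines, rails) and object-to-reflection correspondences. Here is a minimal Python sketch, under the assumption that you hand-annotate a few line segments that should be parallel in the real scene (the coordinates below are placeholders); it intersects them pairwise and reports how tightly those intersections cluster around a single vanishing point.

```python
# Minimal vanishing-point convergence sketch (hand-annotated segments).
import numpy as np

def intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, using homogeneous coordinates."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])
    l2 = np.cross([*p3, 1.0], [*p4, 1.0])
    x = np.cross(l1, l2)
    if abs(x[2]) < 1e-9:          # lines are (near-)parallel in the image
        return None
    return x[:2] / x[2]

# Hypothetical segments that should all be parallel in 3D
segments = [
    ((100, 400), (600, 360)),
    ((110, 480), (610, 420)),
    ((120, 560), (620, 485)),
]

points = []
for i in range(len(segments)):
    for j in range(i + 1, len(segments)):
        p = intersection(*segments[i], *segments[j])
        if p is not None:
            points.append(p)

points = np.array(points)
center = points.mean(axis=0)
scatter = np.linalg.norm(points - center, axis=1).mean()
print(f"estimated vanishing point: {center.round(1)}, mean scatter: {scatter:.1f} px")
# Real parallels converge tightly; intersections scattered across the frame
# suggest the geometry was assembled rather than photographed.
```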
A viral, digitally altered photo of a US soldier from 2004. The “DOING THE WORK OF” patch plus the Russian, German, and French flags have been added to the image. The tool highlights the possible area of alteration and how likely it is to have been altered. Images: Henk van Ess
The Narrative: When AI creates an image, it leaves behind hidden clues in the file — mathematical signatures that act like invisible fingerprints, detectable with special tools. These clues are found in the way pixels are arranged and how the file is compressed. Think of it like DNA evidence that proves something was made by AI rather than captured by a real camera.
Noise Pattern Detection: Real cameras capture images with natural, messy imperfections — like tiny random specks from the camera sensor. AI-generated images have unnaturally perfect patterns instead. When experts analyze these patterns with special software, they see distinctive star-like shapes that would never appear in a real photo. It’s like the difference between truly random static on an old TV versus a computer trying to fake that randomness — the fake version has a hidden order to it that gives it away, if you have the proper tools.
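To see roughly what those forensic tools are looking at, here is a minimal Python sketch — the file name and thresholds are placeholders, and this is nowhere near a production detector. It isolates the high-frequency residual of an image and counts unusually strong, regular peaks in its Fourier spectrum.

```python
# Minimal frequency-domain sketch; "suspect.jpg" is a hypothetical path.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("suspect.jpg").convert("L"), dtype=float)

# High-pass residual: the image minus a blurred copy keeps mostly "noise"
residual = img - gaussian_filter(img, sigma=2)

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(residual)))
log_spec = np.log1p(spectrum)

# Count unusually strong frequencies outside the central (natural-image) region
h, w = log_spec.shape
cy, cx = h // 2, w // 2
yy, xx = np.mgrid[0:h, 0:w]
outside = np.hypot(yy - cy, xx - cx) > min(h, w) * 0.1
threshold = log_spec[outside].mean() + 4 * log_spec[outside].std()
peaks = int((log_spec[outside] > threshold).sum())

print(f"strong off-center frequency peaks: {peaks}")
# Camera noise produces a diffuse spectrum; many sharp, symmetric peaks are a
# red flag worth handing to a dedicated forensic tool, not a verdict.
```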
Copy-Paste Detection: When AI or humans duplicate image regions, it creates unusual pixel correlations. Different areas become suspiciously similar beyond natural spatial redundancy, creating detectable patterns or mathematical signatures.
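A crude version of this check can be scripted. The sketch below assumes a grayscale copy of the image and only catches pixel-identical duplication: it hashes small overlapping blocks and flags pairs that repeat far apart. Real forensic tools use far more robust matching.

```python
# Minimal copy-move sketch; "suspect.jpg" is a hypothetical path.
import hashlib
import numpy as np
from PIL import Image

img = np.asarray(Image.open("suspect.jpg").convert("L"))
block, step = 16, 8
seen, matches = {}, []

for y in range(0, img.shape[0] - block, step):
    for x in range(0, img.shape[1] - block, step):
        patch = img[y:y + block, x:x + block]
        digest = hashlib.md5(patch.tobytes()).hexdigest()
        if digest in seen:
            sy, sx = seen[digest]
            if abs(sy - y) + abs(sx - x) > block:
                matches.append(((sy, sx), (y, x)))
        else:
            seen[digest] = (y, x)

print(f"suspiciously identical block pairs: {len(matches)}")
for src, dst in matches[:5]:
    print(f"  block at {src} repeats at {dst}")
# Caveat: uniform areas (sky, blank walls) repeat naturally; always inspect
# the flagged regions by eye before drawing any conclusion.
```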
Compression Artifact Analysis: AI-generated content often shows unnatural compression patterns that differ from camera-originated raw files, revealing algorithmic rather than optical origins.
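One widely used way to probe compression history is error level analysis (ELA): re-save the image as JPEG and map where the re-compression error concentrates. Regions that respond very differently from their surroundings were often saved or generated separately. A minimal sketch, with an assumed file name and quality setting:

```python
# Minimal error-level-analysis (ELA) sketch; paths and quality are assumptions.
from PIL import Image, ImageChops
import numpy as np

original = Image.open("suspect.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg")

ela = np.asarray(ImageChops.difference(original, resaved), dtype=float)
print(f"mean re-compression error: {ela.mean():.2f}")
print(f"max  re-compression error: {ela.max():.0f}")

# Save an amplified error map for visual inspection; bright patches mark areas
# whose compression history differs from the rest of the frame.
Image.fromarray(np.clip(ela * 15, 0, 255).astype(np.uint8)).save("ela_map.png")
```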
Professional Detection Tools: TrueMedia.org’s technology can analyze suspicious media and identify deepfakes across audio, images, and videos. Examples of recent deepfakes flagged by TrueMedia.org include an alleged Donald Trump arrest photo and an alleged photo of President Biden with top military personnel.
30-Second Red Flag Check:
Before analyzing suspect images, practice on something foolproof:
Five-Minute Technical Verification (Standard Stories):
Each step catches a different type of AI mistake; think of it as running several independent tests to be sure of your conclusion.
Deep Investigation (High-Stakes Reporting):
Forensically — a free, comprehensive set of noise analysis tools with frequency-domain visualization.
Frequency domain analysis — Technical detection of mathematical patterns unique to AI.
Analysis of AI-generated audio deepfake of Donald Trump. Image: YouTube, via Henk van Ess
The Narrative: Voice cloning technology can replicate anyone’s voice from seconds of audio, but it leaves detectable traces of artificial generation in speech patterns, emotional authenticity, and acoustic characteristics. While achieving impressive accuracy, synthetic voices still struggle with replicating the subtle human elements that make speech truly authentic.
Real-World Audio Deception Cases: In March 2019, the CEO of a UK energy firm received a call from his “boss” with a perfect German accent, requesting a big money transfer. Only a suspicious second call from an Austrian number revealed the AI deception. More recently, political consultant Steven Kramer paid $150 to create a deepfake robocall impersonating US President Joe Biden, which urged people not to vote in New Hampshire’s 2024 Democratic primary.
Speed and Cost of Audio Fakes: According to the lawsuit against Kramer, the deepfake took less than 20 minutes to create and cost only $1. Kramer told CBS News that he received “$5 million worth of exposure” for his efforts.
Speech Pattern Red Flags: Lindsay Gorman, who studies emerging technologies and disinformation, told NBC News that there often are tells in deepfakes: “The cadence, particularly towards the end, seemed unnatural, robotic. That’s one of the tipoffs for a potentially faked piece of audio content.”
Linguistic Logic Failures: An earlier deepfake case revealed AI saying “pounds 35,000” — putting the currency type before numbers in an unnatural way that exposed the synthetic generation.
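Some of these cadence tells can be roughed out in code. The sketch below is a heuristic, not a detector: assuming a clip saved as a WAV file (the path is a placeholder) and the librosa library, it measures the pauses between bursts of speech, since unnaturally uniform pauses echo the robotic cadence described above. Treat the output as a prompt for careful human listening, never as a verdict.

```python
# Minimal cadence sketch; "suspect_call.wav" is a hypothetical path.
import numpy as np
import librosa

y, sr = librosa.load("suspect_call.wav", sr=16000)

# Non-silent intervals (start, end) in samples
intervals = librosa.effects.split(y, top_db=30)

pauses = []
for (s1, e1), (s2, e2) in zip(intervals[:-1], intervals[1:]):
    pauses.append((s2 - e1) / sr)

if len(pauses) > 2:
    pauses = np.array(pauses)
    cv = pauses.std() / max(pauses.mean(), 1e-6)
    print(f"pauses: mean {pauses.mean():.2f}s, std {pauses.std():.2f}s")
    print(f"coefficient of variation: {cv:.2f}")
    # Natural speech tends to have irregular pauses (higher variation); a very
    # low value is one more reason to slow down and verify, nothing more.
else:
    print("too few pauses detected to say anything useful")
```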
30-Second Red Flag Check:
Have a look at Hiya Deepfake Voice Detector, a simple Chrome plugin (you can use it 20 times a month). It passed the Trump-Biden video test:
It’s a Chrome extension that analyzes audio in real-time to determine if what you’re hearing is a real human voice or something cooked up by AI.
Here’s what it actually does:
Caveat: Since the plugin uses probabilistic algorithms, it won’t be 100% accurate in every case.
Five-Minute Technical Verification (Standard Stories):
Deep Investigation (High-Stakes Reporting):
AI-generated image purporting to show a news broadcast still about a Paris climate protest. Image: Henk van Ess
The Narrative: AI generates content based on visual patterns without understanding real-world context, temporal logic, or situational appropriateness. This creates content that looks convincing in isolation but falls apart under sensible scrutiny.
The Iran Prison Video Deception: A sophisticated AI-generated video claimed to show an Israeli missile strike on Iran’s Evin Prison, but it had been generated from a 2023 photograph. Key detection clues included seasonal mismatches (leafless shrubs in supposed summer footage), a detail-for-detail match with the old photograph that was too perfect to be a coincidence, and impossible timing.
30-Second Red Flag Check: AI creates visually convincing content, but often misses fundamental logical relationships between timing, place, and circumstance. In breaking news scenarios, trust your knowledge of the world to spot impossibilities that would take sophisticated analysis to verify later.
Five-Minute Technical Verification (Standard Stories): This deeper analysis leverages your research skills to cross-reference claims against verifiable facts. AI struggles with the interconnected nature of real-world events, creating content that passes visual inspection but fails logical scrutiny when compared to external data sources.
Deep Investigation (High-Stakes Reporting): For critical stories, treat contextual clues as pieces in a forensic puzzle, each requiring systematic verification against established facts. This comprehensive approach builds a probability matrix based on multiple logical inconsistencies rather than single definitive proof points.
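One concrete, if limited, cross-reference for this kind of verification is the file’s own metadata. The sketch below (the file name is a placeholder) prints whatever camera EXIF survives so it can be compared against the claimed date and device. Note that social platforms routinely strip EXIF, and AI-generated files usually carry none, so its absence proves nothing either way.

```python
# Minimal EXIF sketch; "suspect.jpg" is a hypothetical path.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("suspect.jpg").getexif()
if not exif:
    print("no EXIF metadata (common for AI output *and* for re-shared photos)")
else:
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        # Fields in the base IFD that help cross-check the claimed capture
        if name in ("DateTime", "Make", "Model", "Software"):
            print(f"{name}: {value}")
```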
AI-generated image purporting to show protesters marching in a city street. Image: Henk van Ess
The Narrative: AI can replicate human appearance but struggles with authentic human behavior, social dynamics, and natural interaction patterns. This creates detectable inconsistencies in crowd scenes, group dynamics, and individual behavior that trained observers can spot.
30-Second Red Flag Check (Breaking News): AI creates crowds that look realistic at first glance but betray their inauthenticity through unnatural behavior patterns. In breaking news situations, focus on whether people are acting like real humans would under the given circumstances, not digital actors following programmed behaviors.
Five-Minute Technical Verification (Standard Stories): This analysis leverages your understanding of human social dynamics to identify AI’s shortcomings in replicating authentic group behavior. Real crowds exhibit complex social patterns that AI training data cannot fully capture, creating detectable artificial uniformity in supposedly spontaneous gatherings.
Deep Investigation (High-Stakes Reporting): For critical stories, treat human behavior as anthropological evidence requiring a systematic analysis of social patterns. This comprehensive approach examines whether the complex web of human interactions could authentically occur in the alleged circumstances.
AI-generated image purporting to show the Belarusian president incongruously holding a cone of fries at an official meeting. Image: Henk van Ess
The Narrative: Our brains evolved pattern recognition over millions of years. AI’s patterns come from training data and algorithmic processes. When something violates natural expectations built into human perception, that gut feeling often represents the fastest and most reliable initial detector before technical analysis.
Real-World Success Stories: In 2018, social media users immediately flagged a viral “street shark” image during Hurricane Florence. The image was technically competent, but viewers felt it seemed wrong for the situation. Their instincts proved correct — reverse searches revealed digital insertion. Similarly, experienced journalists can sense when amateur footage looks suspiciously cinematic or when perfect documentation exists for supposedly spontaneous events.
Fun fact: alleged street-shark encounters have been circulating during hurricanes for more than a decade, though only one example has ever been verified.
30-Second Red Flag Check: Trust your evolutionary pattern recognition when time is critical. Look for the production quality paradox where amateur sources produce Hollywood-level content, or timing convenience where chaotic events are perfectly documented. Your ancient detection system often spots these violations before technical analysis can confirm them.
Five-Minute Technical Verification (Standard Stories): Transform intuitive feelings into systematic verification by examining specific elements that triggered your pattern recognition abilities. When your gut says something feels off, identify what specifically violates natural expectations to build a logical case.
Deep Investigation (High-Stakes Reporting): For critical stories, treat intuitive detection as the starting point for comprehensive verification. Your pattern recognition identified anomalies; now systematically examine each element that triggered suspicion to build an evidence-based assessment.
When to Trust Your Gut:
Bottom line: These seven detection categories — anatomical failures, physics violations, technical fingerprints, voice artifacts, contextual logic, behavioral patterns, and intuitive recognition — plus the new tool give journalists a comprehensive toolkit to assess content authenticity under deadline pressure. Combined with professional detection tools and updated editorial standards, we can maintain credibility. Fight fire with fire. Use AI to detect AI. And help preserve what’s left of our shared reality.
Dutch-born Henk van Ess cuts through AI to find stories in data. He applies that in investigative research and builds tools for public use like SearchWhisperer and AI Researcher. A trainer in newsrooms worldwide, including the Washington Post, Axel Springer, BBC, and DPG, he runs Digital Digging, where open source intelligence meets AI. He serves as an assessor for Poynter’s International Fact-Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN).