Author: Arturo Loaiza-Bonilla, MD, MSEd
In February 2013, the medical community was rocked by a striking demonstration of inattentional blindness: Even highly trained radiologists missed the outline of a gorilla digitally inserted into lung scans. It wasn’t a trick question or a hoax, but an experiment.
In this study, 24 radiologists were asked to perform a standard lung-nodule detection task using CT scans. Unbeknownst to them, researchers superimposed an image of a gorilla, 48 times larger than the typical nodule, onto the final set of images. Remarkably, 83% of the radiologists missed this glaringly obvious gorilla, even though eye-tracking revealed they had looked directly at its location. The takeaway is that, despite years of training, even expert observers are vulnerable to inattentional blindness; when intensely focused on looking for small nodules, their brains filtered out a highly unexpected (and very large) anomaly. This bizarre finding underscored the limits of human perception under cognitive load — the mental effort needed to process, remember, and respond to information when performing a task.
Fast-forward to the present, where artificial intelligence (AI) is now heralded as the next big disruptor. The question is clear: Are we still missing the “gorillas” hidden in plain sight — or will AI help us finally see what has been right before our eyes?
Although AI’s tumor-detection capabilities have attracted widespread attention, its role in radiology extends far beyond that single task. Cutting-edge tools can analyze CT scans for strokes, flag fractures in trauma imaging, and quantify subtle changes in oncologic lesions — all in near-real time. In oncology, AI can manage massive volumes of image data, pinpointing nuances in tumor size or density and amplifying what radiologists can see.
Furthermore, large-scale trials are now evaluating AI’s impact in real-world clinical settings. The recently published MASAI randomized controlled trial in The Lancet Digital Health explores how AI-powered mammography screening might improve early detection rates, potentially reducing the burden of interval cancers and false negatives.
To determine whether AI-driven tools can match or even outperform human readers in identifying suspicious lesions, the EDITH UK project has launched a monumental trial of AI in breast cancer screening. The results could ultimately streamline diagnosis and cut wait times for patients.
These high-profile studies indicate that AI is no longer a theoretical concept, but a cornerstone of modern diagnostics. However, carefully integrating these innovations is critical to ensuring they deliver on their promise of safer, faster, and more accurate patient care.
Of course, the potential benefits of AI are not without pitfalls. If the data used to train AI models come primarily from urban hospital patients, the algorithms may underperform for rural or marginalized populations. Likewise, data privacy demands vigilant oversight, especially as systems move to the cloud. As highlighted by Dr Eric Topol and others, it’s imperative that we maintain robust encryption, ensure strict access controls, and adhere to evolving privacy regulations so that AI remains a force for equitable healthcare — not a vector for new disparities. I am also wary of automation bias, in which we become overly reliant on algorithms for decision-making.
Then there’s the importance of the human element. I have frequently discussed Moravec’s paradox: the phenomenon in which tasks we assume require sophisticated cognition (such as reading detailed imaging) may be mastered by AI, while subtler human abilities, such as empathy and contextual reasoning, remain elusive to machines. AI might excel at identifying minute lesions on a CT scan, but the overall clinical picture — patient history, lifestyle factors, or psychosocial elements — requires human expertise.
This is where cognitive agility comes in: the ability to rapidly pivot between data interpretation, empathic communication, and advanced clinical judgment. Radiologists understand that a flagged abnormality must be considered in the context of the patient’s overall health status, which an algorithm alone — no matter how advanced — cannot replicate.
Although this discussion is framed as “radiologists vs AI,” the actual future is radiologists working with AI. AI can quickly highlight high-risk findings, gauge tumor burden, or generate preliminary interpretive reports, while radiologists evaluate these insights in the context of each patient’s individual narrative. This collaborative workflow has the potential to improve productivity, reduce mental fatigue, and increase accuracy, all of which can lead to better patient care. The benefits are already evident in global research: trials such as MASAI and EDITH UK shine a spotlight on AI’s potential to detect breast cancers earlier, which is a boon for patients anxiously awaiting test results. Rather than replacing clinicians, these tools serve as a powerful adjunct, helping us catch the “gorillas” we might otherwise overlook.
As more mechanical tasks are handled by AI, radiologists of the future will need to learn new skills. AI literacy involves understanding how algorithms work, identifying their limitations, and rapidly integrating evolving technologies into daily practice. This ability requires not just technical knowledge but also the critical thinking necessary to override an AI’s results when clinical judgment suggests a mismatch, ensuring that the physician remains the ultimate decision-maker.
Taking these four steps can help get us there:
Seek out AI training. Explore workshops, online certifications, or institutional programs that offer hands-on experience with AI tools.
Champion AI literacy. Advocate for residency and fellowship programs that incorporate ethics, data science, and critical appraisal of AI to prepare new radiologists for a collaborative, tech-driven future.
Demand equity and security. Engage with data scientists and regulatory bodies to minimize algorithmic bias and safeguard patient data, guaranteeing AI benefits everyone equally.
Stay vigilant. Recognize AI’s strengths while respecting human intuition. When making clinical decisions, use machine output as a starting point, not the ultimate endpoint.
From the notorious “gorilla experiment” that rattled radiology a decade ago to today’s groundbreaking trials such as MASAI and EDITH UK, our field has evolved dramatically in how we see — and occasionally fail to see — the hidden details in imaging. By blending AI’s computational power with radiologists’ nuanced and empathetic expertise, we can advance a future that’s both technologically robust and deeply patient-centered.
Yes, there are challenges, ranging from data security to biases in training datasets, but by tackling these head-on, we can harness AI’s full potential. When deployed responsibly, AI won’t just help us catch elusive “gorillas”; it could transform how we deliver care, one scan at a time.
How cool is that?
I’d love to hear your thoughts on the evolving partnership between AI and radiology. Feel free to reach out at Arturo.AI.MedTech@gmail.com. Let’s keep the conversation going — and ensure we never overlook what’s right before our eyes again.
Arturo Loaiza-Bonilla, MD, MSEd, is the co-founder and chief medical officer at Massive Bio, a company connecting patients to clinical trials using artificial intelligence. His research and professional interests focus on precision medicine, clinical trial design, digital health, entrepreneurship, and patient advocacy. Dr Loaiza-Bonilla serves as systemwide chief of hematology and oncology at St. Luke’s University Health Network, where he maintains a connection to patient care by attending to patients 2 days a week.