Contemporary life in a fast-paced world demands speed. Whether researching a project for work or a restaurant for brunch, we increasingly rely on artificial intelligence (AI) to find answers faster. But how do we know it will find the “right” answers, especially when we hear that, in some cases, it makes things up? Research reveals how we can sharpen our ability to distinguish AI-generated fact from fiction.
John Boyer and Wanda Boyer (2024) studied the impact of AI “hallucinations” on professional expectations of generative AI in a variety of areas.[i] They define a hallucination as the dangerous tendency of AI, in some cases, to generate incorrect answers that look authentic and authoritative, potentially misleading users. They recognize the emergence of techniques such as retrieval-augmented generation that can reduce AI hallucinations, but only at the lowest cognitive level, the retrieval of information.
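For readers curious what retrieval-augmented generation actually involves, the following is a minimal sketch, not the Boyers' method or any particular product's implementation. The idea is that the system first looks up trusted source text and then asks the model to answer from that text rather than from memory. The document store, the retrieve and build_grounded_prompt functions, and the scoring rule here are all illustrative placeholders.

# Minimal sketch of retrieval-augmented generation (RAG): retrieve trusted
# passages first, then ground the model's answer in them to curb fabrication.
# All names and data below are hypothetical examples, not a real system.

SOURCES = {
    "store_hours": "The museum is open Tuesday through Sunday, 9 a.m. to 5 p.m.",
    "admission": "General admission is $12; members and children under 6 enter free.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank stored passages by simple word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        SOURCES.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    """Combine retrieved passages with the question so the model is asked to
    answer only from the provided text instead of inventing details."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What are the museum's hours?"))
# In a real system, this prompt would be sent to a language model; grounding
# the reply in retrieved text is what reduces made-up answers.

Real deployments replace the keyword overlap with semantic search over large document collections, but the grounding step works the same way.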
Søren Dinesen Østergaard and Kristoffer Laigaard Nielbo (2023)[ii] view the concept of hallucinations a bit differently, perceiving the label as imprecise and stigmatizing. They point out that AI responses not justified by training data do not fit the definition of a hallucination, which, as a medical term, refers to a sensory perception experienced without an external stimulus. They note that not only do AI models lack sensory perception, but the errors they make are based on input data, which is a form of external stimulus. And regarding stigma, Østergaard and Nielbo note that in medicine, hallucinations are associated with mental illness such as schizophrenia, where they are a hallmark symptom.
Context matters, too. AI hallucinations may undermine the evaluation of AI services in high-risk areas such as crisis self-rescue, where errors can have life-threatening consequences.[iii] But for users simply seeking everyday informational answers, whether personal or professional, there are ways to spot false information faster.
Effective use of AI requires knowing how the technology works. There is a difference between searching for concrete facts to cite as authority and using an AI platform to brainstorm. Asking AI for out-of-the-box vacation ideas or suggestions for an impressive first date does not generate factual misinformation. It is safer to surf for ideas than for authority. But if you need to take a deeper dive, forewarning may protect you from falsehoods.
Yoori Hwang and Se-Hoon Jeong (2025) studied the impact of forewarning on unintentional acceptance of AI hallucinations, which they define as statements that seem plausible but are factually incorrect.[iv] They found that forewarning about AI hallucinations could reduce acceptance of AI-generated misinformation among users with a high preference for effortful thinking.
Many users resist adding an extra layer of diligence to information searching because the whole point of using AI is to save time. But reviewing AI-generated responses with situational awareness brings an attention to detail that ultimately saves both time and embarrassment as people learn how different AI platforms respond to queries.
Just as we filter information from live sources we don't know well, a trust-but-verify mentality online makes information gathering more likely to be both quick and accurate. If you choose to take the plunge into this new research technology, surf responsibly.
References
[i] Boyer, John, and Wanda Boyer. 2024. “A Strategic Roadmap for Mitigating Generative Artificial Intelligence Hallucinations.” Cureus Journal of Computer Science 1 (1).
[ii] Østergaard, Søren Dinesen, and Kristoffer Laigaard Nielbo. 2023. “False Responses from Artificial Intelligence Models Are Not Hallucinations.” Schizophrenia Bulletin 49 (5): 1105–7. doi:10.1093/schbul/sbad068.
[iii] Song, Yan, Mingjia Cui, Fei Wan, Zhengwang Yu, and Jingui Jiang. 2025. “AI Hallucination in Crisis Self-Rescue Scenarios: The Impact on AI Service Evaluation and the Mitigating Effect of Human Expert Advice.” International Journal of Human-Computer Interaction, April.
[iv] Hwang, Yoori, and Se-Hoon Jeong. 2025. “Generative Artificial Intelligence and Misinformation Acceptance: An Experimental Test of the Effect of Forewarning about Artificial Intelligence Hallucination.” Cyberpsychology, Behavior, and Social Networking, February.