“I think that’s them.”
Those four words, voiced during a police lineup, have the potential to change a person’s life forever. But what if they’re wrong? What if memory, often viewed as inherently reliable, is biased?
In my psychology and law class, I learned just how large a role memory plays in the legal system and how fallible it can be. In fact, mistaken eyewitness identifications are among the most prevalent causes of wrongful convictions, currently accounting for 75% of DNA-exonerated cases. That statistic is more than a data point; it is a grave reminder of the individuals spending years in prison for crimes they did not commit.
Take the case of Scott Minton, exonerated in January of this year after spending thirty years behind bars for a crime he had no involvement in. Despite the absence of any physical evidence linking him to the scene, eyewitness testimony sealed his fate. Minton’s story echoes countless others, from the well-known case of Ronald Cotton in 1985 to many that never make the headlines.
Through her research on how people evaluate eyewitness identifications, Lauren Kelso of the Dodson Lab at the University of Virginia found that “people are only somewhat better than chance at making these discrimination decisions,” emphasizing just how critical it is to find a solution.
This left me asking: How can we do better? In an age where artificial intelligence (AI) is transforming the technological world, can it also help improve the justice system? Recent empirical evidence suggests that AI may be able to reduce the biases within eyewitness identification. Used correctly, it could revolutionize the legal system, preventing wrongful convictions and restoring confidence in one of the most critical elements of the legal process. Implemented incorrectly, however, AI could introduce even greater biases into the eyewitness identification process.
One such bias, which Kelso and colleagues (2024) call the featural justification effect (FJE), occurs when people perceive eyewitness identifications justified by observable features (e.g., “I remember his eyes”) as less accurate than those based on overall recognition (e.g., “I remember him”). In their study, participants viewed eyewitness lineups and rated how accurate each identification likely was; under certain conditions, they were also given AI-generated accuracy predictions. The results were telling. AI assistance effectively eliminated the FJE, but only for participants who found the AI useful and trusted the technology. Those who rated the AI as less useful continued to show the bias, suggesting that while AI can help reduce cognitive biases in eyewitness identifications, it is only effective if users are willing to rely on it. Kelso speculates that this drawback could be mitigated by increasing users’ overall engagement and familiarity with AI, diminishing negative preconceptions about the technology.
This question of AI reliability has been explored further by Kleider-Offutt and colleagues (2024), who tested whether facial recognition software (FRS) could outperform human participants at suspect identification. Participants viewed six crime videos featuring perpetrators of varying races and attempted to identify the suspects under either high- or low-quality visual conditions. At the same time, Google’s facial recognition algorithm, FaceNet, scored the facial similarity between the perpetrator in each video and the same lineups shown to the human participants, with higher similarity scores corresponding to higher accuracy. The results showed that the FRS outperformed the human participants, correctly identifying perpetrators more often and making fewer incorrect identifications, even under poorer visual quality. This suggests that AI could serve as a valuable courtroom tool, reducing errors in facial recognition. And because such models often outperform human judgment, AI-informed training could improve human evaluations of eyewitness identifications, an approach that keeps human decision-makers central to the process while benefiting from AI’s analytical strengths.
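For readers curious about the mechanics, here is a minimal sketch of how FaceNet-style systems compare faces: each image is mapped to a numeric vector (an “embedding”), and the similarity between two vectors stands in for the similarity between two faces. Everything below is invented for illustration, not the study’s actual pipeline; real models compute embeddings of roughly 128 dimensions from the images themselves, and FaceNet proper ranks faces by Euclidean distance between normalized embeddings, which is closely related to the cosine similarity used here.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_lineup(probe: np.ndarray, lineup: dict) -> list:
    """Score each lineup member against the probe face, best match first."""
    scores = [(name, similarity(probe, emb)) for name, emb in lineup.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Invented 4-dimensional "embeddings" for illustration only; a real model
# would produce these vectors by running each face photo through a network.
probe = np.array([0.9, 0.1, 0.3, 0.2])  # face seen in the crime video
lineup = {
    "filler_1": np.array([0.1, 0.8, 0.2, 0.5]),
    "suspect":  np.array([0.85, 0.15, 0.35, 0.2]),
    "filler_2": np.array([0.4, 0.4, 0.4, 0.4]),
}
for name, score in rank_lineup(probe, lineup):
    print(f"{name}: similarity = {score:.3f}")
```

In this toy example, the “suspect” embedding sits closest to the probe and therefore ranks first; in the study, similarity scores of this kind were compared against the lineup choices made by human participants.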
Despite its promise, however, AI is still an emerging field and remains far from perfect, especially when it comes to racial bias. Facial recognition software, in particular, has been shown to produce significantly higher false-positive rates for African American and East Asian faces than for White faces. This bias stems largely from a lack of diversity in the databases used to train these models. As Livingston (2020) points out, AI models are often built without input from people of culturally diverse backgrounds, using databases in which White and male faces are overrepresented. Excluding a wide range of demographics causes the technology to perform worse at identifying matches for Black and female faces.
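To make the idea of differential error rates concrete, here is a hedged sketch of how an audit of a face-matching system might tally false positives by demographic group: a false positive occurs when the system declares a “match” between two different people. The records and numbers below are entirely invented for illustration and are not drawn from any of the studies cited here.

```python
from collections import defaultdict

# Hypothetical audit records: (group, system_said_match, actually_same_person)
trials = [
    ("White",      True,  False), ("White",      False, False),
    ("White",      False, False), ("White",      True,  True),
    ("Black",      True,  False), ("Black",      True,  False),
    ("Black",      False, False), ("Black",      True,  True),
    ("East Asian", True,  False), ("East Asian", False, False),
    ("East Asian", True,  False), ("East Asian", True,  True),
]

false_pos = defaultdict(int)  # different people, but system said "match"
negatives = defaultdict(int)  # all pairs that are truly different people

for group, said_match, same_person in trials:
    if not same_person:
        negatives[group] += 1
        if said_match:
            false_pos[group] += 1

for group in negatives:
    print(f"{group}: false-positive rate = {false_pos[group] / negatives[group]:.0%}")
```

An audit like this, run at scale on real image pairs, is how researchers document that error rates differ across groups, and how progress toward closing those gaps can be measured.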
That said, progress is being made. Researchers are calling attention to the importance of designing AI with inclusive datasets and multicultural development teams, aiming to reduce these racial disparities in the facial identification process. Incorporating diversity both in the creation and in the use of AI is key to ensuring that the technology serves justice rather than reinforcing inequality.
Ultimately, AI should not be a replacement for human judgment and the ethical responsibility the justice system demands, but rather a tool to enhance them. Kelso describes this goal as maintaining “complementarity,” where the human-AI team performs better than either humans or the AI model alone. Used this way, the technology can improve the accuracy of eyewitness identifications and reduce wrongful convictions by identifying and correcting critical blind spots in human judgments. In a world where technological advances are occurring rapidly, it is crucial that we remain thoughtful about how and when AI is used within the legal system.
As I sat in class, startled by the number of wrongful convictions caused by fallible eyewitness accounts, I never imagined that artificial intelligence might be part of the solution. But stories like Scott Minton’s remind us what is at stake: real lives and real futures. By acknowledging the shortcomings of human memory, addressing the biases in current methods of obtaining eyewitness identifications, and building trust in emerging technologies, we can move forward and build a better legal system.