By Matthew Feinberg
Artificial intelligence (AI) is rapidly transforming the modern legal landscape, offering tools for research, drafting, document review, jury selection, and even case-outcome prediction. Used correctly and with robust safeguards, AI may help some attorneys and litigants streamline litigation projects. Misused, however, or deployed without adequate quality control measures, AI poses serious risks and dangerous pitfalls. Recently, for instance, dozens of attorneys have faced sanctions in federal court for filing briefs containing fake (“hallucinated”) case citations or incorrect statements of the law generated by AI. For similar reasons, the Government Accountability Office (GAO) has admonished protesters over the use of AI to draft bid protests. Outside the parameters of legal research and drafting, litigants are now using AI for more novel and potentially even more troubling purposes. One such example is a recent Arizona criminal case in which the family of a crime victim offered an AI-generated victim impact statement for use at sentencing. The use of AI to provide a witness statement raises serious concerns about the accuracy of the information provided and the fairness of the proceedings, prompting the question: Does the use of AI in litigation represent true artificial intelligence, or is it artificial interference preventing a just outcome?
Recent Applications
Today, AI is commonly used by lawyers and litigants to streamline otherwise time-consuming legal tasks like legal research, brief writing, and synthesizing or summarizing voluminous document productions. In addition, AI tools are now being integrated into client-facing interfaces such as chatbots for legal intake. In theory, these tools can help assess the merits of potential cases and streamline client onboarding. However, as noted above, AI remains a flawed practice companion. Beyond hallucinated case citations and incorrect legal analysis, the use of AI introduces data privacy concerns and risks misadvising individuals through overly generalized conclusions. Ensuring human oversight in these interactions remains critical to maintaining legal integrity.
Earlier this year, an Arizona criminal case may have shifted the legal-AI landscape dramatically. In 2021, in Chandler, Arizona, Christopher Pelkey was shot by Gabriel Paul Horcasitas during a road rage incident. Horcasitas was eventually convicted of the killing. At sentencing, crime victims (or their families, as may be appropriate) generally are entitled to give victim impact statements, i.e., written or oral statements describing how the crime affected their lives, which are submitted to the judge to consider during sentencing. For Horcasitas’s sentencing in May 2025, Pelkey’s sister prepared and played for the sentencing judge an AI-generated video that depicted her deceased brother speaking to the camera as if he were offering his own words. To create the video, she used AI programs to combine photographs, videos, and audio clips. She altered portions of his image, such as removing his sunglasses and trimming his beard, and she recreated his laugh. The resulting image of her brother recited a script that she wrote. Experts believe the case represented the first instance in which an AI-generated video of a victim was used as a victim impact statement.
The judge commented on his appreciation for the video, then sentenced Horcasitas to 10.5 years in prison for manslaughter. Although the defense attorney does not appear to have objected to the use of the video at the sentencing hearing (possibly dooming any appeal), questions remain as to whether the video was an appropriate victim impact statement and fair to the defendant. As noted, the AI video was not actually the victim himself; it was an approximation, bearing an altered image and a statement written by someone else. Would the victim actually have given the statement attributed to him? Would he have come across as being as credible, likable, and admirable as the video made him out to be?
Victim impact statements are not formal evidence, and they are submitted to a judge, not a jury. Therefore, the risk of the ultimate decision-maker giving undue weight to a statement manufactured through AI is somewhat lessened. That said, if AI can be used for victim impact statements—to create or approximate facts, to manipulate emotion, or to drive outcomes—it could open the door to risks of undue influence and unfairness.
Potential Future Applications
If an AI-generated video can be used for a victim impact statement, it is no great leap to expect attorneys will attempt to use AI to assist in similar contexts, if a court allows it. For instance, a litigant could offer an AI-generated video of a witness’s deposition testimony. Under existing rules of evidence, most states allow deposition transcripts of opposing parties to be read into the record without that party testifying live. In some circumstances, third-party witness testimony can be read into the record when that witness is unavailable to testify. But AI-generated video or audio, complete with synthesized voice, tone, and body language, adds a new layer of complexity and risk. Jurors and judges often assess credibility based not just on words, but on a witness’s demeanor and delivery. An AI-generated version might convey emotion or nuance that the real witness never expressed, thereby changing the perceived truthfulness or weight of testimony. This could tip the scales in close cases, threatening the overall fairness of proceedings.
In other trials, litigants may attempt to use AI-enhanced or -generated versions of evidence to provide a clearer picture of their story of the facts. In a Seattle-based trial, for instance, a criminal defendant attempted to offer an AI-enhanced version of a smartphone video as evidence, claiming the original video was low resolution and blurry, whereas the AI video offered a “more attractive product for a user.” The court ultimately denied admission of the video because AI enhancement is not seen as sufficiently reliable in the relevant scientific community. Over time, however, that may change. AI technology inevitably will improve to a point where industry experts generally consider it reliable. When that happens, AI enhancement will be susceptible to the same risks as AI-generated witness testimony. Are the facts actually as the video depicts them? Or are they manipulated and colored by a litigant’s self-serving narrative? Therein lies the risk of allowing AI-generated witness testimony or AI-enhanced evidence in litigation. The ability to use AI to manipulate information to enhance a litigant’s storytelling, or to create evidence that does not actually exist, crosses the line from artificial intelligence to artificial interference with the opponent’s right to a fair trial.
Key Takeaways for Litigants