Analysis

When Conversations with AI Become Evidence

Ava Malkin / Mar 20, 2026

Ava Malkin is a Communication student at Cornell University.

Photo by Zulfugar Karimov on Unsplash.

When police charged 29-year-old Jonathan Rinderknecht with starting the Pacific Palisades fire in Los Angeles in late 2025, they also ushered in a new category of evidence. Justice Department officials cited Rinderknecht’s ChatGPT history, revealing questions like “Are you at fault if a fire is lift [sic] because of your cigarettes?” and generated images of a burning city. Rinderknecht’s chats with the artificial intelligence (AI) platform mark a new frontier in digital evidence: one in which AI-generated content and user interactions begin to enter the courtroom. As AI evidence increasingly shapes legal proceedings, plaintiffs, defendants, judges, and laypeople alike are witnessing a techno-legal inflection point that raises difficult but essential questions about privacy, admissibility, and regulation.

Unfortunately, this case is not anomalous. Many users likely turn to AI chatbots for legal counsel. With 900 million weekly active ChatGPT users, it seems inevitable that some would seek advice from the system. In a September 2025 publication, OpenAI reported that roughly half of its messages involve “asking,” and over 10% involve “expressing” personal reflections. These usage patterns demonstrate how a seemingly innocent tool can become a source of exposure, blurring the line between casual conversation and evidence. What begins as informal or reflective dialogue can quickly transform into material with serious legal implications.

AI evidence in the courtroom

These chats become more problematic when counseling-like exchanges turn criminal. Aside from Rinderknecht, other cases have cited AI chatbot histories as evidence of alleged crimes. Ja-Zion Robertson, an 18-year-old in Virginia, was sentenced to 25 years for first-degree murder in September 2025 after asking Snapchat’s My AI bot, “What if I shot them if they step on my property with hostile intent?”

Likewise, investigators linked 19-year-old Ryan Schaefer to a vandalism spree in October 2025 through ChatGPT messages such as, “qill [sic] I go to jail,” “I was smashing the windshields of random fs cars,” and “I got away w it last year. And I don’t think there’s any way they could know my face.” Such cases show AI chat histories to be a powerful, and for defendants dangerous, source of evidence that can reveal intent or motive relevant to establishing mens rea. As these systems operate on the surface web and expand across industries, AI risks becoming a 24/7 repository for potentially incriminating statements.

AI chatbots’ ability to surface private information is not limited to criminal cases. Chat logs have also been used in civil matters. For instance, in November 2025, a judge ordered OpenAI to turn over 20 million conversations to lawyers representing The New York Times Company in a copyright infringement case. The court also directed the company “to preserve and segregate all output log data that would otherwise be deleted moving forward,” establishing AI interactions as discoverable. Additionally, legal scholars warn that personal injury litigation could draw on chat histories, particularly when users treat chatbots like therapists, presenting both evidentiary opportunities and privacy concerns.

Earlier battles over digital evidence

Regardless of their source or subject, AI records represent a significant shift in evidentiary practice, extending beyond more familiar sources such as text messages, social media posts, or home voice recordings. Yet no established rules govern their use or standardize their admissibility.

Law enforcement has long expanded its investigative tools by probing the boundaries of the right to privacy. This dynamic dates back to Katz v. United States (1967), where the Supreme Court held that warrantless electronic eavesdropping violated a reasonable expectation of privacy. Over the past six decades, courts have increasingly accepted a wide range of technological evidence.

Internet search histories, for instance, have been used as circumstantial evidence in criminal prosecutions. This includes the widely cited case of a New Jersey nurse accused of murdering her husband after investigators discovered searches such as “How to commit murder.” Courts have also incorporated voice data from smart home devices, such as Amazon Alexa or Echo. In one Arkansas case, Amazon, after initially refusing the prosecution’s request, provided data and recordings from an Echo device located in the home of a defendant charged with the first-degree murder of his friend. Wearable technologies have also been used as evidence.

Against this backdrop, AI records could mark the next stage in the state’s efforts to tap into the public’s digital lives. The question now facing courts, lawyers, and technologists is whether AI interactions introduce distinctive risks unlike those posed by earlier technologies.

The debate over AI-related evidence

Some commentators argue that AI-related records represent a powerful new tool for investigators. Rolling Stone culture writer Miles Klee wrote that chatbot records are “a potential bonanza for law enforcement,” noting that they provide opportunities to prove intent in ways that earlier forms of digital evidence did not. Unlike other online activities, generative AI systems often encourage users to articulate their thoughts and questions and may affirm or build on users’ obsessions and ideas.

Other commentators warn that the growing use of AI evidence raises serious privacy concerns. Unlike conversations with lawyers or therapists, AI platforms offer no legal confidentiality. When legal process is lawfully served, AI companies can be compelled to hand over communications, and traditional privileges, such as attorney-client or therapist-patient, do not apply. Public awareness of these limitations is likely low. In practice, AI platforms often store chat logs by default, making records durable and anonymity extraordinarily difficult, frequently without users realizing it.

As a result, these saved and identifiable records can become discoverable in lawsuits, provided they are deemed relevant and not outweighed by countervailing considerations. Chat logs may therefore be subject to subpoena, seizure under warrant, or production during discovery. Even when a chat contains an admission of wrongdoing, these records typically function as circumstantial evidence pointing toward a confession rather than compelled self-incrimination, since individuals retain the ability to invoke their Fifth Amendment rights in court.

While those constitutional rights remain, many legal scholars question the practical adequacy of these safeguards in an era when algorithmic evidence can shape investigative and legal narratives. Building on this concern, some legal academics have suggested that our “existing legal frameworks are inadequate” for addressing AI-generated evidence, noting that AI systems are not known for ensuring fairness or equality in their reasoning.

This concern has prompted a limited but growing body of scholarship examining the reliability challenges and implications of AI-related evidence in trials. For instance, legal experts writing for the New Journal of European Criminal Law claim that “there must be no doubt over its authenticity and integrity” for a piece of evidence to be deemed reliable; they express concern that the reliability of AI evidence can be undermined by miscoding, opacity issues, the human users operating the tools, or even the “raw” digital data itself.

Beyond issues of data processing, there is a broader scholarly consensus that treating AI systems as inherently reliable reflects a fundamental misunderstanding of their limitations, which make their reliability difficult to prove or disprove. These limitations include bias, “function creep” (the use of a technology beyond its original intent), lack of transparency, lack of explainability, and insufficient objective testing. It remains unclear whether the adversarial system used in the US is equipped to meaningfully address these reliability uncertainties.

The future of evidence in the age of AI

Debates over how best to address these challenges of supervision, surveillance, contextual interpretation, and reliability increasingly center on whether lawmakers should impose new safeguards on AI platforms. Proposals include extending legal privilege to conversations with AI companies and/or requiring confidentiality warnings.

Whether through regulation, institutional policy, or individual caution, the need for clearer standards is becoming increasingly evident. As courts continue to define AI’s legal role, the balance between investigative utility and privacy risk remains unsettled. Addressing these gaps will likely require action and participation across the digital ecosystem.

AI companies, for example, may have to adjust their policies to be transparent about confidentiality and clearer about data retention. Businesses may need to establish internal AI policies that guide employees to treat chatbots like other messaging platforms and workplace communication. Individual users may have to approach every chatbot conversation as potentially public or evidentiary.

These shifts underscore a broader transformation in digital evidence. As AI embeds itself in everyday life, it simultaneously expands investigators’ reach into private life and exposes new legal and technological vulnerabilities that existing evidentiary procedures were unprepared to address.
