Artificial Irony: Misinformation Expert’s Testimony Has Fake AI Citations

The thirteenth article in Doug Laney's series on AI and data strategy
A Stanford professor’s testimony containing fake ChatGPT citations exposes the business risks of AI “hallucinations” and highlights the need for mitigation strategies.

Editor’s Note: This article was originally authored by our colleague and BARC Fellow, Douglas Laney, and was first published on Forbes.com. We are republishing it with full permission, as we believe its insights are highly relevant to the topics we cover and valuable for our community.

In a moment of profound artificial irony, a Stanford professor and misinformation expert submitted expert testimony that included fake citations generated by ChatGPT. The incident, which occurred in a lawsuit involving a media organization, perfectly encapsulates the risks and challenges of relying on generative AI for factual research.

The professor had used ChatGPT to help prepare the testimony but failed to verify the sources it provided. As has become a well-documented issue, the AI model “hallucinated” several academic articles and authors that did not exist, creating plausible but entirely fabricated citations. This led to a motion to strike the testimony and caused significant professional embarrassment.

This case is a critical lesson for any business integrating AI into its workflows. The tendency of large language models to hallucinate is not a bug to be fixed but a fundamental characteristic of their probabilistic nature: they are designed to generate convincing text, not to serve as factually accurate databases.

For organizations, this means that human oversight is non-negotiable. Any factual claim, citation, or data point generated by an AI must be rigorously verified before it is used in any business context, whether it’s a marketing document, a legal brief, or an internal report. Failing to do so exposes the organization to risks of misinformation, legal liability, and a loss of credibility.
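
To make "rigorously verified" concrete, here is a minimal sketch of one automatable first pass: checking whether a cited title and author can be matched at all in Crossref's public scholarly metadata API (a real, free service at api.crossref.org). The function names, the sample citations, and the 0.8 title-overlap threshold are illustrative assumptions, not an established tool; a miss only flags a citation for human review, and a hit does not prove the citation is accurate.

```python
# Sketch: flag AI-generated citations that cannot be matched in Crossref's
# public metadata index. This is a triage aid for a human reviewer, not a
# verdict -- a miss may be a lookup failure, and a hit may still be wrong.
import requests

CROSSREF_API = "https://api.crossref.org/works"  # real public endpoint

def find_candidate_matches(title: str, author: str, rows: int = 3) -> list[dict]:
    """Query Crossref for works whose metadata resembles the cited title/author."""
    params = {
        "query.bibliographic": f"{title} {author}",  # Crossref bibliographic search
        "rows": rows,
    }
    response = requests.get(CROSSREF_API, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["message"]["items"]

def looks_verifiable(title: str, author: str) -> bool:
    """Rough check: does any candidate's title substantially overlap the claim?"""
    cited = set(title.lower().split())
    for item in find_candidate_matches(title, author):
        found = set(" ".join(item.get("title", [])).lower().split())
        # 0.8 word-overlap threshold is an arbitrary illustrative choice
        if cited and len(cited & found) / len(cited) >= 0.8:
            return True
    return False

if __name__ == "__main__":
    citations = [
        ("Attention Is All You Need", "Vaswani"),            # real, well-known paper
        ("Deepfakes and the Epistemics of Trust", "Nobody"),  # hypothetical, plausible-sounding
    ]
    for title, author in citations:
        status = "candidate match found" if looks_verifiable(title, author) else "NO MATCH - verify manually"
        print(f"{title!r}: {status}")
```

The specific service matters less than the gate itself: DOI resolution, Google Scholar, or the cited journal's own index would serve equally well, as long as every AI-supplied citation passes through a check a human can audit before the document leaves the organization.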


The challenge of AI “hallucinations” is more than a technical quirk; it creates significant legal and intellectual property risks. Who is liable when an AI generates false or copyrighted information?

About the author(s)

Senior Analyst Data & AI

Douglas Laney is a renowned thought leader and advisor on data, analytics, and AI strategy. He is a best-selling author, as well as a featured speaker and business school professor. Laney has been recognised repeatedly as a top-50 global expert on data-related topics and is a three-time Gartner annual thought leadership award recipient. He originated the discipline of infonomics – recognising and treating data as an actual economic asset. Laney continues to focus on helping organisations and their leadership innovate with and optimise the value of their data assets.
