When Insights Lack Oversight: LA Times AI Bot Coddles The Klan

Seventh article of Doug Laney's series on AI and data strategy
The LA Times’ AI feature generated content that downplayed the history of the KKK, serving as a lesson on the reputational and ethical risks of generative AI without human oversight.

Editor’s Note: This article was originally authored by our colleague and BARC Fellow, Douglas Laney, and was first published on Forbes.com. We are republishing it with full permission, as we believe its insights are highly relevant to the topics we cover and valuable for our community.

The LA Times’ new AI-powered “Insights” feature, designed to provide historical context on various topics, backfired when it generated content that downplayed the history of the KKK. This incident serves as a critical lesson for any business on the reputational and ethical risks of deploying generative AI without rigorous oversight.

The AI bot, when prompted, produced summaries that were criticized for being overly sanitized and for omitting key facts about the organization’s violent history. The public backlash was swift, forcing the newspaper to take the feature offline and issue an apology. This highlights a significant challenge with today’s Large Language Models (LLMs): they are not arbiters of truth and can reflect the biases, inaccuracies, and gaps present in their training data.

For businesses, the key takeaway is that you cannot simply “plug in” a generative AI model and expect it to work flawlessly, especially for sensitive topics. Human oversight, fact-checking, and “red teaming” (actively trying to make the AI fail) are not optional—they are essential components of a responsible AI strategy. Without these guardrails, companies risk producing content that is not only incorrect but also deeply offensive, causing significant damage to their brand and credibility.


The experience of the LA Times illustrates how a single point of failure can create significant reputational risk. The underlying lesson extends beyond content moderation to all critical business systems. For professionals interested in a structured approach to building a more resilient enterprise, our BARC+ subscription offers unrestricted access to our full research library. A relevant article on this topic explores the lessons learned from major operational crises.

In this series

Premium content.
Unlock with BARC+
948,00 € (plus statutory VAT / year)
For anyone who wants to know what really drives the data & analytics world.

Discover more content

About the author(s)

BARC Fellow

Douglas Laney is a renowned thought leader and advisor on data, analytics, and AI strategy. He is a best-selling author, as well as a featured speaker and business school professor. Laney has been recognised repeatedly as a top-50 global expert on data-related topics and is a three-time Gartner annual thought leadership award recipient. He originated the discipline of infonomics – recognising and treating data as an actual economic asset. Laney continues to focus on helping organisations and their leadership innovate with and optimise the value of their data assets.

Our newsletter is your source for the latest developments in data, analytics, and AI!