Editor’s Note: This article was originally authored by our colleague and BARC Fellow, Douglas Laney, and was first published on Forbes.com. We are republishing it with full permission, as we believe its insights are highly relevant to the topics we cover and valuable for our community.
The LA Times’ new AI-powered “Insights” feature, designed to provide historical context on various topics, backfired when it generated content that downplayed the history of the Ku Klux Klan (KKK). This incident serves as a critical lesson for any business on the reputational and ethical risks of deploying generative AI without rigorous oversight.
When prompted, the AI tool produced summaries that were criticized as overly sanitized and as omitting key facts about the organization’s violent history. The public backlash was swift, forcing the newspaper to take the feature offline and issue an apology. The episode highlights a fundamental limitation of today’s Large Language Models (LLMs): they are not arbiters of truth, and they can reproduce the biases, inaccuracies, and gaps present in their training data.
For businesses, the key takeaway is that you cannot simply “plug in” a generative AI model and expect flawless output, especially on sensitive topics. Human oversight, fact-checking, and “red teaming” (actively trying to make the AI fail) are not optional; they are essential components of a responsible AI strategy. Without these guardrails, companies risk publishing content that is not only incorrect but also deeply offensive, causing lasting damage to their brand and credibility.
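To make the “guardrails” point concrete, here is a minimal sketch of a human-in-the-loop check that blocks automatic publication of AI-generated text whenever a sensitive topic is detected. It is an illustration under stated assumptions, not a description of any vendor’s product: the names (screen_generated_text, SENSITIVE_TOPICS, ReviewDecision) are hypothetical, and the upstream topic detection is assumed to exist elsewhere in the pipeline.

```python
# Minimal sketch of a pre-publication guardrail for AI-generated text.
# All names here (screen_generated_text, SENSITIVE_TOPICS, ReviewDecision)
# are hypothetical illustrations, not part of any specific vendor's API.

from dataclasses import dataclass

# Topics whose AI-generated coverage should always go to a human editor first.
SENSITIVE_TOPICS = {"hate groups", "violence", "historical atrocities", "elections"}


@dataclass
class ReviewDecision:
    publish: bool
    reason: str


def screen_generated_text(text: str, detected_topics: set[str]) -> ReviewDecision:
    """Block automatic publication when sensitive topics are detected.

    `detected_topics` is assumed to come from an upstream classifier or
    keyword pass; the point is that the model's own output never ships
    unreviewed when it touches these topics.
    """
    flagged = detected_topics & SENSITIVE_TOPICS
    if flagged:
        return ReviewDecision(
            publish=False,
            reason=f"Sensitive topics {sorted(flagged)} require human editorial review.",
        )
    return ReviewDecision(publish=True, reason="No sensitive topics detected.")


if __name__ == "__main__":
    draft = "AI-generated historical summary..."
    decision = screen_generated_text(draft, detected_topics={"hate groups"})
    print(decision)  # publish=False, routed to a human editor
```

The design choice worth noting is that the check sits as a hard gate in the publishing workflow rather than as an instruction in the model’s prompt, so a failure inside the model cannot bypass it; human review remains the final step for anything flagged.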
The experience of the LA Times illustrates how a single point of failure can create significant reputational risk. The underlying lesson extends beyond content moderation to all critical business systems. For professionals interested in a structured approach to building a more resilient enterprise, our BARC+ subscription offers unrestricted access to our full research library. A relevant article on this topic explores the lessons learned from major operational crises.