How to Prevent Hallucinations in Generative AI Systems

Generative AI promises automation and innovation, but hallucinations threaten to undermine trust in business-critical systems. At DATA festival online, Google Engineering Leader Nitesh Singhal shared a proven framework to transform unreliable models into dependable business assets.

At this year’s DATA festival online, Nitesh Singhal, Engineering Leader at Google, introduced a methodical perspective on generative AI. With five years of experience building large-scale machine learning systems, Singhal addressed a primary challenge for data and AI leaders: model hallucinations. While generative AI can automate complex tasks and accelerate workflows, it can also produce incorrect information with confidence.  

Singhal’s session provided a framework for transforming unreliable AI prototypes into trustworthy business systems. His message was clear: hallucinations are solvable, but only if you build the right foundation.

Why Do Generative Models Produce Incorrect Outputs?

Hallucinations occur when an AI generates outputs that sound plausible but are factually incorrect. Singhal offered clear examples, such as an AI claiming that Einstein won an Oscar or that Sydney is the capital of Australia.

He explained that these are not software bugs but byproducts of how large language models operate. The models function as advanced autocomplete systems that predict the next word based on statistical patterns in their training data, without a genuine understanding of factual reality. 

The business risks are significant. A hallucinating AI can damage brand reputation, cause financial losses, and create serious legal exposure. Singhal illustrated these risks with recent examples: a large healthcare company issued a public apology after its AI provided false medical advice, and a major law firm cited fabricated cases.

Four Pillars for Building Trustworthy AI Systems 

The good news: this can be avoided. Building reliable AI is like constructing a house: the foundation matters. Singhal presented four pillars that provide this structural integrity.


Data Integrity

As Singhal put it, “garbage in, garbage out.” Outdated or biased data will lead to poor-quality outputs. For example, an AI trained on decade-old credit data cannot provide useful financial advice today. The data must be kept current, relevant, and secure.
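To make the point concrete, here is a minimal sketch of a freshness and source check applied before documents feed a model's knowledge base. The two-year cutoff, the document fields, and the "vetted" source label are assumptions for this illustration, not prescriptions from the talk.

```python
from datetime import datetime, timedelta

# Hypothetical document records; the 'updated' and 'source' fields are
# an assumed schema for illustration only.
documents = [
    {"id": "credit-guide-2014", "updated": datetime(2014, 3, 1), "source": "archive"},
    {"id": "credit-guide-2025", "updated": datetime(2025, 1, 15), "source": "vetted"},
]

MAX_AGE = timedelta(days=365 * 2)   # keep data current: drop anything older than ~2 years
TRUSTED_SOURCES = {"vetted"}        # keep data relevant and secure: only approved sources

def is_fit_for_use(doc: dict) -> bool:
    """Return True only for documents that are both recent and from a trusted source."""
    fresh = datetime.now() - doc["updated"] <= MAX_AGE
    trusted = doc["source"] in TRUSTED_SOURCES
    return fresh and trusted

usable = [d for d in documents if is_fit_for_use(d)]
print([d["id"] for d in usable])    # keeps only the recent, vetted document
```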


Model Selection

It is important to match the correct model to a task. Using an unnecessarily large or complex model for a simple task is inefficient and costly. A balance must be found between performance, cost, and explainability.
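As a rough illustration of that balance, the sketch below routes a task to the cheapest model whose capability tier covers it. The model names, prices, and tiers are made-up assumptions for the example, not figures from the session.

```python
# Hypothetical model catalogue; names, costs and capability tiers are illustrative.
MODELS = {
    "small":  {"cost_per_1k_tokens": 0.0002, "capability": 1},
    "medium": {"cost_per_1k_tokens": 0.002,  "capability": 2},
    "large":  {"cost_per_1k_tokens": 0.02,   "capability": 3},
}

def select_model(task_complexity: int) -> str:
    """Pick the cheapest model whose capability tier covers the task (1 = simple, 3 = complex)."""
    candidates = [(name, spec) for name, spec in MODELS.items()
                  if spec["capability"] >= task_complexity]
    return min(candidates, key=lambda item: item[1]["cost_per_1k_tokens"])[0]

print(select_model(1))  # -> 'small'  (e.g. short classification or FAQ answers)
print(select_model(3))  # -> 'large'  (e.g. multi-step reasoning over long documents)
```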


Validation and Guardrails

AI-generated responses must be tested against verified information. Systems should be in place to monitor for unusual outputs and cross-check results with trusted sources. Think of guardrails as seatbelts and airbags for your generative system.
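A guardrail of this kind can be as simple as a post-generation check. The sketch below cross-checks a generated answer against a tiny set of verified facts and flags suspiciously absolute claims; the fact store and the heuristics are illustrative assumptions, not Singhal's implementation.

```python
import re

# Tiny illustrative "trusted source": verified facts the system can cross-check against.
# In a real system this would be a curated knowledge base or fact-checking service.
VERIFIED_FACTS = {
    "capital of australia": "canberra",
}

def guardrail_check(answer: str) -> list[str]:
    """Return a list of warnings; an empty list means the answer passed the checks."""
    warnings = []
    text = answer.lower()
    # Cross-check against trusted sources.
    if "capital of australia" in text and VERIFIED_FACTS["capital of australia"] not in text:
        warnings.append("contradicts verified fact: capital of Australia")
    # Monitor for unusual outputs, e.g. absolute claims made without a cited source.
    if re.search(r"\b(always|never|guaranteed)\b", text) and "source:" not in text:
        warnings.append("absolute claim without a cited source")
    return warnings

print(guardrail_check("Sydney is the capital of Australia."))
# -> ['contradicts verified fact: capital of Australia']
```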


Human Oversight and Governance

Humans in the loop are essential for high-stakes outputs. Introducing feedback loops helps models improve over time, and a clear governance structure establishes accountability. AI should be viewed as a tool to augment human expertise, not replace it.
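In practice, human-in-the-loop review often means routing high-risk outputs to an approval queue and logging reviewer feedback for later improvement. The sketch below assumes a hypothetical risk score and threshold; it illustrates the pattern rather than a reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Minimal human-in-the-loop sketch: high-stakes outputs wait for approval."""
    pending: list = field(default_factory=list)
    feedback_log: list = field(default_factory=list)   # feeds later evaluation or fine-tuning

    def route(self, output: str, risk_score: float, threshold: float = 0.7) -> str:
        if risk_score >= threshold:
            self.pending.append(output)                 # a human approves before release
            return "held for human review"
        return output                                   # low-risk outputs go straight through

    def record_feedback(self, output: str, approved: bool, note: str = "") -> None:
        self.feedback_log.append({"output": output, "approved": approved, "note": note})

queue = ReviewQueue()
print(queue.route("Recommended portfolio reallocation ...", risk_score=0.9))
# -> 'held for human review'
```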

Three Proven Strategies to Reduce AI Hallucinations

While these foundational pillars are essential, they are not sufficient on their own. Singhal transitioned from principles to practical application by presenting three proven strategies: 

  • Retrieval-Augmented Generation (RAG): This technique allows the AI to retrieve facts from a reliable, curated knowledge base before generating a response. This grounds the output in verified data and enables transparency by citing sources (a minimal sketch follows this list). 
  • Fine-tuning: This process specializes a foundational model for a specific domain. By training the model further on a smaller, high-quality dataset, its accuracy and relevance for that domain are improved. 
  • Automated verification and user feedback: This creates a self-improving system. Automated checks can flag potential hallucinations and assess the logic, tone, and accuracy of outputs, while feedback from users helps refine the model continuously. 
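The following minimal sketch shows the RAG pattern from the first bullet: retrieve documents from a curated knowledge base, place them in the prompt with their IDs so sources can be cited, and only then generate. The keyword-overlap retriever and the call_llm stub are stand-in assumptions; real systems typically use vector search and an actual model API.

```python
# Minimal, illustrative RAG sketch.
KNOWLEDGE_BASE = [
    {"id": "kb-001", "text": "Canberra is the capital of Australia."},
    {"id": "kb-002", "text": "Albert Einstein received the 1921 Nobel Prize in Physics."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return "[placeholder: a real model call, grounded in the prompt, would go here]"

def answer(question: str) -> str:
    docs = retrieve(question)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)  # IDs let the model cite sources
    prompt = (
        "Answer using only the context below and cite the [id] of each fact you use.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("What is the capital of Australia?"))
```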

Case Study: Transforming Unreliable AI into a Trusted Business Tool

Now think about your last AI project: Where did it stumble? What nearly derailed it? The answer probably involves trust issues when the AI got things confidently wrong. 

Singhal shared a story that likely sounds familiar: a financial firm deployed an AI system to support its advisors. The initial implementation suffered from frequent hallucinations, which eroded trust and put the project at risk of cancellation. 

To solve the problem, the team applied the principles Singhal outlined. They built a curated knowledge database for the AI, implemented RAG to ground its responses in verified facts, fine-tuned the model on domain-specific financial data, and added a human-in-the-loop approval step for critical outputs. As a result, factual errors dropped by over 95 percent, advisors’ workflow speed increased by 63 percent, and the quality of advisory services improved. The AI was transformed from a liability into a reliable tool for the firm’s financial advisors.

Implementing Reliable AI: Your Next Steps

Singhal’s session delivered a clear message: hallucinations are not an unsolvable AI problem. They’re a design challenge that requires the right foundation and practical strategies. 

Begin with the four pillars: data integrity, model selection, validation, and human oversight. These are the essential structural elements for a reliable AI system. Then, apply the three proven strategies: RAG for grounded responses, fine-tuning for domain expertise, and automated verification for continuous improvement.  

As the case study shows, this approach works. However, it requires a disciplined process. It is best to start with small, controlled projects to prove value before scaling responsibly.

Inspired by sessions like Nitesh Singhal’s?

Sessions like Nitesh Singhal’s highlight the value of sharing real-world solutions. The DATA festival provides a platform for practitioners to address these complex challenges together. Become part of this vibrant community driving the future of data and AI and get your ticket for the DATA festival Munich on June 16-17, 2026! 

Nitesh Singhal, Engineering Leader | Google
Follow Nitesh's work at niteshsinghal.me for more insights on AI infrastructure and Generative AI platforms.


