Module 12 Lesson 1: Why Agents Hallucinate

Understanding the glitch: the statistical and architectural causes of AI hallucinations in agentic systems.

Why Agents Hallucinate: Mapping the Glitch

A "Hallucination" isn't the AI being "Stupid." It's the byproduct of the model's core architecture. These models are built for Probability, not Database Retrieval. In an agentic system, hallucinations are dangerous because the agent can "Act" based on a lie.

1. The Probabilistic Brain

LLMs do not have a database of facts; they have a map of weights. When they generate text, they are predicting the most statistically likely next token, one token at a time.

  • Hallucination: The model predicts "The price of AAPL is $100" because it has seen similar sentences many times in its training data, even when the real-time tool result says something else (see the sketch below).
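
To make this concrete, here is a toy Python sketch. The token probabilities and prices are invented for illustration; the point is that greedy decoding picks the continuation the training data made most likely, even when a tool has just returned the true value.

# Hypothetical learned distribution over the next token after
# "The price of AAPL is $" (numbers are made up for illustration).
learned_next_token_probs = {
    "100": 0.62,  # seen very often in training data
    "150": 0.18,
    "155": 0.05,  # the value the tool actually returned
    "175": 0.15,
}

tool_result_price = 155  # ground truth from the real-time tool

# Greedy decoding simply picks the highest-probability continuation,
# regardless of what the tool just said.
predicted = max(learned_next_token_probs, key=learned_next_token_probs.get)

print(f"Tool says: ${tool_result_price}")
print(f"Model's most likely token: ${predicted}")  # "$100" -> hallucination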

2. Common Sources of Hallucinations

A. Context Overload

If you give an agent 50 documents and ask a question, the relevant passage can get "lost in the middle." The agent then blends parts of document A with parts of document B into an imaginary fact.
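
One common mitigation, sketched below with a deliberately crude keyword-overlap score (a real system would use embeddings or a re-ranker), is to rank the documents against the question and pass only the top few into the context window.

def score(question: str, doc: str) -> int:
    """Crude relevance score: count words the document shares with the question."""
    q_words = set(question.lower().split())
    return sum(1 for w in doc.lower().split() if w in q_words)

def build_context(question: str, docs: list[str], k: int = 3) -> str:
    """Keep only the k most relevant documents instead of dumping all 50."""
    ranked = sorted(docs, key=lambda d: score(question, d), reverse=True)
    return "\n\n---\n\n".join(ranked[:k])

docs = [
    "Q3 revenue grew 12% year over year.",
    "The office cafeteria menu changes weekly.",
    "Q3 operating margin was 21%, down from 23% last year.",
]
print(build_context("How much did Q3 revenue grow?", docs, k=2))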

B. Tool Result Fatigue

If a tool returns a massive JSON blob, the agent may skim it and "guess" the value of a key rather than reading the whole payload.
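
A simple defence, sketched below with a made-up payload and a hypothetical extract_fields helper, is to filter the tool result down to the fields the agent actually needs before it ever reaches the model.

import json

# Hypothetical tool payload: a few useful fields plus a huge history list.
raw_tool_output = json.dumps({
    "ticker": "AAPL",
    "price": 155.0,
    "exchange": "NASDAQ",
    "history": [{"day": i, "close": 150 + i * 0.1} for i in range(500)],
})

def extract_fields(blob: str, fields: list[str]) -> dict:
    """Parse the tool result and keep only the whitelisted top-level keys."""
    data = json.loads(blob)
    return {k: data[k] for k in fields if k in data}

# The agent sees two keys instead of a 500-row blob it might skim and guess from.
print(extract_fields(raw_tool_output, ["ticker", "price"]))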

C. Prompt Contradiction

If your system prompt says "Be a pirate" and your task is "Analyze the federal tax code," the model might hallucinate pirate-style tax laws that don't exist.


3. Visualizing the Accuracy Gap

graph LR
    Input[Query: Stock price?] --> Tool[Tool: success, returns $155]
    Tool --> Weights[Weights: 90% chance it is $150, based on training]
    Weights --> Output[Fact: $155 / AI says: $150]
    Output --> Failure[Hallucination created]

4. The "Inverse Correlation" Rule

The more autonomous you make an agent (the more steps it runs without checking in), the higher the cumulative hallucination rate.

  • Step 1: 95% chance the run is still correct.
  • Step 2: ~90% (0.95^2).
  • ...
  • Step 10: ~60% (0.95^10).

This is why we need validation gates (Module 7).
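
A quick back-of-envelope calculation, assuming each step is independently 95% reliable, shows how fast the compounding bites:

per_step_accuracy = 0.95

for n in (1, 2, 5, 10, 20):
    # Probability that every one of the n steps was correct.
    chance_still_correct = per_step_accuracy ** n
    print(f"{n:>2} steps: {chance_still_correct:.0%} chance of zero hallucinations")

# 10 steps -> ~60%, which is where the figure above comes from.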


5. Strategic Fixes

  1. Groundedness: Forcing the agent to cite a specific line from a source document for every claim it makes.
  2. RAG (Retrieval-Augmented Generation): Providing the source of truth in the prompt rather than relying on the model's weights.
  3. Low Temperature: Setting temperature=0 to reduce the creativity and randomness of next-token prediction. (All three fixes are combined in the sketch below.)
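
Here is a minimal sketch that pulls the three fixes together. The call_llm function is a hypothetical stand-in for whatever chat client you use; the only assumption is that it accepts a prompt string and a temperature parameter.

def build_grounded_prompt(question: str, source_doc: str) -> str:
    """RAG-style prompt: the source of truth travels inside the prompt,
    and the instructions demand a line citation for every claim."""
    return (
        "Answer using ONLY the document below.\n"
        "Cite the line number for every claim you make.\n"
        "If the document does not contain the answer, say you don't know.\n\n"
        f"DOCUMENT:\n{source_doc}\n\n"
        f"QUESTION: {question}"
    )

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Placeholder for your chat client; temperature=0 makes decoding pick
    the most probable token instead of sampling for creativity."""
    raise NotImplementedError("Wire this up to your LLM provider.")

doc = "L1: AAPL closed at $155.00 on Friday.\nL2: Trading volume was 48M shares."
prompt = build_grounded_prompt("What was AAPL's closing price?", doc)
# answer = call_llm(prompt, temperature=0.0)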

Key Takeaways

  • Hallucinations are a feature of probability, not a bug in the code.
  • Cumulative errors make long-running agents less reliable than short-running ones.
  • Context management is the best defense against logical hallucinations.
  • Setting temperature to 0 is the first step in most reliability strategies.
