Module 5 Lesson 3: Hallucinated Actions
Ghost tools. How to handle agents that try to execute functions that don't exist in their toolbox.
Hallucinated Actions: The Ghost in the Machine
A "Normal" hallucination is when an LLM lies about a fact (e.g., "The capital of Mars is Elonville"). An Agentic Hallucination is when the LLM lies about its own capabilities. It tries to use a tool that does not exist.
1. What a Hallucinated Action looks like
You give the agent two tools: search and write_file.
The Agent's output:
"Thought: I have the data. Now I will send an email to the customer.
Action: send_email
Action Input: {'to': 'user@example.com', 'body': 'Hello!'}"
The Error: Your code crashes because you do not have a send_email function defined.
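To make the failure concrete, here is a minimal sketch of a naive executor, assuming a plain dict-based toolbox; the tool implementations and the parsed agent_choice dict are illustrative, not from a real codebase.

# The only tools we actually defined for this agent.
def search(query):
    return f"Results for: {query}"

def write_file(path, content):
    with open(path, "w") as f:
        f.write(content)
    return f"Wrote {len(content)} characters to {path}"

TOOLS = {"search": search, "write_file": write_file}

# The agent's hallucinated choice, parsed from the output above.
agent_choice = {"action": "send_email", "input": {"to": "user@example.com", "body": "Hello!"}}

# Naive dispatch with no validation: this raises KeyError('send_email') and the run dies.
result = TOOLS[agent_choice["action"]](**agent_choice["input"])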
2. Why Agents "Make Up" Tools
- Training Data Bias: Models like GPT-4 have seen millions of lines of code. They "know" that a send_email or get_database_row function usually exists in a programmer's world.
- Over-Helpfulness: The model wants to reach the final goal so badly that it "invents" a shortcut to get there.
- Ambiguous Interface: If you don't use "Native Function Calling," the model is just writing text. It doesn't "know" it's restricted to a specific list unless the prompt is very strict (a native tool declaration is sketched just after this list).
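Below is an OpenAI-style sketch of declaring the toolbox natively instead of describing it in free text. The exact field names and client call vary by provider and SDK version, and "gpt-4o" is only a placeholder model name.

from openai import OpenAI

client = OpenAI()

# Declaring the toolbox natively: the model is steered toward these schemas instead
# of free-writing an "Action:" line. It can still slip occasionally, so keep the
# validation gate from Section 3 anyway.
tools = [{
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search the web for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Find the latest Python release."}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)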
3. Detecting Ghost Tools
Your "Control Loop" (Module 2) must be defensive.
def control_loop(agent_choice):
    available_tools = ["search", "write_file"]
    if agent_choice["action"] not in available_tools:
        # 1. Stop the execution -- do NOT dispatch anything
        # 2. Feed an error BACK to the agent as its next observation
        return f"Error: The tool '{agent_choice['action']}' does not exist. Please only use: {available_tools}"
    # The action is valid: dispatch it via the TOOLS registry from Section 1
    return TOOLS[agent_choice["action"]](**agent_choice.get("input", {}))
4. Preventing Hallucinated Actions
A. Use Smaller, Specialized Toolboxes
Don't give one agent 50 tools. Give it 5. The more choices you give an LLM, the higher the "Interference" between tool names, and the higher the chance of it hallucinating a hybrid tool.
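One way to apply this is to keep a small, separate registry per specialized agent instead of one giant list. A sketch, reusing the search and write_file functions from Section 1; the agent roles are invented for illustration.

# Each specialized agent only ever sees its own short toolbox.
RESEARCH_TOOLS = {"search": search}
FILE_TOOLS = {"write_file": write_file}

def build_system_prompt(toolbox):
    names = ", ".join(sorted(toolbox))
    return f"You are a specialized agent. Your ONLY tools are: {names}."

print(build_system_prompt(RESEARCH_TOOLS))  # ... Your ONLY tools are: search.
print(build_system_prompt(FILE_TOOLS))      # ... Your ONLY tools are: write_file.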
B. The "Available Tools" Reminder
In every turn of the chat history, remind the model of its limits: "Current State: You are in a restricted environment. You have 3 tools: [A, B, C]. Any other tool call will result in a system failure."
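A sketch of injecting that reminder on every turn, assuming a list-of-messages history like the one above; the helper name is made up, and the reminder wording is the one from this section.

def with_tool_reminder(history, tool_names):
    # Re-state the restricted toolbox on every call so it never scrolls out of context.
    reminder = {
        "role": "system",
        "content": (
            "Current State: You are in a restricted environment. "
            f"You have {len(tool_names)} tools: {tool_names}. "
            "Any other tool call will result in a system failure."
        ),
    }
    return [reminder] + history

# messages = with_tool_reminder(history, ["search", "write_file"])
# response = <your LLM client>(messages)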
5. Visualizing the Validation Gate
graph TD
LLM[Agent Output] --> Gate{Is Tool in Registry?}
Gate -- Yes --> Run[Execute Code]
Gate -- No --> Feedback[Tell Agent: Tool Unknown]
Feedback --> LLM
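Put together, the gate in the diagram looks roughly like this. In this sketch, llm_step stands in for whatever model call plus output parser your stack uses; it is assumed to return a dict like {"action": ..., "input": {...}}.

def run_agent(task, tools, llm_step, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        choice = llm_step(history)                     # Agent Output
        if choice["action"] not in tools:              # Gate: is the tool in the registry?
            history.append({"role": "user", "content":
                f"Error: The tool '{choice['action']}' does not exist. "
                f"Please only use: {list(tools)}"})    # Tell Agent: Tool Unknown
            continue                                   # Feedback goes back to the LLM
        result = tools[choice["action"]](**choice.get("input", {}))  # Execute Code
        history.append({"role": "user", "content": f"Observation: {result}"})
    return history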
Key Takeaways
- Hallucinated Actions are attempts to use non-existent tools.
- They are driven by the model's prior training on generic codebases.
- Always implement a Validation Gate in your control loop.
- Narrow toolkits are significantly more reliable than broad ones.