
Handling Uncertainty: Ambiguity, Errors, and Clarification Loops
Equip your agents to handle the messiness of the real world. Learn strategies for resolving ambiguous user intents, managing missing information, and building interactive clarification loops in Gemini ADK.
The real world is messy. Users provide vague instructions, APIs go down, files are missing, and data is contradictory. A junior AI "hallucinates" its way through these uncertainties, making guesses and hoping for the best. A professional Gemini ADK agent, however, is designed to manage uncertainty through explicit reasoning and interaction.
In this lesson, we will explore the techniques for identifying ambiguity, resolving missing information, and implementing "Clarification Loops"—where the agent proactively asks the human for help instead of failing or guessing.
1. The 3 Faces of Uncertainty in Agents
A. Intent Ambiguity
The user's goal is not clear.
- Example: "Send the report to John." (Which John? Which report? Via email or Slack?)
B. Information Gap
The goal is clear, but the necessary data is missing.
- Example: "Calculate the ROI for Project X." (But Project X's budget hasn't been provided).
C. Execution Failure
The plan is clear, but an "Action" failed.
- Example: "I tried to search the database, but it returned a 'Connection Timeout'."
2. The "Ask for Clarification" Pattern
This is the single most important behavior for a high-quality agent.
The Problem with "Yes-Men"
By default, LLMs try to be helpful and might assume that "John" means "John Smith from Finance" because he was mentioned 50 turns ago. This is dangerous.
The Solution: Explicit Disambiguation
You must instruct your agent: "If a request is ambiguous or there are multiple possible interpretations, STOP and ask the user for clarification before taking any action."
```mermaid
graph TD
    A[User Request] --> B{Is Intent Clear?}
    B -->|Yes| C[Execute Plan]
    B -->|No| D[Reasoning: IDENTIFY Ambiguity]
    D --> E[Clarification Question to User]
    E --> F[User Provides Detail]
    F --> B
    style E fill:#4285F4,color:#fff
```
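Below is a minimal sketch of this loop using the `google.generativeai` SDK. The `READY:` prefix is an invented convention for this example (it is how the calling code detects that the model considers the intent unambiguous), not a Gemini or ADK feature.
```python
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called.
model = genai.GenerativeModel(
    model_name='gemini-1.5-pro',
    system_instruction=(
        "If the user's request is ambiguous, ask ONE clarifying question. "
        "Once the intent is fully clear, reply with a line starting with "
        "'READY:' followed by your restatement of the task."
    ),
)

def clarification_loop(initial_request: str) -> str:
    chat = model.start_chat()
    reply = chat.send_message(initial_request).text
    # Keep looping back to the "Is Intent Clear?" check until the model
    # signals that it has enough detail to execute.
    while not reply.strip().startswith("READY:"):
        print(f"Agent: {reply}")
        detail = input("You: ")  # the user supplies the missing detail
        reply = chat.send_message(detail).text
    return reply

# clarification_loop("Send the report to John.")
```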
3. Handling Missing Information (In-filling)
When an agent realizes it is missing a piece of the puzzle, it has three choices:
- Guess (The Hallucination Path): Highly discouraged.
- Ask the User (The Interactive Path): Best for subjective data (e.g., "What flavor of cake do you want?").
- Search for it (The Autonomous Path): Best for objective data (e.g., "I don't have the sales figures. I will search the 'Finance_2026' folder").
Agent Instruction:
"Before performing a multi-step task, verify you have all required inputs. If an input is missing, check if you have a tool to find it. If no tool exists, ask the user."
4. Handling Tool Failures (Re-planning)
A professional agent treats a "Tool Error" as an Observation, not a "Crash."
The "Plan B" Execution Logic
- Observation: `ERROR: permission denied on /documents/legal/`
- Agent Reasoning: "I cannot access the legal folder directly. I will try to find a public summary of the legal findings in the general directory instead."
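One way to wire this up is sketched below: wrap the tool call in `try/except` and feed the error text back to the model as an Observation so it can propose a Plan B. `read_folder` is a stand-in tool that always fails, purely for illustration.
```python
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called.
model = genai.GenerativeModel('gemini-1.5-pro')
chat = model.start_chat()

def read_folder(path: str) -> str:
    """Stand-in tool that simulates a permission failure."""
    raise PermissionError(f"permission denied on {path}")

def run_tool_with_replanning(path: str) -> str:
    try:
        return read_folder(path)
    except Exception as exc:
        # Surface the failure to the model as data, not as a crash.
        reply = chat.send_message(
            f"Observation: ERROR: {exc}. "
            "Propose an alternative plan to obtain the same information."
        )
        return reply.text

# run_tool_with_replanning('/documents/legal/')
```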
5. Expressing Confidence and Certainty
An agent should communicate its level of certainty to the user. This builds trust.
- Low Certainty: "I am not sure if this is the correct report, but it mentions Sales. Should I use this?"
- High Certainty: "Based on the unique ID, I have found the exact document you requested."
Pro Tip: You can instruct Gemini to provide a "Confidence Score" (1-10) for its own plans. If the score is < 7, your code can automatically trigger a clarification loop.
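A minimal version of that gate might look like the following. The `CONFIDENCE:` line format is an invented convention for easy parsing; in a production agent you might prefer Gemini's structured-output features instead.
```python
import re
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called.
model = genai.GenerativeModel('gemini-1.5-pro')

def plan_with_confidence(request: str) -> str:
    response = model.generate_content(
        f"Draft a step-by-step plan for: {request}\n"
        "End with a line 'CONFIDENCE: <1-10>' rating how sure you are "
        "that you understood the request."
    )
    match = re.search(r"CONFIDENCE:\s*(\d+)", response.text)
    score = int(match.group(1)) if match else 0  # treat a missing score as low
    if score < 7:
        # Automatically trigger the clarification loop.
        return "Before I proceed, could you confirm exactly what you need?"
    return response.text

# plan_with_confidence("Send the report to John.")
```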
6. Implementation: The Disambiguation Agent
Let's look at how we structure an agent that resists making guesses.
```python
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called.

def robust_support_agent(user_input: str) -> str:
    # System instructions to enforce disambiguation
    system_instr = """
    You are a professional assistant. You have access to a 'send_email' tool.
    CRITICAL: Never send an email if the recipient or subject is ambiguous.
    If multiple contacts exist with the same name, or no specific content
    is provided, YOU MUST ASK the user for the missing details.
    """
    model = genai.GenerativeModel(
        model_name='gemini-1.5-pro',
        system_instruction=system_instr,
    )
    chat = model.start_chat()
    response = chat.send_message(user_input)
    # If the model asks a question instead of calling a tool, we've succeeded!
    return response.text

# Use case: "Email Sarah the report."
# (Model should respond: "I found two Sarahs: Sarah from HR and Sarah from Eng. Which one?")
```
7. Managing "Noisy" and Contradictory Inputs
Sometimes users give conflicting instructions.
- User: "Calculate the total (including tax), but don't include the tax."
- Agent's Role: The agent must act as a Logical Gatekeeper (see the sketch after this list).
- Response: "I noticed a contradiction in your request. Should I include the tax in the total or not?"
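A lightweight way to implement that gatekeeper is a screening pass before planning, sketched below. The `OK` / `CONTRADICTION:` reply convention is invented for this example.
```python
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called.
model = genai.GenerativeModel('gemini-1.5-pro')

def contradiction_gate(request: str) -> str:
    verdict = model.generate_content(
        "Check the following request for internal contradictions. "
        "Reply 'OK' if it is consistent; otherwise reply "
        f"'CONTRADICTION: <one clarifying question>'.\n\nRequest: {request}"
    ).text.strip()
    if verdict.startswith("CONTRADICTION:"):
        # Ask the user to resolve the conflict before planning anything.
        return verdict.removeprefix("CONTRADICTION:").strip()
    return "OK"  # safe to hand off to the planner

# contradiction_gate("Calculate the total (including tax), but don't include the tax.")
```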
8. Summary and Exercises
Uncertainty is not an error; it is a Data Point.
- Clarification Loops are better than guesses.
- Missing Information should trigger an autonomous search or a human query.
- Tool Failures should trigger a re-planning step, not a system failure.
- Expressing Certainty increases user trust in autonomous systems.
Exercises
- Ambiguity Hunting: Take 5 requests you've asked an AI recently. For each one, identify at least one "Hidden Ambiguity" that a professional agent should have clarified.
- Logic Design: Create a "Confidence Threshold" logic. If the agent returns a response with a confidence score under 5, write a Python block that redirects the query to a human agent.
- Refusal Practice: Write a system prompt that forces the agent to "Politely refuse to act" if it doesn't have 100% certainty about a financial calculation.
In the next module, we move into one of the most exciting areas of AI: Multi-Agent Systems with ADK, learning how to coordinate teams of specialized agents.