Module 9 Lesson 2: Verification and Grounding
Fighting hallucination: advanced techniques to keep the AI strictly within the retrieved documentation.
Verifying the Truth: Grounding
Sometimes a model gets overconfident: it reads your PDF about health insurance, but then adds a "fun fact" about health insurance in Canada that it knows from pre-training and that is not in your document. This is a grounding failure.
1. The "Unknown" Instruction
The most important rule in RAG is telling the AI it's okay to say "I don't know."
- Bad Rule: "Help the user with their question."
- Good Rule: "Answer ONLY using the provided documents. If the answer is not in the documents, state clearly: 'I am sorry, but the documentation provided does not contain that information.'"
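The "Good Rule" above can be baked into the system prompt at build time. Here is a minimal sketch; the `build_grounded_prompt` helper, the `<snippet>` tags, and the sample snippet text are illustrative, not a fixed API.

```python
# Sketch: embedding the explicit "I don't know" rule in a RAG system prompt.
REFUSAL = ("I am sorry, but the documentation provided does not "
           "contain that information.")

def build_grounded_prompt(snippets: list[str]) -> str:
    """Return a system prompt that restricts answers to the given snippets."""
    context = "\n\n".join(f"<snippet>{s}</snippet>" for s in snippets)
    return (
        "Answer ONLY using the provided documents.\n"
        f"If the answer is not in the documents, state clearly: '{REFUSAL}'\n\n"
        f"Documents:\n{context}"
    )

prompt = build_grounded_prompt(["Deductible: $500 per year."])
print(prompt)
```

Keeping the refusal sentence in a single constant makes it easy for downstream code to detect "no answer" responses by exact string match.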
2. Context Verification Steps
In high-stakes applications (legal, financial), we often use a "Two-Pass" verification:
- Pass 1: The AI generates an answer from the context.
- Pass 2: A smaller, faster model (like Claude Haiku) is asked: "Does this answer contain any claims not found in the original snippets? Answer 'Verified' or 'Fail'."
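The two passes above can be sketched as a small function that takes the two model calls as parameters. The `generate` and `verify` callables stand in for real model invocations (a large model for Pass 1, a small fast one for Pass 2); their names, signatures, and the dummy implementations below are illustrative.

```python
# Two-pass verification flow: draft an answer, then check it against the
# original snippets before showing it to the user.
def two_pass_answer(question, snippets, generate, verify):
    answer = generate(question, snippets)   # Pass 1: draft from context
    verdict = verify(answer, snippets)      # Pass 2: "Verified" or "Fail"
    if verdict.strip() == "Verified":
        return answer
    return "Unable to produce a verified answer from the documentation."

# Dummy callables so the flow can be exercised without any model:
gen = lambda q, s: "The deductible is $500."
ver = lambda a, s: "Verified" if "$500" in "".join(s) else "Fail"

result = two_pass_answer("What is the deductible?", ["Deductible: $500."], gen, ver)
print(result)
```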
3. Visualizing Self-Correction
```mermaid
graph TD
    Data[Context Snippets] --> A[Generate Answer]
    A --> V{Verification Step}
    V -->|Fail| R[Regenerate / Error]
    V -->|Pass| Final[Show User]
```
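The diagram's "Fail → Regenerate" loop can be implemented as a bounded retry: regenerate on a failed verification, and fall back to an error after a fixed number of attempts. All names here are illustrative.

```python
# Self-correction loop: keep regenerating until verification passes,
# up to max_attempts, then surface an error instead of an ungrounded answer.
def self_correct(question, snippets, generate, verify, max_attempts=2):
    for _ in range(max_attempts):
        answer = generate(question, snippets)
        if verify(answer, snippets) == "Verified":
            return answer  # Pass -> show user
    return "Error: could not ground the answer in the provided snippets."

# Demo: a generator whose first draft fails verification, second passes.
calls = {"n": 0}
def flaky_gen(q, s):
    calls["n"] += 1
    return "guess" if calls["n"] == 1 else "grounded answer"

ver = lambda a, s: "Verified" if a == "grounded answer" else "Fail"
print(self_correct("q", [], flaky_gen, ver))
```

Bounding the retries matters: without a cap, a question the documents cannot answer would loop forever.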
4. Bedrock Post-Processing
In the Knowledge Base configuration, you can enable Guardrails specifically for the RAG output. If the retrieved context is empty, the Guardrail can immediately block the AI from trying to guess an answer.
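The empty-context behavior a Guardrail automates can also be mirrored in application code: if retrieval returned nothing, refuse before the model is ever called. This is a minimal sketch, not the Bedrock API; `answer_or_refuse` and `generate` are hypothetical names.

```python
# Pre-generation guard: refuse immediately on empty retrieval results
# instead of letting the model guess from its pre-training.
REFUSAL = ("I am sorry, but the documentation provided does not "
           "contain that information.")

def answer_or_refuse(snippets, generate):
    if not snippets:           # empty retrieval -> block generation entirely
        return REFUSAL
    return generate(snippets)

print(answer_or_refuse([], lambda s: "should never run"))
```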
Summary
- Grounding means forcing the AI to stay within the provided evidence.
- Explicit "I don't know" rules prevent the most common RAG errors.
- Two-Pass verification adds an extra layer of safety for enterprise docs.
- Bedrock Guardrails can automate the rejection of unsupported answers.