
Few-Shot Prompting with Graph Samples: Learning the Grammar
Model the behavior: learn how to use few-shot examples to teach your LLM to parse graph triplets and follow multi-hop reasoning paths correctly.
An LLM is a generalist. It knows how to write a poem or a summary, but it doesn't intuitively know how to "Reason over a Directed Acyclic Graph." If you just give it context and a question, it might miss the subtle logic of a "Reverse Edge" or a "Temporal Constraint." To fix this, we use Few-Shot Prompting: we give the AI two or three examples of a Question -> Graph Evidence -> Correct Answer chain.
In this lesson, we will look at how to build a Graph-Specific Few-Shot Library. We will learn which types of examples are most effective for teaching "Multi-hop Reasoning" and how to use these examples to guide the AI toward a structured, traceable response.
1. Why Zero-Shot Fails for Graphs
- Zero-Shot Prompt: "Here is the graph. Answer the question."
- AI Failure: The AI might see (A)-[:OWNS]->(B) but say "B is the boss of A" because it's used to seeing power relationships in text, ignoring the direction of the arrow.
- Few-Shot Fix: We provide an example where a similar "Arrow Conflict" is resolved correctly.
- Example 1:
  - Graph: (John)-[:OWNS]->(Company)
  - Question: Who is the owner?
  - Answer: John.
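To make this concrete, such an example can be stored as structured data. Below is a minimal Python sketch; the `FewShotExample` class and its field names are assumptions for illustration, not a fixed API:

```python
from dataclasses import dataclass

@dataclass
class FewShotExample:
    """One worked example shown to the LLM before the real query.
    (Illustrative structure; the field names are assumptions.)"""
    question: str
    graph_context: str  # triplets, one per line
    reasoning: str      # chain of thought that respects edge direction
    answer: str

# A directionality ("Arrow Conflict") example: the arrow, not textual
# intuition about power relationships, decides who owns whom.
OWNERSHIP_EXAMPLE = FewShotExample(
    question="Who is the owner?",
    graph_context="(John)-[:OWNS]->(Company)",
    reasoning="The edge :OWNS points from John to Company, so John is the owner.",
    answer="John.",
)
```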
2. The Anatomy of a Graph Few-Shot Example
- The Question: Simple and clear.
- The Graph Evidence: A small set of 3-5 triplets.
- The Reasoning Path: A "Chain of Thought" explaining the walk.
- The Final Answer: Concise and grounded.
Pro Tip: Always include an example where the answer is Unknown. This teaches the AI not to "Hallucinate" a bridge where one doesn't exist in the graph.
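Putting the four parts together, here is a minimal sketch of the negative ("Unknown") example the Pro Tip recommends. The names (Priya, Finance) and the exact wording are illustrative assumptions:

```python
# A negative ("Unknown") example with all four parts. The reasoning
# explicitly refuses to bridge a gap the graph does not contain.
# (Names and wording are illustrative assumptions.)
UNKNOWN_EXAMPLE = """\
QUESTION: Who manages the Finance project?
GRAPH CONTEXT:
(Priya)-[:WORKS_ON]->(Finance)
(Priya)-[:ROLE]->(Analyst)
REASONING: The graph shows Priya works on Finance with the role Analyst. \
There is no MANAGES edge, so the manager is not stated in the graph.
ANSWER: The graph does not provide enough evidence to answer.
"""
```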
3. Dynamic Few-Shot Selection
Don't use the same 3 examples for every query.
- If the user asks a Temporal question, retrieve 3 few-shot examples that involve "Dates" and "Sequence."
- If the user asks a Path question, retrieve examples showing "Multi-hop" reasoning.
This is called In-Context Learning (ICL) optimization; the diagram and the sketch that follow show the routing flow.
```mermaid
graph TD
    User[Query: How many?] --> Classifier[Intent: Counting]
    Classifier --> Library[Few-Shot Library]
    Library -->|Fetch| E1[Count Example]
    Library -->|Fetch| E2[Count Example 2]
    E1 & E2 --> LLM[Final System Prompt]
    style E1 fill:#34A853,color:#fff
    style LLM fill:#4285F4,color:#fff
```
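A minimal sketch of this routing step, assuming a hand-tagged library and a toy keyword classifier (both are stand-ins for whatever intent model and retrieval method you actually use):

```python
# Hypothetical tagged library: each example carries a topic label.
LIBRARY = {
    "temporal": ["<date/sequence example 1>", "<date/sequence example 2>"],
    "path":     ["<multi-hop example 1>", "<multi-hop example 2>"],
    "counting": ["<count example 1>", "<count example 2>"],
}

def classify_intent(query: str) -> str:
    """Toy keyword classifier; swap in a real model or an LLM call."""
    q = query.lower()
    if any(w in q for w in ("when", "before", "after", "date")):
        return "temporal"
    if any(w in q for w in ("how many", "count", "number of")):
        return "counting"
    return "path"

def select_examples(query: str, k: int = 2) -> list[str]:
    """Fetch the k most relevant few-shot examples for this query."""
    return LIBRARY[classify_intent(query)][:k]

print(select_examples("How many companies does John own?"))
# -> the two counting examples
```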
4. Implementation: A Graph Few-Shot Template
```text
EXAMPLE 1:
QUESTION: Who is the senior leader for the Marketing project?
GRAPH CONTEXT:
(Sudeep)-[:ROLE]->(Lead)
(Sudeep)-[:WORKS_ON]->(Marketing)
REASONING: The graph shows Sudeep works on Marketing and his role is Lead.
ANSWER: Sudeep is the senior leader.
---
// REAL QUERY //
QUESTION: {user_query}
GRAPH CONTEXT:
{retrieved_graph}
REASONING:
ANSWER:
```
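Filling the template programmatically might look like the sketch below. `user_query` and `retrieved_graph` are assumed to come from your retrieval step, and `examples` from a selector like the one sketched in Section 3:

```python
def build_prompt(user_query: str, retrieved_graph: str, examples: list[str]) -> str:
    """Concatenate few-shot examples with the real query, leaving
    REASONING/ANSWER blank for the model to complete."""
    shots = "\n---\n".join(examples)
    return (
        f"{shots}\n---\n"
        "// REAL QUERY //\n"
        f"QUESTION: {user_query}\n"
        f"GRAPH CONTEXT:\n{retrieved_graph}\n"
        "REASONING:\n"
        "ANSWER:"
    )
```

Joining the shots with `---` mirrors the separator used in the template above, so the model sees the same boundary between examples and the real query.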
5. Summary and Exercises
Few-shot prompting is the "Instruction Manual" for the AI's brain.
- Directionality is the most common error in zero-shot graph RAG.
- Negative examples ("I don't know") prevent hallucinations.
- Chain of Thought (CoT) in examples teaches the AI how to "walk."
- Dynamic selection ensures the examples are relevant to the user's current logic puzzle.
Exercises
- Example Writing: Write a few-shot example that teaches an AI to find the "Founder" of a company using a 2-hop path: (Person)-[:INVESTED_IN]->(Company) AND (Person)-[:AUTHORED]->(Patent).
- The "Hallucination" Check: If you add an example where the AI says "The graph does not provide enough evidence," how does that change the bot's behavior for vague questions?
- Visualization: Draw a "Library" of 10 examples. Tag each with a "Topic" (e.g., Dates, People, Sums).
In the final lesson of this module, we will look at data formats: Structured Context: YAML vs Markdown vs JSON for Graphs.