Module 3 Lesson 4: Few-Shot and Zero-Shot Techniques
How to teach the model through examples using Few-Shot prompting, and when to rely on Zero-Shot.
Sometimes, instructions are not enough. You need to show, not just tell. This is where Few-Shot Prompting comes in.
1. Zero-Shot Prompting
Zero-Shot is when you ask the model to perform a task without giving it any examples. It relies entirely on its pre-existing knowledge.
- Example: "Extract all the names of cities from this text."
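A zero-shot request is just the instruction plus the input, with no worked examples. A minimal sketch in Python (the helper name `zero_shot_prompt` is illustrative, not from any library):

```python
def zero_shot_prompt(instruction: str, text: str) -> str:
    """Build a zero-shot prompt: instruction only, no examples."""
    return f"{instruction}\n\nText: {text}"

prompt = zero_shot_prompt(
    "Extract all the names of cities from this text.",
    "We flew from Paris to Tokyo last spring.",
)
print(prompt)
```

Whatever the model returns here depends entirely on its pre-existing knowledge of what "extract" and "cities" mean, which is why results can vary from run to run.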
2. Few-Shot Prompting
Few-Shot is when you provide one or more examples of the input and the desired output before asking your real question. This is one of the most effective ways to control formatting and tone.
Example Prompt:
```
I want to extract sentiments from product reviews.

Review: "The screen is amazing but the battery life is poor." Sentiment: [Positive: Screen, Negative: Battery]
Review: "It was a waste of money. Everything broke." Sentiment: [Negative: Overall Value]
Review: "Great device, even though it took a while to ship." Sentiment:
```
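The pattern above can be assembled programmatically: a task description, the worked examples, then the new input with the answer slot left empty. A sketch, assuming the examples are (review, sentiment) pairs; the function name `few_shot_prompt` is hypothetical:

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: task description, worked examples,
    then the new input with the answer left for the model to fill in."""
    lines = [task]
    for review, sentiment in examples:
        lines.append(f'Review: "{review}" Sentiment: {sentiment}')
    lines.append(f'Review: "{query}" Sentiment:')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "I want to extract sentiments from product reviews.",
    [
        ("The screen is amazing but the battery life is poor.",
         "[Positive: Screen, Negative: Battery]"),
        ("It was a waste of money. Everything broke.",
         "[Negative: Overall Value]"),
    ],
    "Great device, even though it took a while to ship.",
)
print(prompt)
```

Ending the prompt with the empty `Sentiment:` slot invites the model to complete the pattern rather than explain it.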
```mermaid
graph TD
    Zero[Zero-Shot] -->|Low Pattern Match| Varied[Varied Results]
    Few[Few-Shot] -->|High Pattern Match| Consistent[Consistent Results]
```
3. When to use Few-Shot
- When you need a highly specific Output Format (like JSON or a custom table).
- When you want the AI to mimic a specific Voice (e.g., your personal writing style).
- When the task is logic-intensive and the AI keeps failing (giving examples of correct reasoning).
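For the first case, a single worked example can pin down an exact JSON schema far more reliably than describing it in words. A sketch; the schema and field names here are purely illustrative:

```python
import json

# One worked example fixes the exact JSON shape we expect back.
example_output = {"product": "headphones", "sentiment": "negative", "aspect": "comfort"}

prompt = (
    "Extract sentiment as JSON.\n\n"
    'Review: "These headphones hurt my ears after an hour."\n'
    f"JSON: {json.dumps(example_output)}\n\n"
    'Review: "The keyboard feels fantastic to type on."\n'
    "JSON:"
)
print(prompt)
```

Because the example is valid JSON, the model sees both the keys and the value style it should reproduce.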
4. One-Shot vs. Few-Shot
- One-Shot: Giving a single example. Usually enough for formatting.
- Few-Shot: Giving 3-5 examples. Best for complex logic or subtle tone matching.
Hands-on: Formatting with Few-Shot
Try to get ChatGPT to convert a list of names into a specific "Lastname, Firstname (Initial)" format using a zero-shot prompt. If it makes mistakes, provide two examples and try again.
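To check the model's answers, you can compute the target format yourself. A sketch, assuming simple "Firstname Lastname" inputs (middle names and suffixes are not handled):

```python
def format_name(full_name: str) -> str:
    """Convert 'Firstname Lastname' to 'Lastname, Firstname (Initial)'."""
    first, last = full_name.split()
    return f"{last}, {first} ({first[0]}.)"

for name in ["Ada Lovelace", "Alan Turing"]:
    print(format_name(name))  # e.g. "Lovelace, Ada (A.)"
```

Two of these input/output pairs, pasted into the prompt, are usually enough for the model to lock onto the format.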
Key Takeaways
- Zero-Shot = Faster, but unpredictable.
- Few-Shot = High reliability, consistency, and control.
- Examples are the "instruction manual" for the model's output.