Module 4 Lesson 1: Chain-of-Thought Prompts
How to force the AI to 'show its work.' Dramatically improve reasoning accuracy for math, logic, and complex planning.
Chain-of-Thought (CoT) Prompts
Chain-of-Thought (CoT) is one of the most powerful discoveries in prompt engineering. It refers to encouraging the AI to generate intermediate reasoning steps before it commits to a final answer.
1. Why it Works
LLMs predict the next token. If you ask for the final answer immediately, the model may commit to a wrong "next token" and then be forced to invent reasoning that justifies it. If you ask it to reason first, the correct final tokens become much more probable, because they are conditioned on the reasoning the model has already written.
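As a minimal sketch, here is the same question framed both ways. The question itself is only an illustration; swap in your own task and send each prompt to whatever model you use.

```python
question = (
    "A store sells pens in packs of 12. "
    "If I need 150 pens, how many packs must I buy?"
)

# Answer-first: the model must commit to a number in its very first tokens,
# and any reasoning it produces afterwards only rationalizes that guess.
direct_prompt = f"{question}\nAnswer with just the number."

# Reasoning-first: the intermediate steps are generated before the answer,
# so the correct final tokens become more probable.
cot_prompt = (
    f"{question}\n"
    "Explain your reasoning step by step, then state the final answer."
)
```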
2. The "Let's Think Step by Step" Method
Simply adding the phrase "Let's think step by step" at the end of a prompt significantly improves performance on logic and arithmetic benchmarks.
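In code, Zero-Shot CoT is nothing more than appending that trigger phrase to whatever prompt you already have. A small sketch (the helper name and sample question are illustrative):

```python
def add_step_by_step(prompt: str) -> str:
    """Append the Zero-Shot CoT trigger phrase to an existing prompt."""
    return f"{prompt.rstrip()}\n\nLet's think step by step."

print(add_step_by_step(
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
))
```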
3. The "Scratchpad" Method
For complex tasks, explicitly tell the AI to use a "Scratchpad" or "Inner Monologue."
Example Prompt:
I want you to calculate the total cost of a trip.
- Use the <reasoning> tags to break down the costs for flights, hotels, and food.
- Then, provide the final total outside of the tags.
Task: A 5-day trip to Tokyo. Flights are $1200, Hotel is $200/night, Food is $80/day.
```mermaid
graph TD
    Prompt[Prompt] --> CoT[Chain-of-Thought Reasoning]
    CoT --> Step1[Step 1: Calculate Hotel]
    Step1 --> Step2[Step 2: Calculate Food]
    Step2 --> Step3[Step 3: Sum Everything]
    Step3 --> Final[Correct Final Answer]
```
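Once the model follows this format, the scratchpad can be stripped out programmatically so the user only sees the final total. A minimal sketch, assuming the model wraps its work in a single <reasoning>...</reasoning> block; the example response assumes five hotel nights and five food days:

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Separate the <reasoning> scratchpad from the final, user-facing answer."""
    match = re.search(r"<reasoning>(.*?)</reasoning>", response, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    # Everything outside the tags is treated as the answer to show the user.
    answer = re.sub(r"<reasoning>.*?</reasoning>", "", response, flags=re.DOTALL).strip()
    return reasoning, answer

# Example response in the requested format (figures assume 5 nights / 5 days).
example = (
    "<reasoning>Flights: $1200. Hotel: 5 x $200 = $1000. "
    "Food: 5 x $80 = $400. Total: $1200 + $1000 + $400 = $2600.</reasoning>\n"
    "Total trip cost: $2600"
)
scratchpad, final_answer = split_reasoning(example)
print(final_answer)  # Total trip cost: $2600
```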
4. Zero-Shot vs. Few-Shot CoT
- Zero-Shot CoT: "Let's think step by step."
- Few-Shot CoT: Provide examples of how you want the model to reason.
- "Q: Roger has 5 tennis balls. He buys 2 more cans. Each can has 3 balls. How many does he have? A: Roger started with 5. 2 cans x 3 balls = 6 balls. 5 + 6 = 11. Final answer is 11."
Hands-on: Math Problem CoT
Try this math problem without any special instructions: "A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?"
Now, try it again with: "Think through the math step-by-step and show your work."
Observe how the model handles the breakdown.
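For reference, the correct breakdown is: half of 16 balls is 8 golf balls, and half of those 8 golf balls is 4, so the answer is 4 blue golf balls. A model that skips the breakdown may stop after a single halving and answer 8.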
Key Takeaways
- CoT increases reasoning accuracy.
- Asking the model to show its work prevents it from committing to a wrong answer too quickly.
- Use explicit reasoning tags (such as <reasoning>) for complex planning.