Module 2 Lesson 3: Temperature, Top-p, and Sampling Settings
Control the randomness and creativity of ChatGPT using technical parameters like Temperature and Top-p.
When ChatGPT predicts the next word, it doesn't just pick the single most likely one. Instead, it samples from a probability distribution to introduce variety. You can control this behavior with two main settings: Temperature and Top-p.
1. Temperature: The "Creativity" Dial
Temperature controls the randomness of the output.
- Low Temperature (0.1 - 0.3): Makes the model more focused and deterministic. It will almost always pick the highest probability word. Best for: Fact-checking, coding, technical writing.
- High Temperature (0.7 - 1.2): Makes the model more divergent and "creative." It will take more risks. Best for: Storytelling, brainstorming, poetry.
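To make the "dial" concrete, here is a minimal sketch of how temperature works under the hood: the model's raw scores (logits) are divided by the temperature before being turned into probabilities. The function name and toy logits are illustrative, not part of any real API.

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into probabilities.

    Low temperature sharpens the distribution toward the top token;
    high temperature flattens it, giving unlikely tokens more chance.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy raw scores for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]

cold = apply_temperature(logits, 0.2)  # near-deterministic: top token dominates
hot = apply_temperature(logits, 1.2)   # spread out: more variety when sampling
```

Running this, `cold[0]` is close to 1.0 while `hot[0]` is much smaller, which is exactly the "strict vs. creative" trade-off described above.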
2. Top-p (Nucleus Sampling)
Top-p is an alternative to temperature. Instead of reshaping the whole distribution, it tells the model to consider only the smallest set of likely words whose combined probability reaches $p$ (the "nucleus"), discarding everything outside it.
- Top-p = 0.1: Only consider words that make up the top 10% of probability mass.
- Top-p = 1.0: Consider all possible words.
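The nucleus idea can be sketched in a few lines: sort tokens by probability, accumulate until the total reaches $p$, and renormalize what's left. The function name and example probabilities are illustrative.

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p, then renormalize. Returns (kept_indices, new_probs)."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break  # the nucleus is complete; drop the long tail
    total = sum(probs[i] for i in kept)
    return kept, [probs[i] / total for i in kept]

# Five candidate tokens; the last two are "long-tail" options.
probs = [0.55, 0.25, 0.12, 0.05, 0.03]
idx, renorm = top_p_filter(probs, 0.9)
# The two rarest tokens are excluded; the rest are renormalized to sum to 1.
```

With `p=0.9`, the first three tokens (cumulative probability 0.92) form the nucleus, so the model can never pick the two long-tail words.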
```mermaid
graph TD
    Start[Generate Next Token] --> Prob[Calculate Probabilities]
    Prob --> Temp{Apply Temperature}
    Temp -->|High| Random[More Randomness]
    Temp -->|Low| Direct[More Predictable]
    Random --> Sample[Select Token]
    Direct --> Sample
```
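The full flow in the diagram (probabilities, temperature, token selection) can be combined into one sampling step. This is an illustrative sketch, not OpenAI's actual implementation; the function and parameter names are assumptions.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=random):
    """One step of temperature + nucleus sampling over raw logits."""
    # 1. Apply temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # 2. Build the top-p nucleus: highest-probability tokens first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # 3. Draw one token index from the renormalized nucleus.
    mass = sum(probs[i] for i in nucleus)
    r = rng.random() * mass
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]

# With a very low temperature and tight nucleus, the top token always wins.
token = sample_token([2.0, 1.0, 0.5, 0.1], temperature=0.1, top_p=0.5)
```

Note how a low temperature concentrates the probability mass before the nucleus is even built, so the "Low → More Predictable" branch of the diagram dominates.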
3. Which one should you use?
OpenAI generally recommends changing either Temperature or Top-p, but not both at the same time.
- Use Temperature for a general feel of "Strict vs. Creative."
- Use Top-p if you want to ensure the model never picks a "long-tail" or extremely unlikely word.
Hands-on: Simulate Temperature
While the standard ChatGPT interface doesn't expose these settings directly, you can "simulate" them with your prompts:
- Low Temp Prompt: "Write a strictly factual summary of the French Revolution in 50 words. Be as literal as possible."
- High Temp Prompt: "Describe the French Revolution as if you were a time-traveling alien from the year 3000 who is confused about human history."
Observe how the second prompt allows the "Creative" (high randomness) side of the model to take over.
Key Takeaways
- Temperature manages the "unpredictability."
- Low settings are for accuracy and logic.
- High settings are for variety and creative exploration.