Module 10 Lesson 4: Setting Output Expectations
How to define 'success' for the AI. Using scoring rubrics and specific criteria to ensure the output meets your needs.
Many users say "the output is bad" without defining what "good" looks like. In this lesson, you'll learn how to give the AI a Scoring Rubric.
1. The Rubric Technique
Instead of asking for a "good" article, tell the AI exactly what criteria to use.
Example Prompt:
*"Write a product description for [Product]. Your output will be evaluated on the following 0-5 scale:
- Clarity: Is the benefit immediately obvious?
- Tone: Is it premium and sophisticated?
- Action: Is there a clear Call to Action? Aim for a 5/5 score in every category."*
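A rubric like the one above can also be assembled programmatically, which keeps the criteria reusable across prompts. A minimal sketch (the function name, criteria, and wording are illustrative, not part of the lesson's examples):

```python
# Sketch: build a rubric-based prompt from a task and a dict of criteria.
def build_rubric_prompt(task: str, criteria: dict, scale: str = "0-5") -> str:
    """Append an explicit scoring rubric to a task instruction."""
    lines = [f"{task} Your output will be evaluated on the following {scale} scale:"]
    for name, question in criteria.items():
        lines.append(f"- {name}: {question}")
    lines.append("Aim for the top score in every category.")
    return "\n".join(lines)

prompt = build_rubric_prompt(
    "Write a product description for the product.",
    {
        "Clarity": "Is the benefit immediately obvious?",
        "Tone": "Is it premium and sophisticated?",
        "Action": "Is there a clear Call to Action?",
    },
)
print(prompt)
```

Swapping the criteria dict lets you reuse the same scaffold for articles, emails, or code reviews.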
2. Defining "Style" by Example
Provide the AI with Anti-Examples: concrete patterns it must avoid.
- "Write this memo, but do NOT use buzzwords like 'synergy', 'leverage', or 'paradigm shift'. If you use passive voice, the output is unacceptable."
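You can also enforce an anti-example list mechanically after the fact. A small sketch (the banned-word set mirrors the memo example above; the draft text is hypothetical):

```python
# Sketch: a post-check that flags banned buzzwords in a model's draft.
BANNED = {"synergy", "leverage", "paradigm shift"}

def find_buzzwords(text: str) -> list:
    """Return any banned phrases found in the text, sorted alphabetically."""
    lowered = text.lower()
    return sorted(word for word in BANNED if word in lowered)

draft = "We will leverage our synergy to ship the memo."
violations = find_buzzwords(draft)
# Non-empty list -> send the draft back with "revise without these words".
```

A check like this catches violations instantly, without re-reading every draft yourself.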
```mermaid
graph LR
    Goal[Vague Goal] --> Generic[Generic Output]
    Criteria[Specific Rubric] --> Target[Targeted Output]
```
3. Self-Grading
Ask the AI to grade itself before showing you the result.
- "Write the draft. Then, below the draft, provide a self-assessment of how well you followed my instructions on a scale of 1-10."
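If the reply ends with a self-assessment, you can pull the score out and re-prompt automatically when it falls below your bar. A sketch, assuming the model reports its score in a "N/10" form (the reply text and threshold are illustrative):

```python
import re

def extract_self_score(reply: str):
    """Return the first 'N/10' self-score found in the reply, or None."""
    match = re.search(r"(\d{1,2})\s*/\s*10", reply)
    return int(match.group(1)) if match else None

reply = "Here is the draft...\n\nSelf-assessment: 8/10 - followed all constraints."
score = extract_self_score(reply)
if score is None or score < 9:
    pass  # ask the model to revise and re-grade
```

Self-reported scores are not objective, but a low one is a useful signal that the model itself noticed a missed instruction.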
4. Expected Formats (The "Shape")
- "Provide the answer in a bulleted list of exactly 4 items. Each item must start with a verb."
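Shape constraints are the easiest expectations to verify in code. A minimal sketch for the example above; note that reliably checking "starts with a verb" needs NLP, so this version only checks the bullet count and that each item starts with a capitalized word:

```python
# Sketch: validate the "shape" of a response - exactly N markdown bullets.
def check_shape(text: str, expected_items: int = 4) -> bool:
    """True if the text has exactly expected_items bullets, each capitalized."""
    bullets = [line for line in text.splitlines() if line.lstrip().startswith("- ")]
    if len(bullets) != expected_items:
        return False
    return all(line.lstrip("- ").strip()[:1].isupper() for line in bullets)

good = "- Plan the launch\n- Draft the copy\n- Review with legal\n- Ship it"
check_shape(good)  # exactly 4 capitalized bullets
```

If the check fails, the fix is a one-line follow-up: "Reformat your answer as exactly 4 bullets."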
Hands-on: The Rubric Challenge
- Task: "Write a 1-sentence sales pitch for a pencil."
- Criteria: It must be funny, under 10 words, and mention the eraser.
- Your Role: Act as the judge. Did it meet all three criteria? If not, ask it to try again.
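Two of the three challenge criteria can be checked mechanically; humor still needs a human judge (or a second model acting as one). A sketch, with a hypothetical pitch as input:

```python
# Sketch: auto-check the measurable criteria from the pencil challenge.
def judge_pitch(pitch: str) -> dict:
    """Check the two objective criteria; 'funny' is left to the human judge."""
    return {
        "under_10_words": len(pitch.split()) < 10,
        "mentions_eraser": "eraser" in pitch.lower(),
    }

pitch = "Mistakes happen; our eraser forgives them all."
judge_pitch(pitch)  # both checks pass - funniness is your call
```

Splitting criteria into "machine-checkable" and "human-judged" like this is a good habit for any rubric you write.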
Key Takeaways
- Rubrics give the AI a concrete map of what "good" looks like.
- Use Anti-Examples to steer it away from bad habits.
- Criteria-based prompts are far more reliable than emotional ones like "make it better."