Module 3 Lesson 1: Prompt Fundamentals
The Art of Instruction. Learning the basic principles of prompt engineering: Context, Task, and Format.
Prompt Fundamentals: Talking to the Brain
A prompt is the Source Code of the AI age. Just as you learn syntax for Python, you must learn the "Grammar" of prompts to get the best results from an LLM.
1. The Three Pillars of a Great Prompt
- Context: Who should the AI be? ("You are a senior data scientist").
- Task: What exactly should it do? ("Analyze these three trends").
- Format: How should the answer look? ("Output as a JSON list").
2. Zero-Shot vs. Few-Shot
- Zero-Shot: Asking a question with no examples. ("What is 5+5?").
- Few-Shot: Giving the model 2-3 examples of how to answer.
- Ex: "Apple is a fruit. Banana is a fruit. Salmon is a..."
- The model learns the pattern and answers "fish."
3. Delimiters (Module 13 Security)
Use delimiters like ### or --- to separate your instructions from the user's data.
- "Summarize the text below.
YOUR TEXT HERE
---"*
This prevents the model from getting confused if the YOUR TEXT HERE contains its own instructions.
4. Visualizing the Prompt Construction
graph TD
Persona[Persona: You are a Historian] --> Core[Core Input]
Task[Task: Summarize 19th Century] --> Core
Format[Format: 3 Bullet Points] --> Core
Core --> LLM[Model Process]
LLM --> Result[Perfect Response]
5. Why Hardcoding is Bad
If you write: model.invoke("Tell me a joke about robots"), you have a hardcoded string. If you want to change "robots" to "lasagna," you have to edit the code. Prompt Templates (Lesson 2) solve this by allowing variables.
Key Takeaways
- Prompting is a technical skill, not just "talking."
- Always include Persona, Task, and Format.
- Few-shotting is the fastest way to improve accuracy for complex tasks.
- Delimiters protect your prompt from confusing user data.