Module 2 Lesson 2: Chat vs. Completion Models
Messages vs. Strings. Understanding the different ways LLMs process input and why Chat models are the modern standard.
Chat vs. Completion: The Language of Messages
In the early days of LLMs (GPT-3), we used "Completion" models. You gave them a string of text, and they "finished" it. Today, we use "Chat" models. You give them a list of Messages, and they respond to them.
1. Completion Models (The Old Way)
- Input: "The capital of France is"
- Output: " Paris."
- LangChain class: OpenAI() (Legacy)
2. Chat Models (The Modern Way)
- Input: A structured list of messages, each tagged with a speaker role.
- Output: An AIMessage object.
- LangChain class: ChatOpenAI()
3. The 3 Core Message Types
In LangChain, every conversation is built using these three types:
| Type | Role | Purpose |
|---|---|---|
| SystemMessage | The Instructions | Setting the "Identity" (e.g., "You are a helpful pirate"). |
| HumanMessage | The User | The specific query from the person chatting. |
| AIMessage | The Model | The response generated by the AI. |
4. Code Example: Structured Conversation
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

# Assumes OPENAI_API_KEY is set in the environment.
model = ChatOpenAI()

messages = [
    SystemMessage(content="You are a professional chef. Answer briefly."),
    HumanMessage(content="How do I cook a perfect steak?")
]

response = model.invoke(messages)  # returns an AIMessage
print(response.content)
```
5. Why Roles Matter
By separating the System instruction from the Human input, the model can better distinguish trusted instructions (System) from an untrusted query (Human). This is the first line of defense against prompt injection (Module 13).
Key Takeaways
- Chat Models are the industry standard for reasoning and interaction.
- System Messages set the ground rules and persona.
- Human Messages carry the user's specific request.
- Always use ChatOpenAI (or an equivalent chat model class) for modern agentic applications.