Module 2 Lesson 2: Chat vs. Completion Models

Messages vs. Strings. Understanding the different ways LLMs process input and why Chat models are the modern standard.

Chat vs. Completion: The Language of Messages

In the early days of LLMs (e.g., GPT-3), we used "Completion" models: you gave them a string of text, and they "finished" it. Today, we use "Chat" models: you give them a list of messages, and they respond to them.

1. Completion Models (The Old Way)

  • Input: "The capital of France is"
  • Output: " Paris."
  • LangChain class: OpenAI() (Legacy)

2. Chat Models (The Modern Way)

  • Input: A structured list of role-tagged messages (system, human, AI).
  • Output: An AIMessage object.
  • LangChain class: ChatOpenAI()
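The difference between the two input shapes is easiest to see side by side. A minimal sketch using plain Python data structures that mirror what these classes ultimately send over the wire (the prompt and messages are illustrative):

```python
# Completion-style input: one raw string the model "finishes".
completion_input = "The capital of France is"

# Chat-style input: a list of role-tagged messages. This is the
# role/content format LangChain's message classes serialize to.
chat_input = [
    {"role": "system", "content": "You are a concise geography tutor."},
    {"role": "user", "content": "What is the capital of France?"},
]

print(type(completion_input).__name__)           # str
print([m["role"] for m in chat_input])           # ['system', 'user']
```

The string carries no structure at all; the list carries a role label on every turn, which is what the rest of this lesson builds on.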

3. The 3 Core Message Types

In LangChain, every conversation is built using these three types:

| Type | Role | Purpose |
| --- | --- | --- |
| SystemMessage | The Instructions | Sets the "identity" (e.g., "You are a helpful pirate"). |
| HumanMessage | The User | The specific query from the person chatting. |
| AIMessage | The Model | The response generated by the AI. |

4. Code Example: Structured Conversation

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")  # any chat model class works here

messages = [
    SystemMessage(content="You are a professional chef. Answer briefly."),
    HumanMessage(content="How do I cook a perfect steak?")
]

response = model.invoke(messages)  # returns an AIMessage
print(response.content)

5. Why Roles Matter

By separating the System instruction from the Human input, the model can distinguish trusted ground rules (System) from untrusted queries (Human). This separation is the first line of defense against prompt injection (covered in Module 13).


Key Takeaways

  • Chat Models are the industry standard for reasoning and interaction.
  • System Messages set the ground rules and persona.
  • Human Messages carry the user's specific request.
  • Always use ChatOpenAI (or equivalent) for modern agentic applications.
