Module 4 Lesson 1: What are Chains?
The Connection Logic. Understanding how LangChain 'links' prompts, models, and output parsers into a single executable object.
What are Chains? The "Chain" in LangChain
In the previous modules, we handled Prompts and Models as separate variables.
# Manual hand-off: format the prompt yourself, then call the model yourself
text = prompt.format(topic="...")
response = model.invoke(text)
This is fine for one step. But what if you have 10 steps? What if you want to add a logging step, a parsing step, and a database step? Manual management becomes a mess. Chains allow you to "Pipe" these steps together into a single, reusable object.
1. The Unix Philosophy
LangChain follows the Unix "Pipe" philosophy: Input | Step1 | Step2 | Output.
The output of the Prompt becomes the input of the Model. The output of the Model becomes the input of the Parser.
2. Introducing LCEL (LangChain Expression Language)
LCEL is the modern way to build chains. It uses the | (pipe) operator, and it is more readable and composable than the legacy LLMChain classes it replaces.
# The LCEL Way
chain = prompt | model | parser
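Here is a complete, runnable version of that chain. This is a minimal sketch: it assumes the langchain-openai package and an OPENAI_API_KEY in your environment, but any chat model can be swapped in.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # assumption: OpenAI as the model provider

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-4o-mini")  # assumes OPENAI_API_KEY is set
parser = StrOutputParser()

chain = prompt | model | parser
print(chain.invoke({"topic": "bears"}))  # -> a plain string, not an AIMessage

Notice that you call invoke() once on the whole chain; LangChain handles every intermediate hand-off for you.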
3. Visualizing the Pipeline
graph LR
Input[User Variable] --> P[Prompt Template]
P -->|Formatted String| M[LLM Model]
M -->|AIMessage| O[Output Parser]
O --> Final[Clean Python Object]
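You do not have to draw this diagram by hand. As a quick sketch (assuming the chain built in the previous section, plus the optional grandalf package for ASCII rendering), an LCEL chain can print its own structure:

chain.get_graph().print_ascii()  # prints the Prompt -> Model -> Parser pipeline as an ASCII graph (pip install grandalf)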
4. Why use Chains instead of simple Python calls?
- Streaming: If you pipe a prompt to a model, the resulting chain supports token streaming out of the box (see the sketch after this list).
- Async: Chains are natively asynchronous.
- Observability: Tools like LangSmith can "see" inside the pipe and show you where it broke.
- Parallelism: Chains can automatically run independent steps in parallel.
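A short sketch of the first two benefits, reusing the chain from earlier (assumed to still be in scope):

import asyncio

# Streaming: chunks arrive as the model generates them.
for chunk in chain.stream({"topic": "cats"}):
    print(chunk, end="", flush=True)

# Async: the same chain, awaited with ainvoke() instead of invoke().
async def main():
    print(await chain.ainvoke({"topic": "dogs"}))

asyncio.run(main())

Nothing extra had to be written to get these behaviors; they come from the Runnable interface that every LCEL chain implements.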
5. The "Standard" Chain
The most common chain you will ever build consists of three parts (a runnable sketch follows the list):
- Prompt: Converts variables to a string.
- Model: Converts a string to an AI response.
- StrOutputParser: Converts an AI response into a simple string.
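To see each hand-off explicitly, you can invoke the three stages one at a time (a sketch reusing the prompt, model, and parser objects defined earlier):

prompt_value = prompt.invoke({"topic": "space"})  # PromptValue: the formatted prompt/messages
ai_message = model.invoke(prompt_value)           # AIMessage: the raw model response
text = parser.invoke(ai_message)                  # str: the clean final output
print(text)

chain.invoke() performs exactly these three calls for you, passing each output along as the next step's input.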
Key Takeaways
- Chains are the fundamental unit of work in LangChain.
- LCEL is the language used to build these chains using the | operator.
- Chains automate the "hand-off" between different AI components.
- Using chains enables Streaming, Async, and Tracing automatically.