Module 4 Lesson 4: Router Chains
Dynamic decision making: how to use an LLM to decide which sub-chain should handle a specific user request.
Router Chains: The Branching Brain
If you have a complex application, you might have different prompts for different topics.
- Topic A: Math questions (needs strict logic).
- Topic B: Poetry (needs creativity).
- Topic C: Physics (needs formulas).
A Router Chain is a system where a "Router" LLM first decides which category the user's question belongs to, and then "Routes" the question to the correct specialized chain.
1. Why use a Router?
If you try to make one "God Prompt" that handles Math, Poetry, and Physics all at once, it tends to be mediocre at all of them. A focused, specialized prompt is almost always more accurate for its own topic.
2. Implementing a Basic Router with LCEL
In LCEL, we use RunnableBranch, or a plain Python function, to route. The sketch below assumes a chat model from langchain_openai, but any chat model works:
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableBranch
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")  # any chat model works here

# 1. Define specialized chains
math_chain = ChatPromptTemplate.from_template("Solve this math problem: {input}") | model
poetry_chain = ChatPromptTemplate.from_template("Write a poem about: {input}") | model
general_chain = ChatPromptTemplate.from_template("Answer this: {input}") | model

# 2. Define the routing logic: the first matching condition wins,
#    and the final bare chain is the default branch
branch = RunnableBranch(
    (lambda x: "math" in x["topic"].lower(), math_chain),
    (lambda x: "poem" in x["topic"].lower(), poetry_chain),
    general_chain,
)

# 3. Create a classifier chain that produces the 'topic' string
classifier = ChatPromptTemplate.from_template("Classify this query as 'math', 'poem', or 'general': {input}") | model | StrOutputParser()

# 4. Full pipeline: run the classifier, pass the original input through, then branch
full_chain = {"topic": classifier, "input": lambda x: x["input"]} | branch
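A quick sanity check, assuming the full_chain defined above and a configured model (the queries are hypothetical and the exact answers will vary by model):

# The classifier should label this query 'math', so the branch sends it to math_chain.
result = full_chain.invoke({"input": "What is 7 * 8?"})
print(result.content)

# A creative request should be labeled 'poem' and routed to poetry_chain.
result = full_chain.invoke({"input": "Write a poem about the sea"})
print(result.content)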
3. Visualizing the Router
graph TD
User[Human Query] --> Router[Router Model]
Router -->|math| C1[Math Specialist Prompt]
Router -->|poem| C2[Poetry Specialist Prompt]
Router -->|general| C3[General Prompt]
C1 --> Result
C2 --> Result
C3 --> Result
4. The "Default" Route
Always define a General Chain. If the user asks something that the Router doesn't recognize (e.g., "Hi"), the system should fall back to a safe, general-purpose prompt rather than crashing or picking a random specialist.
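If you prefer plain Python over RunnableBranch, the same fallback behavior can be written as an ordinary routing function. This is a sketch reusing the chains and classifier defined above:

from langchain_core.runnables import RunnableLambda

def route(info: dict):
    """Pick a specialist chain, falling back to general_chain for anything else."""
    topic = info["topic"].lower()
    if "math" in topic:
        return math_chain
    if "poem" in topic:
        return poetry_chain
    return general_chain  # safe default for "Hi", typos, unrecognized topics, etc.

# When the wrapped function returns a Runnable, that Runnable is invoked
# with the same {"topic", "input"} dict.
routed_chain = {"topic": classifier, "input": lambda x: x["input"]} | RunnableLambda(route)

Because the default lives in one place at the end of the function, an unrecognized topic can never slip through to a random specialist.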
5. Engineering Tip: Semantic Routing
Rather than looking for specific "words" (like "math"), you can use Embeddings (Module 6) to perform Semantic Routing. This is much more robust because it understands the "meaning" of the question even if the specific label isn't present.
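As a preview, here is a minimal sketch of the idea, assuming an embeddings model such as OpenAIEmbeddings and illustrative route descriptions: embed a short description of each route once, embed the incoming query, and pick the route whose description is most similar.

import numpy as np
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # assumption: any embeddings model works here

# One short description per route; these strings are illustrative.
routes = {
    "math": "Arithmetic, algebra, equations, and numerical problem solving.",
    "poem": "Creative writing, poetry, verse, and imagery.",
    "general": "Anything else: greetings, chit-chat, general knowledge.",
}
route_names = list(routes)
route_vectors = embeddings.embed_documents(list(routes.values()))

def semantic_route(query: str) -> str:
    """Return the route whose description is closest in meaning to the query."""
    q = embeddings.embed_query(query)
    sims = [
        np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
        for v in route_vectors
    ]
    return route_names[int(np.argmax(sims))]

# "What rhymes with orange?" contains neither 'math' nor 'poem',
# but semantically it should land on the 'poem' route.
print(semantic_route("What rhymes with orange?"))

The chosen route name can feed the same branch or routing function shown earlier, and adding a new specialist only requires adding another description.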
Key Takeaways
- Routing enables specialization in multi-topic applications.
- RunnableBranch (or a custom routing function) is the LCEL way to handle "If/Else" logic.
- Specialized chains are more accurate than "God Prompts."
- Always include a Fallback/Default route for unrecognized queries.