Module 7 Lesson 3: Building the RAG Chain

Piping Facts. Putting it all together into a single LCEL chain that retrieves context and generates an answer.

The RAG Chain: Connecting the Pipes

Now we combine Module 4 (Chains) with Module 7 (Retrieving). We will build a single pipeline that takes a user's question, finds the relevant facts, and returns a grounded answer. This pattern sits at the heart of most enterprise RAG applications.

1. The LCEL Composition

We use a small "Source Mapper" helper function to format the retrieved documents into a single block of text for the prompt.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# `retriever`, `prompt`, and `model` were built in the previous lessons.

# 1. Utility function to join documents into one context string
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# 2. The RAG Chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# 3. Execution
rag_chain.invoke("What is our return policy?")

2. Breaking Down the Flow

  1. retriever | format_docs: The question is sent to the retriever, which finds the top-k documents; format_docs then joins them into a single string.
  2. RunnablePassthrough(): Passes the original user question through untouched, so the prompt receives it under the question key.
  3. prompt: Receives context and question, and renders the message list.
  4. model | StrOutputParser(): The LLM answers the question, and the parser extracts the clean text.
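If the LCEL operators feel opaque, the same data flow can be written out in plain Python. This is a hedged sketch with stand-in retriever and prompt pieces (all hypothetical, no LangChain required), meant only to show what the dictionary-construction step is doing:

```python
# Stand-in document type: only needs a page_content attribute.
class Doc:
    def __init__(self, page_content):
        self.page_content = page_content

def retriever(question):
    # Pretend top-k retrieval: always return the same two documents.
    return [Doc("Returns accepted within 30 days."),
            Doc("Refunds are issued to the original payment method.")]

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

def prompt(inputs):
    return (f"Answer using only this context:\n{inputs['context']}\n\n"
            f"Question: {inputs['question']}")

def invoke(question):
    # Mirrors: {"context": retriever | format_docs, "question": RunnablePassthrough()}
    inputs = {"context": format_docs(retriever(question)),
              "question": question}
    return prompt(inputs)  # a real chain would continue: | model | parser

print(invoke("What is our return policy?"))
```

The LCEL version does the same thing, but gains streaming, batching, and tracing for free.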

3. Visualizing the Full Pipeline

graph LR
    User[Human Query] -->|Pipe| Dict[Dictionary Construction]
    Dict -->|question| P[Prompt]
    Dict -->|retriever| F[Format Documents]
    F -->|context| P
    P --> M[Model]
    M --> Out[Clean Answer]

4. Why this is superior to manual coding

  • Streaming: If you call rag_chain.stream(), the final answer will stream to the console in real-time, even though there's a retrieval step in the middle.
  • Traceability: In LangSmith, you will see exactly which documents were retrieved for the context variable.
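With StrOutputParser as the final step, stream() yields partial strings. The consumer loop looks like this (the chunk list below is a stand-in for a real rag_chain.stream(...) call):

```python
def consume_stream(chunks):
    # With a real chain: for chunk in rag_chain.stream("What is our return policy?")
    pieces = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # render tokens as they arrive
        pieces.append(chunk)
    print()
    return "".join(pieces)

consume_stream(["You can return ", "items within ", "30 days."])
```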

5. Engineering Tip: "I don't know"

Always check your prompt. If your retriever finds nothing, the context will be empty. Your prompt must handle this gracefully: "If context is empty, respond by saying you don't have that information."
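One way to bake that instruction in is directly in the prompt template. The wording below is an assumption, not a canonical LangChain prompt; adapt it to your domain:

```python
# A sketch of a prompt that tells the model how to behave when retrieval
# comes back empty. RAG_TEMPLATE and render_prompt are illustrative names.
RAG_TEMPLATE = """Answer the question using only the context below.
If the context is empty or does not contain the answer, reply:
"I don't have that information."

Context:
{context}

Question: {question}"""

def render_prompt(context, question):
    return RAG_TEMPLATE.format(context=context, question=question)

# Empty retrieval result: the fallback instruction is still in the prompt.
print(render_prompt("", "What is our warranty period?"))
```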


Key Takeaways

  • LCEL makes RAG development clean and declarative.
  • format_docs is a helper that prepares text for the model.
  • RunnablePassthrough ensures the model knows what was originally asked.
  • Streaming and Tracing are built-in features of this architecture.
