Module 4 Lesson 1: Agents in LangChain

The framework of choice. Understanding the high-level Agent abstractions in LangChain.

Agents in LangChain: The High-Level View

Until now, we have been building agents "manually" with Python loops and JSON parsers. While this is great for learning, it's exhausting for production. LangChain is the most popular framework for building AI applications because it provides standardized "Lego blocks" for agents.

1. The Core Abstraction: Agent

In LangChain, an Agent is not the loop itself. It is the reasoning component that takes the conversation history and decides the next action.

  • You pass it a list of Tools.
  • You pass it an LLM.
  • It outputs a ToolCall or a FinalResult.
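Conceptually, the agent is a function from history to a decision. Here is a minimal pure-Python sketch of that contract; the class names `ToolCall` and `FinalResult` mirror the terms above and stand in for LangChain's actual `AgentAction` and `AgentFinish` types, and the hard-coded `decide` logic is a toy placeholder for what the LLM does:

```python
from dataclasses import dataclass
from typing import Union

# Illustrative stand-ins for LangChain's AgentAction / AgentFinish.
@dataclass
class ToolCall:
    tool: str        # which tool to run next
    tool_input: str  # input for that tool

@dataclass
class FinalResult:
    output: str      # the answer to return to the user

def decide(history: list) -> Union[ToolCall, FinalResult]:
    """Toy 'agent': inspect the history and pick the next step.
    A real agent delegates this decision to the LLM."""
    if not any("Observation:" in msg for msg in history):
        return ToolCall(tool="search", tool_input="weather in Paris")
    return FinalResult(output="It is sunny in Paris.")

step = decide(["Human: What's the weather in Paris?"])
print(type(step).__name__)  # ToolCall on the first pass, before any observation
```

Everything else (running the tool, feeding the observation back, looping) lives outside this function, which is exactly the separation the next section covers.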

2. The AgentExecutor

To make the agent actually run in a loop, we wrap it in an AgentExecutor.

  • The Executor handles the "Thought-Action-Observation" while-loop.
  • It handles Error Messages (e.g., "The tool returned an error").
  • It handles iteration limits and timeouts (e.g., "Stop after 10 loops" via max_iterations).
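Stripped of details, the loop the Executor owns looks roughly like this. This is a hand-rolled sketch, not LangChain source; `agent_step` and the `tools` dict are hypothetical stand-ins:

```python
def run_agent_loop(agent_step, tools, user_input, max_iterations=10):
    """Sketch of the Thought-Action-Observation loop an AgentExecutor runs.
    `agent_step` returns either ("final", answer) or ("tool", name, args)."""
    scratchpad = []  # accumulated (decision, observation) pairs
    for _ in range(max_iterations):          # the "Stop after 10 loops" guard
        decision = agent_step(user_input, scratchpad)
        if decision[0] == "final":
            return decision[1]
        _, name, args = decision
        try:
            observation = tools[name](args)  # Action -> Observation
        except Exception as exc:
            observation = f"The tool returned an error: {exc}"
        scratchpad.append((decision, observation))
    return "Agent stopped: iteration limit reached."

# Usage with a toy agent that calls one tool, then answers:
tools = {"add": lambda args: args[0] + args[1]}

def agent_step(user_input, scratchpad):
    if scratchpad:
        return ("final", f"The sum is {scratchpad[-1][1]}")
    return ("tool", "add", (2, 3))

print(run_agent_loop(agent_step, tools, "What is 2 + 3?"))  # The sum is 5
```

Note that tool errors are caught and fed back to the agent as observations rather than crashing the loop, which is the same recovery behavior the Executor's error handling gives you for free.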

3. Types of Agents in LangChain

Depending on which model you use, you pick a different agent type:

  • OpenAI Functions Agent: Uses the native JSON tool-calling of OpenAI models. (Highly reliable.)
  • XML Agent: Best for Claude or models that prefer XML tags.
  • Structured Chat Agent: For models that are good at following complex instructions but don't have native "Function Calling."

4. Visualizing the LangChain Stack

graph TD
    User[Human Message] --> Exec[AgentExecutor]
    subgraph internal
        Exec --> Agent[Agent Reasoning]
        Agent --> LLM[LLM Brain]
        LLM --> Agent
        Agent --> Tools[Toolbox]
        Tools --> Agent
    end
    Exec --> Out[Final Response]

5. Code Example: A Minimal LangChain Agent

from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# 1. Define the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# 2. Define the Tools (Placeholder)
tools = [] 

# 3. Create the Prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# 4. Initialize the Agent
agent = create_openai_functions_agent(llm, tools, prompt)

# 5. Initialize the Executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 6. Run
# agent_executor.invoke({"input": "Hello agent!"})

Key Takeaways

  • LangChain separates the Reasoning (Agent) from the Execution (Executor).
  • AgentExecutor handles the repetitive "loop" boilerplate code.
  • The agent_scratchpad is where intermediate tool calls ("Thoughts") and tool results ("Observations") are injected on each loop iteration.
  • LangChain allows you to swap LLMs or Tools without rewriting your entire control loop.
