Implementing the Research Assistant

Turn your design into a working system. In this lesson, we write the Python code for the research nodes, integrate the tools, and handle the asynchronous logic of a multi-agent system.

Now we move from the whiteboard to the code. We will implement the skeleton of our Multi-Agent Research Assistant using LangGraph.

Note: For this capstone, we are focusing on the structural logic. In your own project, you would expand these functions with your specific API keys and database paths.


1. Setting up the Graph Environment

First, we define our shared state. This acts as the "Short-term Memory" of our research team.

from typing import TypedDict, List
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    query: str          # The user's original research question
    plan: List[str]     # Research steps produced by the planner
    context: List[str]  # Findings gathered by the researcher
    report: str         # Final write-up produced by the editor

2. Coding the Nodes

Each node is a Python function that takes the current state as input and returns a partial update to that state.

Node A: The Planner

def planner(state: AgentState):
    # Ask the LLM to break the query into three research steps
    response = llm.invoke(f"Break this into 3 research steps: {state['query']}")
    # Drop empty lines so the plan contains only real steps
    return {"plan": [step for step in response.content.split("\n") if step.strip()]}

Node B: The Researcher (Tool Use)

def researcher(state: AgentState):
    # Note: only the first plan step is handled here (see the exercise below)
    current_step = state['plan'][0]
    # Call the Vector DB search tool
    docs = vector_db.search(current_step)
    # Call the Web Search tool
    web_results = google_search.run(current_step)

    new_context = f"Findings for {current_step}: {docs} {web_results}"
    # Append to the existing context rather than overwriting it
    return {"context": state.get('context', []) + [new_context]}

3. Assembling the Orchestrator

This is where the magic happens. We connect the nodes using edges and define our logic flow.
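
The orchestrator below registers a summarize_report function as the editor node, which we haven't written yet. A minimal sketch, assuming the same llm client as the planner:

def summarize_report(state: AgentState):
    # Condense every research finding into a single report
    findings = "\n".join(state["context"])
    response = llm.invoke(
        f"Write a concise research report answering '{state['query']}' "
        f"using these findings:\n{findings}"
    )
    return {"report": response.content}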

workflow = StateGraph(AgentState)

# 1. Add the Nodes
workflow.add_node("planner", planner)
workflow.add_node("researcher", researcher)
workflow.add_node("editor", summarize_report)

# 2. Add the Edges (The Logic)
workflow.set_entry_point("planner")
workflow.add_edge("planner", "researcher")
workflow.add_edge("researcher", "editor")
workflow.add_edge("editor", END)

# 3. Compile the System
app = workflow.compile()
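
Once compiled, the whole pipeline runs with a single call. The query below is just an example:

initial_state = {
    "query": "What are the main approaches to multi-agent orchestration?",
    "plan": [],
    "context": [],
    "report": "",
}

final_state = app.invoke(initial_state)
print(final_state["report"])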

4. Handling Errors Gracefully

In a production system, a tool call will eventually fail. Wrap your researcher node in a try/except block: if an API is down, the agent shouldn't crash; it should return a "Mock Observation" along the lines of "I couldn't reach the web right now, I will proceed with local data."
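
A minimal sketch of that pattern, reusing the researcher node from step 2 (the fallback message is illustrative):

def researcher(state: AgentState):
    current_step = state['plan'][0]
    # Local data is assumed to be reliable; the web search is the risky call
    docs = vector_db.search(current_step)
    try:
        web_results = google_search.run(current_step)
    except Exception:
        # Mock observation: keep the graph moving instead of crashing
        web_results = "I couldn't reach the web right now, I will proceed with local data."
    new_context = f"Findings for {current_step}: {docs} {web_results}"
    return {"context": state.get('context', []) + [new_context]}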


5. Integrating the Client UI

Finally, you can wrap the compiled app in a FastAPI endpoint. This lets you build a React frontend that shows the "Thoughts" of the agent in real time as it works through the graph.
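
A minimal sketch, assuming the compiled app from step 3 is importable; the route name and request model are placeholders:

from fastapi import FastAPI
from pydantic import BaseModel

api = FastAPI()

class ResearchRequest(BaseModel):
    query: str

@api.post("/research")
async def run_research(request: ResearchRequest):
    # ainvoke runs the compiled graph asynchronously
    state = {"query": request.query, "plan": [], "context": [], "report": ""}
    final_state = await app.ainvoke(state)
    return {"report": final_state["report"]}

To stream the agent's intermediate "Thoughts" instead of waiting for the final report, LangGraph's astream method can push each state update to the client as it is produced.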


Summary of Implementation

  • State is the source of truth.
  • Nodes are simple Python functions that update the state.
  • Edges define the sequencing.
  • FastAPI is your bridge to the user.

In the final lesson of the course, we look at Deployment and Presentation, learning how to show your work to the world (and potential employers!).


Exercise: The Logic Puzzle

Look at the researcher code above.

  1. What happens if the planner returns 10 steps instead of 3?
  2. How would you modify the graph edges to "Loop" over the researcher node until every step in the plan is finished?

Answer Logic:

  1. Only one step gets researched. The code always takes the first step (state['plan'][0]), so the remaining nine steps are never processed before the graph moves on to the editor.
  2. The Loop: Add a Conditional Edge after the researcher. It checks whether the number of entries in context matches the number of entries in plan; if not, it loops back to the researcher (see the sketch below).
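
A sketch of that conditional edge; the should_continue helper is hypothetical, and the researcher would also need to read state['plan'][len(state['context'])] instead of always taking the first step:

def should_continue(state: AgentState) -> str:
    # One context entry is produced per completed plan step
    if len(state["context"]) < len(state["plan"]):
        return "researcher"
    return "editor"

# Replaces the fixed researcher -> editor edge
workflow.add_conditional_edges("researcher", should_continue)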
