Module 6 Lesson 5: Entry, Exit, and Guard Nodes

Managing the perimeter. Using specialized nodes to filter inputs and validate final results.

Entry, Exit, and Guard Nodes: Protecting the Graph

A graph is like a building: if you don't control the entrance and the exit, it isn't secure. In LangGraph, we use specialized nodes to ensure that only valid data enters our agent's brain and only high-quality data leaves it.

1. The Entry Point: Input Sanitization

Don't let the user's raw prompt hit your expensive LLM brain immediately. Use an Entry Node to validate the input.

  • Check: Is the prompt a "Jailbreak" attempt?
  • Check: Is the prompt asking about a topic we are forbidden from discussing?
  • Action: If the input is bad, route immediately to an Exit_Node with an error message, bypassing the expensive AI reasoning.
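The jailbreak check above can start as simple pattern matching before any LLM is involved. A minimal sketch (the patterns and the `looks_like_jailbreak` helper are illustrative, not part of LangGraph):

```python
import re

# Hypothetical entry check: flag common jailbreak phrasings before the LLM runs.
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"pretend (you are|to be)",
    r"developer mode",
]

def looks_like_jailbreak(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in JAILBREAK_PATTERNS)
```

In practice you would combine cheap checks like this with a moderation model, but a regex pass catches the obvious cases at near-zero cost.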

2. Guard Nodes: The Internal Checkpoints

A Guard Node doesn't "do" any work of its own. It only checks the work done by a previous node.

  • Example: An Agent_Node writes some Python code. The Guard_Node then runs a linter or a security scanner on that code before it is actually executed.
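As a sketch of that example, a guard node could use Python's built-in `ast` module as a cheap stand-in for a real linter or security scanner (the `draft_code` state key and the `code_guard` name are hypothetical):

```python
import ast

def code_guard(state: dict) -> dict:
    """Hypothetical guard node: validate agent-written Python before execution."""
    code = state["draft_code"]

    # Syntax check: ast.parse stands in for a full linter run
    try:
        ast.parse(code)
    except SyntaxError as err:
        return {"code_ok": False, "error": f"Syntax error: {err}"}

    # Naive security scan: reject obviously dangerous constructs
    banned = ("os.system", "subprocess", "eval(", "exec(")
    if any(token in code for token in banned):
        return {"code_ok": False, "error": "Blocked: potentially unsafe call"}

    return {"code_ok": True}
```

A conditional edge can then route `code_ok: False` back to the agent for a rewrite, exactly as the entry guard routes unsafe input to an exit.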

3. The Exit Node: Final Verification

Before you show the answer to the user, the Exit Node performs one last check.

  • Fact-Checking: Does the answer align with our RAG documents?
  • Format Check: If the user asked for "ONLY JSON," does the final text contain only valid JSON?
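A minimal exit check for the JSON case might look like this (the `final_answer` and `wants_json` state keys are assumptions for illustration):

```python
import json

def exit_guard(state: dict) -> dict:
    """Hypothetical exit node: verify the final answer before it reaches the user."""
    answer = state["final_answer"]

    # Format check: if the user asked for JSON only, the text must parse as JSON
    if state.get("wants_json"):
        try:
            json.loads(answer)
        except json.JSONDecodeError:
            return {"verified": False, "feedback": "Output is not valid JSON; refine."}

    return {"verified": True}
```

The `feedback` field gives the LLM node something concrete to act on when the graph loops back for refinement.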

4. Visualizing the Perimeter

graph LR
    START --> GuardIN[Entry Guard: Check for Safety]
    GuardIN -- Pass --> Brain[LLM Thinking]
    GuardIN -- Fail --> ERR[Exit: Forbidden Topic]
    Brain --> GuardOUT[Exit Guard: Check Factuality]
    GuardOUT -- "Bad: refine thought" --> Brain
    GuardOUT -- Good --> END

5. Code Example: A Safety Guard Node

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    is_safe: bool

def input_guard(state: State):
    user_input = state["messages"][-1].content
    forbidden_words = ["hack", "illegal", "exploit"]

    if any(word in user_input.lower() for word in forbidden_words):
        # Short-circuit: record a refusal and mark the state as unsafe
        return {"messages": ["Error: I cannot assist with that request."], "is_safe": False}

    return {"is_safe": True}

# In the graph configuration: route on the flag the guard just set
workflow = StateGraph(State)
workflow.add_conditional_edges(
    "input_guard",
    lambda state: "continue" if state["is_safe"] else "end",
    {"continue": "agent_node", "end": END},
)

Key Takeaways

  • Entry Points are the first line of defense against bad prompts.
  • Guard Nodes act as internal "Quality Gates" within the graph.
  • Exit Verification prevents hallucinations from reaching the end user.
  • Separating Safety into its own node makes the overall system more modular and easier to update.
