
Guarding Graph Boundaries
Protecting your graph from bad actors and bad logic.
The boundary is where untrusted user input meets the model, and where the model's output meets the user. This is the danger zone.
The Sandwich Pattern
Wrap your core logic in guards: check every request on the way in and every answer on the way out.
```mermaid
graph TD
    Input --> InputGuard{Safe?}
    InputGuard -- No --> Block[Block Request]
    InputGuard -- Yes --> CoreAgent[Core Reasoning]
    CoreAgent --> OutputGuard{Safe Output?}
    OutputGuard -- Yes --> User
    OutputGuard -- No --> Rewrite[Rewrite Logic]
    Rewrite --> OutputGuard
```
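The same flow can be written as plain control flow. Here is a minimal sketch in Python, assuming you supply your own `input_guard`, `core_agent`, `output_guard`, and `rewrite` callables; it is framework-agnostic rather than tied to any particular graph library.

```python
from typing import Callable

MAX_REWRITES = 2  # bound the rewrite loop so an unsafe answer can't cycle forever


def run_sandwich(
    message: str,
    input_guard: Callable[[str], bool],
    core_agent: Callable[[str], str],
    output_guard: Callable[[str], bool],
    rewrite: Callable[[str], str],
) -> str:
    """Route one request through the guard sandwich shown in the diagram."""
    # Input guard: block the request before the core agent ever sees it.
    if not input_guard(message):
        return "Request blocked: the input failed the safety check."

    # Core reasoning: the expensive LLM call only happens for vetted input.
    answer = core_agent(message)

    # Output guard: re-check after every rewrite, up to a fixed budget.
    attempts = 0
    while not output_guard(answer):
        if attempts >= MAX_REWRITES:
            return "Response withheld: a safe answer could not be produced."
        answer = rewrite(answer)
        attempts += 1
    return answer
```

Bounding the rewrite loop matters: without MAX_REWRITES, a stubbornly unsafe output would bounce between the output guard and the rewrite node forever, which is exactly the cycle the diagram shows.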
Input Guards (Pre-Processing)
Before the core LLM ever sees the message, check it for PII and prompt-injection attempts using a cheap classifier model or a regex pass.
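As a sketch, a first-pass filter can be nothing more than a few regexes plus a phrase blocklist, escalating to a cheap classifier only when needed. The patterns below are illustrative placeholders, not a production PII or injection detector:

```python
import re

# Illustrative patterns only; a real deployment would use a vetted PII detector
# and an injection classifier rather than hand-rolled regexes.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped numbers
    re.compile(r"\b\d{13,16}\b"),          # credit-card-length digit runs
]
INJECTION_PHRASES = [
    "ignore previous instructions",
    "reveal your system prompt",
]


def input_guard(message: str) -> bool:
    """Return True if the message looks safe to hand to the core agent."""
    if any(pattern.search(message) for pattern in PII_PATTERNS):
        return False
    lowered = message.lower()
    return not any(phrase in lowered for phrase in INJECTION_PHRASES)
```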
Output Guards (Post-Processing)
Before the answer is shown to the user, make sure the agent didn't leak secrets (API keys, credentials) or generate harmful content.
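Harmful-content checks usually need a moderation model, but secret leaks can often be caught with simple patterns. A sketch of such a check, scanning the draft answer for secret-shaped strings; the patterns are examples and should be tuned to the credentials your own stack actually uses:

```python
import re

# Example leak signatures; extend this list with the key formats your stack uses.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
]


def output_guard(answer: str) -> bool:
    """Return True if the draft answer is safe to show to the user."""
    return not any(pattern.search(answer) for pattern in SECRET_PATTERNS)
```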