Defining Agent Responsibilities: The Scope of Agency

Learn how to define the 'Role' of your agent to prevent scope creep and logic failures. Master the art of the System Prompt as a mission statement.

Defining Agent Responsibilities

Building an agent starts with a definition of Scope. If you tell an agent "You are an assistant that can do anything," it will likely do nothing well. In production, we build Specialized Agents.

In this lesson, we will learn how to define the "Mission Statement" for your agent and how to set the boundaries that keep it from wandering into irrelevant or dangerous logic.


1. The Persona as a Constraint

The System Prompt is not just an instruction; it is the "DNA" of your agent. A well-defined persona acts as a natural constraint on the model's reasoning.

The "Generalist" Mistake (Bad)

"You are a helpful AI assistant. Answer the user's questions using your tools."

Result: The agent might try to use a "Calculator" tool to answer a question about "French History," leading to tool errors and confusion.

The "Specialist" Pattern (Good)

"You are a Senior Security Auditor for Cloud Infrastructure. Your goal is to identify open S3 buckets and IAM misconfigurations. You ONLY use the provided security tools. If a user asks a question about unrelated topics (like weather or recipes), politely decline and redirect them to their security goals."


2. The Three Dimensions of Responsibility

When designing your agent, you must answer these three questions:

1. What is the Goal? (The Policy)

Is the agent trying to "Solve a ticket," "Write a report," or "Monitor a stream"?

  • Goal-Oriented: Stops when the task is done.
  • Process-Oriented: Continuously monitors and alerts. (Both loop shapes are sketched below.)
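
To make the two loop shapes concrete, here is a minimal sketch in plain Python. The agent_step, is_done, check_stream, and send_alert callables are hypothetical stand-ins, not calls from any specific library:

import time

def run_goal_oriented(agent_step, is_done, max_turns=10):
    # Goal-oriented: loop until the task is complete, then stop.
    for _ in range(max_turns):
        result = agent_step()
        if is_done(result):
            return result  # Terminates once the goal is reached.
    raise TimeoutError("Agent did not finish within its turn budget.")

def run_process_oriented(check_stream, send_alert, poll_seconds=30):
    # Process-oriented: never "finishes"; it watches and alerts.
    while True:
        event = check_stream()
        if event is not None:
            send_alert(event)
        time.sleep(poll_seconds)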

2. What is the Authority? (The Power)

Can the agent actually execute an action (for example, place a trade), or does it only propose the action for a human to sign off on?

  • Advisory Agent: Proposes actions for human approval.
  • Active Agent: Executes actions directly. (A minimal approval-gate sketch follows below.)
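
A common way to encode this boundary is an approval gate between proposal and execution. Below is a minimal sketch; the ProposedAction shape, the input-based approval, and the executors mapping are illustrative assumptions, not a specific framework's API:

from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    args: dict

def require_approval(action: ProposedAction) -> bool:
    # In production this might be a ticket, a Slack button, or a review UI.
    answer = input(f"Approve {action.name}({action.args})? [y/N] ")
    return answer.strip().lower() == "y"

def run_action(action: ProposedAction, executors: dict, advisory: bool = True):
    # Advisory agents stop at the proposal; active agents execute directly.
    if advisory and not require_approval(action):
        return {"status": "rejected", "action": action.name}
    return executors[action.name](**action.args)

The same agent logic can then be promoted from Advisory to Active by flipping a single flag, rather than by rewriting its prompt.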

3. What is the Knowledge? (The Context)

Does the agent have access to the whole company wiki, or just a specific project folder?
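
One concrete way to enforce a knowledge boundary is at the tool level rather than in the prompt. This is a minimal sketch using only the Python standard library; the project path is a made-up example:

from pathlib import Path

PROJECT_ROOT = Path("/srv/projects/pricing-study").resolve()

def read_project_file(relative_path: str) -> str:
    # Refuse to read anything outside the agent's assigned folder.
    target = (PROJECT_ROOT / relative_path).resolve()
    if not target.is_relative_to(PROJECT_ROOT):  # Python 3.9+
        raise PermissionError(f"{relative_path} is outside the agent's scope.")
    return target.read_text()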


3. Creating the "Agent Mission Statement"

A good mission statement for an agent follows this formula: "You are a [Role]. Your objective is to [Goal] by using [Tools] while adhering to [Constraints]." (A template sketch follows the example below.)

Example: The Research Agent

  • Role: Technical Market Researcher.
  • Goal: Analyze competitor pricing and features.
  • Tools: Google Search, Jina Reader (for scraping), and a Spreadsheet tool.
  • Constraints: Never use internal proprietary data. Always cite sources.
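
To keep the formula consistent across a fleet of agents, you can fill it from structured fields. A small sketch in plain Python, using the Research Agent values above:

MISSION_TEMPLATE = (
    "You are a {role}. Your objective is to {goal} "
    "by using {tools} while adhering to these constraints: {constraints}"
)

research_agent_prompt = MISSION_TEMPLATE.format(
    role="Technical Market Researcher",
    goal="analyze competitor pricing and features",
    tools="Google Search, Jina Reader, and a Spreadsheet tool",
    constraints="never use internal proprietary data; always cite sources",
)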

4. Avoiding "Agentic Drift"

Agentic drift occurs when an agent takes a tool output and starts chasing a "Side Quest" that is irrelevant to the main goal.

How to Prevent Drift

  1. Instruction Recall: Remind the agent of its goal at every turn of the conversation.
  2. Deterministic Routing: (Module 2.3) If an agent tries to go off-track, the Orchestrator (LangGraph) can halt the edge transition and force it back to the "Planning" node, as the sketch below shows.
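
Here is a minimal routing sketch, assuming the langgraph package is installed. The route_step check and the node bodies are deliberately simplified placeholders; the point is the shape of the graph, where drift forces a transition back to the planning node:

from typing import TypedDict

from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    goal: str
    last_output: str

def planner(state: AgentState) -> AgentState:
    # Instruction Recall: re-anchor the agent on its goal.
    return {**state, "last_output": f"Plan for: {state['goal']}"}

def worker(state: AgentState) -> AgentState:
    return state  # Placeholder for the tool-using step.

def route_step(state: AgentState) -> str:
    # Illustrative drift check: does the output still mention the goal?
    on_track = state["goal"].lower() in state["last_output"].lower()
    return "continue" if on_track else "replan"

builder = StateGraph(AgentState)
builder.add_node("planner", planner)
builder.add_node("worker", worker)
builder.set_entry_point("planner")
builder.add_edge("planner", "worker")
builder.add_conditional_edges(
    "worker",
    route_step,
    {"continue": END, "replan": "planner"},  # Off-track? Back to planning.
)
graph = builder.compile()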

5. Implementation: The System Message

In LangChain, this is how we solidify the responsibility:

from langchain_core.messages import SystemMessage

# The persona doubles as a contract: role, primary job, tool policy, refusals.
AGENT_PERSONA = """
You are a Python Debugging Expert.
Your primary job is to fix syntax and logic errors in provided code snippets.
- Use the 'python_repl' tool only to verify fixes.
- If the code is not Python, explain that you only handle Python.
- Do not engage in casual conversation.
"""

system_message = SystemMessage(content=AGENT_PERSONA)
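
To put the persona into effect, prepend the system message to every model call. A minimal usage sketch, assuming the langchain-openai package and an OPENAI_API_KEY are available; the model name is just an example:

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # Example model; any chat model works.

response = llm.invoke([
    system_message,  # The persona anchors every turn.
    HumanMessage(content="Why does `for i in range(10) print(i)` fail?"),
])
print(response.content)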

Summary and Mental Model

Think of your agent like a Specialized Subcontractor.

  • You don't hire a plumber to fix your electrical wiring.
  • You give the plumber a specific contract: "Fix the leak in the kitchen."
  • You give them the tools to fulfill that contract.

By defining Responsibilities clearly, you reduce the "Reasoning Load" on the LLM, making it faster, cheaper, and much more accurate.


Exercise: Responsibility Design

  1. Design a Persona: Draft a system prompt for an agent whose job is to "Manage a developer's calendar and book 50-minute deep-work blocks."
    • Include 3 specific Constraints.
  2. Scope Creep: What happens if the calendar agent is asked: "Tell me a joke about time travel"?
    • How would you modify the prompt to ensure the agent doesn't waste tokens on a joke?
  3. Hierarchy: If you have two agents—a "Writer" and an "Editor"—what is the single most important "Responsibility" that the Editor has over the Writer?
