
Shared vs. Private Context: Managing the Common Ground
Learn the privacy and efficiency trade-offs of multi-agent memory. Master the art of 'Selective Knowledge Sharing' to prevent agent confusion and token bloat.
When multiple agents cooperate, they need a "Common Operating Picture." If one agent learns a crucial fact (e.g., "The server is down"), the other agents shouldn't have to re-discover it.
However, if you share everything with every agent (the "Global Shared Context" approach), you create a Context Avalanche: every turn for every agent becomes massively expensive, because each one has to read facts it doesn't need for its specific sub-job.
In this lesson, we learn Selective Knowledge Sharing. We’ll differentiate between Private Context (Agent-specific) and Shared Context (Global), creating a high-efficiency information ecosystem.
1. The Strategy of "Need-to-Know"
Information in a multi-agent system should be treated like a security clearance.
- Private Context: the agent's own internal reasoning, intermediate tool outputs, and draft work (typically ~90% of the data).
- Shared Context: final decisions, discovered system-wide facts, and changes to the user's core request (the remaining ~10%).
The Goal: Keep the Shared Context small so that it can be injected into all agents without breaking the bank.
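To make the split concrete, here is a minimal sketch of how a per-agent view might be assembled before each turn. The field names (private_log, shared_facts) and the 5-message window are illustrative assumptions, not a fixed schema.
Python Code: Assembling a need-to-know prompt
from typing import TypedDict

class AgentContext(TypedDict):
    # Per-agent view assembled before each turn (illustrative field names).
    private_log: list[str]   # this agent's own reasoning and tool output
    shared_facts: list[str]  # small, fleet-wide "need-to-know" facts

def build_prompt(ctx: AgentContext) -> str:
    # The shared block stays tiny; the private log carries the detail.
    shared = "\n".join(f"- {fact}" for fact in ctx["shared_facts"])
    private = "\n".join(ctx["private_log"][-5:])  # only the recent private turns
    return f"System Update (shared):\n{shared}\n\nYour recent work:\n{private}"

# The Coder sees the shared DB fact, but not the Researcher's raw search history.
coder_ctx: AgentContext = {
    "private_log": ["Wrote retry wrapper", "Ran unit tests: 2 failures"],
    "shared_facts": ["DB_TIMEOUT=5sec"],
}
print(build_prompt(coder_ctx))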
2. The Shared State Hub (Broadcast Pattern)
Instead of passing data directly between agents, route key facts through a Broadcast Hub (sketched in code after the diagram below).
- Agent A discovers an important fact.
- Agent A outputs: SHARE: DB_TIMEOUT=5sec
- The Supervisor adds this fact to the Global State.
- Agent B is invoked. Its prompt now includes: System Update: [DB_TIMEOUT=5sec]
graph TD
A1[Researcher] -- Private History --> A1
A2[Coder] -- Private History --> A2
A1 -->|Signal| HUB[Shared State Hub]
A2 -->|Signal| HUB
HUB -->|Brief Prefix| A1
HUB -->|Brief Prefix| A2
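Below is a sketch of the hub side of this pattern, assuming agents mark broadcastable facts with a SHARE: prefix exactly as above. The SharedStateHub class and its method names are illustrative.
Python Code: A minimal Shared State Hub
import re

# Matches lines an agent explicitly marked for broadcast.
SHARE_PATTERN = re.compile(r"^SHARE:\s*(.+)$", re.MULTILINE)

class SharedStateHub:
    def __init__(self) -> None:
        self.facts: list[str] = []

    def ingest(self, agent_output: str) -> None:
        # Pull out only the SHARE: lines; everything else stays private.
        for fact in SHARE_PATTERN.findall(agent_output):
            if fact not in self.facts:  # cheap de-duplication
                self.facts.append(fact)

    def brief_prefix(self) -> str:
        # The compact prefix injected at the top of every agent's prompt.
        return "System Update: [" + "; ".join(self.facts) + "]"

hub = SharedStateHub()
hub.ingest("Checked the config.\nSHARE: DB_TIMEOUT=5sec")
print(hub.brief_prefix())  # System Update: [DB_TIMEOUT=5sec]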
3. Implementation: Selective State Merging (LangGraph)
In LangGraph, you can model this by dedicating some state keys to a single agent (its "local" scratchpad) and others to the whole fleet ("global" knowledge).
Python Code: Merging logic
def researcher_node(state):
    # This node keeps its OWN private log for its search work
    local_history = state.get("researcher_private_log", [])
    # ... performs search ...
    # Return the updated private log AND a new global fact
    return {
        "researcher_private_log": local_history + ["Found SKU-123"],
        "global_knowledge": state.get("global_knowledge", []) + ["SKU-123 is BLUE"],
    }
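A related option is to give each key an append reducer so the node returns only its delta instead of re-building the list by hand. Here is a minimal runnable sketch; the state keys mirror the snippet above, and the graph wiring is illustrative.
Python Code: State schema with an append reducer
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END

class TeamState(TypedDict):
    # With operator.add as the reducer, returned lists are appended
    # to the existing value, so nodes only return the NEW items.
    researcher_private_log: Annotated[list, operator.add]
    global_knowledge: Annotated[list, operator.add]

def researcher_node(state: TeamState):
    # ... performs search ...
    return {
        "researcher_private_log": ["Found SKU-123"],   # private detail
        "global_knowledge": ["SKU-123 is BLUE"],       # broadcast fact
    }

builder = StateGraph(TeamState)
builder.add_node("researcher", researcher_node)
builder.add_edge(START, "researcher")
builder.add_edge("researcher", END)
graph = builder.compile()

result = graph.invoke({"researcher_private_log": [], "global_knowledge": []})
print(result["global_knowledge"])  # ['SKU-123 is BLUE']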
4. Pruning the "Shared" Context
Even the Shared Context can grow too large. You must periodically de-duplicate and consolidate it.
- Before: "The meeting is at 5pm", "The room is 302", "Actually, meeting is 6pm."
- After (Consolidated): "Meeting: 6pm, Room 302."
By performing a "Consolidation Pass" on your shared hub every few turns, you ensure that every agent in the fleet stays efficient.
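Because superseded facts ("5pm" vs. "6pm") require semantic judgment, the consolidation pass is typically an LLM call. Here is a sketch, assuming llm is any prompt-in, text-out callable you wrap around your chat model; the prompt wording is only a starting point.
Python Code: A consolidation pass over shared facts
from typing import Callable

def consolidation_pass(facts: list[str], llm: Callable[[str], str]) -> list[str]:
    # Ask the model to merge duplicates and drop superseded facts.
    prompt = (
        "Consolidate these shared facts. Merge duplicates, keep only the "
        "latest version of any fact that was corrected, and return one "
        "fact per line:\n" + "\n".join(f"- {f}" for f in facts)
    )
    lines = llm(prompt).splitlines()
    return [line.strip("- ").strip() for line in lines if line.strip()]

# Example input, per the lesson:
facts = ["The meeting is at 5pm", "The room is 302", "Actually, meeting is 6pm"]
# consolidation_pass(facts, my_llm)  ->  ["Meeting: 6pm, Room 302"]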
5. Token ROI: Selective vs. Global
In a 5-agent system with 10 turns each (50 turns total), assuming an average of ~100 tokens per message:
- Global approach: every turn re-reads the full ~50-message shared transcript. Total: roughly 250,000 tokens.
- Selective approach: every turn reads ~5 recent local messages plus a short prefix of 3 global facts. Total: on the order of 25,000 tokens.
- Savings: roughly 90%. The arithmetic is sketched below.
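Here is that arithmetic, with every token figure stated as an explicit assumption.
Python Code: Back-of-the-envelope token ROI
AGENTS, TURNS = 5, 10
TOKENS_PER_MSG = 100                    # assumed average message size
total_turns = AGENTS * TURNS            # 50 turns across the fleet

# Global: every turn re-reads the full ~50-message shared transcript.
global_cost = total_turns * (AGENTS * TURNS) * TOKENS_PER_MSG
print(global_cost)                      # 250000

# Selective: ~5 recent private messages plus a 3-fact prefix (~15 tokens/fact).
selective_per_turn = 5 * TOKENS_PER_MSG + 3 * 15
selective_cost = total_turns * selective_per_turn
print(selective_cost)                   # 27250 -- on the order of 25k

print(round(1 - selective_cost / global_cost, 2))  # 0.89 -> roughly 90% saved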
6. Summary and Key Takeaways
- Private by Default: Don't share raw tool logs or intermediate thoughts.
- Signal-Only Sharing: Only broadcast the "Final Fact" to the fleet.
- Periodic Consolidation: Prune and de-duplicate the shared state regularly.
- Agent Focus: Smaller prompts lead to better focus and fewer "Hallucinated Logic" errors in specialists.
In the next lesson, Cost Attribution in Multi-Agent Graphs, we look at how to track which agent is "The Expensive One."
Exercise: The Knowledge Filter
- Imagine a team of 3 agents: a Designer, a Coder, and a Writer.
- The Designer finds that the user likes the "Dark Theme."
- The Coder finds a "Syntax Error in line 45."
- Which findings should be Global?
- Hint: Does the Writer need to know about the syntax error?
- Does the Coder need to know about the Dark Theme?
- Answer: Dark Theme is Global; Syntax Error is Private to the Coder.
- Calculate the tokens saved if the Writer doesn't have to read about line 45.