Multi-agent systems are only as good as their delegation patterns. Poor delegation turns a multi-agent system into expensive chaos.
The Delegation Problem
Delegation is the challenge of getting the right task to the right agent with the right context. In multi-agent systems, delegation failures cascade: an agent that receives the wrong task produces wrong output, and an agent that receives the right task but the wrong context produces unreliable output.
Effective delegation requires clear task boundaries, explicit context passing, and well-defined output expectations.
Every delegation involves four components that must be specified clearly. Task definition specifies what needs to be done, providing enough clarity for the agent to understand the objective. Context transfer specifies what information the agent needs to accomplish the task, including relevant history, constraints, and background. Output specification defines what format and content is expected from the agent's work. Escalation path specifies what to do if the agent cannot complete the task, ensuring failures are handled gracefully rather than causing system-wide problems.
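The four components can be captured in a single delegation record. Here is a minimal Python sketch; the class and field names are illustrative, not a prescribed API.

```python
from dataclasses import dataclass


@dataclass
class Delegation:
    """One delegated unit of work, carrying all four components explicitly."""
    task: str                       # task definition: what needs to be done
    context: dict                   # context transfer: what the agent needs to know
    output_spec: dict               # output specification: expected format/content
    escalation_path: str = "human"  # where to route the task if the agent cannot complete it
```

Making the escalation path a required part of the record (with a default) ensures no delegation is issued without a defined failure route.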
Task Definition Patterns
Task Assignment
Tasks should be assigned to agents based on capability matching. Define agent capabilities explicitly and match tasks to capabilities.
Agent Registry:
- classifier: email classification, sentiment analysis
- responder: response generation, template filling
- database: CRUD operations, data validation
- escalation: complex reasoning, human handoff
Task: "Classify incoming email"
Match: classifier agent
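Capability matching against the registry above can be sketched in a few lines of Python. The registry contents mirror the example; the function name and fallback behavior are assumptions for illustration.

```python
# Hypothetical registry mapping agent names to their declared capabilities.
REGISTRY = {
    "classifier": {"email classification", "sentiment analysis"},
    "responder": {"response generation", "template filling"},
    "database": {"CRUD operations", "data validation"},
    "escalation": {"complex reasoning", "human handoff"},
}


def match_agent(required_capability: str) -> str:
    """Return the first agent whose declared capabilities cover the task."""
    for agent, capabilities in REGISTRY.items():
        if required_capability in capabilities:
            return agent
    return "escalation"  # no capability match: route to the escalation agent
```

Routing unmatched tasks to the escalation agent, rather than failing silently, keeps capability gaps visible.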
Task Decomposition
Complex tasks should be decomposed into simpler subtasks. Each subtask should be independently delegable.
Complex task: Handle delivery exception email
Decomposition:
- Extract email content (classifier agent)
- Classify exception type (classifier agent)
- Generate response (responder agent)
- Validate response (database agent)
- Send response (database agent)
- Update records (database agent)
Each step is a separate task that can be delegated, retried, or escalated independently, providing flexibility and fault tolerance.
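The decomposition above can be represented as an ordered plan of (subtask, agent) pairs and executed step by step. A minimal sketch, with `delegate` standing in for whatever dispatch mechanism your system uses:

```python
# Each subtask is independently delegable, retryable, and escalatable.
PLAN = [
    ("extract email content", "classifier"),
    ("classify exception type", "classifier"),
    ("generate response", "responder"),
    ("validate response", "database"),
    ("send response", "database"),
    ("update records", "database"),
]


def run_plan(plan, delegate):
    """Delegate each subtask in order; stop and report on the first failure.

    `delegate(agent, task)` is a hypothetical dispatch function returning
    True on success and False on failure.
    """
    for task, agent in plan:
        if not delegate(agent, task):
            return ("escalated", task)  # surface exactly which step failed
    return ("done", None)
```

Because the plan is data rather than control flow, individual steps can be retried or rerouted without touching the others.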
Context Passing Patterns
Context Isolation
Agents should receive only the context they need. Passing unnecessary context wastes tokens and can introduce noise. Passing insufficient context produces incomplete results.
Context Passing Techniques
Context passing techniques vary in their tradeoffs:
- Full context: pass the complete state. Simple, but expensive in tokens and processing time.
- Relevant subset: pass only the relevant state. Efficient, but requires filtering logic to determine what matters.
- Reference-based: pass pointers to state. Efficient, but requires state management infrastructure.
- Accumulated context: pass a summary plus raw data, balancing completeness with efficiency.
Context passing should be made explicit in the delegation protocol so both sending and receiving agents understand what is being transferred. Validate context completeness at the receiving agent before attempting to use it, catching missing context issues early. Log context passing for debugging since context-related issues are often difficult to diagnose after the fact. Set context size limits to prevent overflow and ensure predictable resource usage.
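The relevant-subset technique, combined with completeness validation and a size limit, can be sketched as a single helper. Function name, key-based filtering, and the byte limit are illustrative assumptions.

```python
def pass_context(full_state: dict, needed_keys: list, max_bytes: int = 4096) -> dict:
    """Relevant-subset passing: filter the state, then validate before delegating.

    Raises ValueError so missing-context and oversized-context issues are
    caught at the boundary, not deep inside the receiving agent.
    """
    subset = {k: full_state[k] for k in needed_keys if k in full_state}

    missing = [k for k in needed_keys if k not in subset]
    if missing:
        raise ValueError(f"incomplete context, missing: {missing}")

    if len(repr(subset).encode("utf-8")) > max_bytes:
        raise ValueError("context exceeds size limit")

    return subset
```

In a real system the same checkpoint is also a natural place to log what was passed, since context-related bugs are hard to diagnose after the fact.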
Tool Use Patterns
Agents use tools to interact with external systems. Tool use patterns determine how tools are discovered, invoked, and managed.
Tool Discovery
Agents need to know what tools are available, and tool discovery can take different forms:
- Static discovery: a fixed tool set defined at system design time. Predictable, but limited in flexibility.
- Dynamic discovery: tools found at runtime through a registry. Adaptable, at the cost of additional complexity.
- Negotiated discovery: agents discover and negotiate tool availability directly. Maximum flexibility, but requires sophisticated coordination.
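Dynamic discovery is the middle option and the most common in practice. A minimal registry sketch, assuming tools register under capability tags (the class and method names are illustrative):

```python
class ToolRegistry:
    """Dynamic discovery: tools register at runtime; agents query by capability tag."""

    def __init__(self):
        self._tools = {}  # name -> (set of tags, callable)

    def register(self, name, tags, fn):
        """Make a tool discoverable under one or more capability tags."""
        self._tools[name] = (set(tags), fn)

    def discover(self, tag):
        """Return the names of all tools advertising the given tag."""
        return [name for name, (tags, _) in self._tools.items() if tag in tags]

    def get(self, name):
        """Fetch a registered tool's callable by name."""
        return self._tools[name][1]
```

Static discovery is this same structure populated once at startup; negotiated discovery would replace the central registry with agent-to-agent queries.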
Tool Invocation
Tool invocation should be explicit and auditable. Each invocation should include a tool identifier specifying which tool to use, input parameters providing the data the tool needs to operate, an invocation timestamp for tracking and debugging purposes, and expected output format so the calling agent knows how to interpret results.
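The four elements of an auditable invocation can be bundled into a wrapper that records them alongside the result. A sketch under the assumption that tools are plain callables taking keyword arguments:

```python
import time


def invoke(tool_id, fn, params, expected_format="json"):
    """Wrap a tool call in an explicit, auditable invocation record.

    The record carries the tool identifier, input parameters, a timestamp,
    and the expected output format, plus the actual result.
    """
    record = {
        "tool": tool_id,
        "params": params,
        "timestamp": time.time(),        # for tracking and debugging
        "expected_format": expected_format,
    }
    record["result"] = fn(**params)
    return record
```

Persisting these records (rather than just the results) is what makes tool use debuggable after the fact.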
Tool Error Handling
Tool failures should be handled explicitly through well-defined policies. Define what constitutes tool failure, being specific about which outcomes count as failures versus acceptable variation. Define retry policies specifying how many times to retry and with what backoff strategy. Define fallback options for when a tool is unavailable, providing alternative paths to accomplish the goal. Define escalation paths for persistent failures that cannot be resolved through retries.
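The retry, fallback, and escalation policies above compose naturally into one wrapper. A minimal sketch using exponential backoff; the function name and parameter defaults are illustrative.

```python
import time


def call_with_policy(tool, args, retries=3, backoff=0.5, fallback=None):
    """Apply an explicit failure policy around a tool call.

    Retries with exponential backoff, then tries the fallback tool if one
    is provided, and finally raises so the caller can escalate.
    """
    for attempt in range(retries):
        try:
            return tool(*args)
        except Exception:
            time.sleep(backoff * (2 ** attempt))  # 0.5s, 1s, 2s, ...

    if fallback is not None:
        return fallback(*args)  # alternative path to the same goal

    raise RuntimeError(f"escalate: {tool.__name__} failed after {retries} retries")
```

Note that "what constitutes failure" is encoded here as any raised exception; a real policy might also treat certain return values as failures.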
Tool use has common pitfalls to avoid. Tool proliferation creates too many tools without clear organization, making it difficult for agents to select the right tool. Tool redundancy has multiple tools doing similar things, creating confusion about which to use. Tool coupling introduces hard-coded tool dependencies that prevent tool substitution or evolution. Tool opacity makes it unclear what tools do or how they work, requiring agents to guess or try tools experimentally.
Output Specification Patterns
Delegated tasks should return outputs in specified formats. Output specifications enable reliable downstream processing.
Output Schema
Define output schema explicitly:
Output Schema for Classification:
{
  "category": string,    // Exception category
  "confidence": number,  // 0.0 to 1.0
  "reasoning": string,   // Explanation
  "escalate": boolean    // Requires human review
}
Output Validation
Validate outputs against schema before passing to downstream agents. Invalid outputs should trigger retry or escalation.
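A validator for the classification schema above might look like this in Python. This is a hand-rolled sketch; a real system would likely use a schema library instead.

```python
def validate_output(output: dict) -> bool:
    """Check a classification result against the schema before downstream use."""
    typed = {"category": str, "reasoning": str, "escalate": bool}
    if not all(isinstance(output.get(k), t) for k, t in typed.items()):
        return False

    conf = output.get("confidence")
    # "number" in the schema means int or float; exclude bool, which
    # subclasses int in Python, and enforce the 0.0-1.0 range.
    return (isinstance(conf, (int, float))
            and not isinstance(conf, bool)
            and 0.0 <= conf <= 1.0)
```

A False result here should feed directly into the retry-or-escalate path rather than letting a malformed output propagate downstream.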
Key Takeaways
- Delegation requires task definition that specifies what needs to be done, context passing that transfers necessary information, output specification that defines expected results, and escalation paths for handling failures.
- Match tasks to agent capabilities based on explicit capability definitions, and decompose complex tasks into subtasks that can be delegated independently.
- Pass only necessary context to avoid bloat and noise, carefully filtering what each agent truly needs.
- Tool use requires discovery patterns so agents find available tools, invocation patterns for explicit and auditable execution, and error handling patterns for managing failures gracefully.
- Define output schemas explicitly and validate outputs against them before passing results to downstream agents.
Design delegation for a system you are building by working through each component systematically:
- Identify the main tasks and how you would decompose them into subtasks that can be delegated independently.
- Determine what agent capabilities are needed and how you would define them in an agent registry.
- Clarify what context each agent needs and how you would pass it, considering which technique fits your needs.
- Determine what tools each agent needs and how they will be discovered, whether statically or dynamically.
- Define the output schemas and how they will be validated before downstream processing.
- Establish escalation paths for failures so agents know when and how to escalate when they cannot proceed.
What's Next
In Section 13.3, we examine Orchestration vs Over-Orchestration, exploring when orchestration helps and when it becomes counterproductive.