TL;DR: In ACP mode, the main agent spawns sub-agents via the task() tool and never writes code directly — a pure orchestrator pattern.
What is ACP Mode
ACP makes the main agent an architecture lead while sub-agents handle implementation. It follows the supervisor pattern from the LangGraph multi-agent docs, but simplified with a task() tool-based approach.
Main Agent (ACP mode)
|
+-- task("Analyze file structure") --> Sub-agent A
+-- task("Implement API endpoints") --> Sub-agent B
+-- task("Write tests") --> Sub-agent C
|
v
Integrate results and quality review
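The fan-out in the diagram can be sketched with asyncio.gather. This is a toy sketch with a stand-in task() coroutine, not the real tool (which spawns a LangGraph sub-agent per call); it only shows the orchestration shape: dispatch independent subtasks, then integrate the results.

```python
import asyncio

# Stand-in for the real task() tool: the actual implementation builds a
# sub-agent and streams its output; here we just echo the subtask name.
async def task(description: str) -> str:
    await asyncio.sleep(0)  # placeholder for the sub-agent's work
    return f"[{description} complete]"

async def orchestrate() -> list[str]:
    # The main agent never writes code itself; it only dispatches subtasks
    # and integrates the results once every sub-agent has finished.
    subtasks = [
        "Analyze file structure",
        "Implement API endpoints",
        "Write tests",
    ]
    return list(await asyncio.gather(*(task(d) for d in subtasks)))

results = asyncio.run(orchestrate())
```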
The task() Tool Implementation
@tool
async def task(description: str, instructions: str = "") -> str:
    """Creates a sub-agent to execute an independent subtask."""
    # workspace_dir, thread_id, out_queue, and subagents come from the
    # enclosing request scope and the lazy imports (see "Circular Dependency
    # Resolution" below); the prompt is assembled from the tool arguments.
    prompt = f"{description}\n\n{instructions}".strip()
    aid = uuid.uuid4().hex[:8]
    sub_thread_id = f"{thread_id or 'acp'}-sub-{aid}"
    # Standard LangGraph config keyed by the sub-agent's own thread id
    sub_config = {"configurable": {"thread_id": sub_thread_id}}
    subagents[aid] = {"id": aid, "name": description, "status": "running", "currentTask": prompt[:80]}
    # Sub-agents run without HITL
    sub_agent = build_agent(
        workspace_dir, app_state.checkpointer,
        "code",  # Sub-agents always use Code mode
        sub_thread_id,
        with_hitl=False,  # HITL disabled
    )
    result_tokens: list[str] = []
    async for chunk in stream_events(sub_agent, {"messages": [{"role": "user", "content": prompt}]}, sub_config, {}):
        if out_queue:
            data = json.loads(chunk.removeprefix("data: ").strip())
            if data.get("type") == "token":
                result_tokens.append(data.get("content", ""))
            data["source"] = f"sub:{description[:24]}"
            await out_queue.put(sse(data))
    subagents[aid]["status"] = "done"
    return "".join(result_tokens).strip() or f"[{description} complete]"
The LangGraph subgraph streaming docs were the reference for capturing sub-agent events. Key design decisions:
- Code mode fixed: Sub-agents always run in Code mode for direct implementation.
- HITL disabled: The main agent's task() call already has user approval.
- Independent thread_id: Each sub-agent runs on its own thread_id, isolating its checkpointed state.
Stream Merging
Sub-agent SSE events are merged into the main stream with source tags:
data["source"] = f"sub:{description[:24]}"
await out_queue.put(sse(data))
| source value | Meaning |
|---|---|
| "main" | Main agent |
| "sub:Analyze files" | Sub-agent (name shown) |
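On the consumer side, a client can demultiplex the merged stream by this tag. A minimal sketch, assuming the payload keys ("type", "source", "content") shown in the snippet above:

```python
import json

# Hypothetical merged SSE payloads as they arrive on the single stream.
events = [
    {"type": "token", "source": "main", "content": "Integrating"},
    {"type": "token", "source": "sub:Analyze files", "content": "src/"},
    {"type": "token", "source": "sub:Write tests", "content": "test_api"},
]

def route(event: dict) -> str:
    """Return the UI lane an event belongs to, based on its source tag."""
    src = event.get("source", "main")
    return src.removeprefix("sub:") if src.startswith("sub:") else "main"

lanes = [route(e) for e in events]
```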
Sub-Agent State Tracking
export interface SubAgent {
  id: string;
  name: string;
  status: AgentStatus; // "idle" | "running" | "done" | "error"
  currentTask?: string;
}
The frontend Agents panel displays real-time status of all sub-agents, updated via SSE agents events.
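The exact wire format of the agents event isn't shown in this post; a plausible sketch of serializing the registry into one SSE frame, with hypothetical payload keys mirroring the SubAgent interface above:

```python
import json

# Hypothetical registry entry, matching the shape written by task()
subagents = {
    "a1b2c3d4": {
        "id": "a1b2c3d4",
        "name": "Write tests",
        "status": "running",
        "currentTask": "Write tests",
    },
}

def agents_event(registry: dict) -> str:
    """Serialize the whole registry into a single SSE frame (assumed shape)."""
    payload = {"type": "agents", "agents": list(registry.values())}
    return f"data: {json.dumps(payload)}\n\n"

frame = agents_event(subagents)
```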
Circular Dependency Resolution
The task() tool references stream, agent_core, and state, which depend on each other. Lazy imports break the cycle:
@tool
async def task(description: str, instructions: str = "") -> str:
    # Deferred to call time, after all modules have finished initializing
    from stream import sse, stream_events
    from agent_core import build_agent
    from state import state as app_state
    from hitl import thread_output_queues, thread_subagents
    ...
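The mechanics are easy to reproduce in isolation. This toy writes a two-module cycle to a temp directory (not the project's real modules) and shows that a call-time import succeeds where a pair of top-level imports would deadlock on partially initialized modules:

```python
import pathlib
import sys
import tempfile

# mod_a defers its import of mod_b into the function body (as task() does);
# mod_b imports mod_a at the top level, closing the cycle.
src = {
    "mod_a.py": (
        "def ping():\n"
        "    from mod_b import pong   # lazy import breaks the cycle\n"
        "    return 'a->' + pong()\n"
    ),
    "mod_b.py": (
        "import mod_a                 # top-level import of the partner\n"
        "def pong():\n"
        "    return 'b'\n"
    ),
}
tmp = tempfile.mkdtemp()
for name, body in src.items():
    pathlib.Path(tmp, name).write_text(body)
sys.path.insert(0, tmp)

from mod_a import ping

# By the time ping() runs, both modules are fully initialized, so the
# deferred "from mod_b import pong" resolves cleanly.
result = ping()
```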
Error Handling
If a sub-agent fails, its status updates to “error” and the error message returns to the main agent, which can try alternative strategies.
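The failure path can be sketched as a try/except around the sub-agent run. Here run_subagent is a stand-in for the streaming loop in the real tool; the status values and bracketed return convention follow the snippets above.

```python
import asyncio

subagents: dict[str, dict] = {}

async def run_subagent(aid: str, description: str) -> str:
    # Stand-in that simulates a failing sub-agent
    raise RuntimeError("tool call timed out")

async def task(description: str) -> str:
    aid = "demo1234"  # fixed id for the sketch; the real tool uses uuid4
    subagents[aid] = {"id": aid, "name": description, "status": "running"}
    try:
        result = await run_subagent(aid, description)
        subagents[aid]["status"] = "done"
        return result
    except Exception as exc:
        # Surface the failure to the main agent instead of crashing the run,
        # so it can retry or switch to an alternative strategy.
        subagents[aid]["status"] = "error"
        return f"[{description} failed: {exc}]"

outcome = asyncio.run(task("Write tests"))
```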
Benchmark
| Metric | Value |
|---|---|
| Sub-agent creation overhead | ~200ms (excluding LLM call) |
| 3 sub-agents parallel execution total time | ~45 seconds (Claude Sonnet) |
| Sub-agent average ReAct loop iterations | 5-8 |
| Recommended sub-agent count | 3-5 |
| Stream merge source tag overhead | Negligible (~20 bytes/event) |
FAQ
Do sub-agents share files?
Yes. They share the same workspace_dir, so sub-agent A’s output files are readable by sub-agent B. However, concurrent write conflicts are not prevented, so the main agent must decompose tasks into independent units.
How many sub-agents can run?
No hard limit, but each sub-agent makes separate LLM calls. In practice, 3-5 is optimal for cost and speed.
Do sub-agents create plan.md?
No. Sub-agents run in Code mode without plan-based planning. Planning is the main agent’s responsibility.
Series
- DeepCoWork: I Built an AI Agent Desktop App
- Tauri 2 + Python Sidecar
- DeepAgents SDK Internals
- System Prompt Design per Mode
- SSE Streaming Pipeline
- HITL Approval Flow
- [This post] Multi-Agent ACP Mode
- Agent Memory 4 Layers
- Skills System
- LLM Provider Integration
- Security Checklist
- GitHub Actions Cross-Platform Build