What Is Multi-Agent Collaboration?
OpenClaw supports running multiple agents simultaneously, each responsible for a different domain, all coordinated through a shared message channel. This is not just parallelism; it is specialization. Each agent does one thing well rather than one agent trying to do everything adequately.
The analogy is a real team: you do not want a single employee who writes code, handles customer support, manages finances, and runs social media. You want specialists who communicate clearly and hand off work cleanly.
In OpenClaw terms: the Advisor researches and analyzes. The Scribe writes content. The Community Manager publishes and engages. They share information through Feishu messages and never step on each other's files.
Core Mechanisms Explained
sessions_spawn: Basic Usage
sessions_spawn is the primary API for multi-agent orchestration. Here is the minimal pattern:
# Basic: spawn a sub-agent for a one-off task
result = sessions_spawn(
    task="Search for today's top 5 AI news stories. Return title, URL, one-line summary for each.",
    mode="run",                 # "run" = one-shot, auto-destroy when done
    model="claude-sonnet-4.5",
    runTimeoutSeconds=60,
)

# Sub-agent output is returned directly
for story in result.output:
    print(story)
Parallel Execution Pattern
# Spawn multiple sub-agents simultaneously (they run in parallel)
import asyncio

tasks = [
    {"task": "Analyze traffic data from /data/stats.json. Return top 3 traffic sources.", "name": "traffic-analyst"},
    {"task": "Check all pages in sitemap.xml for missing meta descriptions. Return list.", "name": "seo-auditor"},
    {"task": "Draft 3 tweet variations for today's diary entry. Tone: casual, curious.", "name": "social-writer"},
]

# Fire all three at once (in run mode each call returns an awaitable)
agents = [sessions_spawn(task=t["task"], mode="run", model="claude-sonnet-4.5") for t in tasks]

# Collect results as they finish
# (note: await requires an async context, e.g. inside an async def)
results = await asyncio.gather(*agents)
traffic_report, seo_issues, tweet_drafts = results
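As written, one failed or timed-out sub-agent surfaces as an exception from asyncio.gather and discards the other two results. Assuming each run-mode call returns an awaitable (as the gather above implies), passing return_exceptions=True keeps the partial results. A self-contained sketch with stand-in coroutines (spawn_stub is hypothetical, not an OpenClaw API):

```python
import asyncio

async def spawn_stub(name: str, fail: bool = False) -> str:
    # Stand-in for the awaitable a run-mode sessions_spawn would return
    await asyncio.sleep(0)
    if fail:
        raise RuntimeError(f"{name} timed out")
    return f"{name}: ok"

async def main() -> list:
    agents = [
        spawn_stub("traffic-analyst"),
        spawn_stub("seo-auditor", fail=True),
        spawn_stub("social-writer"),
    ]
    # return_exceptions=True: failures come back as exception objects
    # in the results list instead of aborting the whole batch
    return await asyncio.gather(*agents, return_exceptions=True)

results = asyncio.run(main())
for r in results:
    print("FAILED:", r) if isinstance(r, Exception) else print(r)
```

The caller then decides per result whether to retry the one failed agent rather than re-spawning all three.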
Persistent Agent Pattern
# For agents that need to stay alive across multiple interactions
monitor_agent = sessions_spawn(
    task="You are a site monitor. Every message I send you is a check request.",
    mode="chat",                  # "chat" = stays alive, accepts follow-up messages
    model="claude-sonnet-4.5",
    workspace="/var/www/sanwan",  # agent has access to site files
)

# Send check requests over time
status = monitor_agent.send("Check if sanwan.ai is responding. Report latency.")
alert = monitor_agent.send("Compare today's traffic to yesterday. Flag if down >20%.")
Production Architecture: Sanwan's 3-Agent System
Here is how Sanwan actually runs in production. Three persistent agents, each with a heartbeat, coordinated through Feishu channels:
# ARCHITECTURE: sanwan.ai 3-agent production setup
# Last updated: 2026-03-14

## Agent 1: Advisor
# SOUL.md role: research, analysis, idea generation
# Heartbeat: every 30 minutes
# Owns: RESEARCH_QUEUE.md, ANALYSIS_RESULTS.md
# Communicates via: Feishu channel #advisor-output

## Agent 2: Scribe
# SOUL.md role: write articles, diary entries, tutorials
# Heartbeat: every 30 minutes
# Owns: CONTENT_QUEUE.md, DRAFTS/
# Reads from: #advisor-output (Feishu)
# Communicates via: Feishu channel #content-ready

## Agent 3: Community Manager
# SOUL.md role: publish, distribute, engage with comments
# Heartbeat: every 15 minutes
# Owns: PUBLISH_LOG.md
# Reads from: #content-ready (Feishu)
# Communicates via: direct Feishu DMs to human operator

## Coordinator (main agent)
# Role: task routing, conflict resolution, escalation handling
# Heartbeat: every 60 minutes
# Reads all channels, resolves blockers, updates PROGRESS.md
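The heartbeat intervals above imply a simple modular schedule: on the hour every agent wakes, at minute 15 only the Community Manager does. A sketch of how a scheduler could compute which agents are due at a given minute (HEARTBEATS and due_agents are illustrative names, not part of OpenClaw):

```python
# Heartbeat intervals from the architecture above, in minutes
HEARTBEATS = {
    "advisor": 30,
    "scribe": 30,
    "community-manager": 15,
    "coordinator": 60,
}

def due_agents(minute: int) -> list:
    """Agents whose heartbeat fires at this minute of the hour cycle."""
    return [name for name, interval in HEARTBEATS.items() if minute % interval == 0]

# At minute 15 only the community manager wakes; at minute 30 the
# advisor and scribe join it; the coordinator fires only on the hour.
```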
Inter-Agent Communication Example
# Advisor discovers a trending topic, notifies Scribe via Feishu:
feishu.send(
    channel="#advisor-output",
    message="""CONTENT OPPORTUNITY
Topic: OpenAI releases new agent API
Angle: Compare with OpenClaw's existing approach
Key data: 3 benchmarks where OpenClaw outperforms on cost
Suggested format: Technical comparison article, ~800 words
Priority: HIGH (trending for next 6 hours)
"""
)
# Scribe picks this up on its next heartbeat:
# 1. Reads #advisor-output
# 2. Finds HIGH priority item
# 3. Writes comparison article
# 4. Saves to DRAFTS/openclaw-vs-openai-agent-api.md
# 5. Posts to #content-ready: "Draft ready: openclaw-vs-openai-agent-api.md"
# Community Manager picks up on next heartbeat:
# 1. Reads #content-ready
# 2. Reviews draft
# 3. Publishes to Juejin + site
# 4. Posts to #publish-log: "Published: openclaw-vs-openai-agent-api, 14:32"
The entire pipeline, from topic discovery to publication, runs in under 2 hours with zero human involvement: just three agents exchanging Feishu messages.
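The pickup step depends on each agent filtering channel messages by priority on its heartbeat. A minimal sketch of that filter, assuming the message format the Advisor posts above (pick_high_priority is a hypothetical helper, not an OpenClaw API):

```python
def pick_high_priority(messages: list):
    """Return the first message flagged "Priority: HIGH", else None.

    Assumes the key: value line format of the CONTENT OPPORTUNITY
    message shown above.
    """
    for msg in messages:
        for line in msg.splitlines():
            if line.strip().upper().startswith("PRIORITY:") and "HIGH" in line.upper():
                return msg
    return None

# Scribe's heartbeat would call this on the unread #advisor-output backlog:
msgs = [
    "CONTENT OPPORTUNITY\nTopic: minor doc fix\nPriority: LOW",
    "CONTENT OPPORTUNITY\nTopic: OpenAI releases new agent API\nPriority: HIGH",
]
urgent = pick_high_priority(msgs)
```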
Conflict Prevention: Detailed Patterns
Pattern 1: File Ownership in SOUL.md
# In Advisor SOUL.md:

## Files I Own (I am the only one who writes these)
- RESEARCH_QUEUE.md
- ANALYSIS_RESULTS.md
- data/trends.json

## Files I Read (but never write to)
- CONTENT_QUEUE.md (Scribe owns this)
- PUBLISH_LOG.md (Community Manager owns this)
Pattern 2: Queue + Handoff Protocol
# Shared queue pattern: safe for multi-writer scenarios
# File: TASK_QUEUE.md
# Rule: Agents append tasks but never delete or reorder

# Agent appending a task:
with open("TASK_QUEUE.md", "a") as f:
    f.write(f"\n- [ ] [{agent_name}] [{timestamp}] {task_description}")

# Agent claiming a task (atomic: read, claim, write back):
# 1. Read full queue
# 2. Find first unclaimed task matching my role
# 3. Mark as [IN_PROGRESS:{agent_name}:{timestamp}]
# 4. Write entire file back in single operation
# 5. Execute task
# 6. Mark as [DONE:{timestamp}:{outcome}]
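The claim steps above can be sketched as a single read-modify-write pass. This version claims the first unclaimed task regardless of role and leans on the append-only rule rather than an OS-level file lock; production code would add both (claim_next_task is illustrative, not an OpenClaw API):

```python
from pathlib import Path

def claim_next_task(queue_path: str, agent_name: str, timestamp: str):
    """Claim the first unclaimed "- [ ]" task in the queue file.

    Reads the whole queue, marks the task IN_PROGRESS, and writes the
    entire file back in one operation, per the protocol above.
    Returns the original task line, or None if nothing was unclaimed.
    """
    path = Path(queue_path)
    lines = path.read_text().splitlines()
    claimed = None
    for i, line in enumerate(lines):
        if line.lstrip().startswith("- [ ]"):
            claimed = line
            lines[i] = line.replace(
                "- [ ]", f"- [IN_PROGRESS:{agent_name}:{timestamp}]", 1
            )
            break
    if claimed is not None:
        path.write_text("\n".join(lines) + "\n")  # single write-back
    return claimed
```

On completion the agent would rewrite its own IN_PROGRESS marker to DONE the same way.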
Pattern 3: Dedicated Output Directories
# Directory structure for conflict-free multi-agent operation:
workspace/
    PROGRESS.md          # Coordinator owns
    TASK_QUEUE.md        # All agents append, coordinator manages
    advisor/
        research/        # Advisor writes here
        analysis/        # Advisor writes here
    scribe/
        drafts/          # Scribe writes here
        published/       # Scribe archives here after publish
    community/
        publish_log.md   # Community Manager owns
        outreach/        # Community Manager owns
    shared/
        memory.md        # Coordinator manages, all agents read
        traffic_data/    # Monitor writes, all agents read
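The layout doubles as a write-permission table. A sketch of the ownership check an agent could run before any write (OWNERSHIP and may_write are illustrative names; the append-for-all TASK_QUEUE.md rule is omitted for brevity):

```python
# Write permissions implied by the directory layout above
# (illustrative mapping, not an OpenClaw feature)
OWNERSHIP = {
    "advisor": ["advisor/"],
    "scribe": ["scribe/"],
    "community-manager": ["community/"],
    "coordinator": ["PROGRESS.md", "shared/memory.md"],
}

def may_write(agent: str, relpath: str) -> bool:
    """True if relpath falls under one of the agent's owned paths."""
    for owned in OWNERSHIP.get(agent, []):
        if owned.endswith("/"):
            if relpath.startswith(owned):   # directory ownership
                return True
        elif relpath == owned:              # single-file ownership
            return True
    return False
```

Wiring such a check into each agent's file tools turns the SOUL.md ownership convention into an enforced invariant instead of a promise.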