
# Agent Swarm

An orchestrator distributes tasks across a pool of workers and collects results.

The most common multi-agent pattern: one orchestrator, many workers. The orchestrator fans out work via RPC, workers process in parallel, results flow back.

## Architecture

```
orchestrator.relay ── distributes tasks
  ├── RPC ──▶ worker-1.relay
  ├── RPC ──▶ worker-2.relay
  └── RPC ──▶ worker-3.relay
```

## Workers (TypeScript)

Each worker connects via the WebSocket bridge, registers a name, and handles RPC calls:

```ts
import WebSocket from "ws";

const WORKER_ID = process.env.WORKER_ID ?? "1";
const ws = new WebSocket("ws://localhost:9002/ws");

ws.on("open", () => {
  ws.send(JSON.stringify({
    type: "register",
    name: `worker-${WORKER_ID}.relay`,
  }));
  console.log(`worker-${WORKER_ID}.relay ready`);
});

ws.on("message", async (data) => {
  const msg = JSON.parse(data.toString());

  // Handle RPC calls
  if (msg.type === "inbound_call" && msg.method === "process") {
    const task = JSON.parse(msg.payload);
    console.log(`processing task ${task.id}`);

    const result = await doWork(task);

    // Send RPC response
    ws.send(JSON.stringify({
      type: "call_response",
      correlation_id: msg.correlation_id,
      payload: JSON.stringify({
        taskId: task.id,
        result,
        worker: `worker-${WORKER_ID}.relay`,
      }),
    }));
  }
});

async function doWork(task: { id: string; data: string }) {
  // Simulate 1-3 seconds of work
  await new Promise((r) => setTimeout(r, 1000 + Math.random() * 2000));
  return `processed: ${task.data}`;
}
```

## Orchestrator (TypeScript)

The orchestrator distributes tasks via the REST bridge for simplicity:

```ts
const RELAY = "http://localhost:9002";

const workers = ["worker-1.relay", "worker-2.relay", "worker-3.relay"];

const tasks = Array.from({ length: 12 }, (_, i) => ({
  id: `task-${i}`,
  data: `document-${i}`,
}));

async function runSwarm() {
  const promises = tasks.map((task, i) => {
    // Round-robin assignment across the pool
    const worker = workers[i % workers.length];
    console.log(`assigning ${task.id} → ${worker}`);

    return fetch(`${RELAY}/v1/call`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        to: worker,
        method: "process",
        payload: JSON.stringify(task),
      }),
    }).then((r) => {
      // fetch only rejects on network errors; surface HTTP errors too,
      // so Promise.allSettled counts them as failures
      if (!r.ok) throw new Error(`${worker} returned ${r.status}`);
      return r.json();
    });
  });

  const results = await Promise.allSettled(promises);

  // The type predicate lets TypeScript narrow to fulfilled results
  const succeeded = results.filter(
    (r): r is PromiseFulfilledResult<unknown> => r.status === "fulfilled",
  );
  const failed = results.filter((r) => r.status === "rejected");

  console.log(`\n--- Results ---`);
  console.log(`completed: ${succeeded.length}/${tasks.length}`);
  console.log(`failed: ${failed.length}/${tasks.length}`);

  for (const r of succeeded) {
    console.log(r.value);
  }
}

runSwarm();
```

## Running it

```sh
# Terminal 1-3: start workers (WebSocket agents)
WORKER_ID=1 npx tsx worker.ts
WORKER_ID=2 npx tsx worker.ts
WORKER_ID=3 npx tsx worker.ts

# Terminal 4: run orchestrator (REST calls)
npx tsx orchestrator.ts
```

## Scaling up

**Dynamic worker discovery.** Instead of hardcoding workers, have them announce themselves on a topic:

```ts
// Worker: announce on connect via REST
await fetch(`${RELAY}/v1/broadcast`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    topic: "swarm.announce",
    payload: JSON.stringify({
      name: `worker-${WORKER_ID}.relay`,
      capabilities: ["process", "summarize"],
    }),
  }),
});

// Orchestrator: listen for announcements via SSE
const workers: string[] = [];
const events = new EventSource(`${RELAY}/v1/subscribe?topic=swarm.announce`);

events.onmessage = (e) => {
  const info = JSON.parse(e.data);
  if (!workers.includes(info.name)) {
    workers.push(info.name);
    console.log(`discovered worker: ${info.name}`);
  }
};
```
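Dispatch usually shouldn't start until enough workers have announced themselves. A minimal sketch of a wait helper that polls the discovered list (`waitForWorkers` is illustrative, not part of the relay API):

```ts
// Sketch: resolve once `list` (the discovered-workers array above)
// holds at least `min` names, or reject after `timeoutMs`.
function waitForWorkers(
  list: string[],
  min: number,
  timeoutMs = 10_000,
): Promise<string[]> {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const timer = setInterval(() => {
      if (list.length >= min) {
        clearInterval(timer);
        resolve([...list]); // snapshot, so later announcements don't mutate it
      } else if (Date.now() - started > timeoutMs) {
        clearInterval(timer);
        reject(new Error(`only ${list.length}/${min} workers announced`));
      }
    }, 100);
  });
}
```

The orchestrator would then `await waitForWorkers(workers, 3)` before calling `runSwarm()`.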

**Health-aware routing.** Workers broadcast their load. The orchestrator picks the least busy:

```ts
// Worker: report load periodically via REST
// (`currentLoad` is the worker's own count of in-flight tasks)
setInterval(async () => {
  await fetch(`${RELAY}/v1/broadcast`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      topic: "swarm.health",
      payload: JSON.stringify({
        name: `worker-${WORKER_ID}.relay`,
        activeTasks: currentLoad,
        maxTasks: 10,
      }),
    }),
  });
}, 5000);

// Orchestrator: track worker health via SSE
const workerLoad = new Map<string, number>();
const health = new EventSource(`${RELAY}/v1/subscribe?topic=swarm.health`);

health.onmessage = (e) => {
  const { name, activeTasks } = JSON.parse(e.data);
  workerLoad.set(name, activeTasks);
};

function pickWorker(): string {
  const entries = [...workerLoad.entries()].sort(([, a], [, b]) => a - b);
  if (entries.length === 0) throw new Error("no workers reporting health yet");
  return entries[0][0];
}
```
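The health payload also carries `maxTasks`, which `pickWorker` ignores. A capacity-aware variant might skip saturated workers entirely; the `Health` type and separate map here are a sketch for illustration, not relay API:

```ts
// Sketch: track both load and capacity per worker.
type Health = { activeTasks: number; maxTasks: number };

const workerHealth = new Map<string, Health>();

// Pick the least-loaded worker that still has spare capacity,
// or undefined if every worker is saturated (caller can queue the task).
function pickAvailableWorker(): string | undefined {
  const candidates = [...workerHealth.entries()]
    .filter(([, h]) => h.activeTasks < h.maxTasks)
    .sort(([, a], [, b]) => a.activeTasks - b.activeTasks);
  return candidates[0]?.[0];
}
```

Returning `undefined` instead of throwing lets the orchestrator hold tasks back until capacity frees up.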

## When to use this pattern

- ✅ Embarrassingly parallel work (document processing, data transformation, batch inference)
- ✅ Variable-size worker pools (scale up/down by starting/stopping workers)
- ✅ Work that needs to survive individual worker failures (retry on a different worker)

- ❌ Work with strict ordering requirements (use a pipeline instead)
- ❌ Tasks that need shared state (use RPC to a coordinator)
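The "retry on a different worker" point can be sketched as a thin wrapper around the dispatch call. The `call` function is injected to keep the routing logic testable; in the orchestrator it would be the POST to `/v1/call` shown earlier:

```ts
// Sketch: on failure, retry the task on the next worker in the pool.
// Tries each worker at most once, in order; rethrows the last error
// if every worker fails.
async function callWithRetry<T>(
  task: { id: string },
  workers: string[],
  call: (worker: string, task: { id: string }) => Promise<T>,
): Promise<T> {
  let lastError: unknown;
  for (const worker of workers) {
    try {
      return await call(worker, task);
    } catch (err) {
      lastError = err;
      console.warn(`task ${task.id} failed on ${worker}, trying next worker`);
    }
  }
  throw lastError;
}
```

Rotating the starting index per task (as the round-robin assignment already does) keeps retries from always landing on the same fallback worker.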