
# Tool Mesh

Agents advertise capabilities on the mesh. Others discover and call them dynamically.

Instead of hardcoding which agent does what, let agents advertise their capabilities and discover each other at runtime. An LLM agent can find a "web search" tool agent, a "code executor" agent, or a "database" agent — all without knowing their names in advance.

## Architecture

*(Diagram)* Tool agents (`search.relay`, `execute.relay`, `database.relay`) broadcast their capabilities on the `tools.advertise` pub/sub topic. The planner (`planner.relay`) subscribes to that topic to build a registry, then routes work to the right tool agent via on-demand RPC.
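The wire messages used throughout this recipe can be summarized as a few TypeScript types. This is a sketch inferred from the examples below, not the relay's authoritative schema:

```ts
// Message shapes inferred from this recipe (hypothetical, not an official
// relay schema). Payloads are JSON-encoded strings.
interface RegisterMsg { type: "register"; name: string }
interface SubscribeMsg { type: "subscribe"; topic: string }
interface BroadcastMsg { type: "broadcast"; topic: string; payload: string }
interface BroadcastDelivery { type: "broadcast_message"; topic: string; payload: string }
interface InboundCall { type: "inbound_call"; method: string; correlation_id: string; payload: string }
interface CallResponse { type: "call_response"; correlation_id: string; payload: string }

type MeshMessage =
  | RegisterMsg
  | SubscribeMsg
  | BroadcastMsg
  | BroadcastDelivery
  | InboundCall
  | CallResponse;

// Encode a message for the socket; the union type catches typos in `type`.
function encode(msg: MeshMessage): string {
  return JSON.stringify(msg);
}

const wire = encode({ type: "register", name: "search.relay" });
console.log(wire);
```

Typing the envelope this way lets the compiler flag a misspelled message `type` or a missing `correlation_id` before it ever hits the wire.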

## Tool agent (TypeScript)

Each tool agent connects via WebSocket, advertises its capabilities, and handles calls:

```ts
import WebSocket from "ws";

const ws = new WebSocket("ws://localhost:9002/ws");

ws.on("open", () => {
  // Register on the mesh
  ws.send(JSON.stringify({ type: "register", name: "search.relay" }));

  // Advertise capabilities
  const capabilities = {
    name: "search.relay",
    tools: [
      {
        name: "web_search",
        description: "Search the web and return results",
        parameters: { query: "string", maxResults: "number" },
      },
    ],
  };

  ws.send(JSON.stringify({
    type: "broadcast",
    topic: "tools.advertise",
    payload: JSON.stringify(capabilities),
  }));

  // Re-advertise every 30s (covers agents that join later)
  setInterval(() => {
    ws.send(JSON.stringify({
      type: "broadcast",
      topic: "tools.advertise",
      payload: JSON.stringify(capabilities),
    }));
  }, 30_000);
});

ws.on("message", async (data) => {
  const msg = JSON.parse(data.toString());

  // Handle RPC calls for our tool
  if (msg.type === "inbound_call" && msg.method === "web_search") {
    const { query, maxResults = 5 } = JSON.parse(msg.payload);
    console.log(`searching: "${query}"`);

    // searchWeb is your own implementation (e.g. a search API client)
    const results = await searchWeb(query, maxResults);

    ws.send(JSON.stringify({
      type: "call_response",
      correlation_id: msg.correlation_id,
      payload: JSON.stringify(results),
    }));
  }
});
```
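The handler above trusts the incoming payload. A small guard can check a decoded payload against the advertised `parameters` map before running the tool. This is a hypothetical helper, not part of the relay protocol; since the flat format carries no required/optional markers, absent parameters are treated as optional here:

```ts
// Return the names of parameters whose values don't match the declared
// primitive type, given a declaration like { query: "string" }.
// Absent parameters pass: the flat format doesn't mark anything required.
function validateParams(
  decl: Record<string, string>,
  params: Record<string, unknown>,
): string[] {
  const problems: string[] = [];
  for (const [name, expected] of Object.entries(decl)) {
    const value = params[name];
    if (value === undefined) continue;
    if (typeof value !== expected) problems.push(name);
  }
  return problems;
}

// Example: maxResults declared as "number" but sent as a string.
const issues = validateParams(
  { query: "string", maxResults: "number" },
  { query: "cats", maxResults: "5" },
);
console.log(issues); // ["maxResults"]
```

If `issues` is non-empty, respond with an error payload instead of invoking the tool, so callers get a diagnosable failure rather than a crash.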

## Planner agent (TypeScript)

The planner subscribes to tool advertisements and routes work via the REST bridge:

```ts
import WebSocket from "ws";

const RELAY = "http://localhost:9002";
const ws = new WebSocket("ws://localhost:9002/ws");

interface ToolInfo {
  agent: string;
  name: string;
  description: string;
  parameters: Record<string, string>;
}

const toolRegistry = new Map<string, ToolInfo>();

ws.on("open", () => {
  ws.send(JSON.stringify({ type: "register", name: "planner.relay" }));
  ws.send(JSON.stringify({ type: "subscribe", topic: "tools.advertise" }));
});

ws.on("message", (data) => {
  const msg = JSON.parse(data.toString());

  if (msg.type === "broadcast_message" && msg.topic === "tools.advertise") {
    const { name: agent, tools } = JSON.parse(msg.payload);
    for (const tool of tools) {
      toolRegistry.set(tool.name, { agent, ...tool });
      console.log(`discovered tool: ${tool.name} → ${agent}`);
    }
  }
});

// Use discovered tools via REST
async function useTool(toolName: string, params: unknown): Promise<unknown> {
  const tool = toolRegistry.get(toolName);
  if (!tool) throw new Error(`unknown tool: ${toolName}`);

  console.log(`calling ${toolName} on ${tool.agent}`);
  const res = await fetch(`${RELAY}/v1/call`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      to: tool.agent,
      method: toolName,
      payload: JSON.stringify(params),
    }),
  });
  return res.json();
}

// Example: an LLM decides to search the web
const results = await useTool("web_search", {
  query: "peer-to-peer networking for AI agents",
  maxResults: 3,
});

console.log("search results:", results);
```
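Because tool agents re-advertise every 30 seconds, the planner can also expire tools whose agents have gone away: stamp each entry with `lastSeen` on every advertisement and drop entries older than a few missed heartbeats. A sketch (the 90s cutoff is an assumption; tune it to your re-advertise interval):

```ts
interface TrackedTool {
  agent: string;
  name: string;
  lastSeen: number; // ms since epoch, refreshed on every advertisement
}

const registry = new Map<string, TrackedTool>();

// Called from the tools.advertise handler for each advertised tool.
function recordAdvertisement(agent: string, name: string, now = Date.now()) {
  registry.set(name, { agent, name, lastSeen: now });
}

// Drop tools not re-advertised within maxAgeMs (three missed 30s heartbeats).
function pruneStale(maxAgeMs = 90_000, now = Date.now()) {
  for (const [name, tool] of registry) {
    if (now - tool.lastSeen > maxAgeMs) registry.delete(name);
  }
}

recordAdvertisement("search.relay", "web_search", 0);
recordAdvertisement("execute.relay", "run_python", 60_000);
pruneStale(90_000, 120_000); // web_search last seen 120s ago -> pruned
console.log([...registry.keys()]); // ["run_python"]
```

Run `pruneStale` on a `setInterval`, or lazily on each lookup, so the planner never routes a call to an agent that left the mesh minutes ago.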

## Adding more tools

Each new tool agent just needs to advertise and handle its methods. Here's a code executor:

```ts
// Capabilities to advertise
const capabilities = {
  name: "execute.relay",
  tools: [
    {
      name: "run_python",
      description: "Execute Python code in a sandbox",
      parameters: { code: "string", timeout_ms: "number" },
    },
    {
      name: "run_shell",
      description: "Execute a shell command",
      parameters: { command: "string" },
    },
  ],
};

// Handle calls in your ws.on("message") handler:
if (msg.method === "run_python") {
  const { code, timeout_ms = 10_000 } = JSON.parse(msg.payload);
  const result = await executePython(code, timeout_ms);
  ws.send(JSON.stringify({
    type: "call_response",
    correlation_id: msg.correlation_id,
    payload: JSON.stringify(result),
  }));
}
```

The planner discovers these automatically — no configuration changes needed.
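As an agent grows past one or two methods, the `if` chains can be replaced by a dispatch table keyed on method name. A refactoring sketch with stub handler bodies; wire `dispatch` into your `ws.on("message")` handler and send its return value as the `call_response` payload:

```ts
type Handler = (params: Record<string, unknown>) => Promise<unknown>;

// One entry per advertised tool; real bodies would call the sandbox/shell.
const handlers = new Map<string, Handler>([
  ["run_python", async ({ code }) => ({ ok: true, ran: code })],
  ["run_shell", async ({ command }) => ({ ok: true, ran: command })],
]);

// Route an inbound call to its handler; unknown methods yield an error value
// instead of silently dropping the request.
async function dispatch(method: string, payload: string): Promise<unknown> {
  const handler = handlers.get(method);
  if (!handler) return { error: `unknown method: ${method}` };
  return handler(JSON.parse(payload));
}

const result = await dispatch("run_python", JSON.stringify({ code: "1 + 1" }));
console.log(result); // { ok: true, ran: "1 + 1" }
```

Adding a tool then becomes two edits: one entry in `capabilities.tools` and one entry in `handlers`.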

## MCP-style tool format

If you want your tool advertisements to follow the Model Context Protocol tool format for LLM compatibility:

```ts
const capabilities = {
  name: "search.relay",
  tools: [
    {
      name: "web_search",
      description: "Search the web and return results",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
          maxResults: { type: "number", description: "Max results to return" },
        },
        required: ["query"],
      },
    },
  ],
};
```

This makes it easy for LLM-powered planners to generate valid tool calls.
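If some agents still advertise the flat `parameters` map shown earlier, the planner can up-convert those entries on the fly. A naive sketch that marks no parameter as required, since the flat format carries no required/optional information:

```ts
// Convert a flat { name: "type" } parameter map into an MCP-style inputSchema.
function toInputSchema(parameters: Record<string, string>) {
  const properties: Record<string, { type: string }> = {};
  for (const [name, type] of Object.entries(parameters)) {
    properties[name] = { type };
  }
  return { type: "object", properties, required: [] as string[] };
}

const schema = toInputSchema({ query: "string", maxResults: "number" });
console.log(JSON.stringify(schema, null, 2));
```

With this in place, the planner can hand an LLM a uniform tool list regardless of which advertisement format each agent uses.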

## When to use this pattern

- ✅ LLM agents that need to discover and use tools dynamically
- ✅ Systems where tools come and go (microservices, edge devices)
- ✅ Building a tool ecosystem where new capabilities can be added without redeployment
- ✅ MCP-like tool routing over a decentralized network

- ❌ Static tool sets where you know every tool at compile time (just hardcode them)
- ❌ Tools that require sub-millisecond latency (every call adds a network hop)