Orchestration MCP

A TypeScript MCP server for launching, tracking, and managing external coding-agent runs across local and remote backends like Codex and Claude Code. It allows top-level agents to orchestrate subagents through tools for spawning tasks, polling events, and handling interactive sessions.

TypeScript MCP server for launching and tracking external coding-agent runs.

The MCP surface stays stable while the internal execution backend can target:

  • codex (local)
  • claude_code (local)
  • remote_a2a (remote)

This lets a top-level agent call one MCP toolset while the orchestration layer decides whether subagents are local SDK processes or remote A2A-compatible agents.

Install And Build

cd orchestration-mcp
npm install
npm run build

Run The MCP Server

cd orchestration-mcp
npm start

This starts the MCP server from dist/index.js.

Codex MCP Config Example

If you want Codex to load this MCP server, add an entry like this to ~/.codex/config.toml:

[mcp_servers.orchestration-mcp]
command = "node"
args = ["/abs/path/to/orchestration-mcp/dist/index.js"]
enabled = true

Example using this repository path:

[mcp_servers.orchestration-mcp]
command = "node"
args = ["/Users/fonsh/PycharmProjects/Treer/nanobot/orchestration-mcp/dist/index.js"]
enabled = true

After updating the config, restart Codex so it reloads MCP servers.

What The MCP Exposes

The server registers these tools:

  • spawn_run
  • get_run
  • poll_events
  • cancel_run
  • continue_run
  • list_runs
  • get_event_artifact

Typical MCP Flow

  1. Call spawn_run to create a subagent run.
  2. Call poll_events until you see a terminal event or a waiting state.
  3. If the run enters input_required or auth_required, call continue_run.
  4. Call get_run for the latest run summary.
  5. If an event contains artifact_refs, call get_event_artifact to fetch the full payload.
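The flow above can be sketched as a small driver loop. This assumes a generic `callTool(name, args)` helper that forwards tool calls to the MCP client; the response field names (`run_id`, `status`) and the terminal status values are illustrative assumptions, not the server's exact schema.

```typescript
// Sketch of the typical orchestration flow: spawn, poll, continue, summarize.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

async function driveRun(callTool: ToolCall, spawnArgs: Record<string, unknown>) {
  // 1. Create the subagent run.
  const { run_id } = await callTool("spawn_run", spawnArgs);

  // 2. Poll until the run is waiting for input or reaches a terminal state.
  for (;;) {
    const { status } = await callTool("poll_events", { run_id });
    if (status === "input_required" || status === "auth_required") {
      // 3. Resume the waiting run with a follow-up message.
      await callTool("continue_run", {
        run_id,
        input_message: { role: "user", parts: [{ text: "continue" }] },
      });
      continue;
    }
    if (status === "completed" || status === "failed" || status === "cancelled") {
      break;
    }
  }

  // 4. Fetch the latest run summary.
  return callTool("get_run", { run_id });
}
```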

spawn_run notes

  • backend: "codex", "claude_code", or "remote_a2a"
  • role: orchestration role label such as planner, worker, or reviewer
  • prompt: plain-text instruction for simple runs
  • input_message: optional structured message for multipart/A2A-style inputs
  • cwd: absolute working directory
  • session_mode: new or resume
  • session_id: required when resuming a prior session
  • profile: optional path to a persona/job-description file for future profile-driven behavior; unless you are explicitly instructed to use a profile, leave it empty
  • output_schema: optional JSON Schema for structured final output
  • metadata: optional orchestration metadata stored for correlation and auditing
  • backend_config: optional backend-specific settings. For remote_a2a, set agent_url and any auth headers/tokens here.

For all backends, cwd is the orchestration-side working directory used for run/session storage.

For remote_a2a, spawn_run.cwd is also forwarded to the remote subagent and becomes that A2A task context's execution directory.

At least one of prompt or input_message is required.

Simple example:

{
  "backend": "codex",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new"
}

Remote A2A example:

{
  "backend": "remote_a2a",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new",
  "backend_config": {
    "agent_url": "http://127.0.0.1:53552"
  }
}

continue_run notes

Use continue_run when a run enters input_required or auth_required and the backend supports interactive continuation.

Inputs:

  • run_id
  • input_message
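For example, a waiting run might be resumed with a payload like this. The run_id is a placeholder, and the input_message shape shown here assumes the same multipart/A2A-style message format accepted by spawn_run:

```json
{
  "run_id": "run-123",
  "input_message": {
    "role": "user",
    "parts": [
      { "text": "Yes, apply the proposed refactor." }
    ]
  }
}
```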

get_event_artifact notes

Use get_event_artifact when a sanitized event returned by poll_events contains event.data.artifact_refs and you need the full original payload.

Inputs:

  • run_id
  • seq
  • field_path: JSON Pointer relative to event.data, for example /stdout, /raw_tool_use_result, or /input/content
  • offset: optional byte offset, default 0
  • limit: optional byte limit, default 65536
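Putting the inputs together, a fetch of an oversized stdout field might look like this (run_id and seq are placeholders; offset and limit show the documented defaults):

```json
{
  "run_id": "run-123",
  "seq": 8,
  "field_path": "/stdout",
  "offset": 0,
  "limit": 65536
}
```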

Typical flow:

  1. Call poll_events.
  2. Inspect event.data.artifact_refs on any sanitized event.
  3. Call get_event_artifact with the same run_id, the event seq, and one of the exposed field_path values.
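When an artifact exceeds the byte limit, the offset/limit pair supports paging through it in chunks. The sketch below assumes the tool response exposes the bytes as a `chunk` string plus a `done` flag; treat those field names as placeholders for the real response shape.

```typescript
// Sketch: page through a large artifact field in 64 KiB chunks.
type ArtifactFetch = (args: {
  run_id: string;
  seq: number;
  field_path: string;
  offset: number;
  limit: number;
}) => Promise<{ chunk: string; done: boolean }>;

async function fetchFullArtifact(
  fetchTool: ArtifactFetch,
  run_id: string,
  seq: number,
  field_path: string,
): Promise<string> {
  const limit = 65536; // documented default byte limit per call
  let offset = 0;
  let out = "";
  for (;;) {
    const { chunk, done } = await fetchTool({ run_id, seq, field_path, offset, limit });
    out += chunk;
    if (done || chunk.length === 0) break;
    offset += chunk.length; // byte offset; assumes single-byte characters in this sketch
  }
  return out;
}
```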

Backend defaults

  • codex: uses the current @openai/codex-sdk defaults plus non-interactive execution settings already wired in the adapter
  • claude_code: uses @anthropic-ai/claude-agent-sdk with permissionMode: "bypassPermissions" so the MCP call stays non-blocking, and reuses persisted backend session ids for resume
  • remote_a2a: connects to a remote A2A-compatible agent using @a2a-js/sdk, streams task updates into normalized orchestration events, and supports continue_run for input_required

For claude_code, make sure the local environment already has a working Claude Code authentication setup before testing.

Test A2A agents

The repo includes helper modules for local A2A-wrapped test agents:

  • dist/test-agents/codex-a2a-agent.js
  • dist/test-agents/claude-a2a-agent.js
  • dist/test-agents/start-a2a-agent.js

These export startup helpers that wrap the local Codex and Claude SDKs behind an A2A server so the orchestration MCP can test its internal remote_a2a backend against realistic subagents.

To start an interactive wrapper launcher:

npm run start:a2a-agent

The script will ask whether to wrap codex or claude_code.

After startup, it prints the agent_url and a ready-to-use spawn_run payload for the MCP layer. The wrapper no longer locks a working directory at startup. Each remote_a2a call uses the cwd provided to spawn_run, and the wrapper keeps that cwd fixed for the lifetime of the same A2A contextId.

Storage

Run data is stored under:

<cwd>/.nanobot-orchestrator/
  runs/
    <run_id>/
      run.json
      events.jsonl
      result.json
      artifacts/
        000008-command_finished/
          manifest.json
          stdout.0001.txt
          stdout.0002.txt
  sessions/
    <session_id>.json

Notes:

  • events.jsonl stores sanitized events intended for poll_events consumption.
  • Oversized raw payloads are moved into per-event artifact files and referenced from event.data.artifact_refs.
  • run.json and result.json keep the current run snapshot and final result behavior.
  • The storage directory name is currently .nanobot-orchestrator/ for backward compatibility with the existing implementation.
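Because events.jsonl holds one sanitized event per line, it is straightforward to scan offline for events that reference artifacts. The `seq` and `data.artifact_refs` field names follow the layout described above; the rest of the event shape is an assumption.

```typescript
// Sketch: parse sanitized events from an events.jsonl payload and
// collect the ones that reference per-event artifact files.
interface SanitizedEvent {
  seq: number;
  type?: string;
  data?: { artifact_refs?: string[]; [key: string]: unknown };
}

function parseEvents(jsonl: string): SanitizedEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SanitizedEvent);
}

function eventsWithArtifacts(events: SanitizedEvent[]): SanitizedEvent[] {
  return events.filter((e) => (e.data?.artifact_refs?.length ?? 0) > 0);
}
```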
