# Orchestration MCP

A TypeScript MCP server for launching, tracking, and managing external coding-agent runs across local and remote backends such as Codex and Claude Code. It lets top-level agents orchestrate subagents through tools for spawning tasks, polling events, and handling interactive sessions.
The MCP surface stays stable while the internal execution backend can target:

- `codex` - local
- `claude_code` - local
- `remote_a2a` - remote

This lets a top-level agent call one MCP toolset while the orchestration layer decides whether subagents are local SDK processes or remote A2A-compatible agents.
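As an illustrative sketch (not the server's actual source), the backend split can be modeled as a TypeScript union; the three identifiers come from the list above, the helper is assumed:

```typescript
// Backend identifiers from the list above.
type Backend = "codex" | "claude_code" | "remote_a2a";

// Hypothetical helper: codex and claude_code run as local SDK processes,
// while remote_a2a talks to a remote A2A-compatible agent.
function isRemoteBackend(backend: Backend): boolean {
  return backend === "remote_a2a";
}
```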
## Install And Build

```sh
cd orchestration-mcp
npm install
npm run build
```
## Run The MCP Server

```sh
cd orchestration-mcp
npm start
```

This starts the MCP server from `dist/index.js`.
## Codex MCP Config Example

If you want Codex to load this MCP server, add an entry like this to `~/.codex/config.toml`:

```toml
[mcp_servers.orchestration-mcp]
command = "node"
args = ["/abs/path/to/orchestration-mcp/dist/index.js"]
enabled = true
```

Example using this repository path:

```toml
[mcp_servers.orchestration-mcp]
command = "node"
args = ["/Users/fonsh/PycharmProjects/Treer/nanobot/orchestration-mcp/dist/index.js"]
enabled = true
```

After updating the config, restart Codex so it reloads MCP servers.
## What The MCP Exposes

The server registers these tools:

- `spawn_run`
- `get_run`
- `poll_events`
- `cancel_run`
- `continue_run`
- `list_runs`
- `get_event_artifact`
## Typical MCP Flow

- Call `spawn_run` to create a subagent run.
- Call `poll_events` until you see a terminal event or a waiting state.
- If the run enters `input_required` or `auth_required`, call `continue_run`.
- Call `get_run` for the latest run summary.
- If an event contains `artifact_refs`, call `get_event_artifact` to fetch the full payload.
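This flow can be sketched as a driver loop. `CallTool` stands in for an MCP client call; the tool names come from this README, but the returned shapes (`run_id`, a `status` field from `poll_events`) are assumptions:

```typescript
// Hypothetical MCP client call; return types are assumed for illustration.
type CallTool = (tool: string, args: Record<string, unknown>) => any;

function driveRun(callTool: CallTool, prompt: string): string {
  // 1. Create the subagent run.
  const { run_id } = callTool("spawn_run", {
    backend: "codex",
    role: "worker",
    prompt,
    cwd: "/abs/path/to/project", // placeholder path
    session_mode: "new",
  });

  // 2. Poll until the run reaches a terminal state, answering waits.
  for (;;) {
    const { status } = callTool("poll_events", { run_id });
    if (status === "input_required" || status === "auth_required") {
      // 3. Provide the requested input (illustrative message).
      callTool("continue_run", { run_id, input_message: "proceed" });
    } else if (["completed", "failed", "canceled"].includes(status)) {
      break;
    }
  }

  // 4. Fetch the latest run summary.
  return callTool("get_run", { run_id }).status;
}
```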
### spawn_run notes

- `backend`: `"codex"`, `"claude_code"`, or `"remote_a2a"`
- `role`: orchestration role label such as `planner`, `worker`, or `reviewer`
- `prompt`: plain-text instruction for simple runs
- `input_message`: optional structured message for multipart/A2A-style inputs
- `cwd`: absolute working directory
- `session_mode`: `new` or `resume`
- `session_id`: required when resuming a prior session
- `profile`: optional path to a persona/job-description file for future profile-driven behavior. Unless you are explicitly instructed to use a profile, leave `profile` empty.
- `output_schema`: optional JSON Schema for structured final output
- `metadata`: optional orchestration metadata stored for correlation and auditing
- `backend_config`: optional backend-specific settings. For `remote_a2a`, set `agent_url` and any auth headers/tokens here.

For all backends, `cwd` is the orchestration-side working directory used for run/session storage. For `remote_a2a`, `spawn_run.cwd` is also forwarded to the remote subagent and becomes that A2A task context's execution directory.

At least one of `prompt` or `input_message` is required.
Simple example:

```json
{
  "backend": "codex",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new"
}
```
Remote A2A example:

```json
{
  "backend": "remote_a2a",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new",
  "backend_config": {
    "agent_url": "http://127.0.0.1:53552"
  }
}
```
### continue_run notes

Use `continue_run` when a run enters `input_required` or `auth_required` and the backend supports interactive continuation.

Inputs:

- `run_id`
- `input_message`
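For example, a minimal `continue_run` payload might look like this (values are placeholders, and this assumes the backend accepts a plain-text `input_message`):

```json
{
  "run_id": "run_123",
  "input_message": "Yes, proceed with the migration."
}
```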
### get_event_artifact notes

Use `get_event_artifact` when a sanitized event returned by `poll_events` contains `event.data.artifact_refs` and you need the full original payload.

Inputs:

- `run_id`
- `seq`
- `field_path`: JSON Pointer relative to `event.data`, for example `/stdout`, `/raw_tool_use_result`, or `/input/content`
- `offset`: optional byte offset, default `0`
- `limit`: optional byte limit, default `65536`
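Putting the inputs together, an illustrative request for the first 64 KiB of a captured stdout field (the `run_id` and `seq` values are placeholders):

```json
{
  "run_id": "run_123",
  "seq": 8,
  "field_path": "/stdout",
  "offset": 0,
  "limit": 65536
}
```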
Typical flow:

- Call `poll_events`.
- Inspect `event.data.artifact_refs` on any sanitized event.
- Call `get_event_artifact` with the same `run_id`, the event `seq`, and one of the exposed `field_path` values.
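Because `offset` and `limit` page through the payload, a caller can reassemble a large artifact by reading until a short chunk comes back. A minimal sketch, assuming a hypothetical `readChunk(offset, limit)` wrapper around `get_event_artifact` that returns the text at that range:

```typescript
// Reassemble a full artifact by paging with offset/limit until a short read.
// `readChunk` is a hypothetical wrapper around get_event_artifact.
function readFullArtifact(
  readChunk: (offset: number, limit: number) => string,
  limit = 65536, // matches the documented default
): string {
  let out = "";
  let offset = 0;
  for (;;) {
    const chunk = readChunk(offset, limit);
    out += chunk;
    if (chunk.length < limit) break; // short read: end of artifact
    offset += chunk.length;
  }
  return out;
}
```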
## Backend defaults

- `codex`: uses the current `@openai/codex-sdk` defaults plus non-interactive execution settings already wired in the adapter
- `claude_code`: uses `@anthropic-ai/claude-agent-sdk` with `permissionMode: "bypassPermissions"` so the MCP call stays non-blocking, and reuses persisted backend session ids for `resume`
- `remote_a2a`: connects to a remote A2A-compatible agent using `@a2a-js/sdk`, streams task updates into normalized orchestration events, and supports `continue_run` for `input_required`

For `claude_code`, make sure the local environment already has a working Claude Code authentication setup before testing.
## Test A2A agents

The repo includes helper modules for local A2A-wrapped test agents:

- `dist/test-agents/codex-a2a-agent.js`
- `dist/test-agents/claude-a2a-agent.js`
- `dist/test-agents/start-a2a-agent.js`

These export startup helpers that wrap the local Codex and Claude SDKs behind an A2A server so the orchestration MCP can test its internal `remote_a2a` backend against realistic subagents.

To start an interactive wrapper launcher:

```sh
npm run start:a2a-agent
```

The script will ask whether to wrap `codex` or `claude_code`. After startup, it prints the `agent_url` and a ready-to-use `spawn_run` payload for the MCP layer.

The wrapper no longer locks a working directory at startup. Each `remote_a2a` call uses the `cwd` provided to `spawn_run`, and the wrapper keeps that `cwd` fixed for the lifetime of the same A2A `contextId`.
## Storage

Run data is stored under:

```text
<cwd>/.nanobot-orchestrator/
  runs/
    <run_id>/
      run.json
      events.jsonl
      result.json
      artifacts/
        000008-command_finished/
          manifest.json
          stdout.0001.txt
          stdout.0002.txt
  sessions/
    <session_id>.json
```
Notes:

- `events.jsonl` stores sanitized events intended for `poll_events` consumption.
- Oversized raw payloads are moved into per-event artifact files and referenced from `event.data.artifact_refs`.
- `run.json` and `result.json` keep the current run snapshot and final result behavior.
- The storage directory name is currently `.nanobot-orchestrator/` for backward compatibility with the existing implementation.
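As an illustrative sketch, scanning `events.jsonl` for events that reference artifacts could look like this; the `seq` and `data.artifact_refs` field names follow this README, while the rest of the event shape is assumed:

```typescript
// Assumed minimal shape of one sanitized event line in events.jsonl.
interface StoredEvent {
  seq: number;
  data?: { artifact_refs?: string[] };
}

// Return the seq numbers of sanitized events that carry artifact references.
function eventsWithArtifacts(jsonl: string): number[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as StoredEvent)
    .filter((ev) => (ev.data?.artifact_refs?.length ?? 0) > 0)
    .map((ev) => ev.seq);
}
```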