MCP Orchestrator
A sophisticated Model Context Protocol (MCP) server that orchestrates external AI models (Gemini 2.5 Pro and O3) to provide additional perspectives and insights when using Claude. The orchestrator exclusively uses external models since users are already interacting with Claude directly.
Architecture Overview
When you interact with Claude, this MCP server provides tools to consult external models for additional perspectives:
- Gemini 2.5 Pro (via OpenRouter): Alternative analysis and perspectives
- O3 (via OpenAI): Architectural and system design insights
Note: The orchestrator does NOT use Claude models since you're already talking to Claude. It exclusively orchestrates external models to enhance your Claude experience.
Features
- External Model Enhancement: Get perspectives from Gemini 2.5 Pro and O3 to supplement Claude's responses
- Network Bridges: REST API (port 5050) and WebSocket (port 8765) for integration with any application
- Advanced Reasoning Strategies: External enhancement and multi-model council approaches
- MCP-Compliant: Full adherence to Model Context Protocol standards
- Secure by Design: Non-root execution, encrypted storage, API key protection
- Docker Support: Production-ready containerization with health checks
- Cost Controls: Built-in request and daily spending limits
- Stability Fixes: Known issues resolved, including bugs in the ResponseSynthesizer and in lifecycle management
Quick Start
1. Clone and Configure
git clone https://github.com/gramanoid/mcp_orchestrator
cd mcp_orchestrator
# Create .env file with your API keys
cat > .env << EOF
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
EOF
2. Deploy with Docker
# Deploy the service
./scripts/deploy.sh
# Check status
./scripts/deploy.sh status
# View logs
./scripts/deploy.sh logs
3. Start Network Services (Optional)
# Start REST API and WebSocket bridges for network access
./start_network_services.sh
# Test REST API
curl -X POST http://localhost:5050/mcp/get_orchestrator_status
# Test WebSocket (see examples/integration_example.py)
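If you prefer to smoke-test the WebSocket bridge from Python, a minimal sketch follows (assuming the third-party `websockets` package; the message shape mirrors the WebSocket example in Integration Options below and is an assumption, not a documented schema):

```python
import asyncio
import json

import websockets  # pip install websockets


async def main():
    async with websockets.connect("ws://localhost:8765") as ws:
        # Assumed message shape, mirroring the Integration Options examples.
        await ws.send(json.dumps({"method": "get_orchestrator_status", "params": {}}))
        print(json.loads(await ws.recv()))


asyncio.run(main())
```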
4. Use with MCP Clients
The orchestrator exposes 13 MCP tools that allow Claude to get external perspectives:
- orchestrate_task: Get external model perspectives on any task
- analyze_task: Analyze task complexity with external models
- query_specific_model: Query Gemini 2.5 Pro or O3 directly
- code_review: Get external code review perspectives
- think_deeper: Request deeper analysis from external models
- multi_model_review: Get multiple external perspectives
- comparative_analysis: Compare solutions using external models
- And more tools for specific use cases (a client-side sketch follows below)
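As a sketch of calling one of these tools from your own MCP client, using the official MCP Python SDK over stdio (the server launch command and the tool arguments here are assumptions; adjust them to your deployment):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    # Launch the orchestrator as a stdio MCP server (assumed entry point).
    params = StdioServerParameters(command="python", args=["-m", "src.mcp_server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask for external perspectives on a task (illustrative arguments).
            result = await session.call_tool(
                "orchestrate_task",
                arguments={
                    "description": "Review this API design for pitfalls",
                    "strategy": "external_enhancement",
                },
            )
            print(result)


asyncio.run(main())
```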
Architecture
┌──────┐     ┌──────────┐     ┌──────────────────┐     ┌──────────────────┐
│ User │────▶│  Claude  │────▶│ MCP Orchestrator │────▶│ External Models: │
└──────┘     │  (You)   │     └──────────────────┘     │  Gemini 2.5 Pro  │
             └──────────┘               ▲               │  O3              │
                  │                     │               └──────────────────┘
                  └─────────────────────┘
MCP Tools Usage
The flow:
1. User asks Claude a question
2. Claude responds directly (primary interaction)
3. Claude can optionally use MCP tools to get external perspectives
4. MCP Orchestrator queries ONLY external models (Gemini 2.5 Pro and/or O3)
5. External insights are integrated into Claude's response
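For example, steps 3 and 4 might look like the following when routed through the REST bridge. This is a sketch only; the `current_analysis` field name is an assumption for illustration, not the tool's documented schema:

```python
import requests

# Ask the orchestrator for a second opinion on a draft Claude produced.
response = requests.post(
    "http://localhost:5050/mcp/think_deeper",
    json={"current_analysis": "Claude's draft answer here..."},  # assumed field name
)
print(response.json())
```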
Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| OPENROUTER_API_KEY | Your OpenRouter API key (for Gemini 2.5 Pro) | Required |
| OPENAI_API_KEY | Your OpenAI API key (for O3) | Required |
| MCP_LOG_LEVEL | Logging level | INFO |
| MCP_MAX_COST_PER_REQUEST | Maximum cost per request ($) | 5.0 |
| MCP_DAILY_LIMIT | Daily spending limit ($) | 100.0 |
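To illustrate how the two spending limits interact, here is a minimal sketch of the kind of guard a server might run before dispatching a request. The class and its accounting logic are assumptions for illustration, not the orchestrator's actual implementation:

```python
import os


class CostGuard:
    """Illustrative cost guard; reads the limits documented above."""

    def __init__(self):
        self.max_per_request = float(os.getenv("MCP_MAX_COST_PER_REQUEST", "5.0"))
        self.daily_limit = float(os.getenv("MCP_DAILY_LIMIT", "100.0"))
        self.spent_today = 0.0

    def check(self, estimated_cost: float) -> None:
        # Reject a single request that would exceed the per-request cap.
        if estimated_cost > self.max_per_request:
            raise RuntimeError(f"Estimated cost ${estimated_cost:.2f} exceeds per-request limit")
        # Reject once the cumulative daily spend would be exceeded.
        if self.spent_today + estimated_cost > self.daily_limit:
            raise RuntimeError("Daily spending limit reached")

    def record(self, actual_cost: float) -> None:
        self.spent_today += actual_cost
```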
Strategy Configuration
Edit config/config.yaml to customize:
models:
  gemini_pro:
    provider: openrouter
    model_id: google/gemini-2.5-pro-preview
    max_tokens: 32768
    temperature: 0.7
  o3_architect:
    provider: openai
    model_id: o3
    max_tokens: 16384
    temperature: 0.8

strategies:
  external_enhancement:
    models:
      - gemini_pro
      - o3_architect
  max_quality_council:
    models:
      - gemini_pro
      - o3_architect
    require_consensus: true
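A minimal sketch of consuming this file, assuming PyYAML and the key layout shown above, to resolve which models a strategy will query:

```python
import yaml

with open("config/config.yaml") as f:
    config = yaml.safe_load(f)

# Resolve the model entries behind the external_enhancement strategy.
strategy = config["strategies"]["external_enhancement"]
for name in strategy["models"]:
    model = config["models"][name]
    print(f"{name}: {model['provider']} -> {model['model_id']}")
```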
Integration Options
REST API
import requests

response = requests.post(
    'http://localhost:5050/mcp/orchestrate_task',
    json={
        "description": "Analyze this architecture decision",
        "strategy": "external_enhancement"
    }
)
print(response.json()['result'])
WebSocket
const ws = new WebSocket('ws://localhost:8765');

// Send only after the connection opens; an immediate send() throws.
ws.onopen = () => {
  ws.send(JSON.stringify({
    method: 'query_specific_model',
    params: {
      model: 'gemini_pro',
      description: 'What are React best practices?'
    }
  }));
};
ws.onmessage = (event) => console.log(JSON.parse(event.data));
See INTEGRATION_EXAMPLES.md for more examples in various languages.
Development
Local Setup
# Create virtual environment
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
# Install dependencies
pip install -r requirements.txt
# Run tests
pytest tests/
# Run locally
python -m src.mcp_server
Testing with Client
# See scripts/mcp-client.py for example usage
python scripts/mcp-client.py
Security
- Runs as non-root user in containers
- Read-only filesystem with specific writable volumes
- Encrypted credential storage
- No capabilities beyond essentials
- Resource limits enforced
Monitoring
- JSON structured logging
- Health checks every 30s
- Log rotation (3 files, 10MB each)
- Cost tracking and limits
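As an example of consuming the health signal externally, here is a sketch of a poller that mirrors the 30-second cadence. The endpoint matches the REST bridge above; the polling loop itself is an assumption for illustration:

```python
import time

import requests

while True:
    try:
        r = requests.post("http://localhost:5050/mcp/get_orchestrator_status", timeout=10)
        r.raise_for_status()
        print("healthy:", r.json())
    except requests.RequestException as exc:
        print("unhealthy:", exc)
    time.sleep(30)  # matches the container's 30s health-check interval
```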
Troubleshooting
Container won't start
# Check logs
docker-compose logs
# Verify environment
docker-compose config
API errors
- Verify API keys in .env
- Check rate limits and quotas
- Review logs for specific errors
Memory issues
- Adjust mem_limit in docker-compose.yml
- Monitor with docker stats
License
MIT License - see LICENSE file for details