
# Orchestrator MCP

An intelligent MCP (Model Context Protocol) server that orchestrates multiple MCP servers and provides AI-enhanced workflow automation with production-ready context engine capabilities for codebase analysis.
## 🌟 Features

### Core Orchestration
- Multi-Server Orchestration: Connect to multiple MCP servers simultaneously
- Universal Compatibility: Works with npm, uvx, Python, and other MCP server types
- Server Management: Dynamic server discovery and health monitoring
- Scalable Architecture: Easy to add new servers and capabilities
### 🧠 AI Enhancement Layer
- Intelligent Tool Routing: AI analyzes requests and selects optimal tools
- Workflow Automation: Multi-step processes orchestrated automatically
- Intent Understanding: Natural language request analysis and planning
- Context Synthesis: Combines results from multiple tools into coherent responses
- Result Enhancement: AI improves and formats outputs for better user experience
### 🎯 Context Engine (Production Ready)
- Large Context Analysis: Process 50K+ characters using Gemini's 1M+ token context
- Intelligent Code Understanding: AI-powered codebase analysis with 95% confidence
- Real-time File Discovery: Dynamic file loading and relationship mapping
- Quality Assessment: Identify placeholder vs real implementations
- Performance Optimized: 30s execution time for complex analysis
### Built-in Capabilities
- Web Fetching: HTTP content retrieval capabilities
- Fallback Mode: Graceful degradation when AI is not available
## Current Status

🎉 Production ready: the context engine is complete!

- ✅ Context Engine: 85.7% quality score, 95% analysis confidence
- ✅ AI Enhancement Layer: complete, with intelligent routing and workflow automation
- ✅ Multi-Server Orchestration: 10/10 MCP servers connected and functional
## 🚀 Quick Start
1. Install dependencies:

   ```bash
   npm install
   ```

2. Build the project:

   ```bash
   npm run build
   ```

3. Configure your MCP client (e.g., Claude Desktop, VS Code) using the example configuration files in the `examples/` directory:

   - `examples/claude-desktop-config.json` for Claude Desktop
   - `examples/vscode-mcp.json` for VS Code

4. Start using the orchestrator through your MCP client!
## MCP Integration

For a stdio MCP server:

- Name: `Orchestrator MCP`
- Command: `node`
- Arguments: `/path/to/orchestrator-mcp/dist/index.js`

For development (after publishing to npm):

- Command: `npx`
- Arguments: `orchestrator-mcp`
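Put together, a minimal stdio registration in an MCP client configuration might look like the following sketch (the install path is a placeholder; point it at wherever you built the project):

```json
{
  "mcpServers": {
    "Orchestrator MCP": {
      "command": "node",
      "args": ["/path/to/orchestrator-mcp/dist/index.js"]
    }
  }
}
```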
## 🛠️ Available Tools

### Core AI Enhancement Tools
The orchestrator exposes a minimal set of tools focused on unique capabilities that enhance AI assistants:
- `ai_process`: the primary interface. Processes requests using AI orchestration with intelligent tool selection.
- `get_info`: system introspection. Returns information about connected servers and available capabilities.
- `ai_status`: health monitoring. Reports the status of AI orchestration capabilities.
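As a sketch of the wire shape, a call to `ai_process` is an ordinary MCP tool call carrying a natural-language `request` argument. The interfaces below are assumptions inferred from the usage examples in this README, not the server's published schemas:

```typescript
// Hypothetical shapes for the orchestrator's core tools -- the real server
// defines its own schemas; these only mirror the README's example payloads.
interface AiProcessArgs {
  request: string; // natural-language description of what to do
}

interface ToolCall {
  tool: "ai_process" | "get_info" | "ai_status";
  arguments: AiProcessArgs | Record<string, never>;
}

// Build the same payload shape shown in the usage examples below.
function buildAiProcessCall(request: string): ToolCall {
  return { tool: "ai_process", arguments: { request } };
}

const call = buildAiProcessCall(
  "Find all TypeScript files with TODO comments and create a summary report",
);
console.log(JSON.stringify(call));
```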
### Connected Server Tools
All tools from connected MCP servers are automatically available through AI orchestration:
- Filesystem operations (read, write, search files)
- Git operations (repository management, status, history)
- Memory system (knowledge graph storage)
- Web fetching (fetch and process web content)
- Web search (DuckDuckGo search for current information)
- Browser automation (Playwright for web testing)
- Security analysis (Semgrep for vulnerability scanning)
- And more...
## 🔗 Connected Servers
Currently configured servers:
- `filesystem` (npm): file operations with secure access controls
- `sequential-thinking` (npm): dynamic problem-solving through thought sequences
- `git` (uvx): Git repository tools and operations
- `memory` (npm): knowledge-graph-based persistent memory
- `fetch` (uvx): enhanced web content fetching
- `github` (npm): official GitHub API integration (requires `GITHUB_TOKEN`)
- `playwright` (npm): browser automation and web testing
- `puppeteer` (npm): alternative browser automation
## 🤖 AI Configuration
To enable AI features, you need an OpenRouter API key. Additional API keys can be configured for enhanced integrations:
1. Required for AI features: get an API key from OpenRouter.
2. Optional integrations:
   - GitHub Personal Access Token for the GitHub server integration
   - Semgrep App Token for enhanced security scanning
3. Configure the API keys in your MCP client settings.

For Claude Desktop (`~/.claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "Orchestrator MCP": {
      "command": "node",
      "args": ["/path/to/project/dist/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "your_api_key_here",
        "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet",
        "OPENROUTER_MAX_TOKENS": "2000",
        "OPENROUTER_TEMPERATURE": "0.7"
      }
    }
  }
}
```
For VS Code (`.vscode/mcp.json`):

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "openrouter-key",
      "description": "OpenRouter API Key",
      "password": true
    }
  ],
  "servers": {
    "Orchestrator MCP": {
      "type": "stdio",
      "command": "node",
      "args": ["/path/to/project/dist/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "${input:openrouter-key}",
        "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet",
        "OPENROUTER_MAX_TOKENS": "2000",
        "OPENROUTER_TEMPERATURE": "0.7"
      }
    }
  }
}
```
### AI Models Supported
The orchestrator works with any model available on OpenRouter, including:
- Anthropic Claude (recommended)
- OpenAI GPT models
- Meta Llama models
- Google Gemini models
- And many more!
## 📖 Usage Examples

### 🎯 Context Engine (Production Ready)

```
# Intelligent codebase analysis
{"tool": "ai_process", "arguments": {"request": "Analyze the current intelligence layer implementation. Show me what's actually implemented vs placeholder code"}}
{"tool": "ai_process", "arguments": {"request": "Find all quality assessment code and identify which parts are real vs mock implementations"}}
{"tool": "ai_process", "arguments": {"request": "Analyze the context management capabilities and identify gaps in the current implementation"}}

# Large-context code understanding
{"tool": "ai_process", "arguments": {"request": "Load the entire src/intelligence directory and provide a comprehensive analysis of the architecture"}}
{"tool": "ai_process", "arguments": {"request": "Analyze relationships between context engine, AI workflows, and orchestrator components"}}
{"tool": "ai_process", "arguments": {"request": "Identify all placeholder implementations across the codebase and prioritize which to implement first"}}
```
### Primary AI Interface

```
# Code analysis & development
{"tool": "ai_process", "arguments": {"request": "Find all TypeScript files with TODO comments and create a summary report"}}
{"tool": "ai_process", "arguments": {"request": "Analyze the codebase architecture and identify potential improvements"}}
{"tool": "ai_process", "arguments": {"request": "Check git status, review recent commits, and summarize changes since last week"}}

# Research & information gathering
{"tool": "ai_process", "arguments": {"request": "Search for Next.js 15 new features and create a comparison with version 14"}}
{"tool": "ai_process", "arguments": {"request": "Fetch the latest TypeScript 5.3 release notes and extract breaking changes"}}
{"tool": "ai_process", "arguments": {"request": "Research React Server Components best practices and save key insights to memory"}}

# Security & quality analysis
{"tool": "ai_process", "arguments": {"request": "Run security analysis on all JavaScript files and prioritize vulnerabilities by severity"}}
{"tool": "ai_process", "arguments": {"request": "Analyze code quality across the project and generate improvement recommendations"}}

# Complex multi-step workflows
{"tool": "ai_process", "arguments": {"request": "Search for React testing best practices, analyze our current test files, and suggest specific improvements"}}
{"tool": "ai_process", "arguments": {"request": "Fetch competitor documentation, compare with our API design, and identify feature gaps"}}
```
### System Introspection

```
# Get server information and capabilities
{"tool": "get_info", "arguments": {}}

# Check AI orchestration health
{"tool": "ai_status", "arguments": {}}
```
### AI-Enhanced Workflows

The `ai_process` tool can handle complex requests like:
- "Analyze my project structure and suggest improvements"
- "Find recent commits and create a summary"
- "Search for TODO comments and organize them by priority"
- "Take a screenshot of the homepage and analyze its performance"
## 🏗️ Architecture

### Multi-Runtime Support
The orchestrator uses a registry-based architecture supporting:
- npm servers: TypeScript/JavaScript servers via npx
- uvx servers: Python servers via uvx
- Built-in tools: Native orchestrator capabilities
### AI Enhancement Layer

```
User Request → Intent Analysis → Tool Selection → Workflow Planning → Execution → Result Synthesis
```
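The pipeline above can be sketched as a chain of stages. The stage names follow the diagram; the function bodies are illustrative stand-ins (a real implementation would call an LLM and the connected servers), not the orchestrator's actual logic:

```typescript
// Toy sketch of the enhancement pipeline -- each stage matches the diagram;
// the implementations are placeholders for illustration only.
type Plan = { steps: string[] };

function analyzeIntent(request: string): string {
  // Stand-in for LLM-based intent analysis.
  return request.toLowerCase().includes("git") ? "vcs" : "general";
}

function selectTools(intent: string): string[] {
  // Stand-in registry mapping intents to connected-server tools.
  const registry: Record<string, string[]> = {
    vcs: ["git"],
    general: ["filesystem", "fetch"],
  };
  return registry[intent] ?? [];
}

function planWorkflow(tools: string[]): Plan {
  return { steps: tools.map((t) => `call ${t}`) };
}

function execute(plan: Plan): string[] {
  return plan.steps.map((s) => `${s}: ok`); // pretend every call succeeds
}

function synthesize(results: string[]): string {
  // Stand-in for AI result synthesis: combine step results into one response.
  return results.join("; ");
}

function aiProcess(request: string): string {
  return synthesize(execute(planWorkflow(selectTools(analyzeIntent(request)))));
}

console.log(aiProcess("Check git status and summarize recent commits"));
// → "call git: ok"
```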
## ⚙️ Configuration

### Server Configuration

Server configurations are managed in `src/orchestrator/server-configs.ts`. Each server entry includes:
- Runtime environment (npm, uvx, python, etc.)
- Command and arguments
- Environment requirements
- Enable/disable status
- Development phase assignment
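An entry covering those fields might look like the following sketch. The interface and the `mcp-server-git` package name are assumptions for illustration; the actual shape lives in `src/orchestrator/server-configs.ts` and may differ:

```typescript
// Illustrative shape for a server-configs.ts entry -- field names mirror
// the list above; the real file defines the authoritative types.
interface ServerConfig {
  runtime: "npm" | "uvx" | "python"; // runtime environment
  command: string;                    // executable to launch
  args: string[];                     // command arguments
  env?: Record<string, string>;       // required environment variables
  enabled: boolean;                   // enable/disable status
  phase: number;                      // development phase assignment
}

// Hypothetical entry for the git server (uvx runtime).
const gitServer: ServerConfig = {
  runtime: "uvx",
  command: "uvx",
  args: ["mcp-server-git"],
  enabled: true,
  phase: 1,
};

console.log(gitServer.runtime); // → "uvx"
```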
### Environment Variables
All environment variables are configured through your MCP client settings. The following variables are supported:
**AI configuration (OpenRouter):**

- `OPENROUTER_API_KEY` (required for AI features): your OpenRouter API key
- `OPENROUTER_DEFAULT_MODEL` (optional): default model to use (default: `anthropic/claude-3.5-sonnet`)
- `OPENROUTER_MAX_TOKENS` (optional): maximum tokens per request (default: `2000`)
- `OPENROUTER_TEMPERATURE` (optional): temperature for AI responses (default: `0.7`)

**MCP server integrations:**

- `GITHUB_TOKEN` (optional): GitHub Personal Access Token for the GitHub server integration
- `SEMGREP_APP_TOKEN` (optional): Semgrep App Token for enhanced security scanning
- `SLACK_BOT_TOKEN` (optional): Slack Bot Token for the Slack integration (if enabled)
- `SLACK_APP_TOKEN` (optional): Slack App Token for the Slack integration (if enabled)
## 🔧 Development

### Scripts

- `npm run build`: build the project
- `npm run dev`: watch mode for development (TypeScript compilation)
- `npm run start`: start the server (for MCP client use)
- `npm run start:dev`: start with `.env` file support (for local development/testing)
- `npm test`: run tests (when available)
### Local Development

For local development and testing, use the development script, which loads environment variables from a `.env` file:

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your actual API keys.
3. Run the development server:

   ```bash
   npm run start:dev
   ```

Note: the regular `npm start` command is intended for MCP client use and expects environment variables to be provided by the MCP client configuration.
## 📝 License
MIT