Claude Code AI Collaboration MCP Server

A powerful Model Context Protocol (MCP) server that enables AI collaboration through multiple providers with advanced strategies and comprehensive tooling.


🌟 Features

🤖 Multi-Provider AI Integration

  • DeepSeek: Primary provider with optimized performance
  • OpenAI: GPT models integration
  • Anthropic: Claude models support
  • O3: Next-generation model support

🚀 Advanced Collaboration Strategies

  • Parallel: Execute requests across multiple providers simultaneously
  • Sequential: Chain provider responses for iterative improvement
  • Consensus: Build agreement through multiple provider opinions
  • Iterative: Refine responses through multiple rounds
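
In a tool call, the strategy is selected by name in the arguments. A hypothetical fragment (the field names mirror the collaborate tool example later in this README):

```json
{
  "prompt": "Summarize this design doc",
  "strategy": "parallel",
  "providers": ["deepseek", "openai", "anthropic"]
}
```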

🛠️ Comprehensive MCP Tools

  • collaborate: Multi-provider collaboration with strategy selection
  • review: Content analysis and quality assessment
  • compare: Side-by-side comparison of multiple items
  • refine: Iterative content improvement

📊 Enterprise Features

  • Caching: Memory and Redis-compatible caching system
  • Metrics: OpenTelemetry-compatible performance monitoring
  • Search: Full-text search with inverted indexing
  • Synthesis: Intelligent response aggregation

🚀 Quick Start

📖 New to MCP? Check out our Quick Start Guide for a 5-minute setup!

Prerequisites

  • Node.js 18.0.0 or higher
  • pnpm 8.0.0 or higher
  • TypeScript 5.3.0 or higher

Installation

# Clone the repository
git clone https://github.com/atsuki-sakai/ai_collaboration_mcp_server.git
cd ai_collaboration_mcp_server

# Install dependencies
pnpm install

# Build the project
pnpm run build

# Run tests
pnpm test

Configuration

  1. Environment Variables:

    # Required: Set your API keys
    export DEEPSEEK_API_KEY="your-deepseek-api-key"
    export OPENAI_API_KEY="your-openai-api-key"
    export ANTHROPIC_API_KEY="your-anthropic-api-key"
    
    # Optional: Configure other settings
    export MCP_DEFAULT_PROVIDER="deepseek"
    export MCP_PROTOCOL="stdio"
    
  2. Configuration Files:

    • config/default.yaml: Default configuration
    • config/development.yaml: Development settings
    • config/production.yaml: Production settings
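
As a rough sketch, a development config file might look like the following. The key names below are assumptions based on the configuration sections described later in this README; the authoritative structure is defined in config/schema.json:

```yaml
# config/development.yaml -- illustrative sketch, key names are assumptions
server:
  name: ai-collaboration
  protocol: stdio

providers:
  default: deepseek

cache:
  type: memory

logging:
  level: debug
```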

Running the Server

# Start with default settings
pnpm start

# Start with specific protocol
node dist/index.js --protocol stdio

# Start with custom providers
node dist/index.js --providers deepseek,openai --default-provider deepseek

# Enable debug mode
NODE_ENV=development LOG_LEVEL=debug pnpm start

🔗 Claude Code Integration

Connecting to Claude Code

To use this MCP server with Claude Code, you need to configure Claude Code to recognize and connect to your server.

1. Automated Setup (Recommended)

Use the automated setup script for easy configuration:

# Navigate to your project directory
cd /Users/atsukisakai/Desktop/ai_collaboration_mcp_server

# Run automated setup with your DeepSeek API key
./scripts/setup-claude-code.sh --api-key "your-deepseek-api-key"

# Or with multiple providers
./scripts/setup-claude-code.sh \
  --api-key "your-deepseek-key" \
  --openai-key "your-openai-key" \
  --anthropic-key "your-anthropic-key"

# Alternative using pnpm
pnpm run setup:claude-code -- --api-key "your-deepseek-key"

The setup script will:

  • ✅ Build the MCP server
  • ✅ Create Claude Code configuration file
  • ✅ Test the server connection
  • ✅ Provide next steps

1b. Manual Setup

If you prefer manual setup:

# Navigate to your project directory
cd /Users/atsukisakai/Desktop/ai_collaboration_mcp_server

# Install dependencies and build
pnpm install
pnpm run build

# Set your DeepSeek API key
export DEEPSEEK_API_KEY="your-deepseek-api-key"

# Test the server
pnpm run verify-deepseek

2. Configure Claude Code

Create or update the Claude Code configuration file:

Note: There are two server options:

  • simple-server.js - Simple implementation with DeepSeek only (recommended for testing)
  • index.js - Full implementation with all providers and features

macOS/Linux:

# Create config directory if it doesn't exist
mkdir -p ~/.config/claude-code

# Create configuration file (simple server - recommended for testing)
cat > ~/.config/claude-code/claude_desktop_config.json << 'EOF'
{
  "mcpServers": {
    "ai-collaboration": {
      "command": "node",
      "args": ["/Users/atsukisakai/Desktop/ai_collaboration_mcp_server/dist/simple-server.js"],
      "env": {
        "DEEPSEEK_API_KEY": "your-deepseek-api-key"
      }
    }
  }
}
EOF

# Or use the full server for all features
# Replace simple-server.js with index.js in the args above

Windows:

# Create config directory
mkdir "%APPDATA%\Claude"

# Create configuration file (use your preferred text editor)
# File: %APPDATA%\Claude\claude_desktop_config.json

3. Configuration Options

{
  "mcpServers": {
    "ai-collaboration": {
      "command": "node",
      "args": [
        "/Users/atsukisakai/Desktop/ai_collaboration_mcp_server/dist/index.js",
        "--default-provider", "deepseek",
        "--providers", "deepseek,openai"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-deepseek-api-key",
        "OPENAI_API_KEY": "your-openai-api-key",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key",
        "NODE_ENV": "production",
        "LOG_LEVEL": "info",
        "MCP_DISABLE_CACHING": "false",
        "MCP_DISABLE_METRICS": "false"
      }
    }
  }
}

4. Available Tools in Claude Code

After restarting Claude Code, you'll have access to these powerful tools:

  • 🤝 collaborate - Multi-provider AI collaboration
  • 📝 review - Content analysis and quality assessment
  • ⚖️ compare - Side-by-side comparison of multiple items
  • ✨ refine - Iterative content improvement

5. Usage Examples in Claude Code

# Use DeepSeek for code explanation
Please use the collaborate tool to explain this Python code with DeepSeek

# Review code quality
Use the review tool to analyze the quality of this code

# Compare multiple solutions
Use the compare tool to compare these 3 approaches to solving this problem

# Improve code iteratively
Use the refine tool to make this function more efficient

6. Troubleshooting

Check MCP server connectivity:

# Test if the server starts correctly
DEEPSEEK_API_KEY="your-key" node dist/index.js --help

View logs:

# Check application logs
tail -f logs/application-$(date +%Y-%m-%d).log

Verify Claude Code configuration:

  1. Restart Claude Code completely
  2. In a new conversation, ask "What tools are available?"
  3. You should see the four MCP tools listed
  4. Test with a simple command like "Use collaborate to say hello"
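
As a lower-level check, you can hand-craft a JSON-RPC message and pipe it to the server over stdio. The snippet below only validates the payload locally; the commented-out line shows how it might be sent (whether tools/list works without a prior initialize handshake depends on the server's MCP implementation):

```shell
# A JSON-RPC request listing the server's tools (method name per the MCP spec)
REQ='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

# Validate the payload locally before sending it anywhere
echo "$REQ" | python3 -m json.tool > /dev/null && echo "payload ok"  # prints "payload ok"

# Then pipe it to the built server (uncomment once dist/index.js exists):
# echo "$REQ" | DEEPSEEK_API_KEY="your-key" node dist/index.js --protocol stdio
```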

7. Configuration File Locations

  • macOS/Linux: ~/.config/claude-code/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

📖 Usage

MCP Tools

Collaborate Tool

Execute multi-provider collaboration with strategy selection:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "collaborate",
    "arguments": {
      "prompt": "Explain quantum computing in simple terms",
      "strategy": "consensus",
      "providers": ["deepseek", "openai"],
      "config": {
        "timeout": 30000,
        "consensus_threshold": 0.7
      }
    }
  }
}

Review Tool

Analyze content quality and provide detailed feedback:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "review",
    "arguments": {
      "content": "Your content here...",
      "criteria": ["accuracy", "clarity", "completeness"],
      "review_type": "comprehensive"
    }
  }
}

Compare Tool

Compare multiple items with detailed analysis:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "compare",
    "arguments": {
      "items": [
        {"id": "1", "content": "Option A"},
        {"id": "2", "content": "Option B"}
      ],
      "comparison_dimensions": ["quality", "relevance", "innovation"]
    }
  }
}

Refine Tool

Iteratively improve content quality:

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "refine",
    "arguments": {
      "content": "Content to improve...",
      "refinement_goals": {
        "primary_goal": "clarity",
        "target_audience": "general public"
      }
    }
  }
}

Available Resources

  • collaboration_history: Access past collaboration results
  • provider_stats: Monitor provider performance metrics
  • tool_usage: Track tool utilization statistics
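
These resources should be reachable through the standard MCP resources endpoints. A hypothetical request for provider statistics, in the same JSON-RPC style as the tool examples above (the exact resource URI scheme is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "resources/read",
  "params": {
    "uri": "provider_stats"
  }
}
```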

🏗️ Architecture

Core Components

src/
├── core/                    # Core framework components
│   ├── types.ts            # Dependency injection symbols
│   ├── logger.ts           # Structured logging
│   ├── config.ts           # Configuration management
│   ├── container.ts        # DI container setup
│   ├── provider-manager.ts # AI provider orchestration
│   ├── strategy-manager.ts # Execution strategy management
│   └── tool-manager.ts     # MCP tool management
├── providers/              # AI provider implementations
│   ├── base-provider.ts    # Common provider functionality
│   ├── deepseek-provider.ts
│   ├── openai-provider.ts
│   ├── anthropic-provider.ts
│   └── o3-provider.ts
├── strategies/             # Collaboration strategies
│   ├── parallel-strategy.ts
│   ├── sequential-strategy.ts
│   ├── consensus-strategy.ts
│   └── iterative-strategy.ts
├── tools/                  # MCP tool implementations
│   ├── collaborate-tool.ts
│   ├── review-tool.ts
│   ├── compare-tool.ts
│   └── refine-tool.ts
├── services/               # Enterprise services
│   ├── cache-service.ts
│   ├── metrics-service.ts
│   ├── search-service.ts
│   └── synthesis-service.ts
├── server/                 # MCP server implementation
│   └── mcp-server.ts
└── types/                  # Type definitions
    ├── common.ts
    ├── interfaces.ts
    └── index.ts

Design Principles

  • Dependency Injection: Clean architecture with InversifyJS
  • Strategy Pattern: Pluggable collaboration strategies
  • Provider Abstraction: Unified interface for different AI services
  • Performance: Efficient caching and rate limiting
  • Observability: Comprehensive metrics and logging
  • Extensibility: Easy to add new providers and strategies

🔧 Configuration

Configuration Schema

The server uses YAML configuration files with JSON Schema validation. See config/schema.json for the complete schema.

Key Configuration Sections

  • Server: Basic server settings (name, version, protocol)
  • Providers: AI provider configurations and credentials
  • Strategies: Strategy-specific settings and timeouts
  • Cache: Caching behavior (memory, Redis, file)
  • Metrics: Performance monitoring settings
  • Logging: Log levels and output configuration

Environment Variables

Variable               Description                                  Default
DEEPSEEK_API_KEY       DeepSeek API key                             Required
OPENAI_API_KEY         OpenAI API key                               Optional
ANTHROPIC_API_KEY      Anthropic API key                            Optional
O3_API_KEY             O3 API key (falls back to OPENAI_API_KEY)    Optional
MCP_PROTOCOL           Transport protocol                           stdio
MCP_DEFAULT_PROVIDER   Default AI provider                          deepseek
NODE_ENV               Environment mode                             production
LOG_LEVEL              Logging level                                info

📊 Monitoring & Metrics

Built-in Metrics

  • Request Metrics: Response times, success rates, error counts
  • Provider Metrics: Individual provider performance
  • Tool Metrics: Usage statistics per MCP tool
  • Cache Metrics: Hit rates, memory usage
  • System Metrics: CPU, memory, and resource utilization

OpenTelemetry Integration

The server supports OpenTelemetry for distributed tracing and metrics collection:

metrics:
  enabled: true
  export:
    enabled: true
    format: "opentelemetry"
    endpoint: "http://localhost:4317"

🧪 Testing

Test Coverage

  • Unit Tests: 95+ individual component tests
  • Integration Tests: End-to-end MCP protocol testing
  • E2E Tests: Complete workflow validation
  • API Tests: Direct provider API validation

Running Tests

# Run all tests
pnpm test

# Run with coverage
pnpm run test:coverage

# Run specific test suites
pnpm run test:unit
pnpm run test:integration
pnpm run test:e2e

# Verify API connectivity
pnpm run verify-deepseek

🚢 Deployment

Docker

# Build image
docker build -t claude-code-ai-collab-mcp .

# Run container
docker run -d \
  -e DEEPSEEK_API_KEY=your-key \
  -p 3000:3000 \
  claude-code-ai-collab-mcp
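
For local orchestration with Redis-backed caching, a docker-compose sketch might look like the following. The service names and the REDIS_URL variable are assumptions; check the project's configuration schema for the actual cache settings:

```yaml
# docker-compose.yml -- illustrative sketch, not the project's official file
services:
  mcp-server:
    build: .
    environment:
      - DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY}
      - NODE_ENV=production
      - REDIS_URL=redis://redis:6379   # assumed variable name
    ports:
      - "3000:3000"
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```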

Production Considerations

  • Load Balancing: Multiple server instances for high availability
  • Caching: Redis for distributed caching
  • Monitoring: Prometheus/Grafana for metrics visualization
  • Security: API key rotation and rate limiting
  • Backup: Regular configuration and data backups

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Development Setup

# Fork and clone the repository
git clone https://github.com/atsuki-sakai/ai_collaboration_mcp_server.git
cd ai_collaboration_mcp_server

# Install dependencies
pnpm install

# Start development
pnpm run dev

# Run tests
pnpm test

# Lint and format
pnpm run lint
pnpm run lint:fix

📋 Roadmap

Version 1.1

  • [ ] GraphQL API support
  • [ ] WebSocket transport protocol
  • [ ] Advanced caching strategies
  • [ ] Custom strategy plugins

Version 1.2

  • [ ] Multi-tenant support
  • [ ] Enhanced security features
  • [ ] Performance optimizations
  • [ ] Additional AI providers

Version 2.0

  • [ ] Distributed architecture
  • [ ] Advanced workflow orchestration
  • [ ] Machine learning optimization
  • [ ] Enterprise SSO integration

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

🙏 Acknowledgments


Built with ❤️ by the Claude Code AI Collaboration Team
