MCP AI Hub

A Model Context Protocol (MCP) server that provides unified access to various AI providers through LiteLLM. Chat with OpenAI, Anthropic, and 100+ other AI models using a single, consistent interface.

🌟 Overview

MCP AI Hub acts as a bridge between MCP clients (like Claude Desktop/Code) and multiple AI providers. It leverages LiteLLM's unified API to provide seamless access to 100+ AI models without requiring separate integrations for each provider.

Key Benefits:

  • Unified Interface: Single API for all AI providers
  • 100+ Providers: OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more
  • MCP Protocol: Native integration with Claude Desktop and Claude Code
  • Flexible Configuration: YAML-based configuration with Pydantic validation
  • Multiple Transports: stdio, SSE, and HTTP transport options
  • Custom Endpoints: Support for proxy servers and local deployments
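
Under the hood, each configured model maps to a single LiteLLM call. The sketch below is illustrative only (route_chat is a hypothetical helper, not the package's actual code); litellm.completion is LiteLLM's real unified entry point:

# Illustrative sketch of the bridge: one LiteLLM call per configured model.
# `route_chat` is hypothetical; only `litellm.completion` is the real API.
import litellm

def route_chat(litellm_params: dict, message: str) -> str:
    response = litellm.completion(
        messages=[{"role": "user", "content": message}],
        **litellm_params,  # model (provider/model), api_key, max_tokens, temperature, ...
    )
    return response.choices[0].message.content

# Example: route_chat({"model": "openai/gpt-4", "api_key": "sk-..."}, "Hello!")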

Quick Start

1. Install

Choose your preferred installation method:

# Option A: Install from PyPI
pip install mcp-ai-hub

# Option B: Install with uv (recommended)
uv tool install mcp-ai-hub

# Option C: Install from source
pip install git+https://github.com/your-username/mcp-ai-hub.git

Installation Notes:

  • uv is a fast Python package installer and resolver
  • The package requires Python 3.10 or higher
  • All dependencies are automatically resolved and installed

2. Configure

Create a configuration file at ~/.ai_hub.yaml with your API keys and model configurations:

model_list:
  - model_name: gpt-4  # Friendly name you'll use in MCP tools
    litellm_params:
      model: openai/gpt-4  # LiteLLM provider/model identifier
      api_key: "sk-your-openai-api-key-here"  # Your actual OpenAI API key
      max_tokens: 2048  # Maximum response tokens
      temperature: 0.7  # Response creativity (0.0-1.0)

  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-api-key-here"
      max_tokens: 4096
      temperature: 0.7

Configuration Guidelines:

  • API Keys: Replace placeholder keys with your actual API keys
  • Model Names: Use descriptive names you'll remember (e.g., gpt-4, claude-sonnet)
  • LiteLLM Models: Use LiteLLM's provider/model format (e.g., openai/gpt-4, anthropic/claude-3-5-sonnet-20241022)
  • Parameters: Configure max_tokens, temperature, and other LiteLLM-supported parameters
  • Security: Keep your config file secure with appropriate file permissions (chmod 600)
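
The server validates this file with Pydantic. The sketch below shows roughly what that load-and-validate step looks like; the class names are illustrative, not the package's actual internals:

# Illustrative Pydantic models for ~/.ai_hub.yaml (not the package's real classes).
import os
import yaml
from pydantic import BaseModel, ConfigDict

class ModelEntry(BaseModel):
    model_config = ConfigDict(protected_namespaces=())  # allow a field named "model_name"
    model_name: str
    litellm_params: dict  # passed through to LiteLLM

class AIHubConfig(BaseModel):
    model_list: list[ModelEntry]

def load_config(path: str = "~/.ai_hub.yaml") -> AIHubConfig:
    with open(os.path.expanduser(path)) as f:
        return AIHubConfig(**yaml.safe_load(f))  # raises ValidationError if the file is malformed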

3. Connect to Claude Desktop

Configure Claude Desktop to use MCP AI Hub by editing your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "ai-hub": {
      "command": "mcp-ai-hub"
    }
  }
}
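
If Claude Desktop cannot find mcp-ai-hub on its PATH, or your config lives somewhere else, you can point at the executable directly and pass CLI flags via args (both paths below are placeholders):

{
  "mcpServers": {
    "ai-hub": {
      "command": "/path/to/mcp-ai-hub",
      "args": ["--config", "/path/to/ai_hub.yaml"]
    }
  }
}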

4. Connect to Claude Code

claude mcp add -s user ai-hub mcp-ai-hub

Advanced Usage

CLI Options and Transport Types

MCP AI Hub supports multiple transport mechanisms for different use cases:

Command Line Options:

# Default stdio transport (for MCP clients like Claude Desktop)
mcp-ai-hub

# Server-Sent Events transport (for web applications)
mcp-ai-hub --transport sse --host 0.0.0.0 --port 3001

# Streamable HTTP transport (for direct API calls)
mcp-ai-hub --transport http --port 8080

# Custom config file and debug logging
mcp-ai-hub --config /path/to/config.yaml --log-level DEBUG

Transport Type Details:

  • stdio: MCP clients (Claude Desktop/Code). No host/port; standard input/output, the default for MCP.
  • sse: Web applications. Default localhost:3001; Server-Sent Events for real-time web apps.
  • http: Direct API calls. Default localhost:3001 (override with --port); HTTP transport with streaming support.

CLI Arguments:

  • --transport {stdio,sse,http}: Transport protocol (default: stdio)
  • --host HOST: Host address for SSE/HTTP (default: localhost)
  • --port PORT: Port number for SSE/HTTP (default: 3001; override if you need a different port)
  • --config CONFIG: Custom config file path (default: ~/.ai_hub.yaml)
  • --log-level {DEBUG,INFO,WARNING,ERROR}: Logging verbosity (default: INFO)
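
For SSE or HTTP deployments, any MCP-capable client can connect over the network. Below is a hedged sketch using the official mcp Python SDK; the /sse endpoint path assumes FastMCP's default and may differ in your deployment:

# Connect to `mcp-ai-hub --transport sse --port 3001` with the `mcp` Python SDK.
# The "/sse" path assumes FastMCP's default SSE endpoint.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    async with sse_client("http://localhost:3001/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect chat, list_models, get_model_info

asyncio.run(main())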

Usage

Once MCP AI Hub is connected to your MCP client, you can interact with AI models using these tools:

MCP Tool Reference

Primary Chat Tool:

chat(model_name: str, message: str | list[dict]) -> str
  • model_name: Name of the configured model (e.g., "gpt-4", "claude-sonnet")
  • message: String message or OpenAI-style message list
  • Returns: AI model response as string

Model Discovery Tools:

list_models() -> list[str]
  • Returns: List of all configured model names
get_model_info(model_name: str) -> dict
  • model_name: Name of the configured model
  • Returns: Model configuration details including provider, parameters, etc.
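
Inside Claude Desktop or Claude Code you simply ask for a model by name. For programmatic use, the sketch below calls the same tools over the default stdio transport with the mcp Python SDK (model names assume the example configuration above):

# Hedged sketch: call the hub's tools directly over stdio with the `mcp` SDK.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="mcp-ai-hub")  # launches the hub as a subprocess
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print(await session.call_tool("list_models", {}))
            # Plain string message
            print(await session.call_tool("chat", {"model_name": "gpt-4", "message": "Hello!"}))
            # OpenAI-style message list
            history = [{"role": "system", "content": "Be concise."},
                       {"role": "user", "content": "Summarize MCP in one sentence."}]
            print(await session.call_tool("chat", {"model_name": "claude-sonnet", "message": history}))

asyncio.run(main())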

Configuration

MCP AI Hub supports 100+ AI providers through LiteLLM. Configure your models in ~/.ai_hub.yaml with API keys and custom parameters.

Supported Providers

Major AI Providers:

  • OpenAI: GPT-4, GPT-3.5-turbo, GPT-4-turbo, etc.
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
  • Google: Gemini Pro, Gemini Pro Vision, Gemini Ultra
  • Azure OpenAI: Azure-hosted OpenAI models
  • AWS Bedrock: Claude, Llama, Jurassic, and more
  • Together AI: Llama, Mistral, Falcon, and open-source models
  • Hugging Face: Various open-source models
  • Local Models: Ollama, LM Studio, and other local deployments

Configuration Parameters:

  • api_key: Your provider API key (required)
  • max_tokens: Maximum response tokens (optional)
  • temperature: Response creativity 0.0-1.0 (optional)
  • api_base: Custom endpoint URL (for proxies/local servers)
  • Additional: All LiteLLM-supported parameters

Configuration Examples

Basic Configuration:

model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-actual-openai-api-key"
      max_tokens: 2048
      temperature: 0.7

  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-actual-anthropic-api-key"
      max_tokens: 4096
      temperature: 0.7

Custom Parameters:

model_list:
  - model_name: gpt-4-creative
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-openai-key"
      max_tokens: 4096
      temperature: 0.9  # Higher creativity
      top_p: 0.95
      frequency_penalty: 0.1
      presence_penalty: 0.1

  - model_name: claude-analytical
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-key"
      max_tokens: 8192
      temperature: 0.3  # Lower creativity for analytical tasks
      stop_sequences: ["\n\n", "Human:"]

Local LLM Server Configuration:

model_list:
  - model_name: local-llama
    litellm_params:
      model: openai/llama-2-7b-chat
      api_key: "dummy-key"  # Local servers often accept any API key
      api_base: "http://localhost:8080/v1"  # Local OpenAI-compatible server
      max_tokens: 2048
      temperature: 0.7
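
Two more common provider patterns, Azure OpenAI and AWS Bedrock (values are placeholders; exact fields vary by provider, so double-check the LiteLLM docs):

model_list:
  - model_name: azure-gpt-4o
    litellm_params:
      model: azure/your-deployment-name  # Azure uses your deployment name
      api_key: "your-azure-api-key"
      api_base: "https://your-resource.openai.azure.com/"
      api_version: "2024-02-15-preview"  # placeholder; use your resource's API version

  - model_name: bedrock-claude
    litellm_params:
      model: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
      aws_region_name: "us-east-1"  # AWS credentials typically come from the environment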

For more providers, please refer to the LiteLLM docs: https://docs.litellm.ai/docs/providers.

Development

Setup:

# Install all dependencies including dev dependencies
uv sync

# Install package in development mode
uv pip install -e ".[dev]"

# Add new runtime dependencies
uv add package_name

# Add new development dependencies
uv add --dev package_name

# Update dependencies
uv sync --upgrade

Running and Testing:

# Run the MCP server
uv run mcp-ai-hub

# Run with custom configuration
uv run mcp-ai-hub --config ./custom_config.yaml --log-level DEBUG

# Run with different transport
uv run mcp-ai-hub --transport sse --port 3001

# Run tests (when test suite is added)
uv run pytest

# Run tests with coverage
uv run pytest --cov=src/mcp_ai_hub --cov-report=html

Code Quality:

# Format code with ruff
uv run ruff format .

# Lint code
uv run ruff check .

# Type checking with mypy
uv run mypy src/

# Run all quality checks
uv run ruff format . && uv run ruff check . && uv run mypy src/

Troubleshooting

Configuration Issues

Configuration File Problems:

  • File Location: Ensure ~/.ai_hub.yaml exists in your home directory
  • YAML Validity: Validate YAML syntax with an online validator or python -c "import yaml, os; yaml.safe_load(open(os.path.expanduser('~/.ai_hub.yaml')))" (note that ~ is not expanded inside open() without os.path.expanduser)
  • File Permissions: Set secure permissions with chmod 600 ~/.ai_hub.yaml
  • Path Resolution: Use absolute paths in custom config locations

Configuration Validation:

  • Required Fields: Each model must have model_name and litellm_params
  • API Keys: Verify API keys are properly quoted and not expired
  • Model Formats: Use LiteLLM-compatible model identifiers (e.g., openai/gpt-4, anthropic/claude-3-5-sonnet-20241022)

API and Authentication Errors

Authentication Issues:

  • Invalid API Keys: Check for typos, extra spaces, or expired keys
  • Insufficient Permissions: Verify API keys have necessary model access permissions
  • Rate Limiting: Monitor API usage and implement retry logic if needed
  • Regional Restrictions: Some models may not be available in all regions

API-Specific Troubleshooting:

  • OpenAI: Check organization settings and model availability
  • Anthropic: Verify Claude model access and usage limits
  • Azure OpenAI: Ensure proper resource deployment and endpoint configuration
  • Google Gemini: Check project setup and API enablement

MCP Connection Issues

Server Startup Problems:

  • Port Conflicts: Use different ports for SSE/HTTP transports if defaults are in use
  • Permission Errors: Ensure executable permissions for mcp-ai-hub command
  • Python Path: Verify Python environment and package installation

Client Configuration Issues:

  • Command Path: Ensure mcp-ai-hub is in PATH or use full absolute path
  • Working Directory: Some MCP clients require specific working directory settings
  • Transport Mismatch: Use stdio transport for Claude Desktop/Code

Performance and Reliability

Response Time Issues:

  • Network Latency: Use geographically closer API endpoints when possible
  • Model Selection: Some models are faster than others (e.g., GPT-3.5 vs GPT-4)
  • Token Limits: Large max_tokens values can increase response time

Reliability Improvements:

  • Retry Logic: Implement exponential backoff for transient failures
  • Timeout Configuration: Set appropriate timeouts for your use case
  • Health Checks: Monitor server status and restart if needed
  • Load Balancing: Use multiple model configurations for redundancy
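
If you wrap provider calls yourself, a simple exponential-backoff loop covers most transient failures. The sketch below retries a LiteLLM call on rate-limit, connection, and timeout errors (exception names follow LiteLLM's OpenAI-compatible exception mapping; LiteLLM also exposes its own retry settings):

# Illustrative exponential backoff around a LiteLLM call.
import time
import litellm

def completion_with_backoff(max_attempts: int = 4, **params):
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return litellm.completion(**params)
        except (litellm.RateLimitError, litellm.APIConnectionError, litellm.Timeout):
            if attempt == max_attempts:
                raise  # out of retries; surface the error to the caller
            time.sleep(delay)
            delay *= 2  # 1s, 2s, 4s, ...

# completion_with_backoff(model="openai/gpt-4", api_key="sk-...",
#                         messages=[{"role": "user", "content": "Hi"}])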

License

MIT License - see LICENSE file for details.

Contributing

We welcome contributions! Please follow these guidelines:

Development Workflow

  1. Fork and Clone: Fork the repository and clone your fork
  2. Create Branch: Create a feature branch (git checkout -b feature/amazing-feature)
  3. Development Setup: Install dependencies with uv sync
  4. Make Changes: Implement your feature or fix
  5. Testing: Add tests and ensure all tests pass
  6. Code Quality: Run formatting, linting, and type checking
  7. Documentation: Update documentation if needed
  8. Submit PR: Create a pull request with detailed description

Code Standards

Python Style:

  • Follow PEP 8 style guidelines
  • Use type hints for all functions
  • Add docstrings for public functions and classes
  • Keep functions focused and small

Testing Requirements:

  • Write tests for new functionality
  • Ensure existing tests continue to pass
  • Aim for good test coverage
  • Test edge cases and error conditions

Documentation:

  • Update README.md for user-facing changes
  • Add inline comments for complex logic
  • Update configuration examples if needed
  • Document breaking changes clearly

Quality Checks

Before submitting a PR, ensure:

# All tests pass
uv run pytest

# Code formatting
uv run ruff format .

# Linting passes
uv run ruff check .

# Type checking passes
uv run mypy src/

# Documentation is up to date
# Configuration examples are valid

Issues and Feature Requests

  • Use GitHub Issues for bug reports and feature requests
  • Provide detailed reproduction steps for bugs
  • Include configuration examples when relevant
  • Check existing issues before creating new ones
  • Label issues appropriately
