MCP AI Bridge

A secure Model Context Protocol server that enables Claude Code to connect with OpenAI and Google Gemini models, allowing users to query multiple AI providers through a standardized interface.

Category
Access Servers

Tools

ask_openai

Ask OpenAI GPT models a question

ask_gemini

Ask Google Gemini AI a question

server_info

Get server status and configuration

README

MCP AI Bridge

A secure Model Context Protocol (MCP) server that bridges Claude Code with OpenAI and Google Gemini APIs.

<a href="https://glama.ai/mcp/servers/@fakoli/mcp-ai-bridge"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@fakoli/mcp-ai-bridge/badge" alt="AI Bridge MCP server" /> </a>

Features

  • OpenAI Integration: Access GPT-4o, GPT-4o Mini, GPT-4 Turbo, GPT-4, and reasoning models (o1, o1-mini, o1-pro, o3-mini)
  • Gemini Integration: Access Gemini 1.5 Pro, Gemini 1.5 Flash, and Gemini vision models
  • Security Features:
    • Enhanced Input Validation: Multi-layer validation with sanitization
    • Content Filtering: Blocks explicit, harmful, and illegal content
    • Prompt Injection Detection: Identifies and blocks manipulation attempts
    • Rate Limiting: Prevents API abuse with configurable limits
    • Secure Error Handling: No sensitive information exposure
    • API Key Validation: Format validation for API keys
    • Configurable Security Levels: Basic, Moderate, and Strict modes
  • Robust Error Handling: Specific error types with detailed messages
  • Structured Logging: Winston-based logging with configurable levels
  • Flexible Configuration: Control temperature and model selection for each request

Installation

  1. Clone or copy the mcp-ai-bridge directory to your preferred location

  2. Install dependencies:

cd mcp-ai-bridge
npm install
  3. Configure your API keys using ONE of these methods:

    Option A: Use global .env file in your home directory (Recommended)

    • Create or edit ~/.env file
    • Add your API keys:
      OPENAI_API_KEY=your_openai_api_key_here
      GOOGLE_AI_API_KEY=your_google_ai_api_key_here
      

    Option B: Use local .env file

    • Create a .env file in the mcp-ai-bridge directory:
      cp .env.example .env
      
    • Add your API keys to this local .env file

    Option C: Use environment variables in Claude Code config

    • Configure directly in the Claude Code settings (see Configuration section)

The server will check for environment variables in this order:

  1. ~/.env (your home directory)

  2. ./.env (local to mcp-ai-bridge directory)

  3. System environment variables

Optional configuration variables:

    # Logging level (error, warn, info, debug)
    LOG_LEVEL=info
    
    # Server identification
    MCP_SERVER_NAME=AI Bridge
    MCP_SERVER_VERSION=1.0.0
    
    # Security Configuration
    SECURITY_LEVEL=moderate              # disabled, basic, moderate, strict
    
    # Content Filtering (granular controls)
    BLOCK_EXPLICIT_CONTENT=true         # Master content filter toggle
    BLOCK_VIOLENCE=true                  # Block violent content
    BLOCK_ILLEGAL_ACTIVITIES=true       # Block illegal activity requests
    BLOCK_ADULT_CONTENT=true             # Block adult/sexual content
    
    # Injection Detection (granular controls)
    DETECT_PROMPT_INJECTION=true        # Master injection detection toggle
    DETECT_SYSTEM_PROMPTS=true           # Detect system role injections
    DETECT_INSTRUCTION_OVERRIDE=true     # Detect "ignore instructions" attempts
    
    # Input Sanitization (granular controls)
    SANITIZE_INPUT=true                  # Master sanitization toggle
    REMOVE_SCRIPTS=true                  # Remove script tags and JS
    LIMIT_REPEATED_CHARS=true            # Limit DoS via repeated characters
    
    # Performance & Flexibility
    ENABLE_PATTERN_CACHING=true          # Cache compiled patterns for speed
    MAX_PROMPT_LENGTH_FOR_DEEP_SCAN=1000 # Skip deep scanning for long prompts
    ALLOW_EDUCATIONAL_CONTENT=false      # Whitelist educational content
    WHITELIST_PATTERNS=                  # Comma-separated regex patterns to allow
    

Configuration in Claude Code

Method 1: Using Claude Code CLI (Recommended)

Use the interactive MCP setup wizard:

claude mcp add

Or add the server configuration directly:

claude mcp add-json ai-bridge '{"command": "node", "args": ["/path/to/mcp-ai-bridge/src/index.js"]}'

Method 2: Manual Configuration

Add the following to your Claude Code MCP settings. The configuration file location depends on your environment:

  • Claude Code CLI: Uses settings.json in the configuration directory (typically ~/.claude/ or $CLAUDE_CONFIG_DIR)
  • Claude Desktop: Uses ~/.claude/claude_desktop_config.json

For Claude Desktop compatibility:

{
  "mcpServers": {
    "ai-bridge": {
      "command": "node",
      "args": ["/path/to/mcp-ai-bridge/src/index.js"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key",
        "GOOGLE_AI_API_KEY": "your_google_ai_api_key"
      }
    }
  }
}

Alternatively, if you have the .env file configured, you can omit the env section:

{
  "mcpServers": {
    "ai-bridge": {
      "command": "node",
      "args": ["/path/to/mcp-ai-bridge/src/index.js"]
    }
  }
}

Method 3: Import from Claude Desktop

If you already have this configured in Claude Desktop, you can import the configuration:

claude mcp add-from-claude-desktop

Available Tools

1. ask_openai

Query OpenAI models with full validation and security features.

Parameters:

  • prompt (required): The question or prompt to send (max 10,000 characters)
  • model (optional): Choose from 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 'o1', 'o1-mini', 'o1-pro', 'o3-mini', 'chatgpt-4o-latest', and other available models (default: 'gpt-4o-mini')
  • temperature (optional): Control randomness (0-2, default: 0.7)

Security Features:

  • Input validation for prompt length and type
  • Temperature range validation
  • Model validation
  • Rate limiting (100 requests per minute by default)
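
As a rough sketch, the documented limits could be enforced like this (`validateOpenAIRequest` is a hypothetical function written to mirror the stated constraints, not the server's actual code):

```javascript
const MAX_PROMPT_LENGTH = 10_000;

// Validate an ask_openai request against the documented limits:
// prompt must be a non-empty string of at most 10,000 characters,
// temperature must be a number in [0, 2] (default 0.7).
function validateOpenAIRequest({ prompt, temperature = 0.7 }) {
  if (typeof prompt !== 'string' || prompt.length === 0) {
    throw new Error('prompt must be a non-empty string');
  }
  if (prompt.length > MAX_PROMPT_LENGTH) {
    throw new Error(`prompt exceeds ${MAX_PROMPT_LENGTH} characters`);
  }
  if (typeof temperature !== 'number' || temperature < 0 || temperature > 2) {
    throw new Error('temperature must be between 0 and 2');
  }
  return { prompt, temperature };
}
```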

2. ask_gemini

Query Google Gemini models with full validation and security features.

Parameters:

  • prompt (required): The question or prompt to send (max 10,000 characters)
  • model (optional): Choose from 'gemini-1.5-pro-latest', 'gemini-1.5-pro-002', 'gemini-1.5-pro', 'gemini-1.5-flash-latest', 'gemini-1.5-flash', 'gemini-1.5-flash-002', 'gemini-1.5-flash-8b', 'gemini-1.0-pro-vision-latest', 'gemini-pro-vision' (default: 'gemini-1.5-flash-latest')
  • temperature (optional): Control randomness (0-1, default: 0.7)

Security Features:

  • Input validation for prompt length and type
  • Temperature range validation
  • Model validation
  • Rate limiting (100 requests per minute by default)

3. server_info

Get comprehensive server status and configuration information.

Returns:

  • Server name and version
  • Available models for each service
  • Security settings (rate limits, validation status)
  • Configuration status for each API

Usage Examples

In Claude Code, you can invoke these tools like this:

mcp__ai-bridge__ask_openai
  prompt: "Explain the concept of recursion in programming"
  model: "gpt-4o"
  temperature: 0.5

mcp__ai-bridge__ask_gemini
  prompt: "What are the key differences between Python and JavaScript?"
  model: "gemini-1.5-flash-latest"

mcp__ai-bridge__server_info
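
Under the hood, each invocation corresponds to an MCP `tools/call` JSON-RPC request, which Claude Code issues on your behalf. The first example above would map to a request roughly like this (shown only to clarify how the tool name and parameters travel over the protocol):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_openai",
    "arguments": {
      "prompt": "Explain the concept of recursion in programming",
      "model": "gpt-4o",
      "temperature": 0.5
    }
  }
}
```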

Debugging MCP Server

If you encounter issues with the MCP server, you can use Claude Code's debugging features:

# Enable MCP debug mode for detailed error information
claude --mcp-debug

# Check MCP server status and tools
claude
# Then use the /mcp slash command to view server details

Testing

The project includes comprehensive unit tests and security tests. To run tests:

# Run all tests (including security tests)
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage report
npm run test:coverage

Test Coverage

  • Unit tests for all server functionality
  • Security tests for input validation and rate limiting
  • Integration tests for API interactions
  • Error handling tests
  • Mock-based testing to avoid real API calls

Troubleshooting

Common Issues

  1. "API key not configured" error: Make sure you've added the correct API keys to your .env file or Claude Code config
  2. "Invalid OpenAI API key format" error: OpenAI keys must start with 'sk-'
  3. "Rate limit exceeded" error: Wait for the rate limit window to reset (default: 1 minute)
  4. "Prompt too long" error: Keep prompts under 10,000 characters
  5. Module not found errors: Run npm install in the mcp-ai-bridge directory
  6. Permission errors: Ensure the index.js file has execute permissions
  7. Logging issues: Set LOG_LEVEL environment variable (error, warn, info, debug)

Claude Code Specific Troubleshooting

  1. MCP server not loading:

    • Use claude --mcp-debug to see detailed error messages
    • Check server configuration with /mcp slash command
    • Verify the server path is correct and accessible
    • Ensure Node.js is installed and in your PATH
  2. Configuration issues:

    • Use claude mcp add for interactive setup
    • Check CLAUDE_CONFIG_DIR environment variable if using custom config location
    • For timeouts, configure MCP_TIMEOUT and MCP_TOOL_TIMEOUT environment variables
  3. Server startup failures:

    • Check if the server process can start independently: node /path/to/mcp-ai-bridge/src/index.js
    • Verify all dependencies are installed
    • Check file permissions on the server directory

Security Features

Enhanced Security Protection

  • Multi-Layer Input Validation: Type, length, and content validation
  • Content Filtering: Blocks explicit, violent, illegal, and harmful content
  • Prompt Injection Detection: Identifies and prevents manipulation attempts including:
    • Instruction override attempts ("ignore previous instructions")
    • System role injection ("system: act as...")
    • Template injection ({{system}}, <|system|>, [INST])
    • Suspicious pattern detection
  • Input Sanitization: Removes control characters, scripts, and malicious patterns
  • Rate Limiting: 100 requests per minute by default to prevent API abuse
  • API Key Validation: Format validation for API keys before use
  • Secure Error Handling: No stack traces or sensitive information in error messages
  • Structured Logging: All operations are logged with appropriate levels
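
A minimal illustration of the pattern-based detection described above. The patterns shown are examples only, not the server's actual rule set, which would need far broader coverage:

```javascript
// Example injection patterns, one per documented category (illustrative only).
const INJECTION_PATTERNS = [
  /ignore\s+(all\s+)?previous\s+instructions/i,  // instruction override
  /^\s*system\s*:/im,                            // system role injection
  /\{\{\s*system\s*\}\}|<\|system\|>|\[INST\]/i  // template injection markers
];

function detectPromptInjection(prompt) {
  return INJECTION_PATTERNS.some((p) => p.test(prompt));
}
```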

Security Levels

  • Basic: Minimal filtering, allows most content
  • Moderate (Default): Balanced protection with reasonable restrictions
  • Strict: Maximum protection, blocks borderline content

Granular Security Configuration

Security Levels:

  • disabled - No security checks (maximum performance)
  • basic - Essential protection only (good performance)
  • moderate - Balanced protection (default)
  • strict - Maximum protection (may impact performance)

Individual Feature Controls:

# Master toggles
SECURITY_LEVEL=moderate
BLOCK_EXPLICIT_CONTENT=true
DETECT_PROMPT_INJECTION=true
SANITIZE_INPUT=true

# Granular content filtering
BLOCK_VIOLENCE=true                  # "how to kill", violence
BLOCK_ILLEGAL_ACTIVITIES=true       # "how to hack", illegal acts
BLOCK_ADULT_CONTENT=true            # Sexual/adult content

# Granular injection detection
DETECT_SYSTEM_PROMPTS=true           # "system: act as admin"
DETECT_INSTRUCTION_OVERRIDE=true     # "ignore previous instructions"

# Granular sanitization
REMOVE_SCRIPTS=true                  # Remove <script> tags
LIMIT_REPEATED_CHARS=true           # Prevent character flooding

# Performance optimization
ENABLE_PATTERN_CACHING=true         # Cache patterns for speed
MAX_PROMPT_LENGTH_FOR_DEEP_SCAN=1000 # Skip intensive checks on long prompts

# Flexibility options
ALLOW_EDUCATIONAL_CONTENT=true      # Whitelist "research about", "explain"
WHITELIST_PATTERNS="educational,academic" # Custom regex patterns
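
To illustrate what the REMOVE_SCRIPTS and LIMIT_REPEATED_CHARS toggles above might do, here is a simplified sketch. The function name, the run-length cap of 10, and the patterns are assumptions, not the server's actual implementation:

```javascript
function sanitizeInput(text, { removeScripts = true, limitRepeats = true } = {}) {
  // Strip control characters (keeping tab, newline, carriage return).
  let out = text.replace(/[\x00-\x08\x0b\x0c\x0e-\x1f]/g, '');
  if (removeScripts) {
    // Remove <script>...</script> blocks.
    out = out.replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, '');
  }
  if (limitRepeats) {
    // Cap runs of 10+ identical characters at 10 to prevent flooding.
    out = out.replace(/(.)\1{9,}/g, '$1'.repeat(10));
  }
  return out;
}
```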

Performance Considerations:

  • Pattern caching reduces regex compilation overhead
  • Long prompts (>1000 chars) get lighter scanning in basic mode
  • Early termination stops checking after finding issues
  • Granular controls let you disable unneeded checks

Best Practices

  • Never commit your .env file to version control
  • Keep your API keys secure and rotate them regularly
  • Consider setting usage limits on your API accounts
  • Monitor logs for unusual activity
  • Use the rate limiting feature to control costs
  • Validate the server configuration using the server_info tool

Rate Limiting

The server implements sliding window rate limiting:

  • Default: 100 requests per minute
  • Configurable via environment variables
  • Per-session tracking
  • Graceful error messages with reset time information
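
A per-session sliding-window limiter of this kind can be sketched as below. The class name and details are illustrative, not the server's actual implementation:

```javascript
class SlidingWindowRateLimiter {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = new Map(); // sessionId -> recent request times
  }

  // Returns { allowed: true } or { allowed: false, resetInMs } with the
  // time until the oldest in-window request expires.
  check(sessionId, now = Date.now()) {
    const recent = (this.timestamps.get(sessionId) || [])
      .filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      return { allowed: false, resetInMs: this.windowMs - (now - recent[0]) };
    }
    recent.push(now);
    this.timestamps.set(sessionId, recent);
    return { allowed: true };
  }
}
```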
