
🦆 MCP Rubber Duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!
     __
   <(o )___
    ( ._> /
     `---'   Quack! Ready to debug!
Features
- 🔌 Universal OpenAI Compatibility: Works with any OpenAI-compatible API endpoint
- 🦆 Multiple Ducks: Configure and query multiple LLM providers simultaneously
- 💬 Conversation Management: Maintain context across multiple messages
- 🏛️ Duck Council: Get responses from all your configured LLMs at once
- 💾 Response Caching: Avoid duplicate API calls with intelligent caching
- 🔄 Automatic Failover: Falls back to other providers if primary fails
- 📊 Health Monitoring: Real-time health checks for all providers
- 🎨 Fun Duck Theme: Rubber duck debugging with personality!
Supported Providers
Any provider with an OpenAI-compatible API endpoint, including:
- OpenAI (GPT-4, GPT-3.5)
- Google Gemini (Gemini 2.5 Flash, Gemini 2.0 Flash)
- Anthropic (via OpenAI-compatible endpoints)
- Groq (Llama, Mixtral, Gemma)
- Together AI (Llama, Mixtral, and more)
- Perplexity (Online models with web search)
- Anyscale (Open source models)
- Azure OpenAI (Microsoft-hosted OpenAI)
- Ollama (Local models)
- LM Studio (Local models)
- Custom (Any OpenAI-compatible endpoint)
Quick Start
For Claude Desktop Users
👉 See the Claude Desktop Configuration section below for complete setup instructions
Installation
Prerequisites
- Node.js 20 or higher
- npm or yarn
- At least one API key for a supported provider
Install from Source
# Clone the repository
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck
# Install dependencies
npm install
# Build the project
npm run build
# Run the server
npm start
Configuration
Method 1: Environment Variables
Create a .env file in the project root:
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_DEFAULT_MODEL=gpt-4o-mini # Optional: defaults to gpt-4o-mini
# Google Gemini
GEMINI_API_KEY=...
GEMINI_DEFAULT_MODEL=gemini-2.5-flash # Optional: defaults to gemini-2.5-flash
# Groq
GROQ_API_KEY=gsk_...
GROQ_DEFAULT_MODEL=llama-3.3-70b-versatile # Optional: defaults to llama-3.3-70b-versatile
# Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434/v1 # Optional
OLLAMA_DEFAULT_MODEL=llama3.2 # Optional: defaults to llama3.2
# Together AI
TOGETHER_API_KEY=...
# Custom Provider
CUSTOM_API_KEY=...
CUSTOM_BASE_URL=https://api.example.com/v1
CUSTOM_DEFAULT_MODEL=custom-model # Optional: defaults to custom-model
# Global Settings
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info
# Optional: Custom Duck Nicknames (Have fun with these!)
OPENAI_NICKNAME="DUCK-4" # Optional: defaults to "GPT Duck"
GEMINI_NICKNAME="Duckmini" # Optional: defaults to "Gemini Duck"
GROQ_NICKNAME="Quackers" # Optional: defaults to "Groq Duck"
OLLAMA_NICKNAME="Local Quacker" # Optional: defaults to "Local Duck"
CUSTOM_NICKNAME="My Special Duck" # Optional: defaults to "Custom Duck"
Note: Duck nicknames are completely optional! If you don't set them, you'll get the charming defaults (GPT Duck, Gemini Duck, etc.). If you use a config.json file, those nicknames take priority over environment variables.
Method 2: Configuration File
Create a config/config.json file based on the example:
cp config/config.example.json config/config.json
# Edit config/config.json with your API keys and preferences
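For illustration, a config.json might look like the sketch below. The exact schema comes from config/config.example.json in your checkout, so treat these key names (providers, api_key, base_url, default_model, nickname) as assumptions and defer to the example file:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "default_model": "gpt-4o-mini",
      "nickname": "DUCK-4"
    },
    "custom": {
      "api_key": "...",
      "base_url": "https://api.example.com/v1",
      "default_model": "custom-model",
      "nickname": "My Special Duck"
    }
  },
  "default_provider": "openai",
  "default_temperature": 0.7
}
```

Any provider with a base_url pointing at an OpenAI-compatible /v1 endpoint can be added this way.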
Claude Desktop Configuration
This is the most common setup method for using MCP Rubber Duck with Claude Desktop.
Step 1: Build the Project
First, ensure the project is built:
# Clone the repository
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck
# Install dependencies and build
npm install
npm run build
Step 2: Configure Claude Desktop
Edit your Claude Desktop config file:
- macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
- Windows:
%APPDATA%\Claude\claude_desktop_config.json
Add the MCP server configuration:
{
"mcpServers": {
"rubber-duck": {
"command": "node",
"args": ["/absolute/path/to/mcp-rubber-duck/dist/index.js"],
"env": {
"OPENAI_API_KEY": "your-openai-api-key-here",
"OPENAI_DEFAULT_MODEL": "gpt-4o-mini",
"GEMINI_API_KEY": "your-gemini-api-key-here",
"GEMINI_DEFAULT_MODEL": "gemini-2.5-flash",
"DEFAULT_PROVIDER": "openai",
"LOG_LEVEL": "info"
}
}
}
}
Important: Replace the placeholder API keys with your actual keys:
- your-openai-api-key-here → Your OpenAI API key (starts with sk-)
- your-gemini-api-key-here → Your Gemini API key from Google AI Studio
Step 3: Restart Claude Desktop
- Completely quit Claude Desktop (⌘+Q on Mac)
- Launch Claude Desktop again
- The MCP server should connect automatically
Step 4: Test the Integration
Once restarted, test these commands in Claude:
Check Duck Health
Use the list_ducks tool with check_health: true
Should show:
- ✅ GPT Duck (openai) - Healthy
- ✅ Gemini Duck (gemini) - Healthy
List Available Models
Use the list_models tool
Ask a Specific Duck
Use the ask_duck tool with prompt: "What is rubber duck debugging?", provider: "openai"
Compare Multiple Ducks
Use the compare_ducks tool with prompt: "Explain async/await in JavaScript"
Test Specific Models
Use the ask_duck tool with prompt: "Hello", provider: "openai", model: "gpt-4"
Troubleshooting Claude Desktop Setup
If Tools Don't Appear
- Check API Keys: Ensure your API keys are correctly entered without typos
- Verify Build: Run ls -la dist/index.js to confirm the project built successfully
- Check Logs: Look for errors in Claude Desktop's developer console
- Restart: Fully quit and restart Claude Desktop after config changes
Connection Issues
- Config File Path: Double-check you're editing the correct config file path
- JSON Syntax: Validate your JSON syntax (no trailing commas, proper quotes)
- Absolute Paths: Ensure you're using the full absolute path to dist/index.js
- File Permissions: Verify Claude Desktop can read the dist directory
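A JSON syntax error in the config file is a common cause of connection failures. One quick check, assuming python3 is available, is to pipe the JSON you are about to paste through python3 -m json.tool:

```shell
# Sanity-check JSON syntax before pasting it into claude_desktop_config.json
echo '{"mcpServers":{"rubber-duck":{"command":"node","args":["/absolute/path/to/mcp-rubber-duck/dist/index.js"]}}}' \
  | python3 -m json.tool > /dev/null && echo "valid JSON"
```

Point python3 -m json.tool at your actual config file path to validate it in place.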
Health Check Failures
If ducks show as unhealthy:
- API Keys: Verify keys are valid and have sufficient credits/quota
- Network: Check internet connection and firewall settings
- Rate Limits: Some providers have strict rate limits for new accounts
Available Tools
🦆 ask_duck
Ask a single question to a specific LLM provider.
{
"prompt": "What is rubber duck debugging?",
"provider": "openai", // Optional, uses default if not specified
"temperature": 0.7 // Optional
}
💬 chat_with_duck
Have a conversation with context maintained across messages.
{
"conversation_id": "debug-session-1",
"message": "Can you help me debug this code?",
"provider": "groq" // Optional, can switch providers mid-conversation
}
📋 list_ducks
List all configured providers and their health status.
{
"check_health": true // Optional, performs fresh health check
}
📊 list_models
List available models for LLM providers.
{
"provider": "openai", // Optional, lists all if not specified
"fetch_latest": false // Optional, fetch latest from API vs cached
}
🔍 compare_ducks
Ask the same question to multiple providers simultaneously.
{
"prompt": "What's the best programming language?",
"providers": ["openai", "groq", "ollama"] // Optional, uses all if not specified
}
🏛️ duck_council
Get responses from all configured ducks - like a panel discussion!
{
"prompt": "How should I architect a microservices application?"
}
Usage Examples
Basic Query
// Ask the default duck
await ask_duck({
prompt: "Explain async/await in JavaScript"
});
Conversation
// Start a conversation
await chat_with_duck({
conversation_id: "learning-session",
message: "What is TypeScript?"
});
// Continue the conversation
await chat_with_duck({
conversation_id: "learning-session",
message: "How does it differ from JavaScript?"
});
Compare Responses
// Get different perspectives
await compare_ducks({
prompt: "What's the best way to handle errors in Node.js?",
providers: ["openai", "groq", "ollama"]
});
Duck Council
// Convene the council for important decisions
await duck_council({
prompt: "Should I use REST or GraphQL for my API?"
});
Provider-Specific Setup
Ollama (Local)
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.2
# Ollama automatically provides OpenAI-compatible endpoint at localhost:11434/v1
LM Studio (Local)
- Download LM Studio from https://lmstudio.ai/
- Load a model in LM Studio
- Start the local server (provides OpenAI-compatible endpoint at localhost:1234/v1)
Google Gemini
- Get API key from Google AI Studio
- Add to environment:
GEMINI_API_KEY=...
- Uses OpenAI-compatible endpoint (beta)
Groq
- Get API key from https://console.groq.com/keys
- Add to environment:
GROQ_API_KEY=gsk_...
Together AI
- Get API key from https://api.together.xyz/
- Add to environment:
TOGETHER_API_KEY=...
Verifying OpenAI Compatibility
To check if a provider is OpenAI-compatible:
- Look for a /v1/chat/completions endpoint in their API docs
- Check if they support the OpenAI SDK
- Test with curl:
curl -X POST "https://api.provider.com/v1/chat/completions" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "model-name",
"messages": [{"role": "user", "content": "Hello"}]
}'
Development
Run in Development Mode
npm run dev
Run Tests
npm test
Lint Code
npm run lint
Type Checking
npm run typecheck
Docker Support
Build Docker Image
docker build -t mcp-rubber-duck .
Run with Docker
docker run -it \
-e OPENAI_API_KEY=sk-... \
-e GROQ_API_KEY=gsk_... \
mcp-rubber-duck
Architecture
mcp-rubber-duck/
├── src/
│ ├── server.ts # MCP server implementation
│ ├── config/ # Configuration management
│ ├── providers/ # OpenAI client wrapper
│ ├── tools/ # MCP tool implementations
│ ├── services/ # Health, cache, conversations
│ └── utils/ # Logging, ASCII art
├── config/ # Configuration examples
└── tests/ # Test suites
Troubleshooting
Provider Not Working
- Check API key is correctly set
- Verify endpoint URL is correct
- Run health check: list_ducks({ check_health: true })
- Check logs for detailed error messages
Connection Issues
- For local providers (Ollama, LM Studio), ensure they're running
- Check firewall settings for local endpoints
- Verify network connectivity to cloud providers
Rate Limiting
- Enable caching to reduce API calls
- Configure failover to alternate providers
- Adjust max_retries and timeout settings
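The max_retries and timeout settings are configured per provider. A hypothetical config.json fragment is sketched below; the field names and their placement are assumptions, so check config/config.example.json for the authoritative schema:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "max_retries": 3,
      "timeout": 30000
    }
  }
}
```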
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
License
MIT License - see LICENSE file for details
Acknowledgments
- Inspired by the rubber duck debugging method
- Built on the Model Context Protocol (MCP)
- Uses OpenAI SDK for universal compatibility
Support
- Report issues: https://github.com/yourusername/mcp-rubber-duck/issues
- Documentation: https://github.com/yourusername/mcp-rubber-duck/wiki
- Discussions: https://github.com/yourusername/mcp-rubber-duck/discussions
🦆 Happy Debugging with your AI Duck Panel! 🦆