MCP Code Refiner
A powerful second-layer LLM MCP server that refines and reviews code using AI. Perfect for improving AI-generated code or enhancing human-written code through natural language feedback.
What is This?
This is an MCP (Model Context Protocol) server that adds code refinement and review capabilities to any MCP client like Claude Desktop. It acts as a "second layer" AI that specializes in code improvement, working alongside your primary AI assistant.
Use it to:
- Refine code generated by ChatGPT, Claude, or any AI with natural language feedback
- Get comprehensive code reviews with security and performance analysis
- Iteratively improve code until it meets your standards
- Learn from AI-suggested improvements
Features
- Code Refinement - Improve code with natural language feedback ("make it more logical", "add error handling")
- Code Review - AI-powered analysis for bugs, security, performance, and best practices
- Multi-Model Support - Choose between Gemini, Claude, or OpenAI models
- Plug & Play - Works with Claude Desktop and any MCP client
- Smart Prompts - Optimized prompts for high-quality, actionable results
- Diff View - See exactly what changes before applying them
Quick Start
Prerequisites
- Python 3.10 or higher
- At least one AI provider API key (Gemini recommended for free tier)
1. Clone and Install
git clone https://github.com/yourusername/mcp_code_review.git
cd mcp_code_review
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
2. Configure API Keys
Create a .env file from the example:
cp .env.example .env
Edit .env and add at least ONE API key:
# Recommended: Google Gemini (free tier available)
GOOGLE_API_KEY=your-gemini-api-key-here
# Alternative: Anthropic Claude
ANTHROPIC_API_KEY=your-anthropic-api-key-here
# Alternative: OpenAI
OPENAI_API_KEY=your-openai-api-key-here
Get API keys from:
- Gemini: https://ai.google.dev/
- Claude: https://console.anthropic.com/
- OpenAI: https://platform.openai.com/api-keys
3. Connect to Claude Desktop
Edit your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Add the server configuration:
{
  "mcpServers": {
    "code-refiner": {
      "command": "python",
      "args": ["/absolute/path/to/mcp_code_review/mcp_server.py"],
      "env": {
        "GOOGLE_API_KEY": "your-gemini-api-key"
      }
    }
  }
}
Important: Replace /absolute/path/to/mcp_code_review/ with the actual path on your system.
Restart Claude Desktop to load the server.
Usage
Once configured, just talk to Claude naturally in Claude Desktop. The tools are automatically available!
Code Refinement
Improve existing code with natural language instructions:
You: "Refine ./my_script.py to make it more logical and add error handling"
Claude will:
- Call refine_code_tool with your request
- Show you a diff of the proposed changes
- Explain what was changed and why
- Ask for your approval
- Apply the changes with apply_refinement_tool if you confirm
Code Review
Get comprehensive code analysis:
You: "Review ./server.py for security issues and performance problems"
Claude will:
- Call review_code_tool on the file
- Show issues found with severity levels (high/medium/low)
- Highlight code strengths
- Provide an overall quality score
- Suggest specific improvements
Real-World Examples
Refinement:
- "Make ./app.py more performant by optimizing loops"
- "Simplify the logic in ./utils/helper.py"
- "Add comprehensive error handling to ./api/routes.py"
- "Refactor ./legacy_code.py to follow modern Python best practices"
- "Add type hints and docstrings to ./calculator.py"
Review:
- "Review ./authentication.py for security vulnerabilities"
- "Check ./database.py for SQL injection risks"
- "Analyze ./api_client.py for error handling issues"
- "Review ./main.py and suggest improvements"
Available Models
Choose a model via the ai_provider parameter; if omitted, the default (gemini) is used.
Gemini (Google)
- gemini - Gemini 2.0 Flash (fast, free tier)
- gemini-pro - Gemini 1.5 Pro (more capable)
Claude (Anthropic)
- claude or claude-sonnet - Claude 3.5 Sonnet (high quality)
- claude-opus - Claude 3 Opus (most capable)
- claude-haiku - Claude 3.5 Haiku (fastest)
OpenAI
- openai or gpt-4o - GPT-4o (balanced)
- gpt-4 - GPT-4 Turbo
- gpt-3.5 - GPT-3.5 Turbo (fastest)
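Under the hood, these short aliases presumably map to the full provider-specific model identifiers that LiteLLM expects. A minimal sketch of such an alias table — the exact model ID strings below are assumptions for illustration, not taken from this repo:

```python
# Hypothetical alias table mapping the README's short names to
# LiteLLM-style model identifiers. The exact IDs are assumptions.
MODEL_ALIASES = {
    "gemini": "gemini/gemini-2.0-flash",
    "gemini-pro": "gemini/gemini-1.5-pro",
    "claude": "anthropic/claude-3-5-sonnet-20241022",
    "claude-sonnet": "anthropic/claude-3-5-sonnet-20241022",
    "claude-opus": "anthropic/claude-3-opus-20240229",
    "claude-haiku": "anthropic/claude-3-5-haiku-20241022",
    "openai": "gpt-4o",
    "gpt-4o": "gpt-4o",
    "gpt-4": "gpt-4-turbo",
    "gpt-3.5": "gpt-3.5-turbo",
}

def resolve_model(alias: str) -> str:
    """Return the full model ID for a short alias, falling back to gemini."""
    return MODEL_ALIASES.get(alias, MODEL_ALIASES["gemini"])
```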
MCP Tools Reference
This server provides three MCP tools that Claude Desktop can call automatically:
1. refine_code_tool
Purpose: Improves existing code based on natural language feedback using a second-layer LLM.
Parameters:
- user_request (string, required) - What you want to improve (e.g., "make it more logical", "add error handling")
- file_path (string, required) - Path to the code file to refine
- ai_provider (string, optional) - AI model to use (default: "gemini")
Returns:
{
  "status": "success",
  "explanation": "Added error handling and simplified logic...",
  "diff": "--- original\n+++ refined\n...",
  "refined_code": "def improved_function():\n    ...",
  "file_path": "./app.py"
}
2. review_code_tool
Purpose: Analyzes code for bugs, security vulnerabilities, performance issues, and quality.
Parameters:
- file_path (string, required) - Path to the code file to review
- ai_provider (string, optional) - AI model to use (default: "gemini")
Returns:
{
"status": "success",
"issues": [
{
"severity": "high",
"category": "security",
"issue": "SQL injection vulnerability",
"line": 42,
"suggestion": "Use parameterized queries..."
}
],
"strengths": ["Good error handling", "Clear naming"],
"overall_assessment": "Code is functional but has security concerns...",
"score": 7
}
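Because the result is structured JSON, a client can post-process it directly — for example, keeping only the most serious findings. A small sketch (the helper name and example payload are illustrative, shaped like the return value above):

```python
import json

def issues_at_or_above(review: dict, min_severity: str) -> list[dict]:
    """Filter a review_code_tool result down to the most serious issues."""
    order = {"low": 0, "medium": 1, "high": 2}
    threshold = order[min_severity]
    return [i for i in review.get("issues", []) if order[i["severity"]] >= threshold]

# Example payload shaped like the review_code_tool return value
review = json.loads("""{
  "status": "success",
  "issues": [
    {"severity": "high", "category": "security", "issue": "SQL injection", "line": 42},
    {"severity": "low", "category": "style", "issue": "Long line", "line": 7}
  ],
  "score": 7
}""")

critical = issues_at_or_above(review, "high")
```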
3. apply_refinement_tool
Purpose: Applies refined code to the file after user approval.
Parameters:
- file_path (string, required) - Path to the file to update
- refined_code (string, required) - The improved code from refine_code_tool
Returns:
{
  "status": "success",
  "message": "Code successfully applied to ./app.py"
}
Important: Only use this after the user has reviewed and approved the changes!
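Since this tool overwrites files, a safe-write pattern with an automatic backup is the natural implementation. A minimal sketch of that pattern, assuming a hypothetical helper (this is not the repo's actual code):

```python
import os
import shutil
import tempfile
from pathlib import Path

def apply_with_backup(file_path: str, refined_code: str) -> str:
    """Write refined code to file_path, keeping a .bak copy of the original.

    Hypothetical helper illustrating a safe-write pattern with backup.
    """
    target = Path(file_path)
    if target.exists():
        # Preserve the original (with metadata) before overwriting
        shutil.copy2(target, target.with_suffix(target.suffix + ".bak"))
    target.write_text(refined_code, encoding="utf-8")
    return f"Code successfully applied to {file_path}"

# Demonstration in a temporary directory
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "app.py")
    Path(path).write_text("print('old')\n")
    message = apply_with_backup(path, "print('new')\n")
    new_contents = Path(path).read_text()
    backup_contents = Path(path + ".bak").read_text()
```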
Testing
Test the server without Claude Desktop:
python client.py
This runs a simple test client to verify the server works.
Project Structure
mcp_code_review/
├── mcp_server.py # Main MCP server entry point
├── client.py # Test client for local testing
├── requirements.txt # Python dependencies
├── .env.example # Environment variables template
├── .env # Your API keys (git-ignored)
│
├── tools/ # MCP tool implementations
│ ├── __init__.py
│ ├── file_ops.py # File read/write utilities
│ ├── code_refinement.py # Code refinement logic
│ └── code_review.py # Code review logic
│
├── prompts/ # AI prompt templates
│ ├── code_refinement.txt # Refinement prompt template
│ └── code_review.txt # Review prompt template
│
└── utils/ # Helper utilities
├── __init__.py
├── llm_client.py # LiteLLM wrapper for multi-provider support
└── diff_generator.py # Unified diff generation
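The diff step (utils/diff_generator.py) likely boils down to the standard library's difflib. A minimal sketch of unified diff generation under that assumption:

```python
import difflib

def generate_diff(original: str, refined: str, file_path: str = "file") -> str:
    """Produce a unified diff between original and refined code.

    A sketch of what utils/diff_generator.py presumably does, using
    only the standard library.
    """
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        refined.splitlines(keepends=True),
        fromfile=f"{file_path} (original)",
        tofile=f"{file_path} (refined)",
    ))

diff = generate_diff("x = 1\n", "x = 1\nprint(x)\n", "app.py")
```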
How It Works
This server implements a "second-layer LLM" architecture:
- You interact with Claude Desktop (first-layer AI) using natural language
- Claude understands your intent and calls the appropriate MCP tool
- MCP Server receives the request and invokes a second-layer LLM specialized for code tasks
- Second-layer LLM analyzes or refines the code using optimized prompts
- Results are returned to Claude with diffs, explanations, and suggestions
- Claude presents the results to you for review
- You approve or reject the changes
- Changes are applied only after your confirmation
This two-layer approach combines Claude's conversational abilities with specialized code analysis/refinement models.
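Concretely, the second-layer step reduces to assembling a prompt and sending it to the chosen model via LiteLLM. A sketch of the message-building half — build_refinement_messages and the system prompt wording are assumptions, not the repo's actual templates; the LiteLLM call is shown in a comment because it needs a configured API key:

```python
def build_refinement_messages(user_request: str, code: str) -> list[dict]:
    """Assemble the chat messages sent to the second-layer LLM.

    Hypothetical helper illustrating the two-layer flow; the real
    prompts live in prompts/code_refinement.txt.
    """
    system = (
        "You are a code refinement assistant. Return the improved code "
        "and a short explanation of every change."
    )
    user = f"Request: {user_request}\n\nCode to refine:\n{code}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_refinement_messages("add error handling", "x = int(input())")

# With an API key configured, the second-layer call would look like:
#   import litellm
#   resp = litellm.completion(model="gemini/gemini-2.0-flash", messages=messages)
#   print(resp.choices[0].message.content)
```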
Use Cases
1. Refining AI-Generated Code
First LLM generates code → Use this to improve it
2. Code Review Assistant
Get AI-powered feedback on your code
3. Iterative Improvement
Keep refining until perfect
4. Learning Tool
See how AI would improve your code and learn from it
Requirements
- Python 3.10 or higher
- At least one AI provider API key (Gemini recommended for free tier)
- Dependencies listed in requirements.txt:
  - fastmcp - FastMCP framework
  - mcp - Model Context Protocol SDK
  - litellm - Multi-provider LLM wrapper
  - rich - Terminal formatting
  - python-dotenv - Environment variable management
Troubleshooting
Server Not Appearing in Claude Desktop
- Check that the path in claude_desktop_config.json is absolute, not relative
- Verify the Python path is correct (use which python in your activated venv)
- Check Claude Desktop logs for errors:
  - macOS: ~/Library/Logs/Claude/
  - Windows: %APPDATA%\Claude\logs\
- Restart Claude Desktop after config changes
API Key Errors
- Verify your API key is correct in the .env file
- Make sure the key is also set in the env section of claude_desktop_config.json
- Check that you have API credits/quota remaining
- Try using a different AI provider as a fallback
File Path Issues
- Always use absolute paths or paths relative to where you run the command
- On Windows, use forward slashes / or escaped backslashes \\
- Verify the file exists: ls /path/to/file.py
Module Import Errors
- Ensure virtual environment is activated
- Reinstall dependencies: pip install -r requirements.txt --upgrade
- Check Python version: python --version (must be 3.10+)
Testing the Server
Run the test client to verify the server works:
python client.py
This bypasses Claude Desktop and tests the MCP server directly.
Contributing
Contributions are welcome! Here's how you can help:
- Report bugs - Open an issue with details about the problem
- Suggest features - Share ideas for new capabilities
- Improve prompts - The prompt templates in prompts/ can always be refined
- Add AI providers - Extend support for additional LLM providers
- Submit PRs - Fix bugs, add features, improve documentation
License
MIT License - see LICENSE file for details
Acknowledgments
Built with:
- FastMCP - Framework for building MCP servers in Python
- LiteLLM - Unified interface for multiple LLM providers
- MCP Protocol - Model Context Protocol specification
Resources
Questions or issues? Open an issue on GitHub or check the troubleshooting section above.