
Elrond MCP - Thinking Augmentation Server
A Model Context Protocol (MCP) server that provides hierarchical LLM critique and synthesis for enhanced decision-making and idea evaluation.
[!WARNING] Preview Software: This is experimental software in active development and is not intended for production use. Features may change, break, or be removed without notice. Use at your own risk.
Overview
Elrond MCP implements a multi-agent thinking augmentation system that analyzes proposals through three specialized critique perspectives (positive, neutral, negative) and synthesizes them into comprehensive, actionable insights. This approach helps overcome single-model biases and provides more thorough analysis of complex ideas.
Features
- Parallel Critique Analysis: Three specialized agents analyze proposals simultaneously from different perspectives
- Structured Responses: Uses Pydantic models and the instructor library for reliable, structured outputs
- Google AI Integration: Leverages Gemini 2.5 Flash for critiques and Gemini 2.5 Pro for synthesis
- MCP Compliance: Full Model Context Protocol support for seamless integration with AI assistants
- Comprehensive Analysis: Covers feasibility, risks, benefits, implementation, stakeholder impact, and resource requirements
- Consensus Building: Identifies areas of agreement and disagreement across perspectives
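To illustrate how the MCP compliance and structured-response features fit together, here is a minimal sketch of registering a tool with the MCP Python SDK's FastMCP helper. It is not the actual server.py: the tool body is a placeholder and the proposal parameter name is an assumption.

# Minimal sketch, not the actual server.py: exposing an MCP tool via the
# Python SDK's FastMCP helper. The "proposal" parameter name is an assumption.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Elrond Thinking Augmentation")

@mcp.tool()
async def consult_the_council(proposal: str) -> str:
    """Run the three critique agents and synthesize their findings."""
    # The real implementation fans out to the critique agents and returns
    # the synthesized analysis; this stub only echoes the input size.
    return f"Received a proposal of {len(proposal)} characters"

if __name__ == "__main__":
    mcp.run()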
Architecture
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    Positive     │    │     Neutral     │    │    Negative     │
│    Critique     │    │    Critique     │    │    Critique     │
│      Agent      │    │      Agent      │    │      Agent      │
│                 │    │                 │    │                 │
│   Gemini 2.5    │    │   Gemini 2.5    │    │   Gemini 2.5    │
│      Flash      │    │      Flash      │    │      Flash      │
└────────┬────────┘    └────────┬────────┘    └────────┬────────┘
         │                      │                      │
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
                                ▼
                   ┌─────────────────────────┐
                   │     Synthesis Agent     │
                   │                         │
                   │     Gemini 2.5 Pro      │
                   │                         │
                   │                         │
                   │   Consensus + Summary   │
                   └─────────────────────────┘
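The flow in the diagram can be read as a fan-out/fan-in pipeline. The sketch below uses asyncio.gather with hypothetical run_critique and run_synthesis stand-ins; the real agents.py may be organized differently.

# Sketch of the fan-out/fan-in flow, assuming hypothetical helpers; the real
# agents in agents.py call Gemini via the instructor library instead.
import asyncio

async def run_critique(proposal: str, perspective: str) -> str:
    # Placeholder for a Gemini 2.5 Flash critique call.
    await asyncio.sleep(0)
    return f"{perspective} critique of: {proposal[:40]}"

async def run_synthesis(proposal: str, critiques: list[str]) -> str:
    # Placeholder for the Gemini 2.5 Pro synthesis call.
    await asyncio.sleep(0)
    return "Consensus + summary built from " + "; ".join(critiques)

async def analyze(proposal: str) -> str:
    # Fan out: the three critique agents run in parallel.
    positive, neutral, negative = await asyncio.gather(
        run_critique(proposal, "positive"),
        run_critique(proposal, "neutral"),
        run_critique(proposal, "negative"),
    )
    # Fan in: a single synthesis pass combines the three perspectives.
    return await run_synthesis(proposal, [positive, neutral, negative])

print(asyncio.run(analyze("Adopt a four-day work week for the support team")))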
Installation
Prerequisites
- Python 3.13 or higher
- Google AI API key (get one at Google AI Studio)
Setup
1. Clone the repository:

git clone <repository-url>
cd elrond-mcp

2. Install dependencies:

# Using uv (recommended)
uv sync --dev --all-extras

# Or using pip
pip install -e .[dev]

3. Configure API key:

export GEMINI_API_KEY="your-gemini-api-key-here"

# Or create a .env file
echo "GEMINI_API_KEY=your-gemini-api-key-here" > .env
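If you go the .env route, the key has to be loaded before the server starts. A minimal sketch assuming python-dotenv (whether the project itself depends on it is an assumption):

# Sketch: resolving GEMINI_API_KEY from the environment or a .env file.
# Assumes python-dotenv is installed; the server may load it differently.
import os
from dotenv import load_dotenv

load_dotenv()  # picks up .env from the current directory, if present
if not os.environ.get("GEMINI_API_KEY"):
    raise RuntimeError("GEMINI_API_KEY is not set")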
Usage
Running the Server
Development Mode
# Using uv
uv run python main.py
# Using MCP CLI (if installed)
mcp dev elrond_mcp/server.py
Production Mode
# Direct execution
python main.py
# Or via package entry point
elrond-mcp
Integration with Claude Desktop
1. Install for Claude Desktop:

mcp install elrond_mcp/server.py --name "Elrond Thinking Augmentation"

2. Manual Configuration: Add to your Claude Desktop MCP settings:

{
  "elrond-mcp": {
    "command": "python",
    "args": ["/path/to/elrond-mcp/main.py"],
    "env": {
      "GEMINI_API_KEY": "your-api-key-here"
    }
  }
}
Using the Tools
Augment Thinking Tool
Analyze any proposal through multi-perspective critique:
Use the "consult_the_council" tool with this proposal:
# Project Alpha: AI-Powered Customer Service
## Overview
Implement an AI chatbot to handle 80% of customer service inquiries, reducing response time from 2 hours to 30 seconds.
## Goals
- Reduce operational costs by 40%
- Improve customer satisfaction scores
- Free up human agents for complex issues
## Implementation
- Deploy GPT-4 based chatbot
- Integrate with existing CRM
- 3-month rollout plan
- $200K initial investment
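Outside Claude Desktop, the same tool can be called from any MCP client. The sketch below uses the MCP Python SDK's stdio client; the "proposal" argument name is an assumption, so check the schema the server advertises.

# Sketch: invoking consult_the_council from a Python MCP client over stdio.
# The "proposal" argument name is assumed; inspect the tool schema to confirm.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["main.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "consult_the_council",
                {"proposal": "# Project Alpha: AI-Powered Customer Service ..."},
            )
            print(result)

asyncio.run(main())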
Check System Status Tool
Monitor the health and configuration of the thinking augmentation system:
Use the "check_system_status" tool to verify:
- API key configuration
- Model availability
- System health
Response Structure
Critique Response
Each critique agent provides:
- Executive Summary: Brief overview of the perspective
- Structured Analysis:
  - Feasibility assessment
  - Risk identification
  - Benefit analysis
  - Implementation considerations
  - Stakeholder impact
  - Resource requirements
- Key Insights: 3-5 critical observations
- Confidence Level: Numerical confidence (0.0-1.0)
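That structure suggests a Pydantic model along the following lines; the field names here are illustrative guesses, and the authoritative definitions live in models.py.

# Illustrative guess at the critique model; see models.py for the real one.
from pydantic import BaseModel, Field

class CritiqueResponse(BaseModel):
    executive_summary: str
    feasibility: str
    risks: list[str]
    benefits: list[str]
    implementation_considerations: list[str]
    stakeholder_impact: str
    resource_requirements: str
    key_insights: list[str] = Field(min_length=3, max_length=5)
    confidence_level: float = Field(ge=0.0, le=1.0)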
Synthesis Response
The synthesis agent provides:
- Executive Summary: High-level recommendation
- Consensus View:
  - Areas of agreement
  - Areas of disagreement
  - Balanced assessment
  - Critical considerations
- Recommendation: Overall guidance
- Next Steps: Concrete action items
- Uncertainty Flags: Areas needing more information
- Overall Confidence: Synthesis confidence level
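As with the critique model, a rough Pydantic sketch of the synthesis output (field names are guesses; models.py is authoritative):

# Illustrative guess at the synthesis model; see models.py for the real one.
from pydantic import BaseModel, Field

class SynthesisResponse(BaseModel):
    executive_summary: str
    areas_of_agreement: list[str]
    areas_of_disagreement: list[str]
    balanced_assessment: str
    critical_considerations: list[str]
    recommendation: str
    next_steps: list[str]
    uncertainty_flags: list[str]
    overall_confidence: float = Field(ge=0.0, le=1.0)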
Development
Project Structure
elrond-mcp/
├── elrond_mcp/
│ ├── __init__.py
│ ├── server.py # MCP server implementation
│ ├── agents.py # Critique and synthesis agents
│ ├── client.py # Centralized Google AI client management
│ └── models.py # Pydantic data models
├── scripts/ # Development scripts
│ └── check.sh # Quality check script
├── tests/ # Test suite
├── main.py # Entry point
├── pyproject.toml # Project configuration
└── README.md
Running Tests
# Using uv
uv run pytest
# Using pip
pytest
Code Formatting
# Format and lint code
uv run ruff format .
uv run ruff check --fix .
# Type checking
uv run mypy elrond_mcp/
Development Script
For convenience, use the provided script to run all quality checks:
# Run all quality checks (lint, format, test)
./scripts/check.sh
This script will:
- Sync dependencies
- Run Ruff linter with auto-fix
- Format code with Ruff
- Execute the full test suite
- Perform final lint check
- Provide a pre-commit checklist
Configuration
Environment Variables
- GEMINI_API_KEY: Required Google AI API key
- LOG_LEVEL: Logging level (default: INFO)
Model Configuration
- Critique Agents: gemini-2.5-flash
- Synthesis Agent: gemini-2.5-pro

Models can be customized by modifying the agent initialization in agents.py.
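As a hypothetical illustration of that customization point (the actual constructor signatures in agents.py may differ):

# Hypothetical sketch of how model names could be parameterized; the real
# agents.py may expose this differently.
from dataclasses import dataclass

@dataclass
class AgentModels:
    critique_model: str = "gemini-2.5-flash"
    synthesis_model: str = "gemini-2.5-pro"

# Example override: run critiques on Pro as well.
models = AgentModels(critique_model="gemini-2.5-pro")
print(models)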
Troubleshooting
Common Issues
1. API Key Not Found
Error: Google AI API key is required
Solution: Set the GEMINI_API_KEY environment variable

2. Empty Proposal Error
Error: Proposal cannot be empty
Solution: Ensure your proposal is at least 10 characters long

3. Model Rate Limits
Error: Rate limit exceeded
Solution: Wait a moment and retry, or check your Google AI quota

4. Validation Errors
ValidationError: ...
Solution: The LLM response didn't match the expected structure. This is usually temporary - retry the request
Debugging
Enable debug logging:
export LOG_LEVEL=DEBUG
export GEMINI_API_KEY=your-api-key-here
python main.py
Check system status:
# Use the check_system_status tool to verify configuration
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run the test suite
- Submit a pull request
License
See LICENSE
Support
For issues and questions:
- Check the troubleshooting section above
- Review the logs for detailed error information
- Open an issue on the repository
Roadmap
- [ ] Support for additional LLM providers (OpenAI, Anthropic)
- [ ] Custom critique perspectives and personas
- [ ] Performance optimization and caching
- [ ] Advanced synthesis algorithms