
Pure Agentic MCP Server
A pure implementation of the Model Context Protocol (MCP) built on a modular, agentic architecture: every capability is exposed as an MCP tool through a specialized agent, enabling seamless integration with Claude Desktop and web applications.
Features
- 🤖 Pure Agentic Architecture: All capabilities (OpenAI, Ollama, File operations) are implemented as agents
- 🔗 Dual Access Modes: MCP protocol for Claude Desktop + HTTP endpoints for web/Streamlit UI
- ⚡ Dynamic Tool Registry: Agents register their tools automatically at startup
- 🔧 Modular Design: Add new agents easily without modifying core server code
- 📱 Clean Web UI: Modern Streamlit interface for interactive tool usage
- 🛡️ Graceful Degradation: Agents fail independently without affecting the system
- 🔑 Environment-Based Config: Secure API key management via environment variables
Architecture Overview
The server implements a pure agentic pattern where:
- Agents encapsulate specific functionality (OpenAI API, Ollama, file operations)
- Registry manages dynamic tool registration and routing
- MCP Server provides JSON-RPC protocol compliance for Claude Desktop
- HTTP Host exposes tools via REST API for web interfaces
- Streamlit UI provides user-friendly web access to all tools
Claude Desktop ←→ MCP Protocol ←→ Pure MCP Server ←→ Agent Registry ←→ Agents
↕
Web Browser ←→ HTTP API ←→ Simple MCP Host ←→ Agent Registry ←→ Agents
Quick Start
Prerequisites
- Python 3.11+
- Virtual environment support
Installation
git clone <repo-url>
cd mcp_server_full
# Create and activate virtual environment
python -m venv .venv
# Windows
.venv\Scripts\activate
# Linux/Mac
source .venv/bin/activate
# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
Configuration
Create a .env file with your API keys (all optional):
# OpenAI Agent (optional)
OPENAI_API_KEY=your_openai_api_key_here
# Ollama Agent (optional, uses local Ollama server)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
# File Agent (enabled by default, no config needed)
# Provides file reading, writing, and listing capabilities
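A minimal sketch of how these variables might be read at startup (hypothetical; the project's actual config.py may differ in names and structure):

```python
# Hypothetical sketch of environment-based config loading; the real config.py may differ.
import os

class Config:
    """Reads optional agent settings from the environment."""
    def __init__(self) -> None:
        # None means the OpenAI agent stays disabled (graceful degradation)
        self.openai_api_key = os.getenv("OPENAI_API_KEY")
        self.ollama_base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
        self.ollama_model = os.getenv("OLLAMA_MODEL", "llama3.2")
```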
Running the Server
For Claude Desktop (MCP Protocol)
# Start the pure MCP server for Claude Desktop
python run_mcp_server.py
Add to your Claude Desktop config (claude_desktop_config.json):
{
"mcpServers": {
"agentic-mcp": {
"command": "python",
"args": ["run_mcp_server.py"],
"cwd": "d:\\AI Lab\\MCP research\\mcp_server_full"
}
}
}
For Web Interface (HTTP + Streamlit)
# Terminal 1: Start HTTP host for tools
python simple_mcp_host.py
# Terminal 2: Start Streamlit UI
streamlit run streamlit_app.py
Access the web interface at: http://localhost:8501
Testing Your Setup
# Test agent registration and tool availability
python test_quick.py
# Test specific agents
python test_both.py
# Validate server functionality
python validate_server.py
Available Agents & Tools
🤖 OpenAI Agent
Status: Available with API key
Tools:
- openai_chat: Chat completion with GPT models
- openai_analysis: Text analysis and insights
Setup: Add OPENAI_API_KEY to your .env file
🦙 Ollama Agent
Status: Available with local Ollama server
Tools:
- ollama_chat: Chat with local Ollama models
- ollama_generate: Text generation
Setup: Install and run Ollama locally, then configure OLLAMA_BASE_URL and OLLAMA_MODEL
📁 File Agent
Status: Always available
Tools:
- file_read: Read file contents
- file_write: Write content to files
- file_list: List directory contents
Setup: No configuration needed
API Usage
MCP Protocol (Claude Desktop)
Tools are automatically available in Claude Desktop once the server is configured. Ask Claude to:
- "Read the contents of file.txt"
- "Generate text using Ollama"
- "Analyze this text with OpenAI"
HTTP API (Web/Streamlit)
# List available tools
curl http://localhost:8000/tools
# Call a specific tool
curl -X POST http://localhost:8000/tools/call \
-H "Content-Type: application/json" \
-d '{
"tool_name": "file_read",
"arguments": {
"file_path": "example.txt"
}
}'
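The same call can be made from Python. Below is a sketch that builds the payload shown in the curl example; the actual network call (via requests, which is in the dependencies) is commented out since it needs the HTTP host running:

```python
import json

# Payload matching the curl example above for POST /tools/call
payload = {
    "tool_name": "file_read",
    "arguments": {"file_path": "example.txt"},
}
body = json.dumps(payload)

# With the HTTP host from simple_mcp_host.py running, send it with requests:
# import requests
# resp = requests.post(
#     "http://localhost:8000/tools/call",
#     headers={"Content-Type": "application/json"},
#     data=body,
# )
# print(resp.json())
```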
Architecture
Core Components
- pure_mcp_server.py: Main MCP JSON-RPC server for Claude Desktop integration
- simple_mcp_host.py: HTTP wrapper that exposes MCP tools via a REST API
- registry.py: Dynamic agent and tool registration system
- run_mcp_server.py: Entry point script for Claude Desktop configuration
- config.py: Environment-based configuration management
- protocol.py: MCP protocol models and types
Agents
- agents/base.py: Base agent interface that all agents implement
- agents/openai_agent.py: OpenAI API integration agent
- agents/ollama_agent.py: Local Ollama model integration agent
- agents/file_agent.py: File system operations agent
User Interfaces
- streamlit_app.py: Modern web UI for interactive tool usage
- Claude Desktop: Direct MCP protocol integration
Agent Registration Flow
# Each agent registers its tools dynamically
class YourAgent(BaseAgent):
    def get_tools(self) -> Dict[str, Any]:
        return {
            "your_tool": {
                "description": "What your tool does",
                "inputSchema": {...}
            }
        }

    async def handle_tool_call(self, tool_name: str, params: Dict[str, Any]) -> Any:
        # Handle the tool call
        pass

# Registry automatically discovers and routes tools
registry.register_agent("your_agent", YourAgent(config))
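To illustrate the routing, here is a minimal, self-contained sketch of such a registry (hypothetical — the real registry.py and BaseAgent certainly differ in detail):

```python
# Hypothetical registry sketch: maps tool names to the agent that registered them.
import asyncio
from typing import Any, Dict

class BaseAgent:
    def get_tools(self) -> Dict[str, Any]: ...
    async def handle_tool_call(self, tool_name: str, params: Dict[str, Any]) -> Any: ...

class Registry:
    def __init__(self) -> None:
        self._tools: Dict[str, BaseAgent] = {}

    def register_agent(self, name: str, agent: BaseAgent) -> None:
        for tool_name in agent.get_tools():
            self._tools[tool_name] = agent  # later registrations win on name clashes

    async def call(self, tool_name: str, params: Dict[str, Any]) -> Any:
        # Route the call to whichever agent owns this tool
        return await self._tools[tool_name].handle_tool_call(tool_name, params)

class EchoAgent(BaseAgent):
    def get_tools(self) -> Dict[str, Any]:
        return {"echo": {"description": "Echo params back", "inputSchema": {"type": "object"}}}

    async def handle_tool_call(self, tool_name: str, params: Dict[str, Any]) -> Any:
        return params

registry = Registry()
registry.register_agent("echo_agent", EchoAgent())
print(asyncio.run(registry.call("echo", {"msg": "hi"})))  # → {'msg': 'hi'}
```

Because agents register tools by name, adding a new capability never touches the routing code.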
Development
Project Structure
mcp_server_full/
├── agents/ # Agent implementations
│ ├── base.py # Base agent interface
│ ├── openai_agent.py # OpenAI integration
│ ├── ollama_agent.py # Ollama integration
│ └── file_agent.py # File operations
├── pure_mcp_server.py # Main MCP server for Claude Desktop
├── simple_mcp_host.py # HTTP host for web interfaces
├── registry.py # Dynamic tool registration
├── run_mcp_server.py # Claude Desktop entry point
├── streamlit_app.py # Web UI
├── config.py # Configuration management
├── protocol.py # MCP protocol models
├── requirements.txt # Dependencies
├── .env # Environment variables (create this)
├── ADDING_NEW_AGENTS.md # Detailed agent development guide
└── README.md # This file
Adding New Agents
For a complete step-by-step guide on adding new agents, see ADDING_NEW_AGENTS.md.
Quick Overview:
- Create an agent file in agents/ inheriting from BaseAgent
- Implement the get_tools() and handle_tool_call() methods
- Register the agent in both pure_mcp_server.py and simple_mcp_host.py
- Add configuration and test your agent
The guide includes complete code examples, best practices, and troubleshooting tips.
Adding New Tools
To add new tools to existing agents:
- Edit the agent's get_tools() method to define the new tool's schema
- Add a handler branch in the agent's handle_tool_call() method
- Test the new tool functionality
- Update documentation
Example:
# In your agent
def get_tools(self):
    return {
        "new_tool": {
            "description": "Description of new tool",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "param": {"type": "string", "description": "Parameter description"}
                },
                "required": ["param"]
            }
        }
    }

async def handle_tool_call(self, tool_name: str, params: Dict[str, Any]) -> Any:
    if tool_name == "new_tool":
        return await self._handle_new_tool(params)
Troubleshooting
Common Issues
- Agent Not Available: Check API keys and service connectivity
  # Test agent registration
  python test_quick.py
- Claude Desktop Not Connecting: Verify the config path and entry point in claude_desktop_config.json
  {
    "mcpServers": {
      "agentic-mcp": {
        "command": "python",
        "args": ["run_mcp_server.py"],
        "cwd": "d:\\AI Lab\\MCP research\\mcp_server_full"
      }
    }
  }
- Streamlit UI Issues: Ensure the HTTP host is running before starting Streamlit
  # Start HTTP host first
  python simple_mcp_host.py
  # Then start Streamlit
  streamlit run streamlit_app.py
- OpenAI Errors: Check API key and quota
  # Test OpenAI directly
  python openai_test.py
- Ollama Not Working: Verify the Ollama server is running
  # Check Ollama status
  curl http://localhost:11434/api/tags
Debug Mode
Enable detailed logging:
# Set environment variable
export LOG_LEVEL=DEBUG
python run_mcp_server.py
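A sketch of how the server might honor LOG_LEVEL (the variable name comes from the example above; the server's actual logging setup may differ):

```python
# Hypothetical LOG_LEVEL handling; the server's actual logging setup may differ.
import logging
import os

level_name = os.getenv("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)  # fall back to INFO on bad values
logging.basicConfig(level=level, format="%(asctime)s %(name)s %(levelname)s %(message)s")
logging.getLogger("mcp_server").debug("debug logging enabled")
```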
Health Checks
# Check HTTP API health
curl http://localhost:8000/health
# List registered tools
curl http://localhost:8000/tools
# Test tool call
curl -X POST http://localhost:8000/tools/call \
-H "Content-Type: application/json" \
-d '{"tool_name": "file_list", "arguments": {"directory_path": "."}}'
Dependencies
Core Runtime
- pydantic: Configuration and data validation
- asyncio: Async operation support (Python standard library, no install needed)
- httpx: HTTP client for external APIs
- aiofiles: Async file operations
Agent-Specific
- openai: OpenAI API client (for OpenAI agent)
- ollama: Ollama API client (for Ollama agent)
Web Interface
- streamlit: Modern web UI framework
- requests: HTTP requests for Streamlit
Development & Testing
- pytest: Testing framework
- logging: Debug and monitoring (Python standard library)
All third-party dependencies are installed via requirements.txt.
Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/your-feature
- Add your agent following the agent development guide
- Test your changes: python test_quick.py
- Submit a pull request
Agent Development Workflow
- Plan: Define what tools your agent will provide
- Implement: Create an agent class inheriting from BaseAgent
- Register: Add agent registration to both server files
- Test: Verify agent works in both MCP and HTTP modes
- Document: Update README and create usage examples
License
MIT
Streamlit Web Interface
The Streamlit app provides an intuitive web interface for all MCP tools.
Features
- 🔧 Real-time Tool Discovery: Automatically displays all available tools from registered agents
- 💬 Interactive Interface: Easy-to-use forms for tool parameters
- 📊 Response Display: Formatted display of tool results
- 📡 Agent Status: Real-time monitoring of agent availability
- ⚙️ Configuration: Environment-based setup with clear status indicators
Usage
- Start the backend: python simple_mcp_host.py
- Launch Streamlit: streamlit run streamlit_app.py
- Open browser: Navigate to http://localhost:8501
- Select tools: Choose from available agent tools
- Execute: Fill parameters and run tools interactively
Tool Integration
The Streamlit UI automatically discovers and creates forms for any tools registered by agents, making it easy to test and use new functionality as agents are added.
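A hypothetical helper illustrating how a UI could map a tool's inputSchema to form widgets (widget names follow Streamlit's API; the actual streamlit_app.py may work differently):

```python
# Hypothetical schema-to-widget mapping; streamlit_app.py may implement this differently.
from typing import Any, Dict

def widgets_for_schema(schema: Dict[str, Any]) -> Dict[str, str]:
    """Map each JSON-Schema property to a Streamlit widget kind."""
    kinds = {
        "string": "text_input",
        "number": "number_input",
        "integer": "number_input",
        "boolean": "checkbox",
    }
    return {
        name: kinds.get(prop.get("type", "string"), "text_input")
        for name, prop in schema.get("properties", {}).items()
    }

# Example: the file_read tool's single file_path parameter yields one text input
print(widgets_for_schema({"properties": {"file_path": {"type": "string"}}}))
# → {'file_path': 'text_input'}
```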