# Deep Research Agent MCP Server

🔍 Intelligent AI Research Agent - A sophisticated LangGraph-powered research agent wrapped as a Model Context Protocol (MCP) server for seamless integration with AI assistants such as Claude, Cursor, and other MCP-compatible clients. It performs iterative web research with Google Search and Gemini models and generates structured reports with citations.
## ✨ Features
### Advanced Research Capabilities
- Multi-Step Research: Conducts iterative web research with reflection and refinement loops
- Google Search Integration: Uses Google Search API with advanced grounding metadata
- AI-Powered Analysis: Leverages multiple Gemini models (2.0 Flash, 2.5 Flash, 2.5 Pro) for different tasks
- Comprehensive Reports: Generates structured research reports with proper citations and source verification
- Configurable Depth: Customizable research loops and query generation parameters
### MCP Server Integration
- FastMCP Server: Built on FastMCP for seamless MCP protocol support
- Real-time Streaming: Progress updates streamed to clients during research execution
- HTTP Transport: Accessible via HTTP for remote deployment and integration
- Health Monitoring: Built-in health checks and statistics endpoints
- Error Handling: Robust error handling with detailed logging
### Deployment Ready
- Docker Support: Containerized for easy deployment
- Render Integration: One-click deployment to Render platform
- Environment Configuration: Flexible configuration via environment variables
- Scalable Architecture: Designed for concurrent research requests
## Architecture

### Research Agent Workflow

```mermaid
graph TD
    A[Research Topic Input] --> B[Query Generation]
    B --> C[Web Research]
    C --> D[Content Analysis]
    D --> E[Reflection & Gap Analysis]
    E --> F{Research Complete?}
    F -->|No| G[Generate Follow-up Queries]
    G --> C
    F -->|Yes| H[Final Report Generation]
    H --> I[Structured Output with Citations]

    subgraph "AI Models Used"
        J[Gemini 2.0 Flash<br/>Query Generation]
        K[Gemini 2.0 Flash<br/>Web Research]
        L[Gemini 2.5 Flash<br/>Reflection]
        M[Gemini 2.5 Pro<br/>Final Report]
    end

    B -.-> J
    C -.-> K
    E -.-> L
    H -.-> M
```
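The diagram above is essentially a loop with an LLM at each step. The sketch below restates it as plain Python; every helper is a stub standing in for the real LangGraph node and Gemini call (the names are illustrative, not the repository's actual functions), so the example runs on its own:

```python
from typing import List

def generate_queries(topic: str, n: int) -> List[str]:
    """Stub for the query-generation node (Gemini 2.0 Flash)."""
    return [f"{topic} (angle {i + 1})" for i in range(n)]

def web_search(query: str) -> str:
    """Stub for the Google-Search-grounded web research node."""
    return f"findings for: {query}"

def reflect(topic: str, findings: List[str]) -> List[str]:
    """Stub for the reflection node (Gemini 2.5 Flash).
    Returns follow-up queries, or [] when no knowledge gaps remain."""
    return []

def write_report(topic: str, findings: List[str]) -> str:
    """Stub for the final-report node (Gemini 2.5 Pro)."""
    return f"Report on {topic!r} built from {len(findings)} findings"

def run_research(topic: str, max_loops: int = 2, initial_queries: int = 3) -> str:
    queries = generate_queries(topic, initial_queries)
    findings: List[str] = []
    for _ in range(max_loops):
        findings += [web_search(q) for q in queries]   # Web Research
        follow_ups = reflect(topic, findings)          # Reflection & Gap Analysis
        if not follow_ups:                             # Research Complete? -> Yes
            break
        queries = follow_ups                           # Generate Follow-up Queries
    return write_report(topic, findings)               # Final Report Generation

print(run_research("quantum computing"))
```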
### MCP Server Architecture

```mermaid
graph TB
    subgraph "Client Applications"
        A1[Claude Desktop]
        A2[Cursor IDE]
        A3[Custom MCP Client]
    end

    subgraph "MCP Server (FastMCP)"
        B1[HTTP Transport Layer]
        B2[Research Tool Handler]
        B3[Progress Streaming]
        B4[Health & Stats Endpoints]
    end

    subgraph "LangGraph Research Agent"
        C1[Query Generation Node]
        C2[Web Research Node]
        C3[Reflection Node]
        C4[Final Answer Node]
    end

    subgraph "External Services"
        D1[Google Search API]
        D2[Gemini AI Models]
    end

    A1 --> B1
    A2 --> B1
    A3 --> B1
    B1 --> B2
    B2 --> B3
    B2 --> C1
    C1 --> C2
    C2 --> C3
    C3 --> C4
    C2 --> D1
    C1 --> D2
    C3 --> D2
    C4 --> D2
```
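As a rough sketch of the FastMCP layer in this diagram (not the repository's actual `src/mcp_server/server.py`): a single `research` tool streams progress through the MCP context while the agent runs. The agent call is stubbed out, and the exact transport string ("http" vs. "streamable-http") varies across FastMCP versions:

```python
from fastmcp import FastMCP, Context

mcp = FastMCP("Deep Research MCP Server")

@mcp.tool()
async def research(
    topic: str,
    max_research_loops: int = 2,
    initial_search_query_count: int = 3,
    ctx: Context | None = None,
) -> dict:
    """Run the research agent and stream progress updates to the client."""
    if ctx is not None:
        await ctx.info(f"Starting research: {topic}")
        await ctx.report_progress(progress=0, total=max_research_loops)
    # Placeholder for invoking the LangGraph agent.
    report = f"(stub) report on {topic}"
    return {"report": report, "sources": [], "metadata": {}}

if __name__ == "__main__":
    mcp.run(transport="http", host="0.0.0.0", port=8000)
```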
### Deployment Architecture

```mermaid
graph TB
    subgraph "Development"
        A1[Local Development]
        A2[Docker Compose]
    end

    subgraph "Production Deployment"
        B1[Render Platform]
        B2[Docker Container]
        B3[Custom Cloud Deploy]
    end

    subgraph "MCP Server Container"
        C1[FastMCP HTTP Server]
        C2[LangGraph Agent]
        C3[Health Monitoring]
        C4[Environment Config]
    end

    A1 --> C1
    A2 --> C1
    B1 --> C1
    B2 --> C1
    B3 --> C1
```
## 🚀 Quick Start
### 1. Render Deployment (Recommended)

Deploy to Render in 5 minutes:

1. Fork this repository to your GitHub account
2. Create a Render account at [render.com](https://render.com)
3. Deploy the service:
   - Click "New +" → "Web Service"
   - Connect your GitHub repository
   - Configure settings:
     - Name: `deep-research-mcp-server`
     - Runtime: `Python 3`
     - Build Command: `pip install -r requirements.txt`
     - Start Command: `python -m src.mcp_server.server`
4. Add environment variables:
   - `GEMINI_API_KEY` = `your_gemini_api_key_here`
   - `PORT` = `8000`
5. Deploy and get your server URL: `https://your-service-name.onrender.com`
### 2. Local Development

```bash
# Clone repository
git clone https://github.com/your-username/deep-research-mcp.git
cd deep-research-mcp

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export GEMINI_API_KEY=your_gemini_api_key_here

# Run MCP server
python -m src.mcp_server.server
```
### 3. Docker Deployment

```bash
# Build Docker image
docker build -t deep-research-mcp .

# Run container
docker run -p 8000:8000 \
  -e GEMINI_API_KEY=your_gemini_api_key \
  deep-research-mcp
```
## 🔧 Configuration

### Environment Variables

| Variable | Description | Default | Required |
|---|---|---|---|
| `GEMINI_API_KEY` | Google Gemini API key | - | ✅ |
| `PORT` | Server port | `8000` | ❌ |
| `HOST` | Server host | `0.0.0.0` | ❌ |
| `LOG_LEVEL` | Logging level | `info` | ❌ |
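These variables map naturally onto a small settings object. A minimal sketch of environment-based loading (the real loader lives in `src/mcp_server/config.py` and may differ):

```python
import os
from dataclasses import dataclass

@dataclass
class ServerConfig:
    gemini_api_key: str          # required; no default
    host: str = "0.0.0.0"
    port: int = 8000
    log_level: str = "info"

def load_config() -> ServerConfig:
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY environment variable is required")
    return ServerConfig(
        gemini_api_key=key,
        host=os.environ.get("HOST", "0.0.0.0"),
        port=int(os.environ.get("PORT", "8000")),
        log_level=os.environ.get("LOG_LEVEL", "info"),
    )
```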
### Research Parameters

Configure research behavior through the MCP tool parameters:

```json
{
  "topic": "Your research question",
  "max_research_loops": 2,
  "initial_search_query_count": 3,
  "reasoning_model": "gemini-2.5-pro"
}
```
## 📖 Usage

### With Claude Desktop

Add to your Claude Desktop configuration:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "deep-research": {
      "url": "https://your-service-name.onrender.com/mcp/"
    }
  }
}
```
### With Cursor IDE

Add to Cursor settings → MCP Servers:

```json
{
  "mcpServers": {
    "deep-research": {
      "url": "https://your-service-name.onrender.com/mcp/"
    }
  }
}
```
### Python Client Example

```python
from fastmcp import Client
import asyncio

async def research_example():
    client = Client("http://localhost:8000/mcp/")
    async with client:
        result = await client.call_tool("research", {
            "topic": "Latest developments in quantum computing",
            "max_research_loops": 3,
            "initial_search_query_count": 4,
        })
        print("Research Report:")
        print(result["report"])
        print(f"\nSources: {len(result['sources'])}")
        print(f"Execution time: {result['metadata']['execution_time']:.2f}s")

asyncio.run(research_example())
```
## 🛠️ Development

### Project Structure

```
deep-research-mcp/
├── src/
│   ├── agent/                     # LangGraph research agent
│   │   ├── app.py                 # FastAPI app
│   │   ├── graph.py               # LangGraph workflow definition
│   │   ├── state.py               # State management
│   │   ├── prompts.py             # AI prompts
│   │   ├── tools_and_schemas.py   # Tools and data schemas
│   │   ├── configuration.py       # Agent configuration
│   │   └── utils.py               # Utility functions
│   └── mcp_server/                # MCP server implementation
│       ├── server.py              # FastMCP server
│       ├── agent_adapter.py       # Agent wrapper
│       ├── config.py              # Configuration management
│       └── utils.py               # Server utilities
├── ClinicalTrials-MCP-Server/     # Additional MCP server example
├── examples/                      # Usage examples
├── requirements.txt               # Python dependencies
├── pyproject.toml                 # Project configuration
├── render.yaml                    # Render deployment config
└── README.md                      # This file
```
### Local Testing

```bash
# Install development dependencies
pip install -r requirements.txt

# Run tests
python -m pytest tests/

# Start server in development mode
python -m src.mcp_server.server

# Test health endpoint
curl http://localhost:8000/health

# Test MCP endpoint
curl -X POST http://localhost:8000/mcp/ \
  -H "Content-Type: application/json" \
  -d '{"method": "tools/list", "params": {}}'
```
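A smoke test along these lines can back the `pytest` step above; the file name `tests/test_health.py` is hypothetical, and it assumes `httpx` is installed and the server is already running locally:

```python
# tests/test_health.py (hypothetical)
import httpx

def test_health_endpoint():
    resp = httpx.get("http://localhost:8000/health", timeout=10.0)
    assert resp.status_code == 200
    body = resp.json()
    assert body["status"] == "healthy"
```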
## 📊 Monitoring

### Health Check

```bash
curl https://your-service-name.onrender.com/health
```

Response:

```json
{
  "status": "healthy",
  "service": "Deep Research MCP Server",
  "version": "1.0.0",
  "agent_status": "healthy"
}
```

### Statistics

```bash
curl https://your-service-name.onrender.com/stats
```
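FastMCP can serve plain HTTP routes alongside the MCP endpoint, which is one plausible way `/health` and `/stats` are wired up; the sketch below is an assumption, not the repository's actual handler:

```python
from fastmcp import FastMCP
from starlette.requests import Request
from starlette.responses import JSONResponse

mcp = FastMCP("Deep Research MCP Server")

@mcp.custom_route("/health", methods=["GET"])
async def health(request: Request) -> JSONResponse:
    # Mirrors the health-check response shown above.
    return JSONResponse({
        "status": "healthy",
        "service": "Deep Research MCP Server",
        "version": "1.0.0",
        "agent_status": "healthy",
    })
```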
### Logging
The server provides structured logging with:
- Request/response tracking
- Research progress updates
- Error reporting and debugging
- Performance metrics
## 🔒 Security
- API Key Protection: Environment variable-based secret management
- Input Validation: Comprehensive input sanitization
- Rate Limiting: Built-in request throttling
- Error Handling: Secure error responses without sensitive data exposure
## 📝 License
This project is licensed under the MIT License - see the LICENSE file for details.