Observability MCP Server
FastMCP 2.14.1-powered observability server for monitoring MCP ecosystems
A comprehensive observability server built on FastMCP 2.14.1 that leverages OpenTelemetry integration, persistent storage, and advanced monitoring capabilities to provide production-grade observability for MCP server ecosystems.
🚀 Features
FastMCP 2.14.1 Integration
- ✅ OpenTelemetry Integration - Distributed tracing and metrics collection
- ✅ Enhanced Storage Backend - Persistent metrics and historical data
- ✅ Production-Ready - Built for high-performance monitoring
Comprehensive Monitoring
- 🔍 Real-time Health Checks - Monitor MCP server availability and response times
- 📊 Performance Metrics - CPU, memory, disk, and network monitoring
- 🔗 Distributed Tracing - Track interactions across MCP server ecosystems
- 🚨 Intelligent Alerting - Anomaly detection and automated alerts
- 📈 Performance Reports - Automated analysis and optimization recommendations
Advanced Analytics
- 🔬 Usage Pattern Analysis - Understand how MCP servers are being used
- 📉 Trend Detection - Identify performance trends and bottlenecks
- 🎯 Optimization Insights - Data-driven recommendations for improvement
- 📤 Multi-Format Export - Prometheus, OpenTelemetry, and JSON export
🛠️ Installation
Prerequisites
- Python 3.11+
- FastMCP 2.14.1+ (automatically installed)
Install from Source
git clone https://github.com/sandraschi/observability-mcp
cd observability-mcp
pip install -e .
Docker Installation
docker build -t observability-mcp .
docker run -p 9090:9090 observability-mcp
🚀 Quick Start
1. Start the Server
# Using the CLI
observability-mcp run
# Or directly with Python
python -m observability_mcp.server
2. Verify Installation
# Check server health
observability-mcp health
# View available metrics
observability-mcp metrics
3. Configure Claude Desktop
Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "observability": {
      "command": "observability-mcp",
      "args": ["run"]
    }
  }
}
📊 Available Tools
🔍 Health Monitoring
- monitor_server_health - Real-time health checks with OpenTelemetry metrics
- monitor_system_resources - Comprehensive system resource monitoring
📈 Performance Analysis
- collect_performance_metrics - CPU, memory, disk, and network metrics
- generate_performance_reports - Automated performance analysis and recommendations
- analyze_mcp_interactions - Usage pattern analysis and optimization insights
🚨 Alerting & Anomaly Detection
- alert_on_anomalies - Intelligent anomaly detection and alerting
- trace_mcp_calls - Distributed tracing for MCP server interactions
📤 Data Export
- export_metrics - Export metrics in Prometheus, OpenTelemetry, or JSON formats
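To illustrate what Prometheus-format export involves, here is a minimal, hypothetical sketch that renders a metrics dict in the Prometheus text exposition format. The helper name and input shape are assumptions for illustration; the actual export_metrics tool may produce its output differently.

```python
def to_prometheus(metrics: dict[str, tuple[dict[str, str], float]]) -> str:
    """Render {name: (labels, value)} entries in Prometheus text exposition format."""
    lines = []
    for name, (labels, value) in metrics.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        # Metrics without labels are emitted bare: `name value`
        lines.append(f"{name}{{{label_str}}} {value}" if label_str else f"{name} {value}")
    return "\n".join(lines) + "\n"

sample = {
    "mcp_cpu_usage_percent": ({}, 45.2),
    "mcp_health_checks_total": ({"service": "ocr-mcp", "status": "healthy"}, 1),
}
print(to_prometheus(sample))
```

The labeled line comes out as mcp_health_checks_total{service="ocr-mcp",status="healthy"} 1, matching the metric shapes shown in the Monitoring Dashboard section below.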
🔧 Configuration
Environment Variables
# Prometheus metrics server port
PROMETHEUS_PORT=9090
# OpenTelemetry service name
OTEL_SERVICE_NAME=observability-mcp
# OTLP exporter endpoint (optional)
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# Metrics retention period (days)
METRICS_RETENTION_DAYS=30
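In Python, these variables can be read with the documented defaults applied. This is a sketch of typical configuration loading, not necessarily how the server does it internally:

```python
import os

# Read configuration with the documented defaults (illustrative sketch).
PROMETHEUS_PORT = int(os.environ.get("PROMETHEUS_PORT", "9090"))
OTEL_SERVICE_NAME = os.environ.get("OTEL_SERVICE_NAME", "observability-mcp")
OTEL_EXPORTER_OTLP_ENDPOINT = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")  # optional, may be None
METRICS_RETENTION_DAYS = int(os.environ.get("METRICS_RETENTION_DAYS", "30"))
```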
Alert Configuration
The server comes with pre-configured alerts for common issues:
- CPU Usage > 90% (Warning)
- Memory Usage > 1GB (Error)
- Error Rate > 5% (Error)
Alerts are stored persistently and can be customized through the MCP tools.
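The default thresholds above can be modelled as a simple rule table. The following is a hypothetical sketch of how such threshold alerts might be evaluated; the server's actual alert engine (with persistence and anomaly detection) is richer than this:

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str
    threshold: float
    severity: str  # "warning" or "error"

# The pre-configured defaults described above, expressed as rules.
DEFAULT_RULES = [
    AlertRule("cpu_percent", 90.0, "warning"),
    AlertRule("memory_mb", 1024.0, "error"),
    AlertRule("error_rate_percent", 5.0, "error"),
]

def evaluate(metrics: dict[str, float], rules=DEFAULT_RULES) -> list[tuple[str, str]]:
    """Return (metric, severity) pairs for every rule whose threshold is exceeded."""
    return [(r.metric, r.severity) for r in rules
            if metrics.get(r.metric, 0.0) > r.threshold]

# Only the CPU reading exceeds its threshold here, so only that alert fires.
print(evaluate({"cpu_percent": 95.0, "memory_mb": 512.0}))
```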
📈 Monitoring Dashboard
Prometheus Metrics
Access metrics at: http://localhost:9090/metrics
Available metrics:
# Health checks
mcp_health_checks_total{status="healthy|degraded|unhealthy", service="..."} 1
# Performance metrics
mcp_performance_metrics_collected{service="..."} 1
# System resources
mcp_cpu_usage_percent{} 45.2
mcp_memory_usage_mb{} 1024.5
# Traces and alerts
mcp_traces_created{service="...", operation="..."} 1
mcp_alerts_triggered{type="active|anomaly"} 1
Integration with Grafana
- Add Prometheus as a data source in Grafana
- Import the provided dashboard JSON
- Visualize your MCP ecosystem's health and performance
🏗️ Architecture
FastMCP 2.14.1 Features Leveraged
OpenTelemetry Integration
- Distributed Tracing: Track requests across multiple MCP servers
- Metrics Collection: Structured performance data collection
- Context Propagation: Maintain context across service boundaries
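Context propagation can be illustrated with the standard library's contextvars: a trace ID set once is visible to downstream calls without being passed explicitly. This is a conceptual sketch only; OpenTelemetry's real propagators carry full W3C trace context, including across process boundaries.

```python
import contextvars
import uuid

# The current trace ID, implicitly visible to code running in the same context.
current_trace_id: contextvars.ContextVar = contextvars.ContextVar("trace_id", default=None)

def start_trace() -> str:
    """Begin a new trace and install its ID in the current context."""
    trace_id = uuid.uuid4().hex
    current_trace_id.set(trace_id)
    return trace_id

def record_span(operation: str) -> dict:
    # Child operations inherit the trace ID without an explicit parameter.
    return {"trace_id": current_trace_id.get(), "operation": operation}

tid = start_trace()
span = record_span("process_document")
assert span["trace_id"] == tid
```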
Enhanced Persistent Storage
- Historical Data: Store metrics and traces for trend analysis
- Cross-Session Persistence: Data survives server restarts
- Efficient Storage: Optimized for time-series data
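Enforcing METRICS_RETENTION_DAYS over stored samples might look like the following pruning sketch. This is a hypothetical illustration; the actual persistence is handled by FastMCP's storage backend.

```python
from datetime import datetime, timedelta, timezone

def prune(samples: list[tuple[datetime, float]], retention_days: int = 30):
    """Drop (timestamp, value) samples older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return [(ts, v) for ts, v in samples if ts >= cutoff]

now = datetime.now(timezone.utc)
samples = [(now - timedelta(days=40), 1.0), (now, 2.0)]
# The 40-day-old sample falls outside the 30-day window and is dropped.
assert prune(samples) == [(now, 2.0)]
```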
Production Architecture
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ MCP Servers │───▶│ Observability │───▶│ Prometheus │
│ (Monitored) │ │ MCP Server │ │ Metrics │
└─────────────────┘ └──────────────────┘ └─────────────────┘
│ │
▼ ▼
┌──────────────────┐ ┌─────────────────┐
│ Persistent │ │ Grafana │
│ Storage │ │ Dashboard │
└──────────────────┘ └─────────────────┘
📚 Usage Examples
Health Monitoring
# Check MCP server health
result = await monitor_server_health(
    service_url="http://localhost:8000/health",
    timeout_seconds=5.0
)
print(f"Status: {result['health_check']['status']}")
Performance Analysis
# Collect system metrics
metrics = await collect_performance_metrics(service_name="my-mcp-server")
print(f"CPU: {metrics['metrics']['cpu_percent']}%")
print(f"Memory: {metrics['metrics']['memory_mb']} MB")
Distributed Tracing
# Record a trace
trace = await trace_mcp_calls(
    operation_name="process_document",
    service_name="ocr-mcp",
    duration_ms=150.5,
    attributes={"file_size": "2.3MB", "format": "PDF"}
)
Generate Reports
# Create performance report
report = await generate_performance_reports(
    service_name="web-mcp",
    days=7
)
print("Performance Summary:", report['summary'])
print("Recommendations:", report['recommendations'])
🔧 Development
Running Tests
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run with coverage
pytest --cov=observability_mcp --cov-report=html
Code Quality
# Format code
black src/
# Lint code
ruff check src/
# Type checking
mypy src/
Docker Development
# Build development image
docker build -t observability-mcp:dev -f Dockerfile.dev .
# Run with hot reload
docker run -p 9090:9090 -v $(pwd):/app observability-mcp:dev
📊 Performance Benchmarks
FastMCP 2.14.1 Benefits
- OpenTelemetry Overhead: <1ms per trace
- Storage Performance: 1000+ metrics/second
- Memory Usage: 50MB baseline + 10MB per monitored service
- Concurrent Monitoring: 100+ services simultaneously
Recommended Hardware
- CPU: 2+ cores for metrics processing
- RAM: 2GB minimum, 4GB recommended
- Storage: 10GB for metrics history (30 days retention)
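As a rough back-of-envelope check on the storage figure, with assumed (not measured) sample sizes and rates:

```python
# Illustrative assumptions only: 20 monitored services, 50 metrics each,
# one 64-byte sample per metric every 60 s, 30-day retention.
services, metrics_per_service, sample_bytes = 20, 50, 64
samples_per_day = 24 * 3600 // 60  # 1440 samples per metric per day
total_bytes = services * metrics_per_service * sample_bytes * samples_per_day * 30
print(f"{total_bytes / 1e9:.1f} GB")  # well within the 10 GB budget before compression
```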
🚨 Troubleshooting
Common Issues
Server Won't Start
# Check Python version
python --version # Should be 3.11+
# Check FastMCP installation
pip show fastmcp # Should be 2.14.1+
# Check dependencies
pip check
Metrics Not Appearing
# Check Prometheus endpoint
curl http://localhost:9090/metrics
# Verify OpenTelemetry configuration
observability-mcp metrics
High Memory Usage
- Reduce METRICS_RETENTION_DAYS
- Implement metric aggregation
- Monitor with monitor_system_resources
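"Metric aggregation" here can mean downsampling raw samples into coarser time buckets. A minimal sketch (a hypothetical helper, not a built-in tool):

```python
from collections import defaultdict

def downsample(samples: list[tuple[float, float]], bucket_s: float = 300.0):
    """Average (unix_ts, value) samples into bucket_s-wide time buckets."""
    buckets: dict[float, list[float]] = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % bucket_s].append(value)  # align to bucket start
    return sorted((start, sum(vs) / len(vs)) for start, vs in buckets.items())

raw = [(0.0, 10.0), (100.0, 20.0), (400.0, 30.0)]
# The first two samples share the 0-300 s bucket and are averaged.
print(downsample(raw))  # [(0.0, 15.0), (300.0, 30.0)]
```

Five-minute buckets cut per-metric sample counts by up to 300x at one sample per second, at the cost of peak resolution.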
Storage Issues
- Check available disk space
- Clean old metrics: rm -rf ~/.observability-mcp/metrics/*
- Restart server to recreate storage
🤝 Contributing
Development Setup
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
Code Standards
- FastMCP 2.14.1+: Use latest features and patterns
- OpenTelemetry: Follow OTEL best practices
- Async First: All operations should be async
- Type Hints: Full type coverage required
- Documentation: Comprehensive docstrings
Testing Strategy
- Unit Tests: Core functionality
- Integration Tests: MCP server interactions
- Performance Tests: Benchmarking and load testing
- Chaos Tests: Failure scenario testing
📄 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
- FastMCP Team - For the amazing 2.14.1 framework with OpenTelemetry integration
- OpenTelemetry Community - For the observability standards and tools
- Prometheus Team - For the metrics collection and alerting system
🔗 Related Projects
- FastMCP - The framework this server is built on
- OpenTelemetry Python - Observability instrumentation
- Prometheus - Metrics collection and alerting
- Grafana - Visualization and dashboards
Built with ❤️ using FastMCP 2.14.1 and OpenTelemetry