Revenue Intelligence MCP Server
Provides ML-powered revenue intelligence for sales and customer success teams, enabling lead scoring, churn risk detection, and conversion predictions with explainable feature attribution and production monitoring capabilities.
A production-ready MCP server demonstrating ML system integration patterns for customer-facing business teams at scale. This server simulates a real-world ML-powered revenue intelligence platform, showcasing how to build observable, maintainable ML systems integrated with business workflows.
Business Context
Modern revenue teams (Sales, Customer Success, Marketing) need real-time ML insights to prioritize leads, prevent churn, and maximize conversions. This server demonstrates how to build production ML systems that:
- Integrate with business workflows via MCP resources, tools, and prompts
- Provide explainable predictions with feature attribution
- Enable monitoring and observability through prediction logging
- Support production ML patterns like versioning, drift detection, and health checks
This is the type of system you'd find powering revenue operations at high-growth SaaS companies, integrated with tools like Salesforce, HubSpot, or custom CRMs.
Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│                    MCP Server Interface                     │
│     (Resources, Tools, Prompts for Claude Desktop/API)      │
└────────────────────────────┬────────────────────────────────┘
                             │
        ┌────────────────────┼────────────────────┐
        │                    │                    │
┌───────▼──────┐    ┌────────▼──────┐    ┌────────▼──────┐
│   Scoring    │    │  Data Store   │    │    Config     │
│   Engine     │    │  (CRM Data)   │    │  (Thresholds, │
│              │    │               │    │   Weights)    │
│ • Lead Score │    │ • Accounts    │    │               │
│ • Churn Risk │    │ • Leads       │    │ • Model v1.2.3│
│ • Conversion │    │ • Pred Logs   │    │ • Features    │
└──────────────┘    └───────────────┘    └───────────────┘
Key Components:
- MCP Server (server.py) - Exposes resources, tools, and prompts via the MCP protocol
- Scoring Engine (scoring.py) - ML prediction logic with feature attribution
- Data Store (data_store.py) - In-memory data access layer (simulates DB/warehouse)
- Configuration (config.py) - Model parameters, thresholds, feature weights
- Mock Data (mock_data.py) - 20 accounts, 30 leads with realistic signals
Production ML Patterns Demonstrated
This server showcases essential production ML engineering patterns:
1. Model Versioning & Metadata Tracking
- Explicit model version (v1.2.3) stamped on every prediction
- Training date and performance metrics tracked
- Feature importance documented and accessible via MCP resource
2. Prediction Logging for Monitoring
- Every prediction logged with full input/output metadata
- Enables audit trails, debugging, and performance analysis
- Foundation for drift detection and model retraining pipelines
3. Feature Attribution for Explainability
- Each prediction includes feature-level attributions
- Shows which signals drove the score (e.g., "demo requested" contributed 20%)
- Critical for revenue team trust and regulatory compliance
4. Drift Detection Framework
- Health check tool monitors prediction volume and distribution
- Alerts when patterns deviate from training baseline
- Enables proactive model retraining before degradation
5. Integration with Business Systems
- Resources expose CRM data (accounts, leads) via standard URIs
- Tools map to revenue team workflows (score lead, detect churn)
- Prompts provide templates for common analysis tasks
6. Health Monitoring and SLOs
- check_model_health tool provides real-time system status
- Tracks uptime, prediction volume, accuracy, drift status
- Foundation for SLA monitoring and incident response
7. Structured Error Handling
- Comprehensive logging with structured context
- Graceful degradation for missing data
- Clear error messages for troubleshooting
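Patterns 1-3 can be sketched in a few lines of Python. This is a minimal illustration, not the server's actual scoring.py or data_store.py code; the field names, attribution values, and in-memory log are assumptions:

```python
import json
import uuid
from datetime import datetime, timezone

MODEL_VERSION = "v1.2.3"
PREDICTION_LOGS: list[dict] = []  # in production: a warehouse table, not memory

def log_prediction(prediction_type: str, input_data: dict, result: dict) -> dict:
    """Stamp every prediction with model version and timestamp for later audit."""
    record = {
        "log_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "prediction_type": prediction_type,
        "input_data": input_data,
        "prediction_result": result,  # includes feature-level attributions
    }
    PREDICTION_LOGS.append(record)
    return record

record = log_prediction(
    "lead_score",
    {"company_name": "Acme Corp", "demo_requested": True},
    {"score": 82, "tier": "hot",
     "attributions": {"demo_requested": 0.20, "engagement": 0.35}},
)
print(json.dumps(record, indent=2))
```

Because every record carries the model version and full input/output, the same log doubles as an audit trail and as raw material for drift analysis and retraining datasets.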
Installation
Prerequisites
- Python 3.10+
- pip or uv for package management
Setup
# Clone or navigate to the project
cd revenue-intel-mcp
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -e ".[dev]"
# Run tests to verify installation
pytest tests/ -v
Usage
Running the Server
With Claude Desktop
Add to your Claude Desktop config (claude_desktop_config.json):
{
"mcpServers": {
"revenue-intel": {
"command": "python",
"args": [
"C:/Users/User/git-repo/revenue-intel-mcp/server.py"
]
}
}
}
Restart Claude Desktop and the server will be available.
Standalone Testing
# Run the server directly (for testing with MCP inspector)
python server.py
Available Resources
Access CRM data and model metadata:
- crm://accounts/{account_id} - Get account details
  - Example: crm://accounts/acc_001
  - Returns: Account data with usage signals, MRR, plan tier
- crm://accounts/list - List all accounts
  - Returns: Array of all 20 sample accounts
- crm://leads/{lead_id} - Get lead details
  - Example: crm://leads/lead_001
  - Returns: Lead data with engagement signals, company info
- models://lead_scorer/metadata - Model metadata
  - Returns: Version, training date, performance metrics, feature importance, drift status
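A resource handler ultimately has to map these URIs onto data-store lookups. Here is a minimal sketch of that dispatch using the stdlib urllib.parse and toy records; the real server.py routing and data shapes may differ:

```python
from urllib.parse import urlparse

# Toy in-memory store standing in for data_store.py (records are illustrative).
ACCOUNTS = {"acc_001": {"name": "Globex", "mrr": 4200, "plan": "professional"}}
LEADS = {"lead_001": {"company": "Initech", "demo_requested": True}}

def read_resource(uri: str) -> object:
    """Map a crm:// URI onto the matching data-store lookup."""
    parsed = urlparse(uri)          # scheme='crm', netloc='accounts', path='/acc_001'
    host, path = parsed.netloc, parsed.path.strip("/")
    if parsed.scheme == "crm" and host == "accounts":
        return list(ACCOUNTS.values()) if path == "list" else ACCOUNTS[path]
    if parsed.scheme == "crm" and host == "leads":
        return LEADS[path]
    raise ValueError(f"Unknown resource URI: {uri}")

print(read_resource("crm://accounts/acc_001"))
print(read_resource("crm://leads/lead_001"))
```

Keeping the URI scheme this regular is what lets MCP clients treat CRM data like any other addressable resource.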
Available Tools
Execute ML predictions and monitoring:
1. score_lead
Score a lead based on company attributes and engagement signals.
{
"company_name": "Acme Corp",
"signals": {
"website_visits_30d": 45,
"demo_requested": true,
"whitepaper_downloads": 3,
"email_engagement_score": 85,
"linkedin_engagement": true,
"free_trial_started": true
},
"industry": "technology",
"employee_count": 500
}
Returns: Score (0-100), tier (hot/warm/cold), feature attributions, explanation
2. get_conversion_insights
Predict trial-to-paid conversion probability.
{
"account_id": "acc_002"
}
Returns: Conversion probability, engagement signals, recommended actions
3. detect_churn_risk
Analyze account health and identify churn risk.
{
"account_id": "acc_006"
}
Returns: Risk score, risk tier, declining signals, intervention suggestions
4. check_model_health
Monitor ML system health and performance.
{}
Returns: Model version, uptime, prediction count, drift status, accuracy
5. log_prediction
Manually log a prediction for monitoring.
{
"prediction_data": {
"prediction_type": "lead_score",
"input_data": {...},
"prediction_result": {...}
}
}
Returns: Log ID, timestamp, success status
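The drift check behind check_model_health can be as simple as comparing the live score distribution against a training-time baseline. A sketch under assumed baseline and tolerance values (the server's actual drift logic is not shown in this README):

```python
from statistics import mean

TRAINING_BASELINE_MEAN = 55.0   # average lead score at training time (assumed)
DRIFT_TOLERANCE = 10.0          # alert when the live mean drifts this far (assumed)

def check_model_health(recent_scores: list[float]) -> dict:
    """Compare the recent prediction distribution against the training baseline."""
    live_mean = mean(recent_scores)
    drifted = abs(live_mean - TRAINING_BASELINE_MEAN) > DRIFT_TOLERANCE
    return {
        "model_version": "v1.2.3",
        "prediction_count": len(recent_scores),
        "live_mean_score": round(live_mean, 1),
        "drift_status": "drift_detected" if drifted else "healthy",
    }

print(check_model_health([52, 58, 61, 49, 55]))   # near baseline -> healthy
print(check_model_health([81, 78, 85, 90, 79]))   # shifted mean -> drift_detected
```

A production system would compare full distributions (e.g. with a population-stability or KS test) rather than just the mean, but the shape of the check is the same.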
Available Prompts
Pre-built templates for common workflows:
- analyze-account-expansion - CS team upsell analysis
  - Argument: account_id
  - Use case: Assess account readiness for tier upgrade
- weekly-lead-report - Sales leadership pipeline report
  - Argument: week_number (optional)
  - Use case: Weekly lead quality and velocity analysis
- explain-low-score - Lead score explanation
  - Argument: lead_id
  - Use case: Understand why a lead scored poorly and how to improve
Example Prompts to Try
Once connected to Claude Desktop, try these:
Lead Scoring
"Score this lead for me: Acme Corp, technology industry, 500 employees. They've visited our site 50 times, requested a demo, downloaded 3 whitepapers, have an email engagement score of 90, engaged on LinkedIn, and started a free trial."
Churn Detection
"Check the churn risk for account acc_006"
Conversion Analysis
"What's the conversion probability for trial account acc_002? What should we do to increase it?"
Model Health
"Check the health of the lead scoring model"
Data Exploration
"Show me all the trial accounts and analyze which ones are most likely to convert"
Structured Analysis
"Use the analyze-account-expansion prompt for account acc_001"
Testing
Run the comprehensive test suite:
# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_scoring.py -v
# Run with coverage
pytest tests/ --cov=. --cov-report=html
Test Coverage:
- ✅ Lead scoring (hot/warm/cold tiers)
- ✅ Churn risk detection
- ✅ Conversion probability calculation
- ✅ Feature attribution generation
- ✅ Prediction logging
- ✅ Data access layer
- ✅ Edge cases (missing data, invalid inputs)
- ✅ Mock data integrity
Project Structure
revenue-intel-mcp/
├── server.py # MCP server with resources, tools, prompts
├── scoring.py # ML scoring logic (lead score, churn, conversion)
├── models.py # Data models with type hints (Account, Lead, etc.)
├── data_store.py # Data access layer (get/store operations)
├── mock_data.py # Sample accounts, leads, prediction logs
├── config.py # Model config, thresholds, feature weights
├── tests/
│ ├── __init__.py
│ ├── test_scoring.py # Scoring logic tests
│ └── test_tools.py # Data access and integration tests
├── pyproject.toml # Python package configuration
├── .gitignore
└── README.md
Configuration
Key configuration in config.py:
- Model Version: v1.2.3
- Lead Tier Thresholds: Hot (≥70), Warm (40-70), Cold (<40)
- Feature Weights: Company size (20%), Engagement (40%), Industry (20%), Intent (20%)
- Industry Fit Scores: Technology (90), SaaS (85), Finance (80), etc.
- Churn Risk Thresholds: Critical (≥70), High (50-70), Medium (30-50), Low (<30)
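Taken together, the weights and thresholds imply a straightforward weighted blend. A sketch of how score_lead might combine them, assuming each signal group is first normalized to a 0-100 component score (an assumption for illustration, not the documented implementation):

```python
# Feature weights and tier thresholds as listed in config.py.
WEIGHTS = {"company_size": 0.20, "engagement": 0.40, "industry": 0.20, "intent": 0.20}
INDUSTRY_FIT = {"technology": 90, "saas": 85, "finance": 80}
HOT, WARM = 70, 40

def score_lead(components: dict[str, float]) -> tuple[float, str]:
    """Blend 0-100 component scores using the configured weights, then assign a tier."""
    score = sum(WEIGHTS[name] * value for name, value in components.items())
    tier = "hot" if score >= HOT else "warm" if score >= WARM else "cold"
    return round(score, 1), tier

score, tier = score_lead({
    "company_size": 75,                      # e.g. 500 employees (illustrative mapping)
    "engagement": 85,                        # visits, demo, email engagement
    "industry": INDUSTRY_FIT["technology"],  # industry fit score of 90
    "intent": 80,                            # free trial started
})
print(score, tier)  # 0.2*75 + 0.4*85 + 0.2*90 + 0.2*80 = 83.0 -> "hot"
```

The per-term weighted sum is also what makes the feature attributions cheap to produce: each term's contribution is already computed on the way to the score.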
Sample Data
20 Accounts across industries:
- 3 trial accounts (exploring product)
- 3 at-risk accounts (declining usage)
- 14 active accounts (various tiers: starter, professional, enterprise)
30 Leads with varying quality:
- Hot leads: High engagement, demo requested, enterprise size
- Warm leads: Moderate engagement, mid-market
- Cold leads: Low engagement, small companies
Production Deployment Notes
This demo uses in-memory storage. For production deployment:
Data Layer
- Replace mock_data.py with connections to:
- Snowflake/BigQuery for historical data and feature store
- PostgreSQL/MySQL for operational CRM data
- Redis for real-time feature caching
Model Serving
- Deploy scoring logic as:
- FastAPI/Flask service for REST API
- AWS Lambda/Cloud Functions for serverless
- SageMaker/Vertex AI for managed ML serving
Monitoring
- Implement production monitoring:
- Datadog/New Relic for application metrics
- MLflow/Weights & Biases for ML experiment tracking
- Grafana/Kibana for prediction drift dashboards
- PagerDuty for alert routing
MLOps Pipeline
- Establish model lifecycle management:
- Feature pipelines (dbt, Airflow) for data freshness
- Training pipelines with version control (Git, DVC)
- A/B testing framework for model evaluation
- Automated retraining based on drift detection
- Shadow deployments for validation before rollout
Data Quality
- Add comprehensive data validation:
- Great Expectations for input data quality checks
- Schema evolution handling with Pydantic
- Feature drift monitoring against training distributions
Security & Compliance
- Implement security controls:
- Authentication/authorization for API access
- PII handling and data anonymization
- Audit logging for regulatory compliance (GDPR, SOC2)
- Rate limiting and DDoS protection
License
MIT License - feel free to use as a template for your own ML systems.
Contributing
This is a demonstration project. For production use, adapt the patterns to your specific:
- Data infrastructure (warehouse, feature store, CRM)
- ML frameworks (scikit-learn, XGBoost, PyTorch)
- Deployment environment (cloud provider, Kubernetes, serverless)
- Monitoring and observability stack
Built with: Python 3.10+ | MCP SDK | Type hints | Structured logging | pytest
Demonstrates: Production ML patterns | Business integration | Observability | Explainability