HistGradientBoostingClassifier MCP Server
A Model Context Protocol (MCP) server that provides tools for training, predicting, and managing sklearn's HistGradientBoostingClassifier models.
Features
This MCP server exposes the following tools (a sketch of a tool definition follows the list):
- create_classifier: Create a new HistGradientBoostingClassifier with custom parameters
- train_model: Train a classifier on provided data
- predict: Make class predictions on new data
- predict_proba: Get class probabilities for predictions
- score_model: Evaluate model accuracy on test data
- get_model_info: Get detailed information about a model
- list_models: List all available models
- delete_model: Remove a model from memory
- save_model: Serialize a model to base64 string
- load_model: Load a model from serialized string
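For orientation, here is a minimal sketch of how one of these tools could be registered, assuming the FastMCP API from the mcp package; the in-memory MODELS registry and the function body are illustrative assumptions, not the actual server.py:

# Sketch of a single tool registration (assumed FastMCP API; not the real server.py).
from mcp.server.fastmcp import FastMCP
from sklearn.ensemble import HistGradientBoostingClassifier

mcp = FastMCP("histgradientboosting")
MODELS: dict[str, HistGradientBoostingClassifier] = {}  # assumed in-memory registry

@mcp.tool()
def create_classifier(model_id: str, learning_rate: float = 0.1,
                      max_iter: int = 100, max_leaf_nodes: int = 31) -> str:
    """Create a new classifier and store it under model_id."""
    MODELS[model_id] = HistGradientBoostingClassifier(
        learning_rate=learning_rate,
        max_iter=max_iter,
        max_leaf_nodes=max_leaf_nodes,
    )
    return f"Created model '{model_id}'"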
Installation
pip install -r requirements.txt
Local Development
Run the server locally:
uv run --with mcp server.py
The server will start on http://localhost:8000 by default.
Railway Deployment
Prerequisites
- A Railway account (sign up at https://railway.app)
- Railway CLI installed (optional, can use web interface)
- Git repository with your code (GitHub, GitLab, or Bitbucket)
Deploy via Railway Web Interface
- Go to https://railway.app and create a new project
- Click "New Project" → "Deploy from GitHub repo"
- Select your repository containing this MCP server
- Railway will automatically detect the Python project and use the Procfile
- The server will be deployed and you'll get a public URL (e.g., https://your-app.railway.app)
Deploy via Railway CLI
# Install Railway CLI
npm i -g @railway/cli
# Login to Railway
railway login
# Initialize project (in your project directory)
railway init
# Link to existing project or create new one
railway link
# Deploy
railway up
Environment Variables
No environment variables are required for basic operation. Railway automatically provides:
- PORT: The port your application should listen on
- The server automatically binds to 0.0.0.0 to accept external connections (as sketched below)
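For reference, the binding might look like this in server.py; this is a sketch assuming the FastMCP settings API, not a confirmed excerpt of the actual code:

# Sketch: bind to Railway's injected PORT on all interfaces (assumed FastMCP API).
import os
from mcp.server.fastmcp import FastMCP

port = int(os.environ.get("PORT", 8000))  # Railway sets PORT; fall back to 8000 locally
mcp = FastMCP("histgradientboosting", host="0.0.0.0", port=port)

if __name__ == "__main__":
    mcp.run(transport="streamable-http")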
Verifying Deployment
Once deployed, your MCP server will be available at your Railway URL. You can test it by:
- Visiting https://your-app.railway.app in a browser (should show MCP server info or a 404, which is normal)
- Using the MCP Inspector: npx -y @modelcontextprotocol/inspector and connecting to your Railway URL
- Connecting from an MCP client using the streamable-http transport
Current Deployment URL: https://web-production-a620a.up.railway.app
Usage
Once deployed, the MCP server will be accessible at your Railway URL. You can connect to it using any MCP-compatible client.
Example: Using with Claude Desktop
Add to your Claude Desktop MCP configuration (~/Library/Application Support/Claude/claude_desktop_config.json on Mac):
{
"mcpServers": {
"histgradientboosting": {
"url": "https://web-production-a620a.up.railway.app",
"transport": "streamable-http"
}
}
}
Example API Calls
The server exposes tools that can be called via MCP protocol. Here's what each tool does:
Create a classifier:
create_classifier(
model_id="my_model",
learning_rate=0.1,
max_iter=100,
max_leaf_nodes=31
)
Train the model:
train_model(
model_id="my_model",
X=[[1, 2], [3, 4], [5, 6]],
y=[0, 1, 0]
)
Make predictions:
predict(
model_id="my_model",
X=[[2, 3], [4, 5]]
)
Get probabilities:
predict_proba(
model_id="my_model",
X=[[2, 3], [4, 5]]
)
Model Storage
Currently, models are stored in-memory. This means:
- Models persist only during the server's lifetime
- Restarting the server will lose all models
- For production use, consider implementing persistent storage (database, file system, or cloud storage); see the sketch after this list
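As one option, here is a minimal file-system sketch using joblib; the function names and the models directory are illustrative assumptions, not tools the server currently exposes:

# Sketch: file-based persistence with joblib so models survive restarts.
# Function names and the models/ directory are assumptions, not existing tools.
from pathlib import Path
import joblib

MODEL_DIR = Path("models")
MODEL_DIR.mkdir(exist_ok=True)

def save_model_to_disk(model_id: str, model) -> None:
    joblib.dump(model, MODEL_DIR / f"{model_id}.joblib")

def load_model_from_disk(model_id: str):
    return joblib.load(MODEL_DIR / f"{model_id}.joblib")

Note that Railway's default filesystem is ephemeral across deploys, so a mounted volume or an external store is still needed for real durability.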
API Reference
HistGradientBoostingClassifier Parameters
All standard sklearn HistGradientBoostingClassifier parameters are supported:
- loss: Loss function (default: 'log_loss')
- learning_rate: Learning rate/shrinkage (default: 0.1)
- max_iter: Maximum boosting iterations (default: 100)
- max_leaf_nodes: Maximum leaves per tree (default: 31)
- max_depth: Maximum tree depth (default: None)
- min_samples_leaf: Minimum samples per leaf (default: 20)
- l2_regularization: L2 regularization (default: 0.0)
- max_features: Feature subsampling proportion (default: 1.0)
- max_bins: Maximum histogram bins (default: 255)
- early_stopping: Enable early stopping (default: 'auto')
- validation_fraction: Validation set fraction (default: 0.1)
- n_iter_no_change: Early stopping patience (default: 10)
- random_state: Random seed (default: None)
- verbose: Verbosity level (default: 0)
See the sklearn documentation for detailed parameter descriptions.
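These parameters map one-to-one onto the sklearn constructor; for comparison, the equivalent direct usage without the server in the loop is:

# The same defaults passed straight to sklearn, outside the MCP server.
from sklearn.ensemble import HistGradientBoostingClassifier

clf = HistGradientBoostingClassifier(
    loss="log_loss",
    learning_rate=0.1,
    max_iter=100,
    max_leaf_nodes=31,
    random_state=None,
)
clf.fit([[1, 2], [3, 4], [5, 6]], [0, 1, 0])
print(clf.predict([[2, 3], [4, 5]]))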
License
MIT