Text Classification MCP Server (Model2Vec)
A powerful Model Context Protocol (MCP) server that provides comprehensive text classification tools using fast static embeddings from Model2Vec (Minish Lab).
🛠️ Complete MCP Tools & Resources
This server provides 6 essential tools, 2 resources, and 1 prompt template for text classification:
🏷️ Classification Tools
- `classify_text` - Classify a single text with confidence scores
- `batch_classify` - Classify multiple texts simultaneously
📝 Category Management Tools
- `add_custom_category` - Add individual custom categories
- `batch_add_custom_categories` - Add multiple categories at once
- `list_categories` - View all available categories
- `remove_categories` - Remove unwanted categories
📊 Resources
- `categories://list` - Access the category list programmatically
- `model://info` - Get model and system information
💬 Prompt Templates
- `classification_prompt` - Ready-to-use classification prompt template
🚀 Key Features
- Multiple Transports: Supports stdio (local) and HTTP/SSE (remote) transports
- Fast Classification: Uses efficient static embeddings from Model2Vec
- 10 Default Categories: Technology, business, health, sports, entertainment, politics, science, education, travel, food
- Custom Categories: Add your own categories with descriptions
- Batch Processing: Classify multiple texts at once
- Resource Endpoints: Access category lists and model information
- Prompt Templates: Built-in prompts for classification tasks
- Production Ready: Docker, nginx, systemd support
📋 Installation
Prerequisites
- Python 3.10+
- `uv` package manager (recommended) or `pip`
Quick Setup
# Install dependencies
pip install -r requirements.txt
# Or with uv
uv sync
🏃‍♂️ Running the Server
Option 1: Stdio Transport (Local/Traditional)
# Run with stdio (default - for Claude Desktop local config)
python text_classifier_server.py
# Or explicitly
python text_classifier_server.py --stdio
Option 2: HTTP Transport (Remote/Web)
# Run with HTTP transport on localhost:8000
python text_classifier_server.py --http
# Run on custom port
python text_classifier_server.py --http 9000
# Use the convenience script
./start_server.sh http 8000
Option 3: Using the HTTP Runner
# More options with the HTTP runner
python run_http_server.py --transport http --host 127.0.0.1 --port 8000 --debug
🔧 Configuration
For Claude Desktop
Stdio Transport (Local)
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"text-classifier": {
"command": "python",
"args": ["path/to/text_classifier_server.py"],
"env": {}
}
}
}
HTTP Transport (Remote)
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"text-classifier-http": {
"url": "http://localhost:8000/sse",
"env": {}
}
}
}
For VS Code
Add to .vscode/mcp.json:
{
"servers": {
"text-classifier": {
"type": "sse",
"url": "http://localhost:8000/sse",
"description": "Text classification server using static embeddings"
}
}
}
For Cursor IDE
Similar to Claude Desktop, but check Cursor's MCP documentation for the exact configuration path.
🛠️ Available Tools
classify_text
Classify a single text into predefined categories with confidence scores.
Parameters:
- `text` (string): The text to classify
- `top_k` (int, optional): Number of top categories to return (default: 3)
Returns: JSON with predictions, confidence scores, and category descriptions
Example:
classify_text("Apple announced new AI features", top_k=3)
batch_classify
Classify multiple texts simultaneously for efficient processing.
Parameters:
- `texts` (list): List of texts to classify
- `top_k` (int, optional): Number of top categories per text (default: 1)
Returns: JSON with batch classification results
Example:
batch_classify(["Tech news", "Sports update", "Business report"], top_k=2)
add_custom_category
Add a new custom category for classification.
Parameters:
- `category_name` (string): Name of the new category
- `description` (string): Description used to generate the category embedding
Returns: JSON with operation result
Example:
add_custom_category("automotive", "Cars, vehicles, transportation, automotive industry")
batch_add_custom_categories
Add multiple custom categories in a single operation for efficiency.
Parameters:
- `categories_data` (list): List of dictionaries with 'name' and 'description' keys
Returns: JSON with batch operation results
Example:
batch_add_custom_categories([
{"name": "automotive", "description": "Cars, vehicles, transportation"},
{"name": "music", "description": "Music, songs, artists, albums, concerts"}
])
list_categories
List all available categories and their descriptions.
Parameters: None
Returns: JSON with all categories and their descriptions
remove_categories
Remove one or multiple categories from the classification system.
Parameters:
- `category_names` (list): List of category names to remove
Returns: JSON with removal results for each category
Example:
remove_categories(["automotive", "custom_category"])
📚 Available Resources
- `categories://list`: Get list of available categories with metadata
- `model://info`: Get information about the loaded Model2Vec model and system status
💬 Available Prompts
- `classification_prompt`: Template for text classification tasks with context and instructions
Parameters:
- `text` (string): The text to classify
Returns: Formatted prompt for classification with available categories listed
🧪 Testing
Test HTTP Server
# Test the HTTP server endpoints
python test_http_client.py
# Check server status
./check_server.sh
# Test with curl
curl http://localhost:8000/sse
Test with MCP Inspector
# For stdio transport
mcp dev text_classifier_server.py
# For HTTP transport (start server first)
# Then connect MCP Inspector to http://localhost:8000/sse
🐳 Docker Deployment
Basic Docker
# Build and run
docker build -t text-classifier-mcp .
docker run -p 8000:8000 text-classifier-mcp
Docker Compose
# Basic deployment
docker-compose up
# With nginx reverse proxy
docker-compose --profile production up
🚀 Production Deployment
Systemd Service
# Copy service file
sudo cp text-classifier-mcp.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable text-classifier-mcp
sudo systemctl start text-classifier-mcp
Nginx Reverse Proxy
The included nginx.conf provides:
- HTTP/HTTPS termination
- Proper SSE headers
- Load balancing support
- SSL configuration template
🌐 Transport Comparison
| Feature | Stdio Transport | HTTP Transport |
|---|---|---|
| Use Case | Local integration | Remote/web access |
| Performance | Fastest | Very fast |
| Setup | Simple | Requires server |
| Scalability | One client | Multiple clients |
| Network | Local only | Network accessible |
| Security | Process isolation | HTTP-based auth |
| Debugging | MCP Inspector | HTTP tools + Inspector |
🔍 Troubleshooting
Common Issues
- Server won't start
# Check if the port is in use
lsof -i :8000
# Try a different port
python run_http_server.py --port 9000
- Claude Desktop connection fails
# Check server status
./check_server.sh
# Verify config file syntax
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json | python -m json.tool
- Model download fails
# Manual model download
python -c "from model2vec import StaticModel; StaticModel.from_pretrained('minishlab/potion-base-8M')"
Debug Mode
# Enable debug logging
python run_http_server.py --debug
# Check logs
tail -f logs/mcp_server.log
📖 Technical Details
- Model: `minishlab/potion-base-8M` from Model2Vec
- Similarity: Cosine similarity between text and category embeddings
- Performance: ~30MB model, fast inference with static embeddings
- Protocol: MCP specification 2024-11-05
- Transports: stdio, HTTP+SSE, Streamable HTTP
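To make the approach concrete, here is a rough, illustrative sketch of embedding-and-cosine-similarity classification with Model2Vec, using the same `StaticModel.from_pretrained` call shown in the troubleshooting section. The category names and descriptions below are examples, and the server's actual scoring details may differ:
```python
# Illustrative sketch of classification via Model2Vec static embeddings
# and cosine similarity -- not the server's exact implementation.
import numpy as np
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/potion-base-8M")

# Example category descriptions (the server ships 10 defaults).
categories = {
    "technology": "Software, hardware, AI, gadgets, and the tech industry",
    "sports": "Games, athletes, teams, scores, and competitions",
    "food": "Cooking, recipes, restaurants, and cuisine",
}
category_embeddings = model.encode(list(categories.values()))

def classify(text: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Rank categories by cosine similarity to the text embedding."""
    text_embedding = model.encode([text])[0]
    sims = category_embeddings @ text_embedding / (
        np.linalg.norm(category_embeddings, axis=1)
        * np.linalg.norm(text_embedding)
    )
    ranked = np.argsort(sims)[::-1][:top_k]
    names = list(categories)
    return [(names[i], float(sims[i])) for i in ranked]

print(classify("Apple announced new AI features"))
```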
🤝 Contributing
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
📄 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
- Model2Vec by Minish Lab for fast static embeddings
- Anthropic for the Model Context Protocol specification
- FastMCP for the excellent Python MCP framework
Need help? Check the troubleshooting section or open an issue in the repository.