SensorMCP Server
A SensorMCP Model Context Protocol (MCP) Server that enables automated dataset creation and custom object detection model training through natural language interactions. This project integrates computer vision capabilities with Large Language Models using the MCP standard.
🌟 About
SensorMCP Server combines the power of foundation models (like GroundedSAM) with custom model training (YOLOv8) to create a seamless workflow for object detection. Using the Model Context Protocol, it enables LLMs to:
- Automatically label images using foundation models
- Create custom object detection datasets
- Train specialized detection models
- Download images from Unsplash for training data
> [!NOTE]
> The Model Context Protocol (MCP) enables seamless integration between LLMs and external tools, making this ideal for AI-powered computer vision workflows.
✨ Features
- Foundation Model Integration: Uses GroundedSAM for automatic image labeling
- Custom Model Training: Fine-tune YOLOv8 models on your specific objects
- Image Data Management: Download images from Unsplash or import local images
- Ontology Definition: Define custom object classes through natural language
- MCP Protocol: Native integration with LLM workflows and chat interfaces
- Fixed Data Structure: Organized directory layout for reproducible workflows
🛠️ Installation
Prerequisites
- uv for package management
- Python 3.13+ (`uv python install 3.13`)
- CUDA-compatible GPU (recommended for training)
Setup
1. Clone the repository:

   ```shell
   git clone <repository-url>
   cd sensor-mcp
   ```

2. Install dependencies:

   ```shell
   uv sync
   ```

3. Set up environment variables (create a `.env` file):

   ```shell
   UNSPLASH_API_KEY=your_unsplash_api_key_here
   ```
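Once the `.env` file is in place, the server can read the key from the process environment. A minimal sketch (the helper name is illustrative, not part of the codebase), assuming the file has already been loaded into the environment, e.g. by python-dotenv or your shell:

```python
import os

def get_unsplash_key(env_var: str = "UNSPLASH_API_KEY") -> str:
    """Read the Unsplash API key from the environment.

    Assumes the .env file has been loaded into the process environment
    (e.g. via python-dotenv or the shell) before the server starts.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; image download will fail")
    return key
```

Failing fast here gives a clearer error than a rejected Unsplash request later.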
🚀 Usage
Running the MCP Server
For MCP integration (recommended):

```shell
uv run src/zoo_mcp.py
```

For the standalone web server:

```shell
uv run src/server.py
```
MCP Configuration
Add to your MCP client configuration:
```json
{
  "mcpServers": {
    "sensormcp-server": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/sensor-mcp",
        "run",
        "src/zoo_mcp.py"
      ]
    }
  }
}
```
Available MCP Tools
- `list_available_models()` - View supported base and target models
- `define_ontology(objects_list)` - Define object classes to detect
- `set_base_model(model_name)` - Initialize the foundation model for labeling
- `set_target_model(model_name)` - Initialize the target model for training
- `fetch_unsplash_images(query, max_images)` - Download training images
- `import_images_from_folder(folder_path)` - Import local images
- `label_images()` - Auto-label images using the base model
- `train_model(epochs, device)` - Train the custom detection model
Example Workflow
Through your MCP-enabled LLM interface:
1. Define what to detect:
   > Define ontology for "tiger, elephant, zebra"
2. Set up models:
   > Set base model to grounded_sam
   > Set target model to yolov8n.pt
3. Get training data:
   > Fetch 50 images from Unsplash for "wildlife animals"
4. Create dataset:
   > Label all images using the base model
5. Train custom model:
   > Train model for 100 epochs on device 0
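The steps above correspond to an ordered sequence of MCP tool calls. The sketch below only records that sequence; `call` is a stand-in for a real MCP client invocation, and the keyword argument names are illustrative:

```python
# Hypothetical sketch: the tool names match the "Available MCP Tools"
# list; `call` merely records each invocation instead of executing it.
calls = []

def call(tool: str, **kwargs):
    calls.append((tool, kwargs))

call("define_ontology", objects_list=["tiger", "elephant", "zebra"])
call("set_base_model", model_name="grounded_sam")
call("set_target_model", model_name="yolov8n.pt")
call("fetch_unsplash_images", query="wildlife animals", max_images=50)
call("label_images")
call("train_model", epochs=100, device=0)
```

The ordering matters: the ontology and both models must be set before labeling, and labeling must finish before training.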
📁 Project Structure
```
sensor-mcp/
├── src/
│   ├── server.py           # Main MCP server implementation
│   ├── zoo_mcp.py          # MCP entry point
│   ├── models.py           # Model management and training
│   ├── image_utils.py      # Image processing and Unsplash API
│   ├── state.py            # Application state management
│   └── data/               # Created automatically
│       ├── raw_images/     # Original/unlabeled images
│       ├── labeled_images/ # Auto-labeled datasets
│       └── models/         # Trained model weights
├── static/                 # Web interface assets
└── index.html              # Web interface template
```
🔧 Supported Models
Base Models (for auto-labeling)
- GroundedSAM: Foundation model for object detection and segmentation
Target Models (for training)
- YOLOv8n.pt: Nano - fastest inference
- YOLOv8s.pt: Small - balanced speed/accuracy
- YOLOv8m.pt: Medium - higher accuracy
- YOLOv8l.pt: Large - high accuracy
- YOLOv8x.pt: Extra Large - highest accuracy
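The size/accuracy tradeoff above can be captured in a small illustrative helper. The preference labels (`speed`, `balanced`, `accuracy`) are hypothetical, not part of the server's API; the returned name is what you would pass to `set_target_model`:

```python
# Illustrative only: the labels below are not MCP tool parameters.
YOLOV8_CHECKPOINTS = {
    "speed": "yolov8n.pt",     # nano: fastest inference
    "balanced": "yolov8s.pt",  # small: balanced speed/accuracy
    "accuracy": "yolov8x.pt",  # extra large: highest accuracy
}

def pick_target_model(preference: str = "balanced") -> str:
    """Return a checkpoint name to pass to set_target_model()."""
    try:
        return YOLOV8_CHECKPOINTS[preference]
    except KeyError:
        raise ValueError(f"unknown preference: {preference!r}") from None
```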
🌐 API Integration
Unsplash API
To use image download functionality:
- Create an account at Unsplash Developers
- Create a new application
- Add your access key to the `.env` file
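For reference, image search goes through Unsplash's public `/search/photos` endpoint with `Client-ID` authentication. A minimal sketch that only builds the request (it does not send it; the function name is illustrative):

```python
import urllib.parse
import urllib.request

UNSPLASH_SEARCH_URL = "https://api.unsplash.com/search/photos"

def build_search_request(query: str, per_page: int, api_key: str) -> urllib.request.Request:
    """Build an (unsent) Unsplash photo-search request.

    Uses the Client-ID authentication scheme from the Unsplash API docs.
    """
    params = urllib.parse.urlencode({"query": query, "per_page": per_page})
    return urllib.request.Request(
        f"{UNSPLASH_SEARCH_URL}?{params}",
        headers={"Authorization": f"Client-ID {api_key}"},
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) returns JSON whose `results` entries carry the image URLs to download.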
🛠️ Development
Running Tests
```shell
uv run pytest
```
Code Formatting
```shell
uv run black src/
```
📋 Requirements
See pyproject.toml for the full dependency list. Key dependencies:

- `mcp[cli]` - Model Context Protocol
- `autodistill` - Foundation model integration
- `torch` & `torchvision` - Deep learning framework
- `ultralytics` - YOLOv8 implementation
🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
📖 Citation
If you use this code or data in your research, please cite our paper:
```bibtex
@inproceedings{Guo2025,
  author    = {Guo, Yunqi and Zhu, Guanyu and Liu, Kaiwei and Xing, Guoliang},
  title     = {A Model Context Protocol Server for Custom Sensor Tool Creation},
  booktitle = {3rd International Workshop on Networked AI Systems (NetAISys '25)},
  year      = {2025},
  month     = jun,
  address   = {Anaheim, CA, USA},
  publisher = {ACM},
  doi       = {10.1145/3711875.3736687},
  isbn      = {979-8-4007-1453-5/25/06}
}
```
📄 License
This project is licensed under the MIT License.
📧 Contact
For questions about the zoo dataset mentioned in development, email yq@anysign.net.