MCP Server Demo in Python
Okay, here's a basic implementation of a Model Communication Protocol (MCP) server in Python, exposed over the network using Server-Sent Events (SSE) transport. This is a simplified example and will need to be adapted to the specific requirements of your MCP.

**Important Considerations:**

* **Error Handling:** This example has minimal error handling. In a production environment, you'll need robust error handling to deal with network issues, invalid requests, and model errors.
* **Security:** This example does *not* include any security measures (authentication, authorization, encryption). If you're dealing with sensitive data or untrusted clients, you *must* implement appropriate security. Consider using HTTPS and authentication mechanisms.
* **Scalability:** This simple server is not designed for high concurrency. For production use, you'll likely need a production-grade ASGI deployment (e.g., Uvicorn workers behind Gunicorn) rather than the bare development setup shown here.
* **MCP Definition:** This code assumes a very basic MCP where the client sends a JSON payload and the server responds with a JSON payload. You'll need to adapt the `process_request` function to handle the specific commands and data formats defined by your MCP.
* **Dependencies:** This example uses the `sse_starlette` and `starlette` libraries. Install them with: `pip install sse_starlette starlette uvicorn`

```python
import asyncio
import json
import time

from sse_starlette.sse import EventSourceResponse
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import PlainTextResponse
from starlette.routing import Route


# Dummy model (replace with your actual model)
def dummy_model(input_data):
    """
    A placeholder for your actual model processing.
    Simulates some processing time.
    """
    print(f"Processing input: {input_data}")
    time.sleep(1)  # Simulate processing (blocking; see process_request)
    return {"output": f"Model processed: {input_data}"}


async def process_request(data):
    """
    Processes the incoming request according to the MCP.
    This is where you'd handle different MCP commands.
    """
    try:
        # Run the blocking model call in a worker thread so it
        # doesn't stall the event loop.
        return await asyncio.to_thread(dummy_model, data)
    except Exception as e:
        print(f"Error processing request: {e}")
        return {"error": str(e)}


async def sse_stream(request: Request):
    """
    Handles the SSE stream: reads the request payload and streams
    model outputs back as SSE events.
    """

    async def event_generator():
        try:
            while True:
                if await request.is_disconnected():
                    print("Client disconnected")
                    break
                try:
                    # Read the JSON payload from the request body.
                    # Note: a browser EventSource always issues a GET with no
                    # body, so in a real application you would typically pull
                    # data from a queue, database, or other source instead.
                    data = await request.json()
                    result = await process_request(data)
                    yield {
                        "event": "message",  # You can define different event types
                        "data": json.dumps(result),
                    }
                except json.JSONDecodeError:
                    yield {
                        "event": "error",
                        "data": json.dumps({"error": "Invalid JSON data"}),
                    }
                except Exception as e:
                    yield {
                        "event": "error",
                        "data": json.dumps({"error": str(e)}),
                    }
                await asyncio.sleep(0.5)  # Adjust the interval as needed
        except asyncio.CancelledError:
            print("SSE stream cancelled")

    return EventSourceResponse(event_generator())


async def health_check(request: Request):
    """Simple health check endpoint."""
    return PlainTextResponse("OK")


routes = [
    # POST is allowed so the client below can send data to the same endpoint
    Route("/mcp_stream", endpoint=sse_stream, methods=["GET", "POST"]),
    Route("/health", endpoint=health_check),
]

app = Starlette(debug=True, routes=routes)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```

**Key improvements and explanations:**

* **SSE Implementation:** Uses `sse_starlette` to correctly implement the SSE protocol, handling the necessary headers and formatting.
* **Asynchronous Operations:** Uses `async`/`await` for non-blocking operation, which is crucial for handling multiple clients concurrently. This matters especially for `process_request`, which might involve I/O or long-running computations.
* **Error Handling:** Includes basic `try...except` blocks to catch errors during JSON decoding and model processing, and sends error messages back to the client as SSE events.
* **Client Disconnection Handling:** Checks for client disconnection with `await request.is_disconnected()` and exits the SSE stream gracefully, so the server stops sending events to a disconnected client.
* **JSON Handling:** Uses `json.dumps()` to serialize the data sent as SSE events, ensuring the client receives valid JSON.
* **Data Source:** The example reads data from the request body with `await request.json()`; this is how the client sends data to the server.
* **Event Types:** The `yield` statements include an `event` field, allowing the client to subscribe to different event types (e.g., "message", "error").
* **Health Check:** Adds a simple `/health` endpoint for monitoring.
* **Starlette Framework:** Uses Starlette, a lightweight ASGI framework well suited for asynchronous applications.
* **Uvicorn:** Uses Uvicorn as the ASGI server to run the application.
* **Clearer Comments:** Adds more comments to explain the code.

**How to Run:**

1. **Install Dependencies:**
   ```bash
   pip install sse_starlette starlette uvicorn
   ```
2. **Save:** Save the code as a Python file (e.g., `mcp_server.py`).
3. **Run:**
   ```bash
   python mcp_server.py
   ```

**Client-Side Example (JavaScript/HTML):**

```html
<!DOCTYPE html>
<html>
<head>
  <title>MCP Client</title>
</head>
<body>
  <h1>MCP Client</h1>
  <div id="output"></div>
  <script>
    const outputDiv = document.getElementById('output');
    const eventSource = new EventSource('http://localhost:8000/mcp_stream'); // Replace with your server URL

    eventSource.onmessage = function(event) {
      const data = JSON.parse(event.data);
      outputDiv.innerHTML += `<p>Received: ${JSON.stringify(data)}</p>`;
    };

    eventSource.onerror = function(error) {
      console.error('SSE error:', error);
      outputDiv.innerHTML += `<p>Error: ${error}</p>`;
    };

    // Function to send data to the server
    function sendData(data) {
      fetch('http://localhost:8000/mcp_stream', { // Same endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data)
      })
        .then(response => {
          if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
          }
          // No need to process the response here; SSE handles the updates
          console.log('Data sent successfully');
        })
        .catch(error => {
          console.error('Error sending data:', error);
          outputDiv.innerHTML += `<p>Error sending data: ${error}</p>`;
        });
    }

    // Example usage: send data every 5 seconds
    setInterval(() => {
      const inputData = { message: `Hello from client at ${new Date().toLocaleTimeString()}` };
      sendData(inputData);
    }, 5000);
  </script>
</body>
</html>
```

**Explanation of the Client:**

1. **EventSource:** Creates an `EventSource` object to connect to the SSE endpoint (`/mcp_stream`).
2. **`onmessage` Handler:** Called whenever the server sends an SSE event of type "message". It parses the JSON data and displays it in the `output` div.
3. **`onerror` Handler:** Called if there's an error with the SSE connection. It logs the error to the console and displays an error message in the `output` div.
4. **`sendData` Function:** Sends data to the server with a `POST` request to the same `/mcp_stream` endpoint, setting the `Content-Type` header to `application/json` and serializing the payload with `JSON.stringify()`. The server processes this data and sends back the result as an SSE event.
5. **`setInterval`:** Calls `sendData` every 5 seconds to simulate the client sending data to the server.

**How to Use:**

1. **Run the Server:** Start the Python server.
2. **Open the HTML:** Open the HTML file in a web browser. You should see the client sending data every 5 seconds and the server's results arriving as SSE events, which are then displayed in the browser.

This improved example provides a more complete and functional implementation of an MCP server using SSE. Remember to adapt the `process_request` function and the client-side code to match the specific requirements of your MCP, and to add proper error handling, security, and scalability measures for production use.
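The SSE stream can also be consumed from Python rather than a browser, since `EventSource` is browser-only. The sketch below shows a minimal parser for raw SSE event blocks in the `event:`/`data:` line format the server above emits; the helper name `parse_sse_event` and the sample payload are illustrative, not part of the server code, and a production client should follow the full SSE parsing rules rather than this simplified version.

```python
import json


def parse_sse_event(block: str) -> dict:
    """Parse one raw SSE event block (the lines between blank-line
    separators) into a dict with 'event' and 'data' keys."""
    event = {"event": "message", "data": ""}  # "message" is the SSE default
    data_lines = []
    for line in block.splitlines():
        if line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            # Multi-line data fields are joined with newlines, as in the SSE spec
            data_lines.append(line[len("data:"):].strip())
    event["data"] = "\n".join(data_lines)
    return event


if __name__ == "__main__":
    # A raw block shaped like the ones mcp_server.py yields:
    raw = 'event: message\ndata: {"output": "Model processed: hi"}'
    evt = parse_sse_event(raw)
    print(evt["event"])                       # message
    print(json.loads(evt["data"])["output"])  # Model processed: hi
```

Pairing this parser with any streaming HTTP client (reading the response line by line and splitting on blank lines) gives a simple Python-side consumer for the `/mcp_stream` endpoint.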
Perteghella
README
An Example MCP Server Implemented in Python

This repository contains a simple Model Communication Protocol (MCP) server implemented in Python. The server demonstrates the basic functionality of an MCP server and can be used for testing and development purposes.

By default, the server runs with uvicorn on port 8000. To expose the server over the network, use the sse transport.
Setup Instructions

1. Create and activate a virtual environment

Run the following commands to set up your Python environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
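The `requirements.txt` file itself is not reproduced in this README. Based on the tools the README mentions (the MCP SDK, uvicorn, and pytest with coverage), it presumably contains something like the following; the exact package names and any version pins are an assumption:

```
mcp
uvicorn
pytest
pytest-cov
```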
Starting the Server

To start the server, run:

```bash
python3 server.py
```
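`server.py` itself is not included in this README. Based on the functions the test suite covers (`add()`, `subtract()`, `get_greeting()`), a minimal sketch using the official MCP Python SDK's FastMCP helper might look like the following; the tool/resource names and wiring are assumptions, not the repository's actual code:

```python
def add(a: int, b: int) -> int:
    """Add two numbers, rejecting non-numeric input."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("add() expects numeric arguments")
    return a + b


def subtract(a: int, b: int) -> int:
    """Subtract b from a, rejecting non-numeric input."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("subtract() expects numeric arguments")
    return a - b


def get_greeting(name: str) -> str:
    """Return a personalized greeting."""
    return f"Hello, {name}!"


if __name__ == "__main__":
    # Wiring into the MCP SDK is done here so the pure functions above
    # stay importable and testable without the SDK installed.
    from mcp.server.fastmcp import FastMCP  # assumes the `mcp` package

    mcp = FastMCP("demo-server")
    mcp.tool()(add)
    mcp.tool()(subtract)
    mcp.resource("greeting://{name}")(get_greeting)
    mcp.run(transport="sse")  # FastMCP serves SSE on port 8000 by default
```

Keeping the arithmetic and greeting functions as plain Python makes them directly testable with pytest, which matches the test coverage described below.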
Verifying the Server Is Running

You can verify that the server is running on port 8000 with the following commands:

- Check active connections: `netstat -n | grep 8000`
- Check which process is using the port: `lsof -i :8000`
- Test the server with `curl`: `curl http://0.0.0.0:8000/sse`
Running the Tests

The project includes a comprehensive test suite built with pytest. To run the tests:

```bash
# Install test dependencies
pip install -r requirements.txt

# Run the tests with verbose output
pytest -v

# Run the tests with a coverage report
pytest --cov=.
```
Test Coverage

The test suite includes:

- Basic functionality tests for addition and subtraction
- Input validation and type checking
- Edge cases with large numbers
- API endpoint tests for the greeting

All tests live in `test_server.py` and cover:

- The `add()` function
- The `subtract()` function
- The `get_greeting()` function
- Type error handling
- Edge case handling
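`test_server.py` is not reproduced in this README. A sketch of what tests with this coverage might look like, assuming `server.py` exposes the three functions listed above (the stub implementations here stand in for `from server import add, subtract, get_greeting` and are not the repository's actual code):

```python
# Stand-ins for `from server import add, subtract, get_greeting`.
def add(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("numeric arguments required")
    return a + b

def subtract(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("numeric arguments required")
    return a - b

def get_greeting(name):
    return f"Hello, {name}!"


def test_add_basic():
    assert add(2, 3) == 5

def test_subtract_basic():
    assert subtract(10, 4) == 6

def test_large_numbers():
    # Python ints are arbitrary precision, so very large values still work
    assert add(10**18, 10**18) == 2 * 10**18

def test_type_checking():
    # A real pytest suite would use pytest.raises(TypeError) here
    try:
        add("2", 3)
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError")

def test_greeting():
    assert get_greeting("World") == "Hello, World!"
```

Running `pytest -v` collects every `test_*` function automatically, so no test runner boilerplate is needed.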
Using This Server with Tools

To integrate this server with tools such as Cursor or Claude, use the following mcp.json configuration file:

```json
{
  "mcpServers": {
    "demo-server": {
      "transport": "sse",
      "url": "http://localhost:8000/sse"
    }
  }
}
```
Notes

- By default, the server uses the stdio transport. To expose it over the network, make sure it is configured to use the sse transport.
- The server runs on localhost and listens on port 8000 via uvicorn.

If you run into any issues, feel free to contribute or open an issue!
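Since the notes mention that stdio is the default transport, a client that launches the server itself as a subprocess would use a command-based mcp.json entry instead of a URL. The exact command and arguments below are an assumption about this repository's layout:

```json
{
  "mcpServers": {
    "demo-server": {
      "command": "python3",
      "args": ["server.py"]
    }
  }
}
```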