Discover Great MCP Servers
Extend your agent's capabilities with MCP servers: 12,252 capabilities available.
MCP Config
A CLI tool for managing MCP server configurations.
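To illustrate the kind of file such a tool manages, here is a minimal sketch of an MCP server configuration in the common `mcpServers` JSON format (the server name, package, and environment variable are hypothetical placeholders, not from this project):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": { "API_KEY": "your-key-here" }
    }
  }
}
```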
MCP Stdio Server (MySQL/MariaDB)
Project Hub MCP Server
An MCP server for managing software projects, with tools for project tracking, note-taking, and GitHub integration.
BOLD MCP Server
Connects an MCP server to a local LLM for access to the BOLD REST API.
Fast MCP Servers
fhir-mcp-server-medagentbench
An implementation of a Model Context Protocol server for MedAgentBench (FHIR request generation).
AI-Powered Server-Client Computer Use SDK
An AI-powered software development kit (SDK) for computer use across server and client.
Tester Client for Model Context Protocol (MCP)
A Model Context Protocol (MCP) client for Apify Actors.
MariaDB / MySQL Database Access MCP Server
An MCP server that provides access to MariaDB or MySQL databases.
azure-mcp-server
A Model Context Protocol (MCP) server that provides tools, prompts, and resources for interacting with and managing Azure resources.
MCP LLM Bridge
Implements MCP to support communication between MCP servers and OpenAI-compatible LLMs.
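A bridge of this kind typically has to translate MCP tool definitions into the OpenAI function-calling schema. The following is a minimal sketch of that mapping, not this project's actual code; the field names follow the MCP tool listing format (`name`, `description`, `inputSchema`) and the OpenAI `tools` parameter, while the example tool itself is hypothetical:

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Convert an MCP tool definition into an OpenAI-style function tool."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP exposes a JSON Schema under "inputSchema"; OpenAI calls it "parameters".
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

# Hypothetical MCP tool, as a server might return it from a tools/list call:
mcp_tool = {
    "name": "query_database",
    "description": "Run a read-only SQL query.",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

openai_tool = mcp_tool_to_openai(mcp_tool)
```

The converted dictionary can then be passed in the `tools` list of an OpenAI-compatible chat-completion request.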
readme-updater-mcp
An MCP server that updates README.md, using Ollama for conflict analysis.
OpenSumi
A framework that helps you quickly build AI-native IDE products. Its MCP Client supports Model Context Protocol (MCP) tools via MCP servers.
Manus MCP
An MCP server that provides Manus-like capabilities.
mcp-server
Brest MCP Server
mcp-pandoc-ts: A Document Conversion MCP Server (TypeScript/Host Service Version)
An MCP server that controls Pandoc on the host from a Docker environment, via a local Pandoc host service.
JigsawStack MCP Server
A Model Context Protocol server that lets AI models interact with JigsawStack models!
Go Process Inspector
A non-invasive goroutine inspector.
GoScry
GoScry is a server application written in Go that acts as a bridge between a controlling system (such as an LLM or a script) and a web browser.
Futuur API MCP Integration
Futuur API MCP Integration is a powerful TypeScript-based server implementing the Model Context Protocol (MCP) for seamless integration with the Futuur API. The project provides a robust interface for handling market data, categories, user information, and betting operations.
Starlette MCP SSE
A working example of a Starlette server with Server-Sent Events (SSE)-based MCP support.
Essentials
Essentials is an MCP server that provides convenient MCP features.
Mcp Autotest
A utility for automatically testing MCP servers.
Welcome to the 智言 (Zhiyan) Platform
AgentChat is a platform for interacting with agents. It ships with a default agent and supports custom agents, and it enables multi-turn Q&A so that agents can help users accomplish what they want. The project's tech stack includes LLMs (large language models), LangChain, function calling, ReAct (Reasoning and Acting), MCP, Milvus (vector database), ElasticSearch (search engine), RAG (retrieval-augmented generation), and FastAPI (web framework).
MCP Servers
This repository contains my learning notes on how to create MCP servers.
cmd-line-executor MCP server
An experimental MCP server for executing command-line instructions.
MCP Go SDK
Build Model Context Protocol (MCP) servers in Go.
minibridge
A secure bridge between MCP and the world.
Buildkite MCP Server
A Model Context Protocol (MCP) server for Buildkite integration.