Discover Awesome MCP Servers
Extend your agent's capabilities with MCP servers: 16,289 capabilities available.
Browserbase MCP Server
Enables cloud browser automation through Browserbase and Stagehand, allowing LLMs to interact with web pages, take screenshots, extract data, and perform automated actions with support for proxies, stealth mode, and parallel sessions.
MCP Manager
An enterprise-level MCP gateway and proxy that sits between an organization's MCP servers and clients. MCP Manager mitigates security threats, enables fine-grained permissions, enforces policies and guardrails, and generates comprehensive, end-to-end logs.
A2A Client MCP Server
An MCP server that enables LLMs to interact with agents compatible with the Agent-to-Agent (A2A) protocol, allowing them to send messages, track tasks, and receive streaming responses.
Bluesky Context Server
A simple MCP server that enables MCP clients to query a Bluesky instance.
MCP Alchemy
Connects Claude Desktop directly to databases, letting it explore database structure, write SQL queries, analyze datasets, and create reports through an API layer that provides table-exploration and query-execution tools.
MCP Database Server
A Model Context Protocol server that enables large language models (LLMs) to interact with databases (currently MongoDB) through natural language, supporting operations such as querying, inserting and deleting documents, and running aggregation pipelines.
Mermaid Chart MCP
Enables AI assistants to generate and render Mermaid diagrams (flowcharts, sequence diagrams, etc.) as PNG/SVG images with local file saving and HTTP access URLs. Supports batch processing and intelligent caching for efficient diagram creation.
Mcp Server Kakao Map
A Kakao Map MCP server.
TuringCorp MCP Server
A remote MCP server template for Cloudflare Workers that enables deployment of custom tools without authentication. Provides easy integration with Claude Desktop and Cloudflare AI Playground for extending AI capabilities with custom functionality.
mcp-painter
A drawing tool for AI assistants.
mcp-server-typescript
BOLD MCP Server
Connects an MCP server to a local LLM in order to access the BOLD REST API.
File Converter MCP Server
An MCP server that provides AI agents with multiple file-conversion tools, supporting conversions across a range of document and image formats, including DOCX to PDF, PDF to DOCX, image conversion, Excel to CSV, HTML to PDF, and Markdown to PDF.
Divide and Conquer MCP Server
Enables AI agents to break complex tasks into manageable pieces using a structured JSON format, with task tracking, context preservation, and progress monitoring.
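A structured task-decomposition record of this kind might look like the following minimal sketch. The field names (`goal`, `subtasks`, `status`, `progress`) are illustrative assumptions, not the server's actual schema:

```python
# Hypothetical sketch of a structured task-decomposition record.
# Field names are illustrative assumptions, not the real schema.

def decompose(goal: str, subtasks: list) -> dict:
    """Build a task record with per-subtask status tracking."""
    return {
        "goal": goal,
        "subtasks": [
            {"id": i, "description": s, "status": "pending"}
            for i, s in enumerate(subtasks)
        ],
        "context": {},    # shared context preserved across steps
        "progress": 0.0,  # fraction of subtasks completed
    }

def mark_done(task: dict, subtask_id: int) -> dict:
    """Mark one subtask complete and recompute overall progress."""
    for st in task["subtasks"]:
        if st["id"] == subtask_id:
            st["status"] = "done"
    done = sum(1 for st in task["subtasks"] if st["status"] == "done")
    task["progress"] = done / len(task["subtasks"])
    return task

task = decompose("Ship release", ["write changelog", "tag version", "publish"])
task = mark_done(task, 0)
```

An agent would update this record after each step, so progress and remaining work stay visible across turns.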
azure-mcp-server
A Model Context Protocol (MCP) server that provides tools, prompts, and resources for interacting with and managing Azure resources.
Trade Surveillance Support MCP Server
Automates trade surveillance support workflows by parsing inquiry emails, searching SQL configs and Java code using keyword-based metadata annotations, executing reports, and generating comprehensive responses.
MCP LLM Bridge
Implements MCP to enable communication between MCP servers and OpenAI-compatible LLMs.
MCP BatchIt
A simple aggregation server that lets multiple MCP tool calls be batched into a single request, reducing token usage and network overhead for AI agents.
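The batching idea can be illustrated with a small sketch: several tool calls are folded into one aggregate payload, dispatched together, and the results returned as one response. The payload shape below is a hypothetical illustration, not BatchIt's actual wire format:

```python
import json

# Hypothetical batch payload: fold several tool calls into one request.
# Field names ("batch", "tool", "arguments") are illustrative assumptions.

def make_batch(calls: list) -> str:
    """Serialize a list of {tool, arguments} calls as one batch request."""
    return json.dumps({"batch": [
        {"tool": c["tool"], "arguments": c.get("arguments", {})}
        for c in calls
    ]})

def run_batch(payload: str, handlers: dict) -> list:
    """Dispatch each call in the batch to its handler, collecting results."""
    batch = json.loads(payload)["batch"]
    return [handlers[c["tool"]](**c["arguments"]) for c in batch]

handlers = {"add": lambda a, b: a + b, "upper": lambda s: s.upper()}
payload = make_batch([
    {"tool": "add", "arguments": {"a": 2, "b": 3}},
    {"tool": "upper", "arguments": {"s": "ok"}},
])
results = run_batch(payload, handlers)  # [5, "OK"]
```

One round trip now carries two tool calls, which is the source of the token and network savings the entry describes.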
Gmail MCP Agent
Enables automated Gmail lead nurturing campaigns with intelligent follow-ups, response tracking, and 24/7 operation. Supports CSV-based contact management, template personalization, and real-time monitoring for enterprise-scale email outreach.
Fast MCP Servers
LSPD Interrogation MCP Server
A Model Context Protocol server that simulates police interrogations, letting users create officer profiles and conduct dynamic interrogations with simulated suspect responses driven by configurable parameters such as stress level, evidence, and crime type.
readme-updater-mcp
An MCP server that updates README.md, using Ollama for conflict analysis.
Remote MCP Server
A Cloudflare Workers implementation of Model Context Protocol server that enables Claude AI to access external tools through OAuth authentication.
Prometheus MCP Server
An MCP server that enables large language models to retrieve, analyze, and query metric data from a Prometheus database through predefined routes.
Stock Data MCP Server
Tester Client for Model Context Protocol (MCP)
A Model Context Protocol (MCP) client for Apify Actors.
mcp-qdrant-docs MCP Server
An MCP server that crawls websites, indexes the content into Qdrant, and provides query tools.
MCP-Censys
A Model Context Protocol server that supports natural-language queries against the Censys Search API for domain, IP, and FQDN reconnaissance, providing real-time information on hosts, DNS, certificates, and services.
Starlette MCP SSE
A working example of a Starlette server with Server-Sent Events (SSE) based MCP support, demonstrating an SSE streaming endpoint backed by an asyncio message queue, client-disconnect handling, and a simple typed JSON message structure.
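The heart of such an example is formatting queued messages as SSE frames: each event is a `data: ` line carrying a JSON payload, terminated by a blank line. A minimal sketch of that formatting logic follows (the `format_sse` helper name is illustrative; a full server would wrap an async generator of these frames in a Starlette `StreamingResponse` with `media_type="text/event-stream"`):

```python
import json

def format_sse(message_type: str, data: dict) -> str:
    """Format a typed message as a Server-Sent Events frame:
    a 'data: <json>' line terminated by a blank line."""
    payload = json.dumps({"type": message_type, "data": data})
    return f"data: {payload}\n\n"

# Each frame the server yields to the client looks like this:
frame = format_sse("status_update", {"status": "Completed!"})
```

A browser-side `EventSource` receives each frame as a `message` event and can `JSON.parse(event.data)` to recover the type and payload.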
MariaDB / MySQL Database Access MCP Server
An MCP server that provides access to MariaDB or MySQL databases.