Discover Excellent MCP Servers
Extend your agent's capabilities with MCP servers: 12,252 capabilities available.
Perspective MCP Server
A Model Context Protocol (MCP) server that provides tools for interacting with the Perspective API.
AI Master Control Program (MCP) Server
AI Master Control Program (MCP) server that enables AI models to interact with your system.
Local Mcp Server Tutorial
**Creating a Local MCP Server (stdio)**

**What is MCP?**

MCP (Minecraft Coder Pack) is a tool for decompiling, deobfuscating, and recompiling Minecraft code. It makes it easier for developers to understand and modify Minecraft's internal workings.

**What is stdio?**

stdio (standard input/output) is a communication mechanism that lets programs interact through the standard input and output streams. Here, the local MCP server communicates with its client over stdio.

**Steps:**

**1. Install the Java Development Kit (JDK)**

Make sure a JDK is installed on your system. You need JDK 8 or later. You can download and install it from the Oracle website or your distribution's package manager.

**2. Download MCP**

Download the latest version of MCP from its official website or GitHub repository.

**3. Extract MCP**

Extract the downloaded MCP archive to a directory of your choice.

**4. Configure MCP**

* **`conf/mcp.cfg`:** Open `conf/mcp.cfg` and adjust it to your needs. Important settings include:
  * `MCP_LOC`: the MCP root directory.
  * `SRG_DIR`: the directory for SRG (Searge) mapping files.
  * `BIN_DIR`: the directory for the Minecraft client and server jar files.
  * `PATCHES_DIR`: the directory for patch files.
  * `REOBF_PATCHES_DIR`: the directory for reobfuscation patch files.
  * `DOCS_DIR`: the directory for documentation files.
  * `VERSION`: the Minecraft version.
* **`conf/versions.cfg`:** Make sure `conf/versions.cfg` contains the configuration for the Minecraft version you want to use.

**5. Obtain the Minecraft client and server jar files**

You need the client and server jar files matching the Minecraft version you want to use. You can get them from the Minecraft launcher or the official Minecraft website. Place these files in the `jars/` directory.

**6. Decompile Minecraft**

Open a terminal, navigate to the MCP root directory, and run:

```bash
./decompile.sh
```

Or, on Windows:

```batch
decompile.bat
```

This decompiles the Minecraft client and server code, which may take some time.

**7. Create the stdio server**

Create a Java program that runs as the local MCP server. A simple example:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;

public class MCPStdioServer {
    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        PrintWriter writer = new PrintWriter(System.out, true);

        String line;
        while ((line = reader.readLine()) != null) {
            // Handle requests from the client here.
            // For example, run an MCP command and return the result.
            // As a placeholder, echo the received message back to the client:
            writer.println("Server received: " + line);
        }
    }
}
```

**8. Compile the stdio server**

Compile your Java program with the JDK.

```bash
javac MCPStdioServer.java
```

**9. Run the stdio server**

Run the compiled Java program.

```bash
java MCPStdioServer
```

**10. Create a client**

Create a client program that communicates with the local MCP server over stdio. You can write the client in any language.

**11. Communicate with the server**

The client sends requests to the server's standard input and reads responses from the server's standard output. A simple way to wire this up is to launch the server as a subprocess and connect to its pipes:

**Example client (Python):**

```python
import subprocess

# Spawn the stdio server and connect to its stdin/stdout via pipes.
server = subprocess.Popen(
    ["java", "MCPStdioServer"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send_command(command):
    server.stdin.write(command + "\n")  # send the command
    server.stdin.flush()                # flush so the server sees it immediately
    return server.stdout.readline().strip()  # read the one-line response

if __name__ == "__main__":
    print("Server response:", send_command("Hello from client!"))
    print("Server response:", send_command("Another command"))
    server.stdin.close()
    server.wait()
```

**Run the client:**

```bash
python your_client.py
```

**Important notes:**

* Adapt the server and client code to your specific needs.
* Implement the MCP command-handling logic so the server can perform the operations the client requests.
* Make sure the server and client agree on the same communication protocol (one possible convention is sketched below).

**Summary:**

This tutorial provides a basic framework for creating a local MCP server (stdio). Adapt and extend it for your own needs; understanding how MCP works and how Minecraft's code is structured is essential to building one successfully.
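The tutorial leaves the wire protocol between client and server open. One minimal convention is line-delimited JSON; the sketch below shows how both sides could frame messages. The `command`, `args`, `ok`, `result`, and `error` field names are illustrative, not part of any MCP specification:

```python
import json

def encode_request(command, args=None):
    """Frame a request as a single JSON line (field names are illustrative)."""
    return json.dumps({"command": command, "args": args or {}}) + "\n"

def decode_response(line):
    """Parse a one-line JSON response; raise if the server reported failure."""
    msg = json.loads(line)
    if not msg.get("ok", False):
        raise RuntimeError(msg.get("error", "unknown error"))
    return msg.get("result")

# Example: frame a request and parse a matching response.
wire = encode_request("decompile", {"target": "client"})
print(wire.strip())
print(decode_response('{"ok": true, "result": "started"}'))
```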
MCP Gemini Server
Mirror
MCP Tools Suite
A comprehensive toolkit for managing Model Context Protocol (MCP) servers.
fpl-server
An MCP server for FPL.
MCP Server
mcp-server-template
MCP Demo
An MCP server for Slack, built with Python.
PostgreSQL MCP Server (Model Context Protocol)
A FastMCP-based MCP server for controlling Postgres.
mcp-servers-scratch
MCP servers
mcp-server-jupyter
An MCP server for Jupyter Notebook and JupyterLab.
Notion MCP Server
A simple MCP server implementation for Notion integration.
MCP Servers
MCP (Model Context Protocol) servers and related resources.
@container-inc/mcp
An MCP server for Container Inc.'s automated deployments.
Template project to build MCP server using SpringBoot
ADB MCP Server
An MCP server for the Android Debug Bridge (ADB), enabling Claude to interact with Android devices.
MCP Server Cookie Cutter Template
A Cookiecutter template for creating MCP (Model Context Protocol) servers.
Claude MCP Server
kickstart-mcp
🚀 Kickstart-mcp is a tutorial on using MCP that teaches you how to build your own MCP server or client. We guide you through every step of your MCP journey.
Model Context Protocol (MCP)
A working pattern for SSE-based (Server-Sent Events) MCP (Model Context Protocol) clients and servers using the Gemini LLM, with explanations and considerations:

**Core Idea:**

This pattern leverages SSE for real-time, unidirectional (server-to-client) streaming of LLM-generated content. MCP provides a structured way to manage the conversation flow and metadata. The server uses the Gemini LLM to generate responses and streams them to the client via SSE; the client displays the content as it arrives.

**Components:**

1. **MCP Client (Frontend, e.g., Web Browser or Mobile App):**
   * **Initiates Conversation:** Sends an initial message (e.g., a user query) to the server via a standard HTTP request (POST or GET). This request includes MCP metadata (conversation ID, message ID, user ID).
   * **Establishes SSE Connection:** After the initial request, the client opens an SSE connection to a server endpoint dedicated to streaming responses for that conversation.
   * **Receives SSE Events:** Listens for `message` events on the SSE stream; each event carries a chunk of the LLM-generated response plus MCP metadata.
   * **Reconstructs and Displays Response:** Appends arriving chunks to a display area, giving a real-time streaming experience.
   * **Handles Errors and Completion:** Listens for dedicated SSE events (e.g., `error`, `done`) to handle failures or the end of the response.
   * **Manages Conversation State:** Stores the conversation ID and related metadata to keep context for subsequent requests.
   * **Sends Subsequent Messages:** After a complete response, sends the next message via HTTP; a new SSE stream is established for the reply.

2. **MCP Server (Backend, e.g., Node.js, Python/Flask, Java/Spring Boot):**
   * **Receives Initial Request:** Handles the HTTP request containing the user's query and MCP metadata.
   * **Validates Request:** Validates the request and its MCP metadata.
   * **Interacts with Gemini LLM:** Sends the user's query to the Gemini API, using its streaming capabilities where available.
   * **Generates SSE Events:** As the LLM produces text, wraps each chunk in an SSE `message` event with MCP metadata (conversation ID, message ID, chunk ID).
   * **Manages SSE Connections:** Maintains the list of active SSE connections, each associated with a conversation ID.
   * **Sends SSE Events to Client:** Pushes events to the matching client connection.
   * **Handles Errors:** If LLM generation fails, sends an `error` event to the client via SSE.
   * **Sends Completion Event:** When the response is complete, sends a `done` event.
   * **Manages Conversation State:** Stores the conversation history and metadata, typically in a database (e.g., PostgreSQL, MongoDB), so context survives across turns.
   * **MCP Implementation:** Implements MCP message formatting, routing, and error handling (the message envelope is sketched just below).
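To make the envelope concrete before the full example, here is a minimal sketch of the message type both components exchange. The field names mirror the format list in the considerations below; they are illustrative, not taken from any MCP specification:

```python
from typing import TypedDict

class McpMessage(TypedDict, total=False):
    """One SSE event's payload (illustrative field names)."""
    conversation_id: str  # routes the event to the right conversation
    message_id: str       # identifies the logical message
    user_id: str          # who initiated the conversation
    message_type: str     # "user_message", "llm_response", "error", or "done"
    payload: str          # a text chunk or an error description
    chunk_id: int         # ordering of streamed chunks

# Example: one streamed chunk, as it would appear in an SSE "data:" line.
chunk: McpMessage = {
    "conversation_id": "conv-123",
    "message_type": "llm_response",
    "payload": "Hello, ",
    "chunk_id": 0,
}
```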
**Gemini LLM Integration:**

* **Streaming API:** Use the Gemini LLM's streaming API (if available). This lets the server receive the LLM's response in chunks, which can be forwarded to the client immediately via SSE.
* **Prompt Engineering:** Carefully design prompts to guide the LLM's responses and keep them appropriate for the conversation.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the LLM API.
* **Error Handling:** Handle errors from the LLM API gracefully; if one occurs, send an `error` event to the client via SSE.

**MCP Considerations:**

* **Message Format:** Define a clear format for MCP messages, with fields for:
  * Conversation ID
  * Message ID
  * User ID
  * Message Type (e.g., `user_message`, `llm_response`, `error`, `done`)
  * Payload (the actual text of the message)
  * Chunk ID (for SSE streaming)
* **Routing:** Implement a routing mechanism that directs messages to the correct conversation.
* **Error Handling:** Define a standard way to report errors, including error codes and error messages.
* **Security:** Implement security measures to protect against unauthorized access and data breaches.

**Example (Conceptual, Python/Flask):**

```python
from flask import Flask, request, Response, stream_with_context
import google.generativeai as genai
import os
import json

app = Flask(__name__)

# Configure the Gemini API (set GOOGLE_API_KEY in your environment).
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')  # or 'gemini-pro-vision'

# In-memory conversation store (replace with a database in production).
conversations = {}

def generate_llm_response_stream(conversation_id, user_message):
    """Streams a Gemini response as SSE 'data:' lines."""
    history = conversations.setdefault(conversation_id, [])

    try:
        # Start the chat from the stored history. send_message appends the
        # new user turn itself, so don't add it to the history beforehand.
        chat = model.start_chat(history=history)
        response = chat.send_message(user_message, stream=True)

        full_reply = []
        for chunk in response:
            llm_text = chunk.text
            full_reply.append(llm_text)
            mcp_message = {
                "conversation_id": conversation_id,
                "message_type": "llm_response",
                "payload": llm_text
            }
            yield f"data: {json.dumps(mcp_message)}\n\n"

        # Persist the completed turn so later requests keep context.
        history.append({"role": "user", "parts": [user_message]})
        history.append({"role": "model", "parts": ["".join(full_reply)]})

        mcp_done_message = {
            "conversation_id": conversation_id,
            "message_type": "done"
        }
        yield f"data: {json.dumps(mcp_done_message)}\n\n"
    except Exception as e:
        mcp_error_message = {
            "conversation_id": conversation_id,
            "message_type": "error",
            "payload": str(e)
        }
        yield f"data: {json.dumps(mcp_error_message)}\n\n"

@app.route('/chat', methods=['POST'])
def chat_handler():
    """Handles the chat request and streams the SSE response."""
    data = request.get_json()
    conversation_id = data.get('conversation_id')
    user_message = data.get('message')

    if not conversation_id or not user_message:
        return "Missing conversation_id or message", 400

    def stream():
        yield from generate_llm_response_stream(conversation_id, user_message)

    return Response(stream_with_context(stream()), mimetype='text/event-stream')

if __name__ == '__main__':
    app.run(debug=True, port=5000)
```
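A quick way to exercise the endpoint before wiring up a browser client is the sketch below. It assumes the server above is running on localhost:5000 and that the `requests` package is installed; neither is part of the original example. It posts a message and prints the SSE events as they arrive:

```python
import json
import requests

# Post a message and consume the streamed SSE response line by line.
resp = requests.post(
    "http://localhost:5000/chat",
    json={"conversation_id": "conv-123", "message": "Hello!"},
    stream=True,  # don't buffer the whole body; iterate as chunks arrive
)

for line in resp.iter_lines(decode_unicode=True):
    if not line.startswith("data: "):
        continue  # skip the blank lines that separate SSE events
    event = json.loads(line[len("data: "):])
    if event["message_type"] == "llm_response":
        print(event["payload"], end="", flush=True)
    elif event["message_type"] == "done":
        print("\n[done]")
        break
    elif event["message_type"] == "error":
        print(f"\n[error] {event['payload']}")
        break
```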
**Client-Side Example (JavaScript):**

Because the Flask route above streams the SSE response directly on the POST response, the browser client reads the response body as a stream instead of opening a separate `EventSource` (which would issue a GET that this route does not accept):

```javascript
const conversationId = 'unique-conversation-id'; // Generate a unique ID
const messageInput = document.getElementById('messageInput');
const chatOutput = document.getElementById('chatOutput');
const sendButton = document.getElementById('sendButton');

sendButton.addEventListener('click', async () => {
  const message = messageInput.value;
  messageInput.value = '';

  try {
    // Send the message; the server streams SSE events on this response.
    const response = await fetch('/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ conversation_id: conversationId, message: message })
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });

      // SSE events are separated by blank lines ("\n\n").
      const events = buffer.split('\n\n');
      buffer = events.pop(); // keep any partial event for the next read

      for (const raw of events) {
        if (!raw.startsWith('data: ')) continue;
        const data = JSON.parse(raw.slice('data: '.length));
        if (data.message_type === 'llm_response') {
          chatOutput.textContent += data.payload;
        } else if (data.message_type === 'done') {
          console.log('LLM response complete.');
        } else if (data.message_type === 'error') {
          console.error('Error from server:', data.payload);
          chatOutput.textContent += `Error: ${data.payload}`;
        }
      }
    }
  } catch (error) {
    console.error('Error sending message:', error);
  }
});
```

**Key Improvements and Considerations:**

* **Error Handling:** Robust error handling is crucial. The server should catch exceptions during LLM generation and send error events to the client; the client should display these errors to the user.
* **Conversation History:** The server *must* maintain a conversation history to provide context for subsequent requests, typically in a database. The example code uses an in-memory store, which is not suitable for production.
* **Security:** Implement appropriate security measures, such as authentication and authorization, to protect against unauthorized access.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., RabbitMQ, Kafka) to decouple the server from the LLM API; this improves scalability and resilience.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the LLM API.
* **Prompt Engineering:** Experiment with different prompts to optimize the LLM's responses.
* **Token Management:** Be mindful of the LLM's input and output token limits. Implement strategies to truncate or summarize the conversation history if necessary (see the sketch below).
* **User Interface:** Design a clear, intuitive user experience. Consider loading indicators, error messages, and a conversation history view.
* **Metadata:** Include timestamps, user IDs, and message IDs in MCP messages; this helps with debugging and analysis.
* **Chunking Strategy:** Experiment with chunk sizes; smaller chunks give a more responsive UI but increase overhead.
* **Cancellation:** Let the user cancel a long-running LLM request by sending a cancellation signal to the server, which then terminates the LLM call.
* **Context Management:** Consider a more sophisticated strategy, such as retrieval-augmented generation (RAG), to give the LLM access to external knowledge sources.

This pattern provides a solid foundation for building SSE-based MCP clients and servers using the Gemini LLM. Adapt the code and configuration to your specific needs and environment.
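For the token-management point above, a minimal sketch of history truncation, assuming a simple cap on the number of stored entries (the cap and the function name are illustrative; the history structure mirrors the Flask example):

```python
MAX_TURNS = 20  # illustrative cap; one entry = one user or model turn

def truncate_history(history, max_turns=MAX_TURNS):
    """Keep only the most recent entries so prompts stay within token limits.

    A production system might instead count tokens with the model's
    tokenizer, or summarize older turns rather than dropping them outright.
    """
    return history[-max_turns:]

# Example with the history structure used in the Flask sketch above:
history = [{"role": "user", "parts": [f"message {i}"]} for i in range(30)]
print(len(truncate_history(history)))  # -> 20
```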
Quip MCP Server
A Model Context Protocol (MCP) server for retrieving Quip documents.
mcp-dev
Trying out MCP
mocxykit
A middleware for frontend development servers that works with both webpack and Vite. Its main features are visual configuration, HTTP(S) proxy management, and mock data.
DeepSource MCP Server
A Model Context Protocol (MCP) server for DeepSource.
Linkedin MCP Server
An MCP server for the LinkedIn API.
🌈 Iris MCP Server
Mirror
mcp-server-salesforce MCP server
mcp-prompts-rs
A Rust-based server for managing AI prompts using the Model Context Protocol (MCP).
mattermost-mcp-server
Mirror