Discover Excellent MCP Servers
Extend your agent with 16,289 capabilities through MCP servers.
Financial Analysis MCP Server
An MCP server that provides financial analysis capabilities to your LLM.
ZBD MCP Server
A server that adds Bitcoin payment capabilities to large language models (LLMs), enabling them to send and receive payments, create charges, manage wallets, and perform other Bitcoin Lightning Network operations.
nREPL MCP Server
Enables interaction with a running Clojure nREPL instance for evaluating Clojure code, inspecting namespaces, and retrieving connection status; compatible with MCP clients such as Claude Desktop and Cline in VSCode.
Wireshark MCP
A Model Context Protocol server that integrates Wireshark's network analysis capabilities with AI systems like Claude, allowing direct analysis of network packet data without manual copying.
MCP Terminal
A server that enables AI assistants to execute terminal commands and retrieve their output via the Model Context Protocol (MCP).
BlazeMeter MCP Server
Provides programmatic access to BlazeMeter's performance testing platform through MCP tools. Enables users to retrieve test runs, analyze performance data, view error reports, and manage testing resources via natural language interactions.
Scrapeless MCP Server
A Model Context Protocol server implementation that enables AI assistants like Claude to perform Google searches and retrieve web data directly through natural language requests.
Seedream 4.0 MCP Server
Enables AI image generation using Volcano Engine's Seedream 4.0 API, with text-to-image, image-to-image, and multi-image fusion capabilities, built-in prompt templates, and automatic cloud storage integration.
SSH MCP Server
Enables SSH operations including connecting to remote servers, executing commands, and transferring files between local and remote systems. Supports multiple SSH connections with both password and private key authentication methods.
Local Falcon MCP Server
Connects AI systems to Local Falcon API, enabling access to local SEO reporting tools including scan reports, trend analysis, keyword tracking, and competitor data through the Model Context Protocol.
MCP Memory
Enables MCP clients to remember user information, preferences, and behaviors across conversations using vector search technology. Built on Cloudflare infrastructure with AI-powered semantic search to find relevant memories based on meaning rather than keywords.
mcp-pdf2md
A PDF to Markdown conversion tool.
Matomo MCP Server
A Model Context Protocol server that provides tools to interact with the Matomo Analytics API, enabling management of sites, users, goals, and segments, and access to analytics reports through an MCP interface.
@container-inc/mcp
An MCP server for automated deployments on Container Inc.
FastAPI MCP Demo Server
A demonstration MCP server built with FastAPI that provides basic mathematical operations and greeting services. Integrates with Gemini CLI to showcase MCP protocol implementation with simple REST endpoints.
MCP-NOSTR
A bridge that enables AI language models to publish content to the Nostr network by implementing the Model Context Protocol (MCP).
ADB MCP Server
An MCP server for the Android Debug Bridge (ADB), enabling Claude to interact with Android devices.
PyMCP
Primarily intended as a template for developing MCP servers with FastMCP in Python, PyMCP is somewhat inspired by the official everything MCP server in TypeScript.
MCP Server Cookie Cutter Template
A Cookiecutter template for creating MCP (Model Context Protocol) servers.
Self-Hosted Supabase MCP Server
Enables developers to interact with self-hosted Supabase instances, providing database introspection, migration management, auth user operations, storage management, and TypeScript type generation directly from MCP-compatible development environments.
MCP-openproject
ONEDeFi MCP Server
Enables AI-powered DeFi operations across Ethereum, Polygon, and Solana with automated portfolio optimization, risk assessment, and yield farming strategies. Provides intelligent portfolio diagnostics, investment strategy generation, and multi-chain DeFi protocol integration through natural language.
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and reminders.
Todoist MCP Server
Enables AI assistants to interact with Todoist tasks and projects through natural language. Supports comprehensive task management including creating, updating, completing tasks, managing projects, and filtering by various criteria.
Claude MCP Server
kickstart-mcp
🚀 Kickstart-mcp is a tutorial on using MCP that teaches you how to build your own MCP server or client, guiding you through every step of your MCP journey.
Toast MCP Server
An MCP server that displays desktop notifications on Windows 10 and macOS, compatible with Cline in VSCode and supporting customizable notification parameters.
Model Context Protocol (MCP)
A working pattern for SSE-based (Server-Sent Events) MCP (Model Context Protocol) clients and servers using the Gemini LLM, with explanations and considerations.

**Core Idea:** This pattern leverages SSE for real-time, unidirectional (server-to-client) streaming of LLM-generated content. MCP provides a structured way to manage the conversation flow and metadata. The server uses the Gemini LLM to generate responses and streams them to the client via SSE; the client displays the content as it arrives.

**Components:**

1. **MCP Client (Frontend - e.g., Web Browser, Mobile App):**
   * **Initiates Conversation:** Sends an initial message (e.g., a user query) to the server via a standard HTTP request (POST or GET). This request includes MCP metadata (e.g., conversation ID, message ID, user ID).
   * **Establishes SSE Connection:** After the initial request, the client opens an SSE connection to a specific endpoint on the server, dedicated to receiving streaming responses for the given conversation.
   * **Receives SSE Events:** Listens for `message` events from the SSE stream. Each event contains a chunk of the LLM-generated response, along with MCP metadata.
   * **Reconstructs and Displays Response:** As events arrive, the client appends the data to a display area, providing a real-time streaming experience.
   * **Handles Errors and Completion:** Listens for specific SSE events (e.g., `error`, `done`) to handle errors or detect completion of the LLM response.
   * **Manages Conversation State:** The client may need to store the conversation ID and other relevant metadata to maintain context for subsequent requests.
   * **Sends Subsequent Messages:** After receiving a complete response, the client can send new messages to continue the conversation. These are sent via standard HTTP requests, and a new SSE stream is established for each response.

2. **MCP Server (Backend - e.g., Node.js, Python/Flask, Java/Spring Boot):**
   * **Receives Initial Request:** Handles the initial HTTP request containing the user's query and MCP metadata.
   * **Validates Request:** Validates the request and MCP metadata.
   * **Interacts with Gemini LLM:** Sends the user's query to the Gemini API, using its streaming capabilities where available.
   * **Generates SSE Events:** As the Gemini LLM generates text, the server creates SSE `message` events, each containing a chunk of generated text and relevant MCP metadata (e.g., conversation ID, message ID, chunk ID).
   * **Manages SSE Connections:** Maintains the set of active SSE connections, each associated with a specific conversation ID.
   * **Sends SSE Events to Client:** Pushes the SSE events to the appropriate client connection.
   * **Handles Errors:** If an error occurs during LLM generation, the server sends an `error` event to the client via SSE.
   * **Sends Completion Event:** When the LLM response is complete, the server sends a `done` event to the client via SSE.
   * **Manages Conversation State:** The server stores the conversation history and other relevant metadata, which is essential for maintaining context across multiple turns. A database (e.g., PostgreSQL, MongoDB) is typically used for this.
   * **MCP Implementation:** The server implements the MCP protocol, including message formatting, routing, and error handling.

**Gemini LLM Integration:**

* **Streaming API:** Use the Gemini streaming API where available. This lets the server receive the LLM's response in chunks, which can be forwarded immediately to the client via SSE.
* **Prompt Engineering:** Carefully design prompts to guide the LLM's responses and keep them appropriate for the conversation.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the LLM API.
* **Error Handling:** Handle errors from the LLM API gracefully; if one occurs, send an `error` event to the client via SSE.

**MCP Considerations:**

* **Message Format:** Define a clear format for MCP messages, including fields for:
  * Conversation ID
  * Message ID
  * User ID
  * Message Type (e.g., `user_message`, `llm_response`, `error`, `done`)
  * Payload (the actual text of the message)
  * Chunk ID (for SSE streaming)
* **Routing:** Implement a routing mechanism to direct messages to the correct conversation.
* **Error Handling:** Define a standard way to handle errors, including error codes and error messages.
* **Security:** Implement security measures to protect against unauthorized access and data breaches.

**Example (Conceptual - Python/Flask):**

```python
from flask import Flask, request, Response, stream_with_context
import google.generativeai as genai
import os
import json
import time

app = Flask(__name__)

# Configure the Gemini API (set GOOGLE_API_KEY in your environment).
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')  # Or 'gemini-pro-vision'

# In-memory conversation store (replace with a database in production).
conversations = {}


def generate_llm_response_stream(conversation_id, user_message):
    """Generates a streaming Gemini response as SSE 'data:' events."""
    history = conversations.setdefault(conversation_id, [])
    try:
        # Start the chat from the stored history. send_message adds the new
        # user turn itself, so it must not already be present in the history.
        chat = model.start_chat(history=history)
        response = chat.send_message(user_message, stream=True)

        full_reply = []
        for chunk in response:
            llm_text = chunk.text
            full_reply.append(llm_text)
            mcp_message = {
                "conversation_id": conversation_id,
                "message_type": "llm_response",
                "payload": llm_text,
            }
            yield f"data: {json.dumps(mcp_message)}\n\n"
            time.sleep(0.1)  # Simulate processing time

        # Persist the completed turn so later requests keep the context.
        history.append({"role": "user", "parts": [user_message]})
        history.append({"role": "model", "parts": ["".join(full_reply)]})

        mcp_done_message = {
            "conversation_id": conversation_id,
            "message_type": "done",
        }
        yield f"data: {json.dumps(mcp_done_message)}\n\n"
    except Exception as e:
        mcp_error_message = {
            "conversation_id": conversation_id,
            "message_type": "error",
            "payload": str(e),
        }
        yield f"data: {json.dumps(mcp_error_message)}\n\n"


@app.route('/chat', methods=['POST'])
def chat_handler():
    """Handles a chat request and streams the response as SSE."""
    data = request.get_json()
    conversation_id = data.get('conversation_id')
    user_message = data.get('message')
    if not conversation_id or not user_message:
        return "Missing conversation_id or message", 400

    def stream():
        yield from generate_llm_response_stream(conversation_id, user_message)

    return Response(stream_with_context(stream()), mimetype='text/event-stream')


if __name__ == '__main__':
    app.run(debug=True, port=5000)
```

**Client-Side Example (JavaScript):**

```javascript
const conversationId = 'unique-conversation-id'; // Generate a unique ID per conversation
const messageInput = document.getElementById('messageInput');
const chatOutput = document.getElementById('chatOutput');
const sendButton = document.getElementById('sendButton');

sendButton.addEventListener('click', async () => {
  const message = messageInput.value;
  messageInput.value = '';

  try {
    // The Flask handler above streams SSE directly from the POST response, and
    // EventSource only supports GET, so read the SSE stream from the fetch body.
    const response = await fetch('/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ conversation_id: conversationId, message: message })
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (true) {
      const { value, done } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });

      // SSE events are separated by a blank line.
      const events = buffer.split('\n\n');
      buffer = events.pop(); // Keep any incomplete event for the next read.

      for (const event of events) {
        if (!event.startsWith('data: ')) continue;
        const data = JSON.parse(event.slice(6));
        console.log('Received SSE event:', data);

        if (data.message_type === 'llm_response') {
          chatOutput.textContent += data.payload;
        } else if (data.message_type === 'done') {
          console.log('LLM response complete.');
        } else if (data.message_type === 'error') {
          console.error('Error from server:', data.payload);
          chatOutput.textContent += `Error: ${data.payload}`;
        }
      }
    }
  } catch (error) {
    console.error('Error sending message:', error);
  }
});
```

**Key Improvements and Considerations:**

* **Error Handling:** Robust error handling is crucial. The server should catch exceptions during LLM generation and send error events to the client; the client should display these errors to the user.
* **Conversation History:** The server *must* maintain a conversation history to provide context for subsequent requests. This can be stored in a database; the example code uses an in-memory store, which is not suitable for production.
* **Security:** Implement appropriate security measures, such as authentication and authorization, to protect against unauthorized access.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., RabbitMQ, Kafka) to decouple the server from the LLM API, improving scalability and resilience.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the LLM API.
* **Prompt Engineering:** Experiment with different prompts to optimize the LLM's responses.
* **Token Management:** Be mindful of token limits for both input and output; truncate or summarize the conversation history if necessary.
* **User Interface:** Design a clear and intuitive interface, with features such as loading indicators, error messages, and conversation history.
* **Metadata:** Include relevant metadata in the MCP messages, such as timestamps, user IDs, and message IDs, to aid debugging and analysis.
* **Chunking Strategy:** Experiment with different chunking strategies. Smaller chunks give a more responsive UI but increase overhead.
* **Cancellation:** Provide a way for the user to cancel a long-running LLM request, for example by sending a cancellation signal that causes the server to terminate the LLM request.
* **Context Management:** Consider a more sophisticated context-management strategy, such as retrieval-augmented generation (RAG), to give the LLM access to external knowledge sources.

This pattern provides a solid foundation for building SSE-based MCP clients and servers with the Gemini LLM. Adapt the code and configuration to your specific needs and environment.
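The message-format checklist above lists more fields than the Flask example actually emits. As a minimal sketch of how a full envelope might look (the snake_case field names, the `build_mcp_message` and `to_sse_event` helpers, and the timestamp field are assumptions for illustration, not part of any published MCP schema), a small helper could populate every listed field and serialize it as a single SSE event:

```python
import json
import time
import uuid


def build_mcp_message(conversation_id, user_id, message_type, payload=None, chunk_id=None):
    """Builds one MCP message envelope with the fields listed above.

    Field names are assumptions; adapt them to whatever format your
    client and server agree on.
    """
    return {
        "conversation_id": conversation_id,
        "message_id": str(uuid.uuid4()),  # Unique per message
        "user_id": user_id,
        "message_type": message_type,     # e.g. "user_message", "llm_response", "error", "done"
        "payload": payload,
        "chunk_id": chunk_id,             # Set for streamed llm_response chunks
        "timestamp": time.time(),         # Helpful for debugging and analysis
    }


def to_sse_event(message):
    """Serializes an MCP message as a single SSE 'data:' event."""
    return f"data: {json.dumps(message)}\n\n"


# Example: the third streamed chunk of an LLM response.
print(to_sse_event(build_mcp_message(
    conversation_id="conv-123",
    user_id="user-42",
    message_type="llm_response",
    payload="partial text",
    chunk_id=3,
)))
```

Keeping envelope construction in one helper makes it easier to route events by conversation ID and to add fields (such as timestamps) consistently on both the server and the client.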
MCP Coinbase Commerce Server
Connects to the Coinbase Commerce API, allowing AI assistants like Claude to generate cryptocurrency payment links.
EVM MCP Server