Discover Great MCP Servers
Extend your agent's capabilities with MCP servers: 13,631 capabilities available.
OpenAI MCP Example
This project demonstrates how to use the MCP protocol with OpenAI. It provides a simple example showing how to interact seamlessly with OpenAI's API through an MCP server and client.
🚀 JMeter MCP Server
Mirror
Minimal MCP Server
Below is a minimal Model Context Protocol (MCP) server written in Python, using the `asyncio` library for asynchronous I/O. The example shows how to listen for connections, receive messages, and send simple responses.

```python
import asyncio
import json

async def handle_client(reader, writer):
    """Handle a single client connection."""
    addr = writer.get_extra_info('peername')
    print(f"Connection from {addr}")
    try:
        while True:
            data = await reader.read(1024)  # read up to 1024 bytes
            if not data:
                break
            message = data.decode()
            print(f"Received message: {message}")
            try:
                # Try to parse the message as JSON
                request = json.loads(message)
                # Add your MCP logic here, e.g. dispatch on the request type
                if "method" in request:
                    method = request["method"]
                    if method == "ping":
                        response = {"result": "pong"}
                    elif method == "get_model_info":
                        # Sample model info
                        response = {"result": {"model_name": "MyModel", "version": "1.0"}}
                    else:
                        response = {"error": "Unknown method"}
                else:
                    response = {"error": "Invalid request"}
                response_json = json.dumps(response)
                writer.write(response_json.encode())
                await writer.drain()  # flush the write buffer
                print(f"Sent response: {response_json}")
            except json.JSONDecodeError:
                print("Received invalid JSON")
                writer.write(b'{"error": "Invalid JSON"}')
                await writer.drain()
    except ConnectionError as e:
        print(f"Connection error: {e}")
    finally:
        print(f"Closing connection {addr}")
        writer.close()
        await writer.wait_closed()

async def main():
    """Start the server."""
    server = await asyncio.start_server(
        handle_client, '127.0.0.1', 8888)  # listen on 127.0.0.1, port 8888
    addr = server.sockets[0].getsockname()
    print(f'Serving on {addr}')
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

**Code walkthrough:**

1. **`handle_client(reader, writer)`:**
   - The main function that handles each client connection.
   - `reader` and `writer` are `asyncio.StreamReader` and `asyncio.StreamWriter` objects for reading and writing data.
   - `writer.get_extra_info('peername')` returns the client's address.
   - `reader.read(1024)` reads up to 1024 bytes from the client; `data.decode()` decodes the bytes into a string; `json.loads(message)` parses the string as JSON.
   - **MCP logic:** this is the core of the code, and you need to implement it according to the MCP protocol's requirements. In this example, the handler checks whether the request contains a `"method"` field and responds accordingly: `ping` returns `pong`, and `get_model_info` returns sample model information.
   - `json.dumps(response)` serializes the response to a JSON string, `writer.write(response_json.encode())` sends it as bytes, and `await writer.drain()` flushes the buffer so the data is actually sent.
   - `writer.close()` closes the connection, and `await writer.wait_closed()` waits for it to close completely.
   - Exception handling covers `json.JSONDecodeError` for invalid JSON and `ConnectionError` for connection failures.
2. **`main()`:**
   - `asyncio.start_server(handle_client, '127.0.0.1', 8888)` starts a TCP server listening on 127.0.0.1 port 8888; `handle_client` handles each new client connection.
   - `server.serve_forever()` runs the server until it is stopped manually.
3. **`if __name__ == "__main__":`** ensures `asyncio.run(main())` runs only when the script is executed directly, not when it is imported as a module.

**How to run:**

1. **Save the code** as `mcp_server.py`.
2. **Run the script** with `python mcp_server.py` in a terminal.

**How to test:**

You can connect to the server with a tool such as `telnet` or `netcat` and send MCP requests. For example:

```bash
telnet 127.0.0.1 8888
```

Then send a JSON request, for example:

```json
{"method": "ping"}
```

The server should return:

```json
{"result": "pong"}
```

Or send:

```json
{"method": "get_model_info"}
```

The server should return:

```json
{"result": {"model_name": "MyModel", "version": "1.0"}}
```

**Important notes:**

* **Error handling:** this example includes only basic error handling. In a real application, add more thorough mechanisms such as error logging and more detailed error messages for clients.
* **Security:** this example does not address security. In production, add measures such as authentication, authorization, and encryption.
* **The MCP protocol:** this example is only a skeleton. Implement your MCP logic according to the MCP protocol specification, including message formats, method names, parameters, and return values.
* **Async programming:** `asyncio` is a powerful asynchronous programming library. If you are unfamiliar with it, learn the basics of `async` and `await` first.
* **Dependencies:** the example depends only on the standard-library modules `asyncio` and `json`; no extra packages are required.
* **Extensibility:** this is a minimal implementation. Extend it as needed, for example with multithreading support, more efficient serialization, or additional MCP methods.

**Next steps:**

1. **Define your MCP protocol:** specify message formats, method names, parameters, and return values.
2. **Implement your MCP logic** in `handle_client` according to that protocol.
3. **Add error handling:** build out more complete error-handling mechanisms.
4. **Consider security:** authentication, authorization, encryption, and so on.
5. **Test and debug** your server thoroughly.

This minimal implementation gives you a starting point that you can extend and modify to fit your needs.
Remote MCP Server on Cloudflare
raindrop-mcp
MCP server for Raindrop.io (a bookmarking service)
ws-mcp

Mcp Namecheap Registrar
Connects to the Namecheap API to check domain availability and pricing, and to register domains.
Quarkus Model Context Protocol (MCP) Server
This extension enables developers to easily implement MCP server features.
Local iMessage RAG MCP Server
iMessage RAG MCP server from the Anthropic MCP hackathon (NYC)
Parallels RAS MCP Server (Python)
MCP server for Parallels RAS, built with FastAPI.
MCP Test Client
The MCP Test Client is a TypeScript testing utility for Model Context Protocol (MCP) servers.

Html2url
x64dbg MCP server
MCP server for the x64dbg debugger
Cursor Rust Tools
An MCP server that gives the LLM in Cursor access to Rust Analyzer, crate documentation, and Cargo commands.
Build
Okay, I can help you understand how to use the TypeScript SDK to create different MCP (Mesh Configuration Protocol) servers. However, I need a little more context to give you the *most* helpful answer. Specifically, tell me:

1. **Which MCP SDK are you using?** There are several possibilities, including:
   * **Istio's MCP SDK (likely part of the `envoyproxy/go-control-plane` project, but you'd be using the TypeScript bindings).** This is the most common use case if you're working with Istio or Envoy.
   * **A custom MCP implementation.** If you're building your own MCP server from scratch, you'll need to define your own data structures and server logic.
   * **Another MCP SDK.** There might be other, less common, MCP SDKs available.
2. **What kind of MCP server do you want to create?** What specific resources will it serve? For example:
   * **Route Configuration (RDS) server:** Serves route configurations to Envoy proxies.
   * **Cluster Configuration (CDS) server:** Serves cluster definitions to Envoy proxies.
   * **Listener Configuration (LDS) server:** Serves listener configurations to Envoy proxies.
   * **Endpoint Discovery Service (EDS) server:** Serves endpoint information to Envoy proxies.
   * **A custom resource server:** Serves your own custom resource types.
3. **What is your desired level of detail?** Do you want:
   * **A high-level overview of the process?**
   * **Example code snippets?**
   * **A complete, runnable example?** (This would be more complex and require more information from you.)

**General Steps (Assuming Istio/Envoy MCP):**

Here's a general outline of the steps involved in creating an MCP server using a TypeScript SDK (assuming it's based on the Envoy/Istio MCP protocol):

1. **Install the Necessary Packages:** You'll need to install the appropriate TypeScript packages. This will likely involve:
   * The core gRPC library for TypeScript (`@grpc/grpc-js` or similar).
   * The generated TypeScript code from the Protocol Buffers (`.proto`) definitions for the MCP resources you want to serve (e.g., `envoy.config.route.v3`, `envoy.config.cluster.v3`, etc.). You'll typically use `protoc` (the Protocol Buffer compiler) and a TypeScript plugin to generate these files.
   * Potentially, a library that provides helper functions for working with MCP.

   ```bash
   npm install @grpc/grpc-js google-protobuf
   # And potentially other packages depending on your setup
   ```

2. **Generate TypeScript Code from Protocol Buffers:** Obtain the `.proto` files that define the MCP resources (e.g., from the `envoyproxy/go-control-plane` repository or your own custom definitions). Then use `protoc` to generate TypeScript code from these files; this creates the TypeScript classes that represent the resource types. Example `protoc` command (adjust for your `.proto` file locations and plugin configuration):

   ```bash
   protoc --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts --ts_out=. your_mcp_resource.proto
   ```

3. **Implement the gRPC Service:** Create a TypeScript class that implements the gRPC service defined in the `.proto` files. This class will have methods that correspond to the MCP endpoints (e.g., `StreamRoutes`, `StreamClusters`, etc.). These methods receive requests from Envoy proxies and return the appropriate resource configurations.

4. **Handle the MCP Stream:** The core of an MCP server is handling the bidirectional gRPC stream. Your service implementation will need to:
   * Receive `DiscoveryRequest` messages from the client (Envoy proxy).
   * Process the request, determining which resources the client is requesting.
   * Fetch the appropriate resource configurations from your data store (e.g., a database, a configuration file, or in-memory data).
   * Construct `DiscoveryResponse` messages containing the resource configurations.
   * Send the `DiscoveryResponse` messages back to the client.
   * Handle errors and stream termination gracefully.

5. **Manage Resource Versions (Important for Updates):** MCP uses versioning to ensure that clients receive consistent updates. Track the versions of your resources and include them in the `DiscoveryResponse` messages. When a client sends a `DiscoveryRequest`, it includes the version of the resources it currently has; your server should only send updates if the client's version is out of date.

6. **Implement a Data Store (Configuration Source):** You'll need a way to store and manage the resource configurations your MCP server serves. This could be a simple configuration file, a database, or a more complex configuration management system.

7. **Start the gRPC Server:** Use the gRPC library to start a gRPC server and register your service implementation with it. The server listens for incoming connections from Envoy proxies.

8. **Configure Envoy to Use Your MCP Server:** Configure your Envoy proxies to connect to your MCP server, typically by specifying the server's address and port in the Envoy configuration.

**Example (Conceptual - Requires Adaptation):**

```typescript
// Assuming you've generated TypeScript code from your .proto files:
// import { RouteDiscoveryServiceService, RouteDiscoveryServiceHandlers } from './route_discovery_grpc_pb';
// import { DiscoveryRequest, DiscoveryResponse } from './discovery_pb';

import * as grpc from '@grpc/grpc-js';

// Replace with your actual generated code
interface DiscoveryRequest {
  versionInfo: string;
  node: any; // Replace with your Node type
  resourceNames: string[];
  typeUrl: string;
  responseNonce: string;
  errorDetail: any; // Replace with your Status type
}

interface DiscoveryResponse {
  versionInfo: string;
  resources: any[]; // Replace with your Resource type
  typeUrl: string;
  nonce: string;
  controlPlane: any; // Replace with your ControlPlane type
}

interface RouteDiscoveryServiceHandlers {
  streamRoutes(stream: grpc.ServerDuplexStream<DiscoveryRequest, DiscoveryResponse>): void;
}

class RouteDiscoveryServiceImpl implements RouteDiscoveryServiceHandlers {
  streamRoutes(stream: grpc.ServerDuplexStream<DiscoveryRequest, DiscoveryResponse>): void {
    stream.on('data', (request: DiscoveryRequest) => {
      console.log('Received request:', request);
      // Fetch route configurations based on the request
      const routes = this.fetchRoutes(request);
      // Construct the DiscoveryResponse
      const response: DiscoveryResponse = {
        versionInfo: 'v1', // Replace with your versioning logic
        resources: routes,
        typeUrl: 'envoy.config.route.v3.RouteConfiguration', // Replace with your resource type URL
        nonce: 'some-nonce', // Generate a unique nonce
        controlPlane: null, // Replace if you have control plane info
      };
      stream.write(response);
    });
    stream.on('end', () => {
      console.log('Stream ended');
      stream.end();
    });
    stream.on('error', (err) => {
      console.error('Stream error:', err);
      stream.end();
    });
  }

  private fetchRoutes(request: DiscoveryRequest): any[] {
    // Implement your logic to fetch route configurations
    // based on the request parameters (e.g., resourceNames, versionInfo).
    // This is where you would access your data store.
    console.log('fetching routes');
    return [
      { name: 'route1', domains: ['example.com'] },
      { name: 'route2', domains: ['test.com'] },
    ]; // Replace with actual route configurations
  }
}

function main() {
  const server = new grpc.Server();
  // server.addService(RouteDiscoveryServiceService, new RouteDiscoveryServiceImpl());
  server.addService(
    {
      streamRoutes: {
        path: '/envoy.service.discovery.v3.RouteDiscoveryService/StreamRoutes',
        requestStream: true,
        responseStream: true,
        requestSerialize: (arg: any) => Buffer.from(JSON.stringify(arg)),
        requestDeserialize: (arg: Buffer) => JSON.parse(arg.toString()),
        responseSerialize: (arg: any) => Buffer.from(JSON.stringify(arg)),
        responseDeserialize: (arg: Buffer) => JSON.parse(arg.toString()),
      },
    },
    new RouteDiscoveryServiceImpl()
  );
  server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), (err, port) => {
    if (err) {
      console.error('Failed to bind:', err);
      return;
    }
    console.log(`Server listening on port ${port}`);
    server.start();
  });
}

main();
```

**Important Considerations:**

* **Error Handling:** Implement robust error handling to gracefully handle unexpected situations.
* **Logging:** Add logging to help you debug and monitor your MCP server.
* **Security:** Secure your gRPC server using TLS/SSL.
* **Scalability:** Consider the scalability of your MCP server, especially if you're serving a large number of Envoy proxies.
* **Testing:** Thoroughly test your MCP server to ensure that it's working correctly.

**Next Steps:**

1. **Tell me which MCP SDK you're using.**
2. **Tell me what kind of MCP server you want to create.**
3. **Tell me your desired level of detail.**

Once I have this information, I can provide you with more specific and helpful guidance.
ActionKit MCP Starter
Starter code for an MCP server powered by ActionKit.
Wisdom MCP Gateway
A stdio gateway for the Enterpret Wisdom MCP SSE server
MCP with Langchain Sample Setup
Okay, here's a sample setup for an MCP (presumably referring to a **Multi-Client Processing** or **Message Communication Protocol**) server and client, designed to be compatible with LangChain. This example focuses on a simple request-response pattern, suitable for offloading LangChain tasks to a separate process or machine.

**Important Considerations:**

* **Serialization:** LangChain objects can be complex. You'll need a robust serialization/deserialization method (e.g., `pickle`, `json`, `cloudpickle`) to send data between the server and client. `cloudpickle` is often preferred for its ability to handle more complex Python objects, including closures and functions.
* **Error Handling:** Implement comprehensive error handling on both the server and client to gracefully manage exceptions and network issues.
* **Security:** If you're transmitting data over a network, consider security measures like encryption (e.g., TLS/SSL) to protect sensitive information.
* **Asynchronous Operations:** For better performance, especially with LangChain tasks that might be I/O bound, consider using asynchronous programming (e.g., `asyncio`). This example shows a basic synchronous version for clarity.
* **Message Format:** Define a clear message format (e.g., JSON with specific keys) for requests and responses.
* **LangChain Compatibility:** The key is to serialize the *input* to a LangChain component (like a Chain or LLM) on the client, send it to the server, deserialize it, run the LangChain component on the server, serialize the *output*, and send it back to the client.

**Python Code (using the `socket` module for simplicity):**

**1. Server (server.py):**

```python
import socket
import pickle  # Or json, cloudpickle
import os

# Example LangChain setup (replace with your actual chain)
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"  # Replace with your actual API key

llm = OpenAI(temperature=0.7)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)

HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
PORT = 65432        # Port to listen on (non-privileged ports are > 1023)

def process_langchain_request(data):
    """Processes a LangChain request. This is the core logic on the server."""
    try:
        # Deserialize the input (assuming it's a dictionary)
        input_data = pickle.loads(data)  # Or json.loads(data) if using JSON
        # Crucially, ensure input_data matches what your LangChain component expects.
        # For example, if your chain expects a dictionary with a "text" key:
        # input_text = input_data["text"]

        # Run the LangChain component
        result = chain.run(input_data["product"])  # Replace with your actual LangChain call

        # Serialize the result
        return pickle.dumps(result)  # Or json.dumps(result)
    except Exception as e:
        print(f"Error processing request: {e}")
        return pickle.dumps({"error": str(e)})  # Serialize the error message

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    print(f"Server listening on {HOST}:{PORT}")
    conn, addr = s.accept()
    with conn:
        print(f"Connected by {addr}")
        while True:
            data = conn.recv(4096)  # Adjust buffer size as needed
            if not data:
                break
            response = process_langchain_request(data)
            conn.sendall(response)
```

**2. Client (client.py):**

```python
import socket
import pickle  # Or json, cloudpickle

HOST = '127.0.0.1'  # The server's hostname or IP address
PORT = 65432        # The port used by the server

def send_langchain_request(input_data):
    """Sends a LangChain request to the server and returns the response."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            # Serialize the input data
            serialized_data = pickle.dumps(input_data)  # Or json.dumps(input_data)
            s.sendall(serialized_data)
            received = s.recv(4096)  # Adjust buffer size as needed
            # Deserialize the response
            return pickle.loads(received)  # Or json.loads(received)
    except Exception as e:
        print(f"Error sending request: {e}")
        return {"error": str(e)}

if __name__ == "__main__":
    # Example usage
    input_data = {"product": "eco-friendly cleaning products"}  # Replace with your actual input
    response = send_langchain_request(input_data)
    # The chain returns a string on success, so check for the error dict explicitly
    if isinstance(response, dict) and "error" in response:
        print(f"Error from server: {response['error']}")
    else:
        print(f"Server response: {response}")
```

**How to Run:**

1. **Install LangChain:** `pip install langchain openai`
2. **Set your OpenAI API Key:** Replace `"YOUR_API_KEY"` in `server.py` with your actual OpenAI API key.
3. **Run the server:** `python server.py`
4. **Run the client:** `python client.py`

**Explanation:**

* **Server (`server.py`):**
  * Creates a socket and listens for incoming connections.
  * When a client connects, it receives data, deserializes it (using `pickle`), processes it with a LangChain component (here, a simple `LLMChain`), serializes the result, and sends it back to the client.
  * Includes basic error handling.
* **Client (`client.py`):**
  * Creates a socket and connects to the server.
  * Serializes the input data (using `pickle`), sends it to the server, receives the response, deserializes it, and prints the result.
  * Includes basic error handling.
* **Serialization:** `pickle` (or `json`, `cloudpickle`) converts Python objects into a byte stream that can be sent over the network. The same method must be used for both serialization and deserialization.
* **LangChain Integration:** The `process_langchain_request` function on the server is where the LangChain logic resides. It receives the serialized input, deserializes it, runs the LangChain component, and serializes the output.

**Key Improvements and Considerations for Production:**

* **Asynchronous Communication (using `asyncio`):** Use `asyncio` for non-blocking I/O, allowing the server to handle multiple clients concurrently. This significantly improves performance.
* **Message Queues (e.g., RabbitMQ, Redis):** Instead of direct socket connections, use a message queue for more robust and scalable communication. This decouples the client and server and allows for asynchronous processing.
* **gRPC:** Consider using gRPC for efficient and type-safe communication between the client and server. gRPC uses Protocol Buffers for serialization, which is generally faster and more compact than `pickle` or `json`.
* **Authentication and Authorization:** Implement authentication and authorization to secure the server and prevent unauthorized access.
* **Logging:** Use a logging library (e.g., `logging`) to record events and errors for debugging and monitoring.
* **Configuration:** Use a configuration file (e.g., YAML, JSON) to store settings like the server address, port, and API keys.
* **Monitoring:** Monitor the server's performance and resource usage to identify bottlenecks and potential issues.
* **Data Validation:** Validate the input data on both the client and server to prevent errors and security vulnerabilities.
* **Retry Logic:** Implement retry logic on the client to handle transient network errors.
* **Heartbeat Mechanism:** Implement a heartbeat mechanism to detect and handle server failures.
* **Cloudpickle:** For complex LangChain objects, especially those involving custom functions or classes, `cloudpickle` is often necessary to ensure proper serialization and deserialization. Install it with `pip install cloudpickle`.

**Example using `cloudpickle`:**

```python
# Server (server.py)
import cloudpickle

def process_langchain_request(data):
    try:
        input_data = cloudpickle.loads(data)
        result = chain.run(input_data["product"])
        return cloudpickle.dumps(result)
    except Exception as e:
        print(f"Error processing request: {e}")
        return cloudpickle.dumps({"error": str(e)})

# Client (client.py)
import cloudpickle

def send_langchain_request(input_data):
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            s.sendall(cloudpickle.dumps(input_data))
            received = s.recv(4096)
            return cloudpickle.loads(received)
    except Exception as e:
        print(f"Error sending request: {e}")
        return {"error": str(e)}
```

This more complete example provides a solid foundation for building a distributed LangChain application. Remember to adapt the code to your specific needs and consider the production-level improvements mentioned above.
TypeScript MCP Server
Database Analyzer MCP Server
Intervals.icu MCP Server
Mirror
Comedy MCP Server
MCP server using the C# SDK to enhance comments with jokes from JokeAPI.

XFetch Mcp
A more powerful Fetch. Retrieves content from any web page, including those protected by Cloudflare and other security systems.
CityGML MCP Server
T2_C2
Please be more specific. "Server code for MCP" is too vague. MCP could refer to many things. To give you relevant information, I need to know which MCP you're referring to. For example, are you looking for:

* **Minecraft Protocol (MCP):** If you're referring to the protocol used for Minecraft servers, I can provide information about how to implement a server that uses the Minecraft protocol. This is a complex topic, and I'd need to know which version of Minecraft you're targeting (e.g., 1.19.4, 1.20.1, etc.) and what language you want to use (e.g., Java, Python, C++). I can also provide links to existing libraries that handle the protocol.
* **Minecraft Coder Pack (MCP):** If you're referring to the tool used to decompile, deobfuscate, and recompile Minecraft code, I can explain how it works and where to find it. However, I cannot provide the actual decompiled Minecraft code, as that would likely violate copyright.
* **A specific application or system named MCP:** If you're referring to a specific application or system that uses the acronym MCP, please provide more context. What does the MCP do? Who developed it?

Once you provide more details, I can give you a more helpful and accurate response. In the meantime, here's a general example of a very basic (and incomplete) Minecraft server in Python, just to illustrate the concept:

```python
# This is a VERY basic example and not a functional Minecraft server.
# It's just to illustrate the concept. You'll need a proper Minecraft
# protocol implementation to build a real server.

import socket

HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
PORT = 25565        # Port to listen on (non-privileged ports are > 1023)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    print(f"Listening on {HOST}:{PORT}")
    conn, addr = s.accept()
    with conn:
        print(f"Connected by {addr}")
        while True:
            data = conn.recv(1024)
            if not data:
                break
            print(f"Received: {data}")
            # Here you would need to parse the Minecraft protocol data
            # and respond accordingly. This is where a library like
            # `mcproto` or similar would be essential.
            response = b"Hello, client!"  # Example response
            conn.sendall(response)
```

**Important Considerations for a Real Minecraft Server:**

* **Minecraft Protocol:** The Minecraft protocol is complex and binary. You'll need to understand how to encode and decode packets.
* **Libraries:** Use a library to handle the protocol. Don't try to implement it from scratch unless you have a very good reason.
* **Performance:** Minecraft servers need to handle many connections and process data quickly. Choose a language and libraries that are performant.
* **Security:** Protect your server from attacks.
* **World Generation:** You'll need to generate and manage the game world.
* **Player Management:** Handle player authentication, inventory, and other player-related data.

Please provide more information about what you're trying to do, and I can give you more specific guidance.
MCP Actions Adapter
A simple adapter for converting an MCP server into a GPT Actions-compatible API.
MCP MongoDB Integration
This project demonstrates integrating MongoDB with the Model Context Protocol (MCP) to give AI assistants database-interaction capabilities.
xtrace-mcp
OpenAPI MCP Server
Allows AI to understand complex OpenAPI specifications using simple language.
Chrome MCP Server
MCP server for integration between Chrome extensions and Claude AI