MCP with Langchain Sample Setup

Here's a sample setup for an MCP (Model Context Protocol) server and client designed to be compatible with LangChain. This example focuses on a simple request-response pattern, suitable for offloading LangChain tasks to a separate process or machine.

**Important Considerations:**

* **Serialization:** LangChain objects can be complex. You'll need a robust serialization/deserialization method (e.g., `pickle`, `json`, `cloudpickle`) to send data between the server and client. `cloudpickle` is often preferred for its ability to handle more complex Python objects, including closures and functions.
* **Error Handling:** Implement comprehensive error handling on both the server and client to gracefully manage exceptions and network issues.
* **Security:** If you're transmitting data over a network, consider security measures like encryption (e.g., TLS/SSL) to protect sensitive information.
* **Asynchronous Operations:** For better performance, especially with LangChain tasks that may be I/O bound, consider asynchronous programming (e.g., `asyncio`). This example shows a basic synchronous version for clarity.
* **Message Format:** Define a clear message format (e.g., JSON with specific keys) for requests and responses.
* **LangChain Compatibility:** The key is to serialize the *input* to a LangChain component (such as a Chain or LLM) on the client, send it to the server, deserialize it, run the LangChain component on the server, serialize the *output*, and send it back to the client.

**Python Code (using the `socket` module for simplicity):**

**1. Server (server.py):**

```python
import socket
import pickle  # Or json, cloudpickle
import os

# Example LangChain setup (replace with your actual chain)
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"  # Replace with your actual API key

llm = OpenAI(temperature=0.7)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)

HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
PORT = 65432        # Port to listen on (non-privileged ports are > 1023)


def process_langchain_request(data):
    """Processes a LangChain request. This is the core logic on the server."""
    try:
        # Deserialize the input (assuming it's a dictionary)
        input_data = pickle.loads(data)  # Or json.loads(data) if using JSON

        # Crucially, ensure input_data matches what your LangChain component expects.
        # For example, if your chain expects a dictionary with a "text" key:
        # input_text = input_data["text"]

        # Run the LangChain component
        result = chain.run(input_data["product"])  # Replace with your actual LangChain call

        # Serialize the result
        serialized_result = pickle.dumps(result)  # Or json.dumps(result)
        return serialized_result
    except Exception as e:
        print(f"Error processing request: {e}")
        return pickle.dumps({"error": str(e)})  # Serialize the error message


with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    print(f"Server listening on {HOST}:{PORT}")
    conn, addr = s.accept()
    with conn:
        print(f"Connected by {addr}")
        while True:
            data = conn.recv(4096)  # Adjust buffer size as needed
            if not data:
                break
            response = process_langchain_request(data)
            conn.sendall(response)
```
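One practical note before the client: the server above and the client below each use a single `recv(4096)` call, which assumes every serialized message fits in one read. As a hedged aside on the Message Format consideration, here is a minimal sketch of length-prefixed framing helpers that both sides could share. The helper names (`send_msg`, `recv_msg`, `_recv_exact`) are illustrative and not part of the original sample.

```python
# Sketch (assumption, not part of the original sample): length-prefixed framing.
# A lone recv(4096) can truncate large pickled payloads, so each message is
# prefixed with its length as a 4-byte big-endian integer.
import struct


def send_msg(sock, payload: bytes) -> None:
    # Send the 4-byte length header followed by the payload itself.
    sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_msg(sock) -> bytes:
    # Read the 4-byte length header, then read exactly that many bytes.
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack("!I", header)
    return _recv_exact(sock, length)


def _recv_exact(sock, n: int) -> bytes:
    # Loop until n bytes have arrived; recv() may return partial chunks.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed before full message arrived")
        buf += chunk
    return buf
```

With helpers like these, the `sendall(...)` / `recv(4096)` pairs in both scripts would be replaced by `send_msg(...)` / `recv_msg(...)`.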
**2. Client (client.py):**

```python
import socket
import pickle  # Or json, cloudpickle

HOST = '127.0.0.1'  # The server's hostname or IP address
PORT = 65432        # The port used by the server


def send_langchain_request(input_data):
    """Sends a LangChain request to the server and returns the response."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))

            # Serialize the input data
            serialized_data = pickle.dumps(input_data)  # Or json.dumps(input_data)
            s.sendall(serialized_data)

            received = s.recv(4096)  # Adjust buffer size as needed

            # Deserialize the response
            deserialized_response = pickle.loads(received)  # Or json.loads(received)
            return deserialized_response
    except Exception as e:
        print(f"Error sending request: {e}")
        return {"error": str(e)}


if __name__ == "__main__":
    # Example usage
    input_data = {"product": "eco-friendly cleaning products"}  # Replace with your actual input
    response = send_langchain_request(input_data)

    # A successful response is a plain string; errors come back as a dict.
    if isinstance(response, dict) and "error" in response:
        print(f"Error from server: {response['error']}")
    else:
        print(f"Server response: {response}")
```

**How to Run:**

1. **Install LangChain:** `pip install langchain openai`
2. **Set your OpenAI API key:** Replace `"YOUR_API_KEY"` in `server.py` with your actual OpenAI API key.
3. **Run the server:** `python server.py`
4. **Run the client:** `python client.py`

**Explanation:**

* **Server (`server.py`):**
    * Creates a socket and listens for incoming connections.
    * When a client connects, it receives data, deserializes it (using `pickle`), processes it with a LangChain component (in this case, a simple `LLMChain`), serializes the result, and sends it back to the client.
    * Includes basic error handling.
* **Client (`client.py`):**
    * Creates a socket and connects to the server.
    * Serializes the input data (using `pickle`), sends it to the server, receives the response, deserializes it, and prints the result.
    * Includes basic error handling.
* **Serialization:** `pickle` (or `json`, `cloudpickle`) converts Python objects into a byte stream that can be sent over the network. The same method must be used for both serialization and deserialization.
* **LangChain Integration:** The `process_langchain_request` function on the server is where the LangChain logic resides. It receives the serialized input, deserializes it, runs the LangChain component, and serializes the output.

**Key Improvements and Considerations for Production:**

* **Asynchronous Communication (using `asyncio`):** Use `asyncio` for non-blocking I/O, allowing the server to handle multiple clients concurrently. This significantly improves throughput; see the sketch after this list.
* **Message Queues (e.g., RabbitMQ, Redis):** Instead of direct socket connections, use a message queue for more robust and scalable communication. This decouples the client and server and allows for asynchronous processing.
* **gRPC:** Consider gRPC for efficient, type-safe communication between client and server. gRPC uses Protocol Buffers for serialization, which is generally faster and more compact than `pickle` or `json`.
* **Authentication and Authorization:** Implement authentication and authorization to secure the server and prevent unauthorized access.
* **Logging:** Use a logging library (e.g., `logging`) to record events and errors for debugging and monitoring.
* **Configuration:** Use a configuration file (e.g., YAML, JSON) to store settings such as the server address, port, and API keys.
* **Monitoring:** Monitor the server's performance and resource usage to identify bottlenecks and potential issues.
* **Data Validation:** Validate input data on both the client and server to prevent errors and security vulnerabilities.
* **Retry Logic:** Implement retry logic on the client to handle transient network errors.
* **Heartbeat Mechanism:** Implement a heartbeat mechanism to detect and handle server failures.
* **Cloudpickle:** For complex LangChain objects, especially those involving custom functions or classes, `cloudpickle` is often necessary for proper serialization and deserialization. Install it with `pip install cloudpickle`.
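Expanding on the asynchronous-communication bullet above, here is a minimal sketch of how the same server loop might look with `asyncio`. It assumes `process_langchain_request()` from `server.py` is available in scope (for example, after moving the blocking socket loop under an `if __name__ == "__main__":` guard so the module can be imported safely); it is a sketch under those assumptions, not the project's implementation.

```python
# Minimal asyncio variant of the server loop (a sketch, not part of the
# original sample). Blocking LangChain calls are pushed to a worker thread so
# the event loop can serve multiple clients concurrently.
import asyncio

HOST = "127.0.0.1"
PORT = 65432


async def handle_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    data = await reader.read(4096)  # adjust buffer size / framing as needed
    if data:
        # chain.run() is blocking, so run it in a thread to keep the loop free.
        response = await asyncio.to_thread(process_langchain_request, data)
        writer.write(response)
        await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main() -> None:
    server = await asyncio.start_server(handle_client, HOST, PORT)
    print(f"Async server listening on {HOST}:{PORT}")
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())
```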
**Example using `cloudpickle`:**

```python
# Server (server.py)
import cloudpickle

def process_langchain_request(data):
    try:
        input_data = cloudpickle.loads(data)
        result = chain.run(input_data["product"])
        serialized_result = cloudpickle.dumps(result)
        return serialized_result
    except Exception as e:
        print(f"Error processing request: {e}")
        return cloudpickle.dumps({"error": str(e)})


# Client (client.py)
import cloudpickle

def send_langchain_request(input_data):
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            serialized_data = cloudpickle.dumps(input_data)
            s.sendall(serialized_data)
            received = s.recv(4096)
            deserialized_response = cloudpickle.loads(received)
            return deserialized_response
    except Exception as e:
        print(f"Error sending request: {e}")
        return {"error": str(e)}
```

This example provides a foundation for building a distributed LangChain application. Remember to adapt the code to your specific needs and consider the production-level improvements listed above.

**Chinese Translation of Key Concepts:**

* **MCP (Model Context Protocol):** 模型上下文协议 (Móxíng shàngxiàwén xiéyì)
* **Serialization:** 序列化 (Xùlièhuà)
* **Deserialization:** 反序列化 (Fǎn xùlièhuà)
* **LangChain:** LangChain (no direct translation; use the English name)
* **Socket:** 套接字 (Tàojiēzì)
* **Asynchronous:** 异步 (Yìbù)
* **Message Queue:** 消息队列 (Xiāoxī duìliè)
* **gRPC:** gRPC (no direct translation; use the English name)
* **Protocol Buffers:** 协议缓冲区 (Xiéyì huǎnchōngqū)
* **Authentication:** 身份验证 (Shēnfèn yànzhèng)
* **Authorization:** 授权 (Shòuquán)
* **Logging:** 日志记录 (Rìzhì jìlù)
* **Cloudpickle:** Cloudpickle (no direct translation; use the English name)

TaQuangTu

Developer Tools

README

MCP with Langchain Sample Setup

Start the MCP servers

Three sample MCP servers are provided in the mcp_servers folder, each exposing one or two functions. Start them so they listen for requests on three different ports: 8000, 8001, and 8002.

```bash
cd mcp_servers
nohup python math_mcp_server.py > math.log 2>&1 &
nohup python weather_mcp_server.py > weather.log 2>&1 &
nohup python which_llm_to_use_mcp_server.py > which_llm.log 2>&1 &
```
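For orientation, here is a hypothetical sketch of what one of these servers, `math_mcp_server.py`, might look like using the official MCP Python SDK's `FastMCP` helper with SSE transport on port 8000. The tool names, transport, and port wiring are assumptions; the actual code in the repository may differ.

```python
# Hypothetical sketch of math_mcp_server.py (assumptions: mcp SDK, SSE on 8000).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("math", port=8000)


@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


if __name__ == "__main__":
    # Serve the tools over SSE so an HTTP-based MCP client can connect.
    mcp.run(transport="sse")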

Start the MCP client

The MCP client acts as the interface connecting the user to the MCP servers.

```bash
export OPENAI_API_KEY=sk-svcacct-Tn_rKHd............................your_key_please
streamlit run my_chat_bot_app.py
```

The streamlit command starts a web UI listening on port 8501. You can interact with the bot from there.
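For orientation, here is a hypothetical sketch of how a client like `my_chat_bot_app.py` might expose the three servers' tools to a LangChain agent. It assumes a recent version of the langchain-mcp-adapters and langgraph packages and SSE endpoints on ports 8000-8002; the actual app wraps this in a Streamlit UI and may be structured differently.

```python
# Hypothetical sketch: connect the three MCP servers to a LangChain agent
# (assumptions: langchain-mcp-adapters, langgraph, SSE endpoints on 8000-8002).
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def ask(question: str) -> str:
    client = MultiServerMCPClient({
        "math":    {"url": "http://localhost:8000/sse", "transport": "sse"},
        "weather": {"url": "http://localhost:8001/sse", "transport": "sse"},
        "llms":    {"url": "http://localhost:8002/sse", "transport": "sse"},
    })
    tools = await client.get_tools()  # MCP tools exposed as LangChain tools
    agent = create_react_agent("openai:gpt-4o-mini", tools)
    result = await agent.ainvoke({"messages": [("user", question)]})
    return result["messages"][-1].content


if __name__ == "__main__":
    print(asyncio.run(ask("What is 12 * 7?")))
```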

Recommended Servers

Playwright MCP Server

A Model Context Protocol server that enables large language models to interact with web pages through structured accessibility snapshots, without requiring vision models or screenshots.

Official
Featured
TypeScript
Magic Component Platform (MCP)

An AI-powered tool that generates modern UI components from natural-language descriptions and integrates with popular IDEs, streamlining the UI development workflow.

Official
Featured
Local
TypeScript
MCP Package Docs Server

Enables large language models to efficiently access and retrieve structured documentation for Go, Python, and NPM packages, enhancing software development with multi-language support and performance optimizations.

Featured
Local
TypeScript
Claude Code MCP

An implementation of Claude Code as a Model Context Protocol (MCP) server, exposing Claude's software-engineering capabilities (code generation, editing, review, and file operations) through a standardized MCP interface.

Featured
Local
JavaScript
@kazuph/mcp-taskmanager

A Model Context Protocol server for task management. It allows Claude Desktop (or any MCP client) to manage and execute tasks in a queue-based system.

Featured
Local
JavaScript
mermaid-mcp-server

A Model Context Protocol (MCP) server that converts Mermaid diagrams into PNG images.

Featured
JavaScript
Jira-Context-MCP

An MCP server that provides Jira ticket information to AI coding assistants such as Cursor.

Featured
TypeScript
Linear MCP Server

A Model Context Protocol server that integrates with Linear's issue-tracking system, allowing large language models (LLMs) to create, update, search, and comment on Linear issues through natural-language interaction.

Featured
JavaScript
Sequential Thinking MCP Server

Facilitates structured problem solving by breaking complex problems into sequential steps, supporting revisions and enabling multiple solution paths through full MCP integration.

Featured
Python
Curri MCP Server

Enables interaction with the Curri API by managing text notes, providing note-creation tools, and generating summaries using structured prompts.

Official
Local
JavaScript