MCP demo (DeepSeek as Client's LLM)

Okay, I can help you outline the steps to run a minimal client-server demo using the DeepSeek API, focusing on the core concepts and providing example code snippets. Since I can't directly execute code or set up environments, I'll give you the instructions and code you'll need to adapt and run yourself.

**Important Considerations Before You Start:**

* **DeepSeek API Key:** You'll need a valid DeepSeek API key. Obtain one from the DeepSeek AI platform. Keep it secure and don't hardcode it directly into your scripts (use environment variables or configuration files).
* **Python Environment:** I'll assume you're using Python. Make sure you have Python 3.7+ installed.
* **Libraries:** You'll need the `requests` library for making HTTP requests to the DeepSeek API. Install it with `pip install requests`. You might also want `Flask` or `FastAPI` for a simple server.

**Conceptual Overview**

1. **Client:** The client sends a request to the server. In this case, the request contains a prompt that you want DeepSeek to complete.
2. **Server:** The server receives the request from the client, calls the DeepSeek API with the prompt, gets the response from DeepSeek, and sends the response back to the client.
3. **DeepSeek API:** This is the external service that performs the language model inference.

**Step-by-Step Instructions and Code Examples**

**1. Server (using Flask)**

```python
# server.py
from flask import Flask, request, jsonify
import requests
import os

app = Flask(__name__)

# Read your DeepSeek API key from an environment variable -- never hardcode it
DEEPSEEK_API_KEY = os.environ.get("DEEPSEEK_API_KEY")
DEEPSEEK_API_URL = "https://api.deepseek.com/v1/chat/completions"  # Replace if different

@app.route('/generate', methods=['POST'])
def generate_text():
    try:
        data = request.get_json()
        prompt = data.get('prompt')
        if not prompt:
            return jsonify({'error': 'Prompt is required'}), 400

        headers = {
            'Content-Type': 'application/json',
            'Authorization': f'Bearer {DEEPSEEK_API_KEY}'
        }
        payload = {
            "model": "deepseek-chat",  # Or another DeepSeek model
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 200,   # Adjust as needed
            "temperature": 0.7   # Adjust as needed
        }

        response = requests.post(DEEPSEEK_API_URL, headers=headers, json=payload)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)

        deepseek_data = response.json()
        generated_text = deepseek_data['choices'][0]['message']['content']
        return jsonify({'generated_text': generated_text})

    except requests.exceptions.RequestException as e:
        print(f"API Request Error: {e}")
        return jsonify({'error': f'API Request Error: {e}'}), 500
    except Exception as e:
        print(f"Server Error: {e}")
        return jsonify({'error': f'Server Error: {e}'}), 500

if __name__ == '__main__':
    app.run(debug=True, port=5000)  # Or any port you prefer
```

**Explanation of `server.py`:**

* **Imports:** Imports the necessary libraries (`flask`, `requests`, `os`).
* **API Key:** Retrieves the DeepSeek API key from an environment variable. **Never hardcode your API key directly in the script!**
* **Flask App:** Creates a Flask web application.
* **`/generate` Route:** Defines a route that listens for POST requests at `/generate`.
* **Request Handling:**
    * Extracts the `prompt` from the JSON request body.
    * Constructs the headers for the DeepSeek API request, including the `Authorization` header with your API key.
    * Creates the payload (JSON data) for the DeepSeek API request: the model name, the prompt (formatted as a message), and parameters like `max_tokens` and `temperature`.
    * Sends the request to the DeepSeek API using `requests.post()`.
    * Handles potential errors (e.g., network issues, invalid API key).
* **Response Handling:**
    * Parses the JSON response from the DeepSeek API.
    * Extracts the generated text from the response. The exact structure depends on the DeepSeek API; the code assumes a structure like `deepseek_data['choices'][0]['message']['content']`. **You might need to adjust this based on the actual DeepSeek API response format.**
    * Returns the generated text as a JSON response to the client.
* **Error Handling:** `try...except` blocks catch potential errors during the API request and server processing and return error messages to the client.
* **Running the App:** Starts the Flask development server.

**2. Client (using Python)**

```python
# client.py
import requests
import json

SERVER_URL = "http://localhost:5000/generate"  # Adjust if your server runs on a different address/port

def generate_text(prompt):
    try:
        payload = {'prompt': prompt}
        headers = {'Content-Type': 'application/json'}
        response = requests.post(SERVER_URL, headers=headers, data=json.dumps(payload))
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        data = response.json()
        generated_text = data.get('generated_text')
        return generated_text
    except requests.exceptions.RequestException as e:
        print(f"Request Error: {e}")
        return None
    except Exception as e:
        print(f"Error: {e}")
        return None

if __name__ == '__main__':
    user_prompt = "Write a short story about a cat who goes on an adventure."
    generated_text = generate_text(user_prompt)
    if generated_text:
        print("Generated Text:")
        print(generated_text)
    else:
        print("Failed to generate text.")
```

**Explanation of `client.py`:**

* **Imports:** Imports the `requests` and `json` libraries.
* **`SERVER_URL`:** Defines the URL of the server's `/generate` endpoint. Make sure this matches the address and port where your server is running.
* **`generate_text(prompt)` Function:**
    * Takes a `prompt` as input.
    * Constructs the payload (JSON data) to send to the server and sets the `Content-Type` header to `application/json`.
    * Sends a POST request to the server using `requests.post()`.
    * Handles potential errors (e.g., network issues, server not available).
    * Parses the JSON response from the server, extracts the `generated_text`, and returns it.
* **Main Execution Block:**
    * Sets a sample `user_prompt`.
    * Calls the `generate_text()` function to get the generated text.
    * Prints the generated text to the console.

**3. Running the Demo**

1. **Set the API Key:** Before running anything, set the `DEEPSEEK_API_KEY` environment variable. How you do this depends on your operating system:

    * **Linux/macOS:**
      ```bash
      export DEEPSEEK_API_KEY="YOUR_DEEPSEEK_API_KEY"
      ```
    * **Windows (Command Prompt):**
      ```cmd
      set DEEPSEEK_API_KEY=YOUR_DEEPSEEK_API_KEY
      ```
    * **Windows (PowerShell):**
      ```powershell
      $env:DEEPSEEK_API_KEY="YOUR_DEEPSEEK_API_KEY"
      ```

    **Replace `YOUR_DEEPSEEK_API_KEY` with your actual API key.**

2. **Run the Server:** Open a terminal or command prompt, navigate to the directory where you saved `server.py`, and run:

    ```bash
    python server.py
    ```

    The Flask development server will start, and you'll see output indicating that it's running.

3. **Run the Client:** Open another terminal or command prompt, navigate to the directory where you saved `client.py`, and run:

    ```bash
    python client.py
    ```

    The client will send a request to the server, the server will call the DeepSeek API, and the generated text will be printed to the client's console.

**Important Notes and Troubleshooting**

* **API Key:** Double-check that your API key is correct and that you've set the environment variable properly. An incorrect API key will result in an authentication error.
* **Network Connectivity:** Make sure your server has internet access to reach the DeepSeek API.
* **Error Messages:** Carefully examine any error messages you receive. They often provide clues about what's going wrong.
* **DeepSeek API Response Format:** The code assumes a specific format for the DeepSeek API response. If the API changes its response format, you'll need to update the code accordingly. Refer to the DeepSeek API documentation for the correct format.
* **Rate Limits:** Be aware of the DeepSeek API's rate limits. If you send too many requests in a short period, you might get rate-limited. Implement error handling and potentially retry logic to deal with rate limits.
* **Security:** For production environments, use a more robust web server (like Gunicorn or uWSGI) instead of the Flask development server. Also, consider using HTTPS for secure communication between the client and server.
* **Model Selection:** The code uses `"deepseek-chat"` as the model. Check the DeepSeek API documentation for other available models and their capabilities.
* **Prompt Engineering:** The quality of the generated text depends heavily on the prompt you provide. Experiment with different prompts to get the best results.
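The rate-limit note above suggests adding retry logic. One common approach (not part of the demo itself) is exponential backoff; a minimal sketch, where `call` stands in for whatever request function you want to retry:

```python
import random
import time

def with_retries(call, max_attempts=4, base_delay=1.0):
    """Retry `call` with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Backoff doubles each attempt (1s, 2s, 4s, ...) plus a little
            # jitter so many clients don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

In `server.py` you could wrap the `requests.post(...)` call in a small lambda and pass it to `with_retries`; in practice you'd also want to retry only on retryable statuses (e.g. HTTP 429), which this sketch omits for brevity.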
**Simplified Chinese Translation of Key Phrases**

Here are some key phrases translated into Simplified Chinese:

* **Prompt:** 提示 (tíshì)
* **Generated Text:** 生成的文本 (shēngchéng de wénběn)
* **API Key:** API 密钥 (API mìyào)
* **Server:** 服务器 (fúwùqì)
* **Client:** 客户端 (kèhùduān)
* **Error:** 错误 (cuòwù)
* **Request:** 请求 (qǐngqiú)
* **Response:** 响应 (xiǎngyìng)
* **Authentication:** 身份验证 (shēnfèn yànzhèng)
* **Rate Limit:** 速率限制 (sùlǜ xiànzhì)

This detailed guide should help you get started with a basic DeepSeek API client-server demo. Remember to adapt the code to your specific needs and consult the DeepSeek API documentation for the most up-to-date information. Good luck!
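As the troubleshooting notes say, the code assumes the documented `choices[0].message.content` response shape, which may change. A defensive extraction helper (a sketch under that same assumption) returns `None` instead of raising an opaque `KeyError` when the shape differs:

```python
def extract_generated_text(deepseek_data):
    """Pull the assistant message out of a chat-completions-style response dict.

    Assumes the shape {"choices": [{"message": {"content": ...}}]}.
    Returns None rather than raising if the response looks different,
    so the server can report a clear error to the client.
    """
    try:
        return deepseek_data["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None
```

In `server.py` you could call this instead of indexing the response directly, and return a 502-style error when it yields `None`.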

Bill-Cai


README

MCP demo (DeepSeek as the client's LLM)

How to run

  1. Configure the .env file

    Add a .env file to the project root and set your DeepSeek API key:

    # DeepSeek
    DEEPSEEK_API_KEY=
    
  2. Run the MCP client & server

    $ pip install -r requirements.txt
    $ python client.py weather.py
    
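Step 1 above has the demo read DEEPSEEK_API_KEY from a .env file; that is usually done with the python-dotenv package. Purely as an illustration of what such loading does, here is a minimal stdlib-only sketch (the parsing rules are simplified assumptions, not the full dotenv format):

```python
import os

def load_dotenv_minimal(path=".env"):
    """Tiny .env reader: KEY=VALUE lines, '#' comments; no quoting/expansion."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

# Usage:
# load_dotenv_minimal()
# api_key = os.environ.get("DEEPSEEK_API_KEY")
```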

References

  1. https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-python
  2. Building an MCP weather-query agent with DeepSeek-V3 (深圳dengdi, DeepSeek tech community)
