MCPClient Python Application

Okay, I understand. You want an implementation that allows an MCP (presumably "Minecraft Protocol") server to interact with an Ollama model. This is a complex task involving several steps. Here's a breakdown of the concepts, potential approaches, and a *conceptual* implementation outline. Keep in mind that this is a high-level overview; a complete, working solution would require significant coding effort.

**Understanding the Components**

* **MCP Server (Minecraft Protocol Server):** The server that handles Minecraft client connections, game logic, and world management. We need to be able to intercept or inject messages into this server, which likely requires a server mod (e.g., using Fabric, Forge, or a custom server implementation).
* **Ollama Model:** A large language model (LLM) served by Ollama. We need to be able to send text prompts to the Ollama API and receive text responses.
* **Interaction:** The core of the problem is *how* the MCP server and the Ollama model will interact. Some possibilities:
  * **Chatbot:** Players type commands or messages in the Minecraft chat, which are sent to the Ollama model. The model's response is displayed back in the chat.
  * **NPC Dialogue:** Non-player characters (NPCs) have dialogue powered by the Ollama model, which generates responses based on player interactions or game events.
  * **World Generation/Modification:** The model could generate descriptions of terrain, structures, or quests, which are then used to modify the Minecraft world.
  * **Game Logic:** The model could make decisions for AI entities or influence game events based on player actions.

**Conceptual Implementation Outline**

This outline focuses on the "Chatbot" interaction, as it's the most straightforward to explain.

1. **Minecraft Server Mod (e.g., Fabric/Forge):**
   * **Dependency:** Add the necessary dependencies for your chosen mod loader (Fabric or Forge).
   * **Event Listener:** Create an event listener that intercepts chat messages sent by players. This is the crucial part where you "hook" into the Minecraft server.
   * **Command Handling (Optional):** Register a custom command (e.g., `/ask <prompt>`) that players can use to specifically trigger the Ollama model. This is cleaner than intercepting *all* chat messages.
   * **Configuration:** Allow configuration of the Ollama API endpoint (e.g., `http://localhost:11434/api/generate`).
   * **Asynchronous Task:** When a chat message (or command) is received, create an asynchronous task to send the prompt to the Ollama API. This prevents the Minecraft server from blocking while waiting for the model's response.
2. **Ollama API Interaction (Java/Kotlin Code within the Mod):**
   * **HTTP Client:** Use a Java HTTP client library (e.g., `java.net.http.HttpClient`, OkHttp, or Apache HttpClient) to make POST requests to the Ollama API.
   * **JSON Payload:** Construct a JSON payload for the `/api/generate` endpoint. The payload should include:
     * `model`: The name of the Ollama model to use (e.g., "llama2").
     * `prompt`: The player's chat message (or the command argument).
     * (Optional) `stream`: Set to `false` for a single response, or `true` for streaming responses.
   * **Error Handling:** Implement robust error handling to catch network errors, API errors, and JSON parsing errors.
   * **Rate Limiting (Important):** Implement rate limiting to avoid overwhelming the Ollama server with requests. This is crucial for performance and stability.
3. **Response Handling:**
   * **Parse JSON Response:** Parse the JSON response from the Ollama API; it contains the generated text.
   * **Send Message to Minecraft Chat:** Send the generated text back to the Minecraft chat, either to the player who sent the original message or to all players, using the Minecraft server's API.
   * **Formatting:** Format the response appropriately for the Minecraft chat (e.g., add a prefix indicating that the message came from the Ollama model).

**Example (Conceptual Java Code Snippet - Fabric Mod)**

```java
import net.fabricmc.api.ModInitializer;
import net.fabricmc.fabric.api.event.lifecycle.v1.ServerLifecycleEvents;
import net.fabricmc.fabric.api.command.v2.CommandRegistrationCallback;
import net.minecraft.server.MinecraftServer;
import net.minecraft.server.network.ServerPlayerEntity;
import net.minecraft.text.Text;
import com.mojang.brigadier.CommandDispatcher;
import static net.minecraft.server.command.CommandManager.*;
import static com.mojang.brigadier.arguments.StringArgumentType.*;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class OllamaMod implements ModInitializer {

    private static final String OLLAMA_API_URL = "http://localhost:11434/api/generate";
    private static final String OLLAMA_MODEL = "llama2"; // Or your chosen model
    private static final HttpClient httpClient = HttpClient.newHttpClient();
    private static final Gson gson = new Gson();

    @Override
    public void onInitialize() {
        ServerLifecycleEvents.SERVER_STARTED.register(this::onServerStarted);
        CommandRegistrationCallback.EVENT.register(this::registerCommands);
    }

    private void onServerStarted(MinecraftServer server) {
        System.out.println("Ollama Mod Initialized!");
    }

    private void registerCommands(CommandDispatcher<net.minecraft.server.command.ServerCommandSource> dispatcher,
                                  net.minecraft.server.command.CommandRegistryAccess registryAccess,
                                  net.minecraft.server.command.CommandManager.RegistrationEnvironment environment) {
        // Register /ask <prompt>, which forwards the prompt to Ollama.
        dispatcher.register(literal("ask")
            .then(argument("prompt", string())
                .executes(context -> {
                    String prompt = getString(context, "prompt");
                    // Assumes the command is run by a player, not the console.
                    ServerPlayerEntity player = context.getSource().getPlayer();
                    askOllama(prompt, player);
                    return 1;
                })));
    }

    private void askOllama(String prompt, ServerPlayerEntity player) {
        // Run off the server thread so the game loop is never blocked.
        CompletableFuture.runAsync(() -> {
            try {
                JsonObject requestBody = new JsonObject();
                requestBody.addProperty("model", OLLAMA_MODEL);
                requestBody.addProperty("prompt", prompt);
                requestBody.addProperty("stream", false); // Get a single response

                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(OLLAMA_API_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(gson.toJson(requestBody)))
                    .build();

                HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());

                if (response.statusCode() == 200) {
                    JsonObject jsonResponse = gson.fromJson(response.body(), JsonObject.class);
                    // Adjust based on Ollama's actual response format
                    String ollamaResponse = jsonResponse.get("response").getAsString();
                    player.sendMessage(Text.literal("Ollama: " + ollamaResponse));
                } else {
                    player.sendMessage(Text.literal("Error communicating with Ollama: " + response.statusCode()));
                }
            } catch (Exception e) {
                player.sendMessage(Text.literal("An error occurred: " + e.getMessage()));
                e.printStackTrace();
            }
        });
    }
}
```

**Key Considerations and Challenges**

* **Asynchronous Operations:** Crucially important to avoid blocking the Minecraft server thread. Use `CompletableFuture` or similar mechanisms.
* **Error Handling:** Network errors, API errors, JSON parsing errors: handle them all gracefully.
* **Rate Limiting:** Protect the Ollama server from being overwhelmed.
* **Security:** If you're exposing this to the internet, be very careful about security. Sanitize inputs to prevent prompt injection attacks.
* **Ollama API Changes:** The Ollama API might change in the future, so keep your code up to date.
* **Minecraft Server Version:** Ensure your mod is compatible with the specific version of Minecraft you're targeting.
* **Mod Loader (Fabric/Forge):** Choose the mod loader that best suits your needs and experience.
* **Context:** The Ollama model will perform better if you provide it with context about the game world, the player's inventory, and recent events. This requires more complex data gathering from the Minecraft server.
* **Streaming Responses:** Consider using streaming responses from the Ollama API for a more interactive experience. This requires more complex handling of the response data.
* **Resource Management:** Be mindful of memory usage, especially if you're using large models.

**Next Steps**

1. **Choose a Mod Loader:** Fabric is generally considered more lightweight and modern, while Forge has a larger ecosystem of mods.
2. **Set up a Development Environment:** Follow the instructions for setting up a development environment for your chosen mod loader.
3. **Implement the Basic Chatbot Functionality:** Start with the code snippet above and get the basic chatbot working.
4. **Add Error Handling and Rate Limiting:** Make the code more robust.
5. **Experiment with Different Interaction Models:** Explore other ways to integrate the Ollama model into the game.
6. **Consider Context:** Add context to the prompts sent to the Ollama model to improve its responses.

This is a challenging but rewarding project. Good luck! Remember to break the problem down into smaller, manageable steps.

spirita1204


README

MCPClient Python Application

This is a Python client application designed to interact with an MCP (Model Context Protocol) server.

Features

  • Asynchronous communication: Uses asyncio for non-blocking communication between the client and the server.
  • Customizable server scripts: The client can connect to both Python- and JavaScript-based server scripts.
  • Tool management: Dynamically fetches the tools available on the connected server and interacts with them.
  • Chat interface: Provides a simple command-line interface for conversing with the server.
  • Tool integration: Supports extracting JSON-formatted tool calls from server responses and executing them.
  • Environment variable loading: Supports loading environment variables from a .env file via the dotenv package.

Requirements

  • Python 3.7 or later
  • asyncio (bundled with Python)
  • requests, for sending HTTP requests to the server
  • mcp (a custom library for handling MCP communication)
  • dotenv, for environment variable management
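For reference, a requirements.txt matching the list above might look like the following sketch (the exact package names are assumptions; in particular, dotenv is published on PyPI as python-dotenv, and mcp stands for the custom MCP library named above):

    # requirements.txt (sketch; pin versions as needed)
    requests
    mcp
    python-dotenv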

Setup

  1. Clone the repository (or download the script files) to your local machine.

  2. Install the required dependencies:

    pip install -r requirements.txt

  3. Create a .env file in the root directory to load the necessary environment variables (a loading sketch follows these steps). For example:

    BASE_URL=http://localhost:11434
    MODEL=llama3.2

  4. Run the client, specifying the path to the server script:

    python client.py <server_script_path>

     The server script can be a Python .py or JavaScript .js file.
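A minimal sketch of how the client could pick up the .env values from step 3, using python-dotenv and the variable names from the example above:

    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads .env from the current working directory
    BASE_URL = os.getenv("BASE_URL", "http://localhost:11434")  # model server endpoint
    MODEL = os.getenv("MODEL", "llama3.2")                      # model name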

How It Works

  1. Connect to the MCP server: The client connects to the server over standard input/output channels, using the provided script (.py or .js); see the sketch after this list.
  2. Query processing: The client sends user queries to the server and receives responses. Available tools are listed and can be invoked directly from the assistant's replies.
  3. Tool execution: If a response contains a valid tool call (in JSON format), the client extracts the call and triggers the corresponding tool on the server.
  4. Interaction: The client converses with the server, displaying the results from the server's tools and continuing the dialogue.
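A minimal sketch of step 1, assuming the mcp package exposes the stdio client interface of the official MCP Python SDK (StdioServerParameters, stdio_client, ClientSession); the custom library named in the requirements may differ:

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def connect(server_script: str) -> None:
        # Pick an interpreter based on the script type (.py or .js).
        command = "python" if server_script.endswith(".py") else "node"
        params = StdioServerParameters(command=command, args=[server_script])
        # Spawn the server as a subprocess and talk to it over stdin/stdout.
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()  # dynamically fetch available tools
                print("Available tools:", [tool.name for tool in tools.tools])

    asyncio.run(connect("server.py"))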

Example Workflow

  1. The user enters a query, for example:

    Question: What is the weather today?

  2. The client sends the query to the server, which responds with the available tools and information.

  3. If the server suggests a weather tool, the client executes it with the necessary arguments and displays the result (a JSON-extraction sketch follows this list).

  4. The client continues the conversation based on the new information returned by the tool.
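A minimal sketch of the extraction step from point 3, assuming the assistant embeds a tool call as a JSON object with "tool" and "arguments" keys (both field names are illustrative, not fixed by the protocol):

    import json
    import re

    def extract_tool_call(reply: str):
        """Return (tool_name, arguments) if the reply embeds a JSON tool call."""
        match = re.search(r"\{.*\}", reply, re.DOTALL)  # outermost JSON-looking span
        if match is None:
            return None
        try:
            call = json.loads(match.group(0))
        except json.JSONDecodeError:
            return None  # not valid JSON; treat the reply as plain text
        if "tool" not in call:
            return None
        return call["tool"], call.get("arguments", {})

The extracted pair could then be forwarded to the server, e.g. with session.call_tool(name, arguments) on the session from the previous sketch.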
