Discover Awesome MCP Servers
Extend your agent with 12,316 capabilities via MCP servers.
godoc-mcp-server
An MCP server that provides pkg.go.dev services to all Go programmers.
Rails MCP Server
A Ruby gem implementing a Model Context Protocol (MCP) server for Rails projects. The server allows LLMs (large language models) to interact with Rails projects through the Model Context Protocol.
Remote MCP Server on Cloudflare
MCP Workers AI
MCP server SDK for Cloudflare Workers.
MCP-Server-TESS
Mirror.
mcp-server
AI-Verse MCP Server
Model Context Protocol (MCP) Implementation
Built from scratch to learn MCP (Model Context Protocol).
MCP2HTTP
MCP2HTTP is a minimal transport adapter that connects MCP clients using stdio with stateless HTTP servers.
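A minimal sketch of what such an adapter does, assuming newline-delimited JSON-RPC on stdio and a hypothetical stateless endpoint at http://localhost:8080/mcp (not MCP2HTTP's actual code):

```python
import json
import sys

import requests

ENDPOINT = "http://localhost:8080/mcp"  # hypothetical stateless HTTP endpoint

# Relay each JSON-RPC message from the MCP client's stdin as an HTTP POST,
# then echo the server's reply back on stdout.
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    message = json.loads(line)
    resp = requests.post(ENDPOINT, json=message, timeout=30)
    resp.raise_for_status()
    if resp.content:  # notifications produce no response body
        print(json.dumps(resp.json()), flush=True)
```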
Modular Outlook MCP Server
MCP server for Claude to access Outlook data through the Microsoft Graph API.
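For a sense of what such a server wraps, here is the kind of Microsoft Graph call involved; a sketch that assumes you already hold an OAuth access token (token acquisition omitted):

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def list_recent_messages(access_token: str, top: int = 10) -> list[dict]:
    """Fetch the newest Outlook messages for the signed-in user via Microsoft Graph."""
    resp = requests.get(
        f"{GRAPH_BASE}/me/messages",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$top": top, "$select": "subject,from,receivedDateTime"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["value"]
```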
Dockerized Salesforce MCP Server
Dockerized Salesforce MCP server for REST API integration.
Hands-on MCP (Model Context Protocol) Guide
MCP Basics.
@modelcontextprotocol/server-terminal
Terminal server implementation for the Model Context Protocol.
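The published package is TypeScript; purely as an illustration of the idea (exposing a terminal as an MCP tool), here is a Python sketch using the MCP Python SDK's FastMCP, not the package's actual code:

```python
import subprocess

from mcp.server.fastmcp import FastMCP  # MCP Python SDK

mcp = FastMCP("terminal")

@mcp.tool()
def run_command(command: str, timeout_s: float = 30.0) -> str:
    """Run a shell command and return its combined stdout/stderr."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout_s
    )
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```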
Zoom MCP Server
MCP server for Zoom.
MCP Expert Server
Mirror.
Google Forms MCP Server
Google Forms MCP server.
Browser JavaScript Evaluator
A reference design for an MCP server that hosts a web page; the page connects back to the server over SSE, allowing Claude to execute JavaScript on that page.
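A toy sketch of that round trip (nothing here is the reference design's code): the server pushes JavaScript snippets over an SSE stream, and the hosted page evals each one and POSTs the value back. Single-line snippets only, since SSE framing splits on newlines.

```python
import http.server
import queue

jobs: "queue.Queue[str]" = queue.Queue()     # JS snippets waiting to run
results: "queue.Queue[str]" = queue.Queue()  # values reported back by the page

PAGE = b"""<!doctype html>
<script>
  const es = new EventSource('/events');
  es.onmessage = (e) => {
    let value;
    try { value = String(eval(e.data)); } catch (err) { value = 'error: ' + err; }
    fetch('/result', {method: 'POST', body: value});
  };
</script>"""

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/events":
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.end_headers()
            while True:  # block until a snippet is queued, then push it
                code = jobs.get()
                self.wfile.write(f"data: {code}\n\n".encode())
                self.wfile.flush()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        results.put(body.decode())
        self.send_response(204)
        self.end_headers()

# From another thread, jobs.put("1 + 2") eventually yields results.get() == "3"
# once a browser has http://127.0.0.1:8000 open.
http.server.ThreadingHTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```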
Basilisp nREPL MCP Bridge
Simple MCP server for nREPL.
GooseTeam
Look, a gaggle of geese! An MCP server and protocol for Goose agent collaboration.
MCPClient Python Application
A Python application implementing an MCP client that connects an MCP server to a locally served Ollama model, forwarding prompts to the Ollama API and relaying the model's responses.
Fiberflow MCP Gateway
Run the Fiberflow MCP SSE server over stdio.
Corrode MCP Server
Simple coding MCP server written in Rust.
cloudflare-api-mcp
Lightweight MCP server that gives your Cursor Agent access to the Cloudflare API.
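The underlying calls are plain Cloudflare v4 REST requests; a sketch of one, assuming an API token with zone read access:

```python
import requests

CF_API = "https://api.cloudflare.com/client/v4"

def list_zones(api_token: str) -> list[dict]:
    """List the zones visible to an API token via Cloudflare's v4 REST API."""
    resp = requests.get(
        f"{CF_API}/zones",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    if not data["success"]:
        raise RuntimeError(f"Cloudflare API error: {data['errors']}")
    return data["result"]
```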
iOS Simulator MCP Server
Mirror.
shopware-mcp
MCP server for Shopware.
NmapMCP
NmapMCP is a robust integration of the Nmap scanning tool with the Model Context Protocol (MCP), enabling seamless network scanning in MCP-compatible environments.
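The shape of such an integration is an MCP tool that shells out to nmap; a hedged sketch (not NmapMCP's code), again using the MCP Python SDK's FastMCP:

```python
import subprocess

from mcp.server.fastmcp import FastMCP  # MCP Python SDK

mcp = FastMCP("nmap")

@mcp.tool()
def scan(target: str, ports: str = "1-1024") -> str:
    """Port-scan a host you are authorized to test and return nmap's report."""
    result = subprocess.run(
        ["nmap", "-p", ports, target],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    mcp.run()
```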
Mo - Linear Task Management for Cursor IDE
A Linear<>Cursor MCP server for AI-driven project management.
Telegram MCP Server
An MCP server implementation that provides tools for interacting with the [Telegram Bot API](
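Under the hood, a server like this wraps Bot API methods such as sendMessage; a sketch of that call, assuming a bot token issued by @BotFather:

```python
import requests

def send_message(bot_token: str, chat_id: int, text: str) -> dict:
    """Send a chat message through the Telegram Bot API's sendMessage method."""
    resp = requests.post(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        json={"chat_id": chat_id, "text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]
```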
MCP Server: PostgreSQL Docker Initializer
Excel MCP Server
Mirror.