Discover Great MCP Servers

Extend your agent's capabilities through MCP servers: 18,496 available.

N8N MCP Server

Enables management of multiple N8N workflow automation instances through MCP. Supports listing, creating, updating, deleting, and executing workflows, and monitoring their executions across different N8N environments.

Rebrandly MCP

This project implements a simple MCP server in Go that exposes a single tool (create_short_link) to generate short URLs using the Rebrandly API.

Weather MCP Server

A simple demonstration server for the MCP Python SDK that provides weather alerts for locations, allowing users to query weather information through Claude Desktop or Cursor.

Content Manager MCP Server

Enables comprehensive content management through Markdown processing, HTML rendering, intelligent fuzzy search, and document analysis. Supports frontmatter parsing, tag-based filtering, table of contents generation, and directory statistics for efficient content organization and discovery.
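
The frontmatter parsing described above can be illustrated with a short sketch (hypothetical code, not this server's actual implementation; it hand-parses simple `key: value` frontmatter rather than using a YAML library):

```python
def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Split a Markdown document into a frontmatter dict and a body.

    Illustrative sketch: handles simple `key: value` frontmatter
    delimited by `---` lines, without pulling in a YAML parser.
    """
    lines = text.split("\n")
    if lines[0].strip() != "---":
        return {}, text  # no frontmatter block at the top
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":  # closing delimiter: rest is the body
            return meta, "\n".join(lines[i + 1:])
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return {}, text  # unterminated frontmatter: treat whole text as body

doc = """---
title: Notes
tags: mcp, markdown
---
# Heading

Body text."""
meta, body = parse_frontmatter(doc)
```

The extracted `meta` dict then feeds tag-based filtering, while the body goes on to rendering and search.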

char-index-mcp

Precise character-level string indexing for LLMs. Provides tools for finding, extracting, and manipulating text by exact character position, making position-based operations reliable.
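
What such character-level tools might look like can be sketched in plain Python (a hypothetical illustration, not this server's actual API):

```python
def char_at(text: str, index: int) -> str:
    """Return the character at an exact 0-based position, or '' if out of range."""
    return text[index] if 0 <= index < len(text) else ""

def find_all(text: str, needle: str) -> list[int]:
    """Return every 0-based character position where `needle` occurs."""
    positions, start = [], 0
    while (i := text.find(needle, start)) != -1:
        positions.append(i)
        start = i + 1  # step by one so overlapping matches are found too
    return positions

def extract_span(text: str, start: int, end: int) -> str:
    """Extract text by exact character positions, half-open range [start, end)."""
    return text[start:end]
```

Exposed as MCP tools, functions like these let a model answer position questions ("where are the r's in strawberry?") by computation instead of token-level guessing.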

MCP Expert Server

Mirror.

DDG MCP2 Server

A basic MCP server built with FastMCP framework that provides simple utility tools including message echoing and server information retrieval. Supports both stdio and HTTP transports with Docker deployment capabilities.

mcp_server_local_files

Local filesystem MCP server.

GitLab MCP Server

Enables interaction with GitLab repositories through natural language, supporting project management, issue tracking, merge requests, file access, and repository operations. Includes a conversational agent interface with structured outputs for comprehensive GitLab workflow automation.

Vibe Coding Documentation MCP (MUSE)

Automatically collects, summarizes, and documents code from vibe coding sessions, generating multiple document types (README, DESIGN, TUTORIAL, etc.) and publishing them to platforms like Notion, GitHub Wiki, and Obsidian.

Remote MCP Server Authless

A tool that deploys a remote Model Context Protocol (MCP) server on Cloudflare Workers without authentication requirements, allowing developers to add custom tools that can be accessed from Claude Desktop or the Cloudflare AI Playground.

Magic UI MCP Server

A foundation for building interactive applications using the Model Context Protocol that integrates AI capabilities with Magic UI components.

mcp-server-datahub

The official MCP server for DataHub.

Qiniu MCP Server

Minimal Godot MCP

Provides instant GDScript syntax validation and diagnostics by bridging Godot's native Language Server Protocol to MCP clients. Enables real-time syntax checking in AI assistants without requiring custom plugins or context switching to the Godot editor.

Cloud Private Catalog MCP Server

An auto-generated MCP server that enables interaction with Google Cloud Private Catalog API, allowing users to manage private catalogs and services through natural language.

mcp-hydrolix

MCPClient Python Application

A conceptual outline for connecting an MCP server (here taken to mean a "Minecraft Protocol" server) to an Ollama model. This is a high-level overview; a complete, working solution would require significant coding effort.

**Understanding the Components**

* **MCP Server (Minecraft Protocol Server):** The server that handles Minecraft client connections, game logic, and world management. Messages must be intercepted or injected into this server, which likely requires a server mod (e.g., using Fabric, Forge, or a custom server implementation).
* **Ollama Model:** A large language model (LLM) served by Ollama. Text prompts are sent to the Ollama API, which returns text responses.
* **Interaction:** The core of the problem is *how* the MCP server and the Ollama model will interact. Some possibilities:
  * **Chatbot:** Players type commands or messages in the Minecraft chat, which are sent to the Ollama model; the model's response is displayed back in the chat.
  * **NPC Dialogue:** Non-player characters (NPCs) have dialogue powered by the Ollama model, generated from player interactions or game events.
  * **World Generation/Modification:** The model generates descriptions of terrain, structures, or quests, which are then used to modify the Minecraft world.
  * **Game Logic:** The model makes decisions for AI entities or influences game events based on player actions.

**Conceptual Implementation Outline**

This outline focuses on the "Chatbot" interaction, as it is the most straightforward to explain.

1. **Minecraft Server Mod (e.g., Fabric/Forge):**
   * **Dependency:** Add the necessary dependencies for your chosen mod loader (Fabric or Forge).
   * **Event Listener:** Create an event listener that intercepts chat messages sent by players. This is the crucial part where you "hook" into the Minecraft server.
   * **Command Handling (Optional):** Register a custom command (e.g., `/ask <prompt>`) that players use to trigger the Ollama model explicitly. This is cleaner than intercepting *all* chat messages.
   * **Configuration:** Allow configuration of the Ollama API endpoint (e.g., `http://localhost:11434/api/generate`).
   * **Asynchronous Task:** When a chat message (or command) is received, send the prompt to the Ollama API in an asynchronous task so the Minecraft server does not block while waiting for the model's response.
2. **Ollama API Interaction (Java/Kotlin Code within the Mod):**
   * **HTTP Client:** Use a Java HTTP client library (e.g., `java.net.http.HttpClient`, OkHttp, or Apache HttpClient) to make POST requests to the Ollama API.
   * **JSON Payload:** Construct a JSON payload for the `/api/generate` endpoint containing:
     * `model`: the name of the Ollama model to use (e.g., "llama2");
     * `prompt`: the player's chat message (or the command argument);
     * (optional) `stream`: `false` for a single response, `true` for streaming responses.
   * **Error Handling:** Implement robust error handling to catch network errors, API errors, and JSON parsing errors.
   * **Rate Limiting (Important):** Implement rate limiting to avoid overwhelming the Ollama server with requests. This is crucial for performance and stability.
3. **Response Handling:**
   * **Parse JSON Response:** Parse the JSON response from the Ollama API; it contains the generated text.
   * **Send Message to Minecraft Chat:** Send the generated text back to the Minecraft chat, either to the player who sent the original message or to all players, using the server's API.
   * **Formatting:** Format the response appropriately for the chat (e.g., add a prefix to indicate that the message came from the Ollama model).

**Example (Conceptual Java Code Snippet - Fabric Mod)**

```java
import net.fabricmc.api.ModInitializer;
import net.fabricmc.fabric.api.event.lifecycle.v1.ServerLifecycleEvents;
import net.fabricmc.fabric.api.command.v2.CommandRegistrationCallback;
import net.minecraft.server.MinecraftServer;
import net.minecraft.server.network.ServerPlayerEntity;
import net.minecraft.text.Text;
import com.mojang.brigadier.CommandDispatcher;

import static net.minecraft.server.command.CommandManager.*;
import static com.mojang.brigadier.arguments.StringArgumentType.*;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class OllamaMod implements ModInitializer {
    private static final String OLLAMA_API_URL = "http://localhost:11434/api/generate";
    private static final String OLLAMA_MODEL = "llama2"; // Or your chosen model
    private static final HttpClient httpClient = HttpClient.newHttpClient();
    private static final Gson gson = new Gson();

    @Override
    public void onInitialize() {
        ServerLifecycleEvents.SERVER_STARTED.register(this::onServerStarted);
        CommandRegistrationCallback.EVENT.register(this::registerCommands);
    }

    private void onServerStarted(MinecraftServer server) {
        System.out.println("Ollama Mod Initialized!");
    }

    private void registerCommands(CommandDispatcher<net.minecraft.server.command.ServerCommandSource> dispatcher,
                                  net.minecraft.server.command.CommandRegistryAccess registryAccess,
                                  net.minecraft.server.command.CommandManager.RegistrationEnvironment environment) {
        dispatcher.register(literal("ask")
            .then(argument("prompt", string())
                .executes(context -> {
                    String prompt = getString(context, "prompt");
                    ServerPlayerEntity player = context.getSource().getPlayer();
                    askOllama(prompt, player);
                    return 1;
                })));
    }

    private void askOllama(String prompt, ServerPlayerEntity player) {
        // Run off the server thread so the game loop is never blocked.
        CompletableFuture.runAsync(() -> {
            try {
                JsonObject requestBody = new JsonObject();
                requestBody.addProperty("model", OLLAMA_MODEL);
                requestBody.addProperty("prompt", prompt);
                requestBody.addProperty("stream", false); // Get a single response

                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(OLLAMA_API_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(gson.toJson(requestBody)))
                    .build();

                HttpResponse<String> response =
                    httpClient.send(request, HttpResponse.BodyHandlers.ofString());

                if (response.statusCode() == 200) {
                    JsonObject jsonResponse = gson.fromJson(response.body(), JsonObject.class);
                    // Adjust based on Ollama's actual response format
                    String ollamaResponse = jsonResponse.get("response").getAsString();
                    player.sendMessage(Text.literal("Ollama: " + ollamaResponse));
                } else {
                    player.sendMessage(Text.literal(
                        "Error communicating with Ollama: " + response.statusCode()));
                }
            } catch (Exception e) {
                player.sendMessage(Text.literal("An error occurred: " + e.getMessage()));
                e.printStackTrace();
            }
        });
    }
}
```

**Key Considerations and Challenges**

* **Asynchronous Operations:** Crucially important to avoid blocking the Minecraft server thread. Use `CompletableFuture` or similar mechanisms.
* **Error Handling:** Network errors, API errors, JSON parsing errors: handle them all gracefully.
* **Rate Limiting:** Protect the Ollama server from being overwhelmed.
* **Security:** If you expose this to the internet, be very careful about security. Sanitize inputs to prevent prompt-injection attacks.
* **Ollama API Changes:** The Ollama API might change in the future, so keep your code up to date.
* **Minecraft Server Version:** Ensure your mod is compatible with the specific version of Minecraft you are targeting.
* **Mod Loader (Fabric/Forge):** Choose the mod loader that best suits your needs and experience.
* **Context:** The model performs better when given context about the game world, the player's inventory, and recent events; this requires more complex data gathering from the server.
* **Streaming Responses:** Streaming responses from the Ollama API give a more interactive experience but require more complex handling of the response data.
* **Resource Management:** Be mindful of memory usage, especially with large models.

**Next Steps**

1. **Choose a Mod Loader:** Fabric is generally considered more lightweight and modern, while Forge has a larger ecosystem of mods.
2. **Set Up a Development Environment:** Follow the setup instructions for your chosen mod loader.
3. **Implement the Basic Chatbot Functionality:** Start with the code snippet above and get it working.
4. **Add Error Handling and Rate Limiting:** Make the code more robust.
5. **Experiment with Different Interaction Models:** Explore other ways to integrate the Ollama model into the game.
6. **Consider Context:** Add game context to the prompts to improve responses.

This is a challenging but rewarding project. Break the problem down into smaller, manageable steps.

PDF Reader MCP Server

A Model Context Protocol server that extracts and processes content from PDF documents, providing text extraction, metadata retrieval, page-level processing, and PDF validation capabilities.

TyranoStudio MCP Server

Enables comprehensive management of TyranoStudio visual novel projects including project creation, scenario editing with syntax validation, resource management, and TyranoScript development assistance. Supports project analysis, template generation, and Git integration for visual novel game development.

MCP SQL Server

Enables querying and managing PostgreSQL and MySQL databases through natural language, supporting connection management, query execution, schema inspection, and parameterized queries with connection pooling.
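
The parameterized-query pattern such a server depends on looks roughly like this (sketched with Python's built-in `sqlite3` as a stand-in for PostgreSQL/MySQL; real drivers use their own placeholder syntax, e.g. `%s` in psycopg):

```python
import sqlite3

# In-memory database standing in for a PostgreSQL/MySQL connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# Parameterized query: user input is bound by the driver, never
# string-interpolated into the SQL, which prevents SQL injection.
user_input = "alice"
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (user_input,)
).fetchall()

# Schema inspection, roughly what a "describe table" tool would run
# (PRAGMA table_info is SQLite-specific; Postgres would query information_schema).
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
```

In a real deployment a connection pool hands out connections per request instead of sharing one global `conn`.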

Browser JavaScript Evaluator

A reference design for an MCP server that hosts a web page; the page connects back to the server via SSE and lets Claude execute JavaScript on that page.

Yonote MCP Server

Provides API tools to interact with Yonote documents and collections, serving as an alternative to Notion with capabilities to list documents/collections and retrieve detailed document information.

Massive.com MCP Server

Provides access to comprehensive financial market data from Massive.com API, including real-time and historical data for stocks, options, forex, crypto, trades, quotes, market snapshots, ticker details, dividends, splits, fundamentals, and market status through an LLM-friendly interface.

Memory MCP Server

Provides dynamic short-term and long-term memory management with keyword-based relevance scoring, time-decay models, and trigger-based recall. Optimized for Chinese language support with jieba segmentation.
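
Keyword relevance combined with exponential time decay, as described, might be modeled like this (an illustrative sketch with made-up parameters, not this server's actual scoring function; the real server segments Chinese text with jieba, while this sketch takes pre-tokenized word sets):

```python
import math

def relevance(query_words: set[str], memory_words: set[str],
              age_hours: float, half_life_hours: float = 24.0) -> float:
    """Keyword-overlap relevance damped by an exponential time-decay factor."""
    if not query_words:
        return 0.0
    # Fraction of query keywords found in the stored memory.
    overlap = len(query_words & memory_words) / len(query_words)
    # Decay factor that halves every `half_life_hours`.
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return overlap * decay

fresh = relevance({"weather", "tokyo"}, {"tokyo", "rain", "weather"}, age_hours=0)
stale = relevance({"weather", "tokyo"}, {"tokyo", "rain", "weather"}, age_hours=24)
```

A memory with full keyword overlap scores 1.0 when fresh and 0.5 after one half-life, so recent memories win ties against older ones with the same keywords.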

Document Organizer MCP Server

Enables systematic document organization with PDF-to-Markdown conversion, intelligent categorization, and automated workflow management. Supports project documentation standards and provides complete end-to-end document processing pipelines.

Mo - Linear Task Management for Cursor IDE

A Linear<>Cursor MCP server for AI-driven project management.

GooseTeam

Look, a flock of geese! An MCP server and protocol for goose agent collaboration.

AntBot MCP Server

Enables integration with AntBot, an AI-based RPA platform, allowing users to list, inspect, and execute automation projects with parameter support, while preventing duplicate executions and providing execution log access.

GIS MCP Server

A Model Context Protocol server that connects LLMs to GIS operations, enabling AI assistants to perform accurate geospatial analysis including geometric operations, coordinate transformations, and spatial measurements.
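
As an example of the spatial measurements such a server exposes, great-circle distance can be computed with the haversine formula (a standalone sketch; the actual server presumably delegates to dedicated GIS libraries):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float,
                 radius_km: float = 6371.0) -> float:
    """Great-circle distance between two lat/lon points, in kilometres.

    Uses a spherical Earth of mean radius 6371 km; real GIS tooling
    applies ellipsoidal models (e.g. WGS84) for higher accuracy.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# One degree of longitude along the equator is roughly 111 km.
d = haversine_km(0.0, 0.0, 0.0, 1.0)
```

Exposed as an MCP tool, this lets an assistant answer distance queries numerically instead of estimating.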