Discover Awesome MCP Servers

Extend your agent's capabilities with 27,861 MCP servers.
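
What every entry below shares is the protocol. As a hedged, stdlib-only illustration (the tool name and arguments are invented for the example), this is roughly the JSON-RPC 2.0 `tools/call` request an MCP client sends to such a server:

```python
import json

def make_tool_call(request_id, tool, arguments):
    # Shape of an MCP tool invocation: JSON-RPC 2.0 with method "tools/call".
    # The tool name and arguments here are illustrative, not from any listed server.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "get_weather", {"location": "Stockholm"})
print(msg)
```

Each server then answers with a matching JSON-RPC result carrying the tool's output.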

content-core

Extract content from URLs, documents, videos, and audio files using intelligent auto-engine selection. Supports web pages, PDFs, Word docs, YouTube transcripts, and more with structured JSON responses.

Vibe Coding Documentation MCP (MUSE)

Automatically collects, summarizes, and documents code from vibe coding sessions, generating multiple document types (README, DESIGN, TUTORIAL, etc.) and publishing them to platforms like Notion, GitHub Wiki, and Obsidian.

Remote MCP Server Authless

A tool that deploys a remote Model Context Protocol (MCP) server on Cloudflare Workers without authentication requirements, allowing developers to add custom tools that can be accessed from Claude Desktop or the Cloudflare AI Playground.

Magic UI MCP Server

A foundation for building interactive applications using the Model Context Protocol that integrates AI capabilities with Magic UI components.

Minimal Godot MCP

Provides instant GDScript syntax validation and diagnostics by bridging Godot's native Language Server Protocol to MCP clients. Enables real-time syntax checking in AI assistants without requiring custom plugins or context switching to the Godot editor.

Rebrandly MCP

This project implements a simple MCP server in Go that exposes a single tool (create_short_link) to generate short URLs using the Rebrandly API.

Moatless MCP Server

A Model Context Protocol (MCP) server for advanced code analysis and editing with semantic search capabilities, enabling AI assistants to perform complex code operations through a standardized interface.

NetEase ModSDK MCP Server

Enables document retrieval, code generation, and code review for Minecraft China (NetEase) ModSDK development. It automates the creation of Mod projects, JSON configurations, and Python scripts while ensuring compliance with official development standards.

YouTrack MCP Server

Enables AI assistants to interact with JetBrains YouTrack 2025.2 for managing issues, sprints, dependencies, time tracking, and knowledge base content. Supports CRUD operations, bulk commands, analytics with Gantt charts and critical path analysis, and covers ~80% of the YouTrack REST API.

Juspay MCP Tools

Enables AI agents to interact with Juspay's payment processing APIs and merchant dashboard for managing orders, transactions, refunds, customers, gateways, and reporting through natural language.

FinData MCP

FinData MCP gives AI agents access to market data, company fundamentals, and macroeconomic indicators via MCP. It covers stocks, ETFs, crypto, forex, commodities, and economic time series.

Memory MCP Server

Provides dynamic short-term and long-term memory management with keyword-based relevance scoring, time-decay models, and trigger-based recall. Optimized for Chinese language support with jieba segmentation.
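
The time-decay idea can be sketched in a few lines; the exponential-decay formula below is a generic illustration, not necessarily the scoring this server actually implements:

```python
def decayed_score(keyword_overlap, age_seconds, half_life=3600.0):
    # Exponential time decay: a memory loses half its weight every
    # `half_life` seconds, so recent memories outrank stale ones.
    decay = 0.5 ** (age_seconds / half_life)
    return keyword_overlap * decay

print(decayed_score(1.0, 0.0))     # → 1.0
print(decayed_score(1.0, 3600.0))  # → 0.5
```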

mcp-server-datahub

The official MCP server for DataHub.

Qiniu MCP Server

Cloud Private Catalog MCP Server

An auto-generated MCP server that enables interaction with Google Cloud Private Catalog API, allowing users to manage private catalogs and services through natural language.

MCPClient Python Application

Okay, I understand. You want an implementation that allows an MCP (presumably "Minecraft Protocol") server to interact with an Ollama model. This is a complex task that involves several steps. Here's a breakdown of the concepts, potential approaches, and a *conceptual* implementation outline. Keep in mind that this is a high-level overview, and a complete, working solution would require significant coding effort.

**Understanding the Components**

* **MCP Server (Minecraft Protocol Server):** This is the server that handles Minecraft client connections, game logic, and world management. We need to be able to intercept or inject messages into this server. This likely requires a Minecraft server mod (e.g., using Fabric, Forge, or a custom server implementation).
* **Ollama Model:** This is a large language model (LLM) served by Ollama. We need to be able to send text prompts to the Ollama API and receive text responses.
* **Interaction:** The core of the problem is *how* the MCP server and Ollama model will interact. Here are some possibilities:
    * **Chatbot:** Players type commands or messages in the Minecraft chat, which are then sent to the Ollama model. The model's response is displayed back in the chat.
    * **NPC Dialogue:** Non-player characters (NPCs) in the game have dialogue powered by the Ollama model. The model generates responses based on player interactions or game events.
    * **World Generation/Modification:** The Ollama model could be used to generate descriptions of terrain, structures, or quests, which are then used to modify the Minecraft world.
    * **Game Logic:** The model could be used to make decisions for AI entities or influence game events based on player actions.

**Conceptual Implementation Outline**

This outline focuses on the "Chatbot" interaction, as it's the most straightforward to explain.

1. **Minecraft Server Mod (e.g., Fabric/Forge):**
    * **Dependency:** Add the necessary dependencies for your chosen mod loader (Fabric or Forge).
    * **Event Listener:** Create an event listener that intercepts chat messages sent by players. This is the crucial part where you "hook" into the Minecraft server.
    * **Command Handling (Optional):** Register a custom command (e.g., `/ask <prompt>`) that players can use to specifically trigger the Ollama model. This is cleaner than intercepting *all* chat messages.
    * **Configuration:** Allow configuration of the Ollama API endpoint (e.g., `http://localhost:11434/api/generate`).
    * **Asynchronous Task:** When a chat message (or command) is received, create an asynchronous task to send the prompt to the Ollama API. This prevents the Minecraft server from blocking while waiting for the model's response.
2. **Ollama API Interaction (Java/Kotlin Code within the Mod):**
    * **HTTP Client:** Use a Java HTTP client library (e.g., `java.net.http.HttpClient`, OkHttp, or Apache HttpClient) to make POST requests to the Ollama API.
    * **JSON Payload:** Construct a JSON payload for the `/api/generate` endpoint. The payload should include:
        * `model`: The name of the Ollama model to use (e.g., "llama2").
        * `prompt`: The player's chat message (or the command argument).
        * (Optional) `stream`: Set to `false` for a single response, or `true` for streaming responses.
    * **Error Handling:** Implement robust error handling to catch network errors, API errors, and JSON parsing errors.
    * **Rate Limiting (Important):** Implement rate limiting to prevent overwhelming the Ollama server with requests. This is crucial for performance and stability.
3. **Response Handling:**
    * **Parse JSON Response:** Parse the JSON response from the Ollama API. The response will contain the generated text.
    * **Send Message to Minecraft Chat:** Send the generated text back to the Minecraft chat, either to the player who sent the original message or to all players. Use the Minecraft server's API to send chat messages.
    * **Formatting:** Format the response appropriately for the Minecraft chat (e.g., add a prefix to indicate that the message is from the Ollama model).

**Example (Conceptual Java Code Snippet - Fabric Mod)**

```java
import net.fabricmc.api.ModInitializer;
import net.fabricmc.fabric.api.event.lifecycle.v1.ServerLifecycleEvents;
import net.fabricmc.fabric.api.command.v2.CommandRegistrationCallback;
import net.minecraft.server.MinecraftServer;
import net.minecraft.server.network.ServerPlayerEntity;
import net.minecraft.text.Text;
import com.mojang.brigadier.CommandDispatcher;
import com.mojang.brigadier.arguments.StringArgumentType;
import static net.minecraft.server.command.CommandManager.*;
import static com.mojang.brigadier.arguments.StringArgumentType.*;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class OllamaMod implements ModInitializer {
    private static final String OLLAMA_API_URL = "http://localhost:11434/api/generate";
    private static final String OLLAMA_MODEL = "llama2"; // Or your chosen model
    private static final HttpClient httpClient = HttpClient.newHttpClient();
    private static final Gson gson = new Gson();

    @Override
    public void onInitialize() {
        ServerLifecycleEvents.SERVER_STARTED.register(this::onServerStarted);
        CommandRegistrationCallback.EVENT.register(this::registerCommands);
    }

    private void onServerStarted(MinecraftServer server) {
        System.out.println("Ollama Mod Initialized!");
    }

    private void registerCommands(CommandDispatcher<net.minecraft.server.command.ServerCommandSource> dispatcher,
                                  net.minecraft.server.command.CommandRegistryAccess registryAccess,
                                  net.minecraft.server.command.CommandManager.RegistrationEnvironment environment) {
        dispatcher.register(literal("ask")
            .then(argument("prompt", string())
                .executes(context -> {
                    String prompt = getString(context, "prompt");
                    ServerPlayerEntity player = context.getSource().getPlayer();
                    askOllama(prompt, player);
                    return 1;
                })));
    }

    private void askOllama(String prompt, ServerPlayerEntity player) {
        // Run off the server thread so the game loop never blocks on the model.
        CompletableFuture.runAsync(() -> {
            try {
                JsonObject requestBody = new JsonObject();
                requestBody.addProperty("model", OLLAMA_MODEL);
                requestBody.addProperty("prompt", prompt);
                requestBody.addProperty("stream", false); // Get a single response

                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(OLLAMA_API_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(gson.toJson(requestBody)))
                    .build();

                HttpResponse<String> response =
                    httpClient.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    JsonObject jsonResponse = gson.fromJson(response.body(), JsonObject.class);
                    // Adjust based on Ollama's actual response format
                    String ollamaResponse = jsonResponse.get("response").getAsString();
                    player.sendMessage(Text.literal("Ollama: " + ollamaResponse));
                } else {
                    player.sendMessage(Text.literal("Error communicating with Ollama: "
                        + response.statusCode()));
                }
            } catch (Exception e) {
                player.sendMessage(Text.literal("An error occurred: " + e.getMessage()));
                e.printStackTrace();
            }
        });
    }
}
```

**Key Considerations and Challenges**

* **Asynchronous Operations:** Crucially important to avoid blocking the Minecraft server thread. Use `CompletableFuture` or similar mechanisms.
* **Error Handling:** Network errors, API errors, JSON parsing errors: handle them all gracefully.
* **Rate Limiting:** Protect the Ollama server from being overwhelmed.
* **Security:** If you're exposing this to the internet, be very careful about security. Sanitize inputs to prevent prompt injection attacks.
* **Ollama API Changes:** The Ollama API might change in the future, so keep your code up-to-date.
* **Minecraft Server Version:** Ensure your mod is compatible with the specific version of Minecraft you're targeting.
* **Mod Loader (Fabric/Forge):** Choose the mod loader that best suits your needs and experience.
* **Context:** The Ollama model will perform better if you provide it with context about the game world, the player's inventory, and recent events. This requires more complex data gathering from the Minecraft server.
* **Streaming Responses:** Consider using streaming responses from the Ollama API for a more interactive experience. This requires more complex handling of the response data.
* **Resource Management:** Be mindful of memory usage, especially if you're using large models.

**Next Steps**

1. **Choose a Mod Loader:** Fabric is generally considered more lightweight and modern, while Forge has a larger ecosystem of mods.
2. **Set up a Development Environment:** Follow the instructions for setting up a development environment for your chosen mod loader.
3. **Implement the Basic Chatbot Functionality:** Start with the code snippet above and get the basic chatbot working.
4. **Add Error Handling and Rate Limiting:** Make the code more robust.
5. **Experiment with Different Interaction Models:** Explore other ways to integrate the Ollama model into the game.
6. **Consider Context:** Add context to the prompts sent to the Ollama model to improve its responses.

This is a challenging but rewarding project. Good luck! Remember to break the problem down into smaller, manageable steps.
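
The `/api/generate` payload and response fields described above can also be sketched without the mod scaffolding; a stdlib-only illustration (no network call is made, and the sample reply below is fabricated for the example):

```python
import json

def build_generate_payload(model, prompt):
    # Payload fields as described above for Ollama's /api/generate endpoint.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def extract_response(body):
    # A non-streaming /api/generate reply carries the generated text in "response".
    return json.loads(body)["response"]

payload = build_generate_payload("llama2", "Hello from Minecraft!")
print(payload)

sample_reply = '{"model": "llama2", "response": "Hi there!", "done": true}'
print(extract_response(sample_reply))  # → Hi there!
```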

FastPostgresMCP

A blazing-fast MCP server that lets AI agents interact with multiple PostgreSQL databases, with tools to list tables, inspect schemas, execute queries, and run transactions.

Weather MCP Server

A simple demonstration server for the MCP Python SDK that provides weather alerts for locations, allowing users to query weather information through Claude Desktop or Cursor.

SVT Text-TV MCP Server

Provides access to Swedish Text-TV content from SVT including news, sports, weather forecasts, and TV schedules through the Model Context Protocol.

char-index-mcp

Precise character-level string indexing for LLMs. Provides tools for finding, extracting, and manipulating text by exact character position to solve position-based operations.
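
For illustration only (not this server's tool API), these are the kinds of exact character-position operations the entry describes, shown with Python's built-in string primitives:

```python
# Exact character-position operations on a string LLMs famously fumble.
text = "strawberry"
pos = text.find("berry")       # index of the first character of the substring
print(pos)                     # → 5
print(text[pos:pos + 5])       # → berry
print([i for i, c in enumerate(text) if c == "r"])  # → [2, 7, 8]
```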

DDG MCP2 Server

A basic MCP server built with FastMCP framework that provides simple utility tools including message echoing and server information retrieval. Supports both stdio and HTTP transports with Docker deployment capabilities.

Railway MCP Server

Enables comprehensive management of Railway infrastructure, including projects, services, variables, and deployments, directly through the Railway GraphQL API. It is designed to work as a remote service over HTTP, allowing seamless integration with cloud-based MCP clients like claude.ai.

TyranoStudio MCP Server

Enables comprehensive management of TyranoStudio visual novel projects including project creation, scenario editing with syntax validation, resource management, and TyranoScript development assistance. Supports project analysis, template generation, and Git integration for visual novel game development.

MCP SQL Server

Enables querying and managing PostgreSQL and MySQL databases through natural language, supporting connection management, query execution, schema inspection, and parameterized queries with connection pooling.
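
As an illustration of the parameterized-query idea the entry highlights, here is a minimal sketch using the stdlib `sqlite3` module as a stand-in for PostgreSQL/MySQL; placeholders keep user input out of the SQL text:

```python
import sqlite3

# In-memory database as a stand-in for a real PostgreSQL/MySQL connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# The "?" placeholder is filled by the driver, not by string formatting,
# which is what makes parameterized queries injection-safe.
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
row = conn.execute("SELECT name FROM users WHERE name = ?", ("alice",)).fetchone()
print(row)  # → ('alice',)
conn.close()
```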

mcp_server_local_files

Local filesystem MCP server.

Browser JavaScript Evaluator

A reference design for an MCP server that hosts a web page; the page connects back to the server over SSE, allowing Claude to execute JavaScript on that page.

Horizon MCP

An MCP server that exposes the Horizon content aggregation and analysis pipeline as a suite of modular tools. It enables users to automate fetching, AI-based scoring, filtering, background enrichment, and summary generation for various data sources.

Cross-Platform PowerPoint MCP Server

Enables users to create, edit, and manage PowerPoint presentations across Windows, macOS, and Linux through natural language via Claude Desktop. It provides comprehensive automation for slide management, text manipulation, and presentation styling using platform-specific adapters.

DNStwist MCP Server

A server that integrates the dnstwist DNS fuzzing tool with MCP-compatible applications, enabling domain permutation analysis to detect typosquatting, phishing, and corporate espionage.
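
A toy example of the domain-permutation idea behind dnstwist; the character-swap generator below is a simplification for illustration, not dnstwist's actual fuzzing algorithms:

```python
# Generate adjacent-character-swap typo domains (one of many fuzzing
# strategies a tool like dnstwist applies alongside many others).
def swap_permutations(domain):
    name, dot, tld = domain.rpartition(".")
    out = []
    for i in range(len(name) - 1):
        chars = list(name)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        candidate = "".join(chars)
        if candidate != name:  # skip no-op swaps of identical letters
            out.append(candidate + dot + tld)
    return out

print(swap_permutations("example.com"))
```

Each candidate would then be resolved over DNS to see whether someone has registered it.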

Cloudflare Remote MCP Proxy

A template for deploying remote MCP servers to Cloudflare Workers without authentication using the Server-Sent Events (SSE) protocol. It allows users to host custom tools and connect them to MCP clients like Claude Desktop through a local proxy.
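
Server-Sent Events is a plain-text wire format, which is part of what makes it easy to proxy; a minimal, illustrative parser for `data:` lines (not the template's actual code):

```python
# Parse an SSE stream: "data:" lines accumulate, a blank line ends an event.
def parse_sse(stream):
    events, data = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:
            events.append("\n".join(data))
            data = []
    return events

chunk = "data: hello\n\ndata: world\n\n"
events = parse_sse(chunk)
print(events)  # → ['hello', 'world']
```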