Discover Excellent MCP Servers

Extend your agent's capabilities with MCP servers: 23,459 available.

MalwareAnalyzerMCP

A specialized MCP server for Claude Desktop that allows executing terminal commands for malware analysis with support for common analysis tools like file, strings, hexdump, objdump, and xxd.
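For a sense of what such analysis commands look like under the hood, here is a minimal Python sketch that shells out to the same standard tools. The sample path and exact flags are illustrative assumptions, not MalwareAnalyzerMCP's actual interface.

```python
# Illustrative only: how a malware-analysis server might shell out to
# standard read-only tools; not MalwareAnalyzerMCP's own command set.
import subprocess

def run_tool(cmd: list[str]) -> str:
    """Run an analysis command and capture its standard output."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return result.stdout

sample = "suspicious.bin"  # hypothetical sample path
print(run_tool(["file", sample]))                # identify the file type
print(run_tool(["strings", "-n", "8", sample]))  # printable strings of 8+ chars
print(run_tool(["xxd", "-l", "256", sample]))    # hex dump of the first 256 bytes
```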

Berlin Transport MCP Server

Provides access to Berlin's public transport data through the VBB API, enabling users to search stops, get departures, and plan journeys across Berlin-Brandenburg.

MCP Scheduler

A robust task scheduler server built with Model Context Protocol for scheduling and managing various types of automated tasks including shell commands, API calls, AI tasks, and reminders.

MCP Tools

Provides context management and todo persistence with AI second opinions from ChatGPT and Claude. Enables saving code snippets, conversations, and todos across sessions with full-text search capabilities.

MCP Software Engineer

Enables Claude to function as a full-stack software engineer with comprehensive development capabilities including project creation, database management, frontend/backend development, testing, deployment, and DevOps operations across multiple frameworks and technologies.

Spryker Search Tool MCP Server

Provides natural language search capabilities for Spryker GitHub repositories, packages, and public documentation. It enables code-level searches across Spryker organizations to help developers find relevant modules and technical information.

mcp-server-test

A test MCP server.

Github Mcp Server Review Tools

Extends the GitHub MCP server with additional tools for pull request review comments.

figma-mcp-flutter-test

An experimental project that uses the Figma MCP server to recreate Figma designs in Flutter.

testmcpgithubdemo1

Created from an MCP server demo.

MCP Server Demo

A Python-based Model Context Protocol server with Streamlit chat interface that allows users to manage a PostgreSQL database through both web UI and MCP tools, powered by Ollama for local LLM integration.

Choose MCP Server Setup

Mirror repository.

Modular MCP Server & Client

A modular and extensible tool server built on FastMCP that organizes multiple tools across files and communicates via the MCP protocol.
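For context on what a FastMCP-based tool server looks like, here is a minimal sketch using the FastMCP API from the MCP Python SDK. The tool names and single-file layout are hypothetical and not taken from this project.

```python
# Minimal FastMCP tool server sketch (hypothetical tools; a modular project
# would define tool groups in separate modules and register them on one server).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("modular-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in a text."""
    return len(text.split())

if __name__ == "__main__":
    # Serve over stdio so an MCP client can launch and talk to the server.
    mcp.run()
```

In a modular layout, each tool group would typically live in its own file and be imported and registered against the shared FastMCP instance.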

📚 PDF Reader MCP

An MCP server built with Node.js/TypeScript that allows AI agents to securely read PDF files (local or from a URL) and extract text, metadata, or page counts. Uses the pdf-parse library.

Md2svg-mcp

Converts Markdown text into customizable SVG images, supporting elements like headings, lists, and tables. It enables MCP clients to visualize Markdown content in a scalable vector graphic format with custom dimensions and padding.

KNMI MCP Server

Provides access to Dutch weather data (current conditions, forecasts, alerts, and historical data) via the KNMI API, with automatic location name resolution for Dutch cities.

Telegram MCP Server

An MCP server that sends notifications to Telegram.

Microsoft Excel MCP Server by CData

College Football MCP

Provides real-time college football game scores, betting odds, player statistics, and team performance data through integration with The Odds API and CollegeFootballData API.

Google Calendar

Jira-GitLab MCP Server

Integrates Jira and GitLab to enable AI agents to seamlessly manage issues, create branches, and automate SRE workflows from issue detection to fix deployment. Features AI-powered analysis for intelligent code generation and comprehensive automation across both platforms.

Xiaohongshu MCP Server

Enables automated interaction with Xiaohongshu (Little Red Book) social media platform through browser automation. Supports login management, status checking, and publishing text content with images to Xiaohongshu accounts.

fund-mcp

A fund knowledge base server based on the Model Context Protocol (MCP) that provides query and retrieval of fund-related knowledge and supports multiple deployment modes and protocols.

macuse

Connect AI with any macOS app. Deep integration with native apps like Calendar, Mail, Notes, plus UI control for all applications. Works with Claude, Cursor, Raycast, and any MCP-compatible AI.

bazi

Bazi MCP is an AI-based Bazi calculator that provides accurate Bazi chart data for personality analysis and destiny prediction.

Furikake

A local CLI & API for MCP management that allows users to download, install, manage, and interact with MCPs from GitHub, featuring process state management, port allocation, and HTTP API routes.

MCP Refactoring

Enables LLMs to apply Martin Fowler's 71+ refactoring patterns to codebases through a pluggable, language-agnostic architecture. Supports previewing and applying refactorings, analyzing code smells, and inspecting code structure with safe-by-default operations.

Bangumi TV MCP Service

Provides MCP access to the BangumiTV API, allowing users to retrieve and interact with anime, manga, music, and game information through natural language queries.

Task Researcher

An AI coding researcher that analyzes task complexity and runs deep research (STORM) to break complex tasks into subtasks, available as an MCP server or CLI.

McpDocs

Provides an Elixir project's documentation, including its dependencies, to an LLM through an SSE-based MCP server.