Discover excellent MCP servers
Extend your agent's capabilities through MCP servers, with 14,296 capabilities available.

mcp-server-test
Test MCP server
McpDocs
Okay, this is a complex task involving several moving parts. Here's a breakdown of how you can provide Elixir project documentation (including dependencies) to an LLM via an SSE (Server-Sent Events) MCP (Model Context Protocol) server. I'll outline the steps, tools, and considerations.

**Conceptual Overview**

1. **Documentation Extraction:** Extract the documentation from your Elixir project and its dependencies: the `@doc` attributes, type specifications, and module/function signatures.
2. **Data Formatting:** Format the extracted documentation into a structured form suitable for an LLM. JSON is a common choice. Include metadata such as module name, function name, arity, and the documentation string itself.
3. **SSE Server:** An Elixir-based SSE server streams the formatted documentation to the LLM. It listens for a connection and pushes the documentation data as SSE events.
4. **MCP Integration (if needed):** If you need MCP, integrate an MCP library into your Elixir SSE server. MCP provides a standardized way for clients (like your LLM) to discover and connect to services.
5. **LLM Integration:** Configure the LLM to connect to the SSE server (and potentially use MCP to discover it) and consume the SSE events. The LLM then processes the documentation data to learn about your project.

**Detailed Steps and Code Examples**

**1. Documentation Extraction**

* **Using compiled docs (recommended):** Since Elixir 1.7, documentation is embedded in compiled `.beam` files and can be read with the standard-library function `Code.fetch_docs/1`. This is far more reliable than scraping source files with regular expressions or reaching into `ExDoc`'s internals.
* **Add `ExDoc` to your `mix.exs`** if you also want browsable HTML documentation:

  ```elixir
  def deps do
    [
      {:ex_doc, "~> 0.31", only: :dev, runtime: false}
    ]
  end
  ```

* **Example:**

  ```elixir
  defmodule DocExtractor do
    require Logger

    # Extracts function documentation for a list of modules using
    # Code.fetch_docs/1, which reads the docs chunk in compiled .beam files.
    def extract_docs(modules) do
      Enum.flat_map(modules, fn module ->
        case Code.fetch_docs(module) do
          {:docs_v1, _anno, :elixir, _format, _module_doc, _meta, entries} ->
            # The comprehension pattern skips entries whose doc is :none or :hidden.
            for {{:function, name, arity}, _anno, _sig, %{"en" => doc}, _meta} <- entries do
              %{module: inspect(module), function: to_string(name), arity: arity, doc: doc}
            end

          {:error, reason} ->
            Logger.error("Could not fetch docs for #{inspect(module)}: #{inspect(reason)}")
            []
        end
      end)
    end
  end

  # Example usage (all modules of the :my_app application):
  # {:ok, modules} = :application.get_key(:my_app, :modules)
  # docs = DocExtractor.extract_docs(modules)
  # IO.inspect(docs)
  ```

* **Important considerations:**
  * The code must be compiled (`mix compile`) before the docs chunks exist. Dependencies fetched from Hex are compiled with docs by default, so `Code.fetch_docs/1` works on their modules too; iterate over each dependency's modules (via `:application.get_key(app, :modules)`) to cover them.
  * Directly accessing `ExDoc`'s internal data structures (or parsing its HTML output) is fragile, as the internal implementation may change between versions. Prefer the stable `Code.fetch_docs/1` API.
* **Alternative: AST parsing (advanced):** You can use `Code.string_to_quoted/1` to parse Elixir source into an Abstract Syntax Tree (AST) and traverse it to find `@doc` attributes, function definitions, and type specifications. This works on uncompiled source but is considerably more complex; the `Macro` module helps with AST traversal.

**2. Data Formatting (JSON)**

* **Example JSON structure:**

  ```json
  [
    {
      "module": "MyModule",
      "function": "my_function",
      "arity": 1,
      "doc": "This function does something.",
      "spec": "my_function(integer) :: string"
    },
    {
      "module": "MyModule",
      "function": "another_function",
      "arity": 2,
      "doc": "This function does something else.",
      "spec": "another_function(string, boolean) :: atom"
    }
  ]
  ```

* **Elixir code to generate JSON:**

  ```elixir
  defmodule JsonFormatter do
    def format_docs(docs) do
      docs
      |> Enum.map(fn doc ->
        %{
          module: doc.module,
          function: doc.function,
          arity: doc[:arity] || 0,   # Default arity if unavailable
          doc: doc.doc,
          spec: doc[:spec] || ""     # Default spec if unavailable
        }
      end)
      |> Jason.encode!()             # Jason (or Poison) for JSON encoding
    end
  end
  ```

**3. SSE Server**

* **Using `Plug` and `Cowboy`:** `Plug` is a specification for building web applications in Elixir, and `Cowboy` is a popular web server. The `plug_cowboy` package provides the `Plug.Cowboy` adapter that connects them.
* **Add dependencies to `mix.exs`:**

  ```elixir
  def deps do
    [
      {:plug_cowboy, "~> 2.6"},
      {:jason, "~> 1.4"}  # For JSON encoding
    ]
  end
  ```

* **Create a Plug router:**

  ```elixir
  defmodule DocServer do
    use Plug.Router

    plug Plug.Logger
    plug :match
    plug :dispatch

    get "/docs" do
      conn =
        conn
        |> put_resp_content_type("text/event-stream")
        |> put_resp_header("cache-control", "no-cache")
        |> send_chunked(200)

      # One SSE event per documentation entry keeps every event small
      # enough to parse on its own (no client-side reassembly needed).
      {:ok, modules} = :application.get_key(:my_app, :modules)

      modules
      |> DocExtractor.extract_docs()
      |> Enum.reduce_while(conn, fn doc, conn ->
        case chunk(conn, "data: " <> Jason.encode!(doc) <> "\n\n") do
          {:ok, conn} -> {:cont, conn}
          {:error, _reason} -> {:halt, conn}
        end
      end)
    end

    match _ do
      send_resp(conn, 404, "Not Found")
    end
  end
  ```

* **Start the server in your `application.ex`:**

  ```elixir
  def start(_type, _args) do
    children = [
      {Plug.Cowboy, scheme: :http, plug: DocServer, options: [port: 4000]}
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
  ```

* **Explanation:**
  * The `/docs` endpoint serves the SSE stream.
  * `put_resp_content_type("text/event-stream")` sets the correct content type for SSE, and `put_resp_header("cache-control", "no-cache")` disables caching.
  * `send_chunked/2` plus `chunk/2` actually streams the response; a plain `send_resp/3` would buffer the entire body and defeat the purpose of SSE.
  * Each SSE event is framed as `data: <your_data>\n\n`. Sending one JSON object per event avoids very large events and lets the client decode each one independently.

**4. MCP Integration (Optional)**

* **Choose an MCP library:** There are several MCP libraries available for Elixir. Research and choose one that suits your needs; some options wrap existing MCP implementations.
* **Integrate the library:** Follow the library's documentation. This typically involves:
  * Adding the library as a dependency in `mix.exs`.
  * Configuring the MCP server address and other settings.
  * Registering your SSE service with the MCP server so the LLM can discover it.
* **Example (conceptual, using a hypothetical `MCP` library):**

  ```elixir
  defmodule DocServer do
    use Plug.Router
    require Logger

    @mcp_server "mcp.example.com:8080"  # Replace with your MCP server address

    # Hypothetical registration call; the real API depends on the library you pick.
    def register do
      :ok = MCP.register_service(@mcp_server, "elixir-doc-server", "/docs")
    end

    # ... (rest of the DocServer code) ...
  end
  ```

**5. LLM Integration**

* **LLM configuration:** Point the LLM at the SSE endpoint (e.g., `http://localhost:4000/docs`), or, if using MCP, configure it to discover the service through the MCP server.
* **SSE event handling:** The LLM side must parse the SSE events and extract the JSON data. Most LLM frameworks have libraries or built-in support for SSE streams.
* **Data processing:** The LLM then processes the JSON, for example by indexing the documentation for efficient retrieval, using it to answer questions about the project, or using it to generate code or further documentation.

**Important Considerations and Best Practices**

* **Error handling:** Implement robust error handling throughout, and log errors to help with debugging.
* **Security:** If the SSE server is exposed to the internet, add authentication and authorization.
* **Scalability:** For a large number of concurrent LLM connections, consider `Phoenix` (a full framework that runs on top of Plug and Cowboy) rather than a bare `Plug` setup.
* **Chunking:** Very large SSE events cause performance problems. Keep individual events small, as in the one-event-per-entry approach above.
* **Heartbeats:** Send a periodic "ping" event so both sides can tell the connection is still alive.
* **Dependencies:** Use `mix` to manage dependencies and ensure you're using compatible versions of all libraries.
* **Testing:** Write unit tests and integration tests to ensure your code works correctly.
* **Rate limiting:** Rate-limit the SSE endpoint so clients cannot overwhelm the server with requests.
* **Documentation updates:** Consider how you'll handle updates, e.g., a mechanism to notify the LLM when the documentation has changed.

**Example LLM Integration (Conceptual, Python)**

```python
import json

import requests
import sseclient  # the sseclient-py package

url = "http://localhost:4000/docs"  # Replace with your SSE server URL

try:
    response = requests.get(url, stream=True)
    response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)

    client = sseclient.SSEClient(response)
    for event in client.events():
        try:
            data = json.loads(event.data)
            print(f"Received data: {data}")
            # Process the documentation data here (e.g., index it, use it for QA)
        except json.JSONDecodeError as e:
            print(f"Error decoding JSON: {e}, data: {event.data}")
except requests.exceptions.RequestException as e:
    print(f"Request error: {e}")
```

**Summary**

This is a complex project that requires a good understanding of Elixir, web servers, SSE, and LLMs. Start with the documentation extraction and SSE server, and then add MCP integration if needed. Remember to test your code thoroughly and handle errors gracefully. Good luck!
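The SSE wire format used above is language-neutral, so it can be illustrated without any Elixir at all. Here is a small, hedged Python sketch (the helper names `sse_event` and `sse_heartbeat` are my own, not part of any library) that frames a JSON payload as an SSE event and emits a comment-only heartbeat:

```python
import json


def sse_event(payload, event=None):
    """Frame a payload as a Server-Sent Event: an optional 'event:' line,
    one 'data:' line per payload line, terminated by a blank line."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for line in json.dumps(payload).splitlines():
        lines.append(f"data: {line}")
    return "\n".join(lines) + "\n\n"


def sse_heartbeat():
    """A comment-only event (':' prefix) keeps the connection alive
    without triggering a message callback on the client."""
    return ": ping\n\n"


doc = {"module": "MyModule", "function": "my_function", "arity": 1}
print(sse_event(doc), end="")
print(sse_heartbeat(), end="")
```

The blank line (`\n\n`) is what terminates an event, which is why every event in the Elixir server above ends the same way.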
mpc-csharp-semantickernel
Okay, here's an example demonstrating how to use Microsoft Semantic Kernel with OpenAI and a hypothetical "MCP Server" (MCP usually stands for Model Context Protocol). Since the concrete capabilities of such a server are unspecified here, I'll make some assumptions about its functionality and how it might interact with Semantic Kernel. You'll need to adapt this to your specific MCP Server's capabilities.

**Conceptual Overview**

The core idea is to use Semantic Kernel to orchestrate interactions between OpenAI (for language understanding and generation) and your MCP Server (for specialized processing, data retrieval, or control actions).

**Assumptions about the MCP Server**

* **API endpoint:** It exposes an API endpoint (e.g., a REST API) for receiving requests and sending responses.
* **Functionality:** Assume it can perform a specific task, such as:
  * **Data lookup:** Retrieve information from a database based on a query.
  * **System control:** Execute a command on a system.
  * **Message routing:** Route a message to a specific destination.
* **Input/output:** It expects structured input (e.g., JSON) and returns structured output (e.g., JSON).

**Example Scenario: Smart Home Control**

Imagine an MCP Server that controls smart home devices. We want to use Semantic Kernel and OpenAI to let users control their home with natural language.

**Code Example (C#)**

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class SmartHomePlugin
{
    private readonly HttpClient _httpClient;
    private readonly string _mcpServerEndpoint;

    public SmartHomePlugin(string mcpServerEndpoint)
    {
        _httpClient = new HttpClient();
        _mcpServerEndpoint = mcpServerEndpoint;
    }

    [KernelFunction, Description("Controls a smart home device.")]
    public async Task<string> ControlDevice(
        [Description("The device to control (e.g., lights, thermostat).")] string device,
        [Description("The action to perform (e.g., turn on, turn off, set temperature).")] string action,
        [Description("The value to set (e.g., 22 for temperature).")] string value = "")
    {
        // 1. Prepare the request to the MCP Server
        var requestData = new { device, action, value };
        string jsonRequest = JsonSerializer.Serialize(requestData);
        var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");

        // 2. Send the request to the MCP Server
        HttpResponseMessage response = await _httpClient.PostAsync(_mcpServerEndpoint, content);

        // 3. Handle the response from the MCP Server
        if (response.IsSuccessStatusCode)
        {
            string jsonResponse = await response.Content.ReadAsStringAsync();
            try
            {
                // Assuming the MCP Server returns JSON with a "status" field
                var responseObject = JsonSerializer.Deserialize<Dictionary<string, string>>(jsonResponse);
                return responseObject?["status"] ?? "Unknown status";
            }
            catch (JsonException ex)
            {
                Console.WriteLine($"Error deserializing MCP Server response: {ex.Message}");
                return "Error processing MCP Server response.";
            }
        }

        Console.WriteLine($"MCP Server request failed: {response.StatusCode}");
        return $"MCP Server request failed with status code: {response.StatusCode}";
    }
}

public class Example
{
    public static async Task Main()
    {
        // 1. Configure Semantic Kernel
        string apiKey = "YOUR_OPENAI_API_KEY";
        string orgId = "YOUR_OPENAI_ORG_ID"; // Optional

        Kernel kernel = Kernel.CreateBuilder()
            .AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey, orgId) // Or "gpt-4"
            .Build();

        // 2. Define the MCP Server endpoint
        string mcpServerEndpoint = "http://your-mcp-server.com/api/control"; // Replace with your actual endpoint

        // 3. Import the SmartHomePlugin
        var smartHomePlugin = new SmartHomePlugin(mcpServerEndpoint);
        kernel.ImportPluginFromObject(smartHomePlugin, "SmartHome");

        // 4. Create a semantic function (prompt) that invokes the plugin
        string prompt = @"
Control the smart home device.
Device: {{$device}}
Action: {{$action}}
Value: {{$value}}

{{SmartHome.ControlDevice $device $action $value}}
";
        var smartHomeFunction = kernel.CreateFunctionFromPrompt(prompt);

        // 5. Run the semantic function with user input
        var arguments = new KernelArguments
        {
            ["device"] = "lights",
            ["action"] = "turn on",
            ["value"] = ""
        };
        var result = await smartHomeFunction.InvokeAsync(kernel, arguments);
        Console.WriteLine($"Result: {result.GetValue<string>()}");

        // Example 2: natural language input, using OpenAI to extract parameters
        string naturalLanguagePrompt = "Turn on the living room lights.";

        // A prompt to extract device, action, and value from the natural language input
        string extractionPrompt = @"
Extract the device, action, and value from the following text:
Text: {{$text}}
Device:
Action:
Value:
";
        var extractionFunction = kernel.CreateFunctionFromPrompt(extractionPrompt);
        var extractionResult = await extractionFunction.InvokeAsync(
            kernel, new KernelArguments { ["text"] = naturalLanguagePrompt });
        string extractedText = extractionResult.GetValue<string>()!;

        // Parse the extracted text (simplified; production code needs sturdier parsing)
        string extractedDevice = extractedText.Split("Device:")[1].Split("Action:")[0].Trim();
        string extractedAction = extractedText.Split("Action:")[1].Split("Value:")[0].Trim();
        string extractedValue = extractedText.Split("Value:")[1].Trim();

        Console.WriteLine($"Extracted Device: {extractedDevice}");
        Console.WriteLine($"Extracted Action: {extractedAction}");
        Console.WriteLine($"Extracted Value: {extractedValue}");

        // Now use the extracted parameters with the SmartHome.ControlDevice function
        var controlArguments = new KernelArguments
        {
            ["device"] = extractedDevice,
            ["action"] = extractedAction,
            ["value"] = extractedValue
        };
        var controlResult = await smartHomeFunction.InvokeAsync(kernel, controlArguments);
        Console.WriteLine($"Control Result: {controlResult.GetValue<string>()}");
    }
}
```

**Explanation:**

1. **`SmartHomePlugin`:**
   * A Semantic Kernel plugin that interacts with the MCP Server; it takes the MCP Server endpoint as a constructor parameter.
   * The `ControlDevice` function is decorated with `[KernelFunction]` to make it available to Semantic Kernel.
   * It builds a JSON request from the input parameters (`device`, `action`, `value`), sends a POST request to the MCP Server, and deserializes the JSON response into a status message, with error handling for failed requests and malformed JSON.
2. **`Example.Main`:**
   * **Configure Semantic Kernel:** Sets up the kernel with your OpenAI API key and organization ID.
   * **Define the MCP Server endpoint:** Replace `"http://your-mcp-server.com/api/control"` with the actual URL of your MCP Server's API endpoint.
   * **Import the plugin:** Importing the `SmartHomePlugin` makes the `ControlDevice` function available for use in prompts.
   * **Create a semantic function (prompt):** The prompt template invokes `SmartHome.ControlDevice` with `device`, `action`, and `value` as input parameters.
   * **Run the semantic function:** A `KernelArguments` object supplies the desired device, action, and value; the result from the MCP Server is printed to the console.
   * **Natural language example:** A separate extraction prompt asks OpenAI to pull the device, action, and value out of free-form text; the extracted parameters are then passed to the `SmartHome.ControlDevice` function.

**Key Points and Considerations:**

* **MCP Server API:** The most important part is understanding your MCP Server's API: the endpoint, the expected request format (JSON schema), and the format of the response.
* **Error handling:** The example includes basic error handling for network requests and JSON deserialization. Add more robust error handling for production code.
* **Security:** If your MCP Server requires authentication, add authentication headers to the `HttpClient` requests. Never hardcode sensitive information like API keys in your code; use environment variables or a secure configuration mechanism.
* **Prompt engineering:** The prompts are crucial for getting the desired behavior. Experiment with different prompts, and consider few-shot examples to guide the language model.
* **JSON serialization/deserialization:** The example uses `System.Text.Json`; other libraries like Newtonsoft.Json work as well.
* **Dependency injection:** For larger applications, use dependency injection to manage the `HttpClient` and other dependencies.
* **Asynchronous operations:** The example uses `async`/`await` to avoid blocking the main thread and improve performance.
* **Parameter extraction:** The natural-language example uses simple string splitting. For more complex scenarios, use regular expressions, a dedicated natural language processing library, or Semantic Kernel's more advanced extraction techniques.
* **Semantic Kernel plugins:** Consider breaking your MCP Server functionality into multiple plugins for better organization and reusability.
* **Testing:** Write unit tests for your plugins and their interactions with the MCP Server.

**How to Adapt This Example:**

1. **Replace placeholders:** `"YOUR_OPENAI_API_KEY"`, `"YOUR_OPENAI_ORG_ID"`, and `"http://your-mcp-server.com/api/control"`.
2. **Match your MCP Server's API:** Adjust the request format, response handling, and error handling in `SmartHomePlugin` accordingly.
3. **Customize prompts** to match the specific tasks you want to perform.
4. **Harden error handling** for potential failures of the MCP Server or the OpenAI API.
5. **Add security** measures to protect your API keys and other sensitive information.

**Chinese Translation of Key Concepts:**

* **Microsoft Semantic Kernel:** 微软语义内核 (Wēiruǎn yǔyì nèihé)
* **OpenAI:** 开放人工智能 (Kāifàng réngōng zhìnéng)
* **MCP Server:** MCP 服务器 (MCP fúwùqì); expanded as Model Context Protocol: 模型上下文协议服务器
* **Plugin:** 插件 (Chājiàn)
* **Kernel Function:** 内核函数 (Nèihé hánshù)
* **Prompt:** 提示 (Tíshì)
* **Semantic Function:** 语义函数 (Yǔyì hánshù)
* **API Endpoint:** 应用程序接口端点 (Yìngyòng chéngxù jiēkǒu duāndiǎn)
* **Natural Language:** 自然语言 (Zìrán yǔyán)

This example should give you a solid foundation for using Microsoft Semantic Kernel with OpenAI and your MCP Server. Remember to adapt the code to your specific needs and to test your implementation thoroughly. Good luck!
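The string-splitting parse in the example above breaks if a field is missing or the model reorders the fields. As a hedged sketch of a sturdier alternative (`parse_extraction` is a hypothetical helper; the field names come from the extraction prompt), the same parsing can be done with a regular expression, shown here in Python for brevity:

```python
import re


def parse_extraction(text):
    """Parse 'Device: ... Action: ... Value: ...' fields out of the
    model's extraction output, in any order; missing fields map to ''."""
    fields = {}
    for name in ("Device", "Action", "Value"):
        # Capture lazily up to the next field label or the end of the text.
        match = re.search(
            rf"{name}:\s*(.*?)(?=\s*(?:Device|Action|Value):|$)", text, re.S
        )
        fields[name.lower()] = match.group(1).strip() if match else ""
    return fields


print(parse_extraction("Device: lights\nAction: turn on\nValue:"))
# {'device': 'lights', 'action': 'turn on', 'value': ''}
```

The same pattern translates directly to .NET's `System.Text.RegularExpressions.Regex` if you want to keep everything in C#.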
MCP Server for CRM with AI
pumpfun-mcp
Mirror
create-mcp-server
A comprehensive architecture for building powerful Model Context Protocol (MCP) servers with integrated web capabilities
MCP Router
MCP Router lets you easily manage your MCP (Model Context Protocol) servers.
🐋 Docker MCP server
Mirror

Googleworkspace Mcp
MCP MIDI Server
An MCP server for sending MIDI sequences to any program that accepts MIDI input.
Mcp Servers Wiki Website
Moodle MCP Server
Mirror
Cloudflare AI
my-mcp-server
Apache Doris MCP Server
MCP server for Apache Doris and VeloDB
MCP Server Pool
A collection of MCP servers
Creating an MCP Server in Go and Serving it with Docker
ChatGPT MCP Server
Mirror
untapped-mcp
An untapped MCP server for use with Claude.
MetaMCP MCP Server
Mirror
MCP Server for Prom.ua
An MCP server for interacting with the Prom.ua API
mcptesting
A test repository created via an MCP server
Payman AI Documentation MCP Server
Mirror
comment-stripper-mcp
A flexible MCP server that batch-processes code files to strip comments across multiple programming languages. Currently supports JavaScript, TypeScript, and Vue files, using regex-based pattern matching. It can handle single files, directories (including subdirectories), and text input. Built for clean code maintenance and preparation.

Linear
Remote MCP Server on Cloudflare
🤖 Agenite
🤖 Build powerful AI agents with TypeScript. Agenite makes it easy to create, compose, and control AI agents, with first-class support for tools, streaming, and multi-agent architectures. Switch seamlessly between providers such as OpenAI, Anthropic, AWS Bedrock, and Ollama.
mcp-server-fetch-typescript MCP Server
Mirror
Mcp Todo App
A combination of an MCP server and client, with a bit of AI added to a simple to-do app.
Firebase Docs MCP Server Setup
An example showing how to use the Firebase documentation as an MCP server (including indexing the docs).