Discover Awesome MCP Servers

Extend your agent's capabilities with 26,265 MCP servers.

test-server

Just a test of Glama.

Tiptap Collaboration MCP Server

Enables interaction with Tiptap collaborative document services through comprehensive document management, real-time statistics, markdown conversion, and batch operations. Supports creating, updating, searching, and managing collaborative documents with health monitoring and semantic search capabilities.

mcp-voice

An MCP server for voice AI, using OpenAI.

Superface MCP Server

Provides access to a variety of AI tools through the Model Context Protocol, allowing Claude Desktop users to integrate and use Superface capabilities via its API.

Cryptocurrency Market Data MCP Server

Provides real-time and historical cryptocurrency market data through integrations with major exchanges. The server enables LLMs like Claude to fetch current prices, analyze market trends, and access detailed trading information.

MCP Mix Server

A tutorial MCP server implementation that enables analysis of CSV and Parquet files. Allows users to summarize data and query file information through natural language interactions.

Brightsy MCP Server

A server implementing the Model Context Protocol that connects large language models (LLMs) to Brightsy AI agents, allowing users to pass messages to these agents and receive responses.

Lighthouse MCP

A Model Context Protocol server that enables Claude to interact with and analyze your Lighthouse.one cryptocurrency portfolio data through secure authentication.

VOICEPEAK MCP Server

Enables text-to-speech synthesis using VOICEPEAK software with support for custom narrators, emotions, and pronunciation dictionaries. Allows generating and playing audio files from text with configurable voice parameters.

Apollo Proxy MCP Server

Provides AI agents with access to a global residential proxy network covering over 190 countries for web fetching and scraping. It enables pay-per-request transactions using USDC on the Base network via the x402 protocol.

Tiny MCP Server (Rust)

An implementation of the Model Context Protocol (MCP) written in Rust.

Monad MCP Server

Enables interaction with the Monad testnet to check balances, examine transaction details, get gas prices, and retrieve block information.

GitHub Projects MCP Server

Provides an MCP server for scraping and retrieving data from GitHub Projects, including issues, pull requests, and organizational metadata. It enables natural language interaction with project boards and repository contents using GitHub Personal Access Tokens.

Scientific Computation MCP

Enables mathematical and scientific computations including linear algebra operations (matrices, eigenvalues, decompositions), vector calculus (gradients, curl, divergence), and visualization of vector fields and functions.

docdex

Docdex is a lightweight, local documentation indexer/search daemon. It runs per-project, keeps an on-disk index of your markdown/text docs, and serves top-k snippets over HTTP or CLI for any coding assistant or tool—no external services or uploads required.

TFT MCP Server

This server gives Claude access to Teamfight Tactics (TFT) game data, allowing users to retrieve match history and detailed match information through the Riot Games API.

A Pokedex web app!

A Pokedex web app.

Remote MCP Server on Cloudflare

Mirror.

azure-devops-mcp

Azure DevOps MCP C# server and client code.

FridayAI

An AI gaming companion that helps you complete tasks.

MCP ComfyUI Flux

Enables AI image generation using FLUX models through ComfyUI with GPU acceleration, supporting image generation, 4x upscaling, and background removal with optimized Docker deployment.

MCP MySQL Server

Enables Claude Desktop to interact with MySQL databases through secure query execution, schema discovery, and multi-database support with configurable read/write permissions and built-in SQL injection protection.

DigitalOcean MCP Server

Provides access to all 471+ DigitalOcean API endpoints through an MCP server that dynamically extracts them from the OpenAPI specification, enabling search, filtering, and direct API calls with proper authentication.

Grimoire

Provides D&D 5e spell information through search and filtering tools. Access detailed spell data, class-specific spell lists, and spell school references powered by the D&D 5e API.

Google Workspace MCP Server

A Model Context Protocol server providing tools to interact with the Gmail and Calendar APIs, enabling programmatic management of email and calendar events.

YouTube Transcript MCP Server

An MCP server for fetching YouTube video transcripts. Supports automatic, community-contributed, and creator-uploaded transcripts in multiple languages via the youtube-transcript-api library, with example REST, gRPC, and message-queue implementations.

Sola MCP Server

A stateless HTTP server implementing the Model Context Protocol (MCP) that enables applications to interact with Social Layer platform data including events, groups, profiles, and venues via standardized endpoints.

DALL-E MCP Server

An MCP (Model Context Protocol) server that allows generating, editing, and creating variations of images using OpenAI's DALL-E APIs.

Documentation MCP Server with Python SDK

Poetry MCP Server

Manages poetry catalogs with state-based tracking, thematic connections via nexuses, quality scoring across 8 dimensions, and submission tracking to literary venues, treating poems as artifacts with metadata stored in markdown frontmatter.