Discover Awesome MCP Servers

Extend your agent with 16,320 capabilities via MCP servers.

test-server

Just a test of Glama.

Universal SQL MCP Server

Enables secure interaction with multiple SQL database engines (MySQL, PostgreSQL, SQLite, SQL Server) through a standardized interface. Supports schema inspection, safe query execution, and controlled write operations with built-in security restrictions.

Tiptap Collaboration MCP Server

Enables interaction with Tiptap collaborative document services through comprehensive document management, real-time statistics, markdown conversion, and batch operations. Supports creating, updating, searching, and managing collaborative documents with health monitoring and semantic search capabilities.

Windows MCP

A lightweight open-source server that enables AI agents to interact with the Windows operating system, allowing for file navigation, application control, UI interaction, and QA testing without requiring computer vision.

mcp-voice

An MCP server for voice AI, using OpenAI.

🚀 Electron Debug MCP Server

🚀 A powerful MCP server for debugging Electron applications, with deep Chrome DevTools Protocol integration. Control, monitor, and debug Electron apps through a standardized API.

Superface MCP Server

Provides access to a range of AI tools through the Model Context Protocol, allowing Claude Desktop users to integrate and use Superface capabilities via its API.

Brave Search MCP Server

Enables web searching and local business discovery through the Brave Search API. Provides both general web search with pagination and filtering controls, plus local business search with automatic fallback to web results.

MCP Servers - OpenAI and Flux Integration

azure-devops-mcp

Azure DevOps MCP C# server and client code.

DigitalOcean MCP Server

Provides access to all 471+ DigitalOcean API endpoints through an MCP server that dynamically extracts them from the OpenAPI specification, enabling search, filtering, and direct API calls with proper authentication.

Cryptocurrency Market Data MCP Server

Provides real-time and historical cryptocurrency market data through integrations with major exchanges. The server enables LLMs like Claude to fetch current prices, analyze market trends, and access detailed trading information.

Grimoire

Provides D&D 5e spell information through search and filtering tools. Access detailed spell data, class-specific spell lists, and spell school references powered by the D&D 5e API.

DeepClaude MCP Server

This server integrates the DeepSeek and Claude AI models to deliver enhanced AI responses, featuring a RESTful API, configurable parameters, and robust error handling.

Google Workspace MCP Server

A Model Context Protocol server that provides tools for interacting with the Gmail and Calendar APIs, enabling programmatic management of emails and calendar events.

Todo List MCP Server

A TypeScript-based MCP server that enables users to manage tasks through natural conversation with Claude. Features complete CRUD operations, priority management, tagging, search functionality, and intelligent productivity insights with robust Zod validation.

MCP Mix Server

A tutorial implementation MCP server that enables analysis of CSV and Parquet files. Allows users to summarize data and query file information through natural language interactions.

YouTube Transcript MCP Server

An MCP server that fetches transcripts for YouTube videos using the youtube-transcript-api library, with support for selecting the transcript language and for automatically generated as well as creator-provided transcripts.
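For illustration only, here is a minimal sketch of what a transcript-fetching MCP tool might look like, assuming the official `mcp` Python SDK (FastMCP) and the `youtube-transcript-api` package; the tool name, parameters, and wiring below are illustrative, not the published server's actual code.

```python
# Minimal sketch (illustrative, not the published server's implementation):
# exposes a single MCP tool that fetches a YouTube transcript as plain text.
from mcp.server.fastmcp import FastMCP
from youtube_transcript_api import YouTubeTranscriptApi

mcp = FastMCP("youtube-transcript")

@mcp.tool()
def get_transcript(video_id: str, language: str = "en") -> str:
    """Return the transcript of a YouTube video as plain text."""
    # get_transcript returns a list of {"text", "start", "duration"} entries
    # (interface of youtube-transcript-api versions prior to 1.0).
    entries = YouTubeTranscriptApi.get_transcript(video_id, languages=[language])
    return "\n".join(entry["text"] for entry in entries)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```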

Brightsy MCP Server

A server implementing the Model Context Protocol that connects large language models (LLMs) to Brightsy AI agents, allowing users to pass messages to those agents and receive responses.

Sola MCP Server

A stateless HTTP server implementing the Model Context Protocol (MCP) that enables applications to interact with Social Layer platform data including events, groups, profiles, and venues via standardized endpoints.

FridayAI

An AI gaming companion that helps you complete tasks.

Multi-Model Advisor

A council of models for decision making.

DALL-E MCP Server

An MCP (Model Context Protocol) server that allows generating, editing, and creating variations of images using OpenAI's DALL-E APIs.

Lighthouse MCP

A Model Context Protocol server that enables Claude to interact with and analyze your Lighthouse.one cryptocurrency portfolio data through secure authentication.

Documentation MCP Server with Python SDK

VOICEPEAK MCP Server

Enables text-to-speech synthesis using VOICEPEAK software with support for custom narrators, emotions, and pronunciation dictionaries. Allows generating and playing audio files from text with configurable voice parameters.

LI.FI MCP Server

An MCP server that integrates the LI.FI API.

Facebook Ads Library MCP Server

Enables searching and analyzing Facebook's public ads library to see what companies are currently running, including ad images/text, video links, and campaign insights.

Dynamics 365 MCP Server by CData

Node Terminal MCP

Enables AI agents to interact with terminal environments through multiple concurrent PTY sessions. Supports cross-platform terminal operations including command execution, session management, and real-time communication.