Discover Outstanding MCP Servers
Extend your agent's capabilities with MCP servers, offering 22,825 capabilities in total.
XMI MCP Server
An MCP server for querying and exploring SysML XMI models, specifically designed for MTConnect model exports. It allows users to search for packages, classes, and enumerations while providing tools for analyzing documentation and inheritance hierarchies.
MCP Test Scratch Server
A Flask-based MCP server designed for testing deployment on Google App Engine. Provides a deeplink checking endpoint that accepts flattened JSON parameters and forwards them as nested objects to external APIs.
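As a rough illustration of the flattened-to-nested conversion described above (the key separator and example parameters are assumptions for illustration, not the server's actual schema), a minimal sketch in Python:

```python
def unflatten(params: dict, sep: str = ".") -> dict:
    """Convert flattened keys like 'link.url' into nested objects."""
    nested: dict = {}
    for flat_key, value in params.items():
        node = nested
        parts = flat_key.split(sep)
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

# e.g. {"link.url": "https://example.com", "link.platform": "ios"}
# becomes {"link": {"url": "https://example.com", "platform": "ios"}}
```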
bigquery-mcp
An MCP server that helps AI agents inspect the contents of a BigQuery data warehouse.
Discover local MCP Servers. Specification.
A specification for MCP server discovery.
Tongyi Wanxiang MCP Server
A TypeScript-based Model Context Protocol server that lets large language models directly call Alibaba Cloud's Tongyi Wanxiang text-to-image API.
Twelve Data MCP Server
Provides integration with Twelve Data API to access financial market data including historical time series, real-time quotes, and instrument metadata for stocks, forex pairs, and cryptocurrencies.
@mcp/openverse
An MCP server that enables searching and fetching openly-licensed images from Openverse with features like filtering by license type, getting image details, and finding essay-specific illustrations.
WooCommerce Enterprise MCP Suite
Provides 115+ MCP tools for comprehensive WooCommerce store management including multi-store operations, bulk processing, inventory sync, order management, and customer analytics. Features enterprise-level safety controls with dry-run mode, automatic backups, and rollback capabilities.
Obsidian Tools MCP Server
Enables comprehensive management of Obsidian vaults with full CRUD operations, advanced search, link/tag extraction, backlinks discovery, frontmatter editing, and template-based note creation through natural language.
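As a hedged sketch of the kind of link extraction and backlink discovery such a server performs (the wikilink regex follows Obsidian's `[[Note]]` syntax; the vault path and helper name are illustrative, not the project's API):

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Note]], [[Note|alias]], [[Note#heading]]

def backlinks(vault: Path, target: str) -> list[Path]:
    """Return the notes in the vault that link to `target`."""
    hits = []
    for note in vault.rglob("*.md"):
        if target in WIKILINK.findall(note.read_text(encoding="utf-8")):
            hits.append(note)
    return hits

# backlinks(Path("~/vault").expanduser(), "Project Plan")
```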
Creative Ideation MCP Server
Utilizes the Google Gemini API to generate context-specific categories and diverse options for creative brainstorming. It features a random sampling function to help users bypass predictable AI patterns and discover unexpected, innovative ideas.
MCP Server
Facilitates multi-client processing for high-performance operations within the DigitalFate framework, enabling advanced automation through task orchestration and agent integration.
Hong Kong Transportation MCP Server
An MCP server providing access to Hong Kong transportation data, including passenger traffic statistics at control points and real-time bus arrival information for KMB and Long Win Bus services.
LPDP MCP Server
Enables users to query information about LPDP scholarship financial disbursement using RAG with Pinecone vector search and Gemini 2.0 Flash, answering questions about funding components, deadlines, living allowances, and required documents.
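The RAG flow it describes follows the usual retrieve-then-generate pattern. A minimal sketch, assuming a Pinecone index of scholarship documents and an `embed()` helper that are not part of this listing:

```python
from pinecone import Pinecone
import google.generativeai as genai

pc = Pinecone(api_key="...")                      # assumed credentials
index = pc.Index("lpdp-docs")                     # hypothetical index name
genai.configure(api_key="...")
model = genai.GenerativeModel("gemini-2.0-flash")

def answer(question: str, embed) -> str:
    """Retrieve relevant chunks from Pinecone, then ask Gemini to answer from them."""
    matches = index.query(vector=embed(question), top_k=5, include_metadata=True)
    context = "\n".join(m.metadata["text"] for m in matches.matches)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return model.generate_content(prompt).text
```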
Gemini Pro MCP Server
Enables Claude Desktop to generate text and analyze images using Google's Gemini Pro API. Provides seamless integration between Claude and Gemini's AI capabilities through natural language commands.
Roo MCP Server
Local Services MCP Server
A Multi-Agent Conversation Protocol Server that provides access to Google's Local Services API, enabling interaction with local service businesses information through natural language commands.
Typesense MCP Server
A server that enables vector and keyword search capabilities in Typesense databases through the Model Context Protocol, providing tools for collection management, document operations, and search functionality.
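For context, a keyword search against a Typesense collection looks roughly like this with the official Python client (the collection name and fields are made up; vector search additionally passes a `vector_query` parameter):

```python
import typesense

client = typesense.Client({
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
    "api_key": "xyz",
    "connection_timeout_seconds": 2,
})

# Keyword search over a hypothetical "articles" collection.
results = client.collections["articles"].documents.search({
    "q": "model context protocol",
    "query_by": "title,body",
    "per_page": 10,
})
print(results["found"], "matches")
```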
🚀 MCP File System API
CereBro
A model-agnostic MCP client-server for .NET.
Clear Thought MCP Server
Provides systematic thinking tools including mental models, design patterns, debugging approaches, and collaborative reasoning frameworks to enhance problem-solving and decision-making capabilities.
mcp-server-email MCP server
OCI Core Services FastMCP Server
A dedicated server for Oracle Cloud Infrastructure (OCI) Core Services that enables management of compute instances and network operations with LLM-friendly structured responses.
prometheus-mcp-server
A TypeScript-based MCP server that enables users to interact with Prometheus metrics using PromQL queries and discovery tools. It allows LLMs to retrieve time-series data, metadata, alerts, and system status directly from a Prometheus instance.
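The underlying Prometheus HTTP API such a server queries is simple enough to show directly; a hedged sketch (the Prometheus address and example query are assumptions):

```python
import requests

PROM = "http://localhost:9090"  # assumed Prometheus address

def instant_query(promql: str) -> list[dict]:
    """Run an instant PromQL query via Prometheus's HTTP API."""
    resp = requests.get(f"{PROM}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body["status"] != "success":
        raise RuntimeError(body.get("error", "query failed"))
    return body["data"]["result"]

# e.g. instant_query('rate(http_requests_total[5m])')
```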
Agent Identity Protocol (AIP)
Provides cryptographic identity and signing capabilities for AI agents, enabling them to create persistent identities, sign actions with private keys, and allow external systems to verify the authenticity and provenance of agent-initiated operations.
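The sign-and-verify pattern it builds on can be sketched with Ed25519 keys (the action payload here is illustrative, not the protocol's actual envelope format):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A persistent agent identity would load its key from secure storage;
# generating one inline is just for illustration.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

action = b'{"tool": "send_email", "to": "ops@example.com"}'
signature = private_key.sign(action)

# An external system holding the agent's public key can verify provenance;
# verify() raises InvalidSignature if the action was tampered with.
public_key.verify(signature, action)
```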
Artur's Model Context Protocol servers
MCP servers.
Case Study Generator MCP Server
Processes documents, analyzes GitHub repositories, and researches companies using local Gemma3 AI to extract structured business insights for generating compelling case studies.
BlenderMCP
Connects Claude AI to Blender through the Model Context Protocol, enabling AI-assisted 3D modeling, scene creation, object manipulation, and material control. Supports downloading assets from Poly Haven and generating 3D models through Hyper3D Rodin.
Mcp Use
Moonshot MCP Server Gateway
A lightweight gateway server that provides a unified connection entry point for accessing multiple MCP servers, supporting various protocols including Network and Local Transports.
Code Graph Knowledge System
Transforms code repositories and development documentation into a queryable Neo4j knowledge graph, enabling AI assistants to perform intelligent code analysis, dependency mapping, impact assessment, and automated documentation generation across 15+ programming languages.
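A dependency-mapping query against such a graph might look like the following with the official Neo4j Python driver (the node label and relationship type are assumptions about the schema, not taken from the project):

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical schema: (:Module)-[:DEPENDS_ON]->(:Module)
CYPHER = """
MATCH (m:Module {name: $name})-[:DEPENDS_ON*1..3]->(dep:Module)
RETURN DISTINCT dep.name AS dependency
"""

with driver.session() as session:
    for record in session.run(CYPHER, name="auth_service"):
        print(record["dependency"])

driver.close()
```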