Discover Awesome MCP Servers

Extend your agent with 16,376 capabilities through MCP servers.

All 16,376
Binary Ninja MCP Server

A Model Context Protocol server that enables large language models to interact with Binary Ninja to perform reverse engineering tasks such as viewing assembly, decompiling code, renaming functions, and adding comments.
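To get a feel for how a client drives a server like this one, here is a minimal sketch using the official `mcp` Python SDK. The launch command and the `decompile_function` tool name are illustrative assumptions; the server's README documents the real ones.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical launch command; replace with the server's documented one.
    params = StdioServerParameters(command="binary-ninja-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server actually exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool name and arguments, for illustration only.
            result = await session.call_tool("decompile_function", {"name": "main"})
            print(result.content)

asyncio.run(main())
```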

Larkrs Mcp

MCP Tekmetric

A Model Context Protocol server that allows AI assistants to interact with Tekmetric data, enabling users to query appointment details, vehicle information, repair order status, and parts inventory through natural language.

MCP Manager

A flexible server that enables communication between AI models and tools, supports multiple MCP servers, and is compatible with Claude, MCP Dockmaster, and other MCP clients.

hny-mcp

A server for interacting with Honeycomb observability data, enabling large language models (LLMs) like Claude to directly analyze and query your Honeycomb datasets.

Memory Box MCP Server

A Cline MCP integration that lets users save, search, and format memories with semantic understanding, providing tools that store and retrieve information with vector embeddings for meaning-based search.

Fabric MCP Agent

Enables natural language querying of Microsoft Fabric Data Warehouses with intelligent SQL generation, metadata exploration, and business-friendly result summarization. Features a two-layer architecture with an MCP-compliant server and agentic AI reasoning for production-ready enterprise data access.

Spreadsheet MCP Server

Provides a Model Context Protocol (MCP) server that enables large language models (LLMs) to directly access and interact with Google Sheets data.

MCP-toolhouse

An MCP server for toolhouse.ai. Unlike the official server, it does not rely on an external LLM.

CCXT MCP Server

Mirror.

SingleStore MCP Server

A server for interacting with SingleStore databases, supporting table queries, schema descriptions, and ER diagram generation, with secure SSL support and TypeScript safety.

IDA-MCP

Enables interaction with multiple IDA Pro instances through MCP, allowing users to list functions, search instances, and manage reverse engineering analysis across different binary files. Supports multi-instance coordination with automatic discovery and tool forwarding between IDA Pro sessions.

UUID MCP Server

MCP_server_fastapi

MkDocs MCP Search Server

Enables Claude and other LLMs to search through any published MkDocs documentation site using the Lunr.js search engine, allowing the AI to find and summarize relevant documentation for users.

@f4ww4z/mcp-mysql-server

Mirror.

MCP Apple Notes

A Model Context Protocol server that enables semantic search and retrieval of Apple Notes content, allowing AI assistants to access, search, and create notes using on-device embeddings.

Terraform Cloud MCP Server

Enables AI assistants to interact with Terraform Cloud workspaces and runs, including checking run status, listing workspaces, and retrieving detailed information about workspaces and runs.

Meme MCP Server

A simple Model Context Protocol server that allows AI models to generate meme images using the ImgFlip API, enabling users to create memes from text prompts.
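The server presumably wraps ImgFlip's `caption_image` endpoint. Here is a minimal standalone sketch of that call, assuming credentials are supplied via environment variables (`IMGFLIP_USERNAME`/`IMGFLIP_PASSWORD` are assumptions) and using a well-known template id:

```python
import os

import requests

response = requests.post(
    "https://api.imgflip.com/caption_image",
    data={
        "template_id": "181913649",  # ImgFlip's "Drake Hotline Bling" template
        "username": os.environ["IMGFLIP_USERNAME"],  # assumed env var names
        "password": os.environ["IMGFLIP_PASSWORD"],
        "text0": "Captioning memes by hand",
        "text1": "Asking an MCP server to do it",
    },
    timeout=10,
)
payload = response.json()
if payload.get("success"):
    print(payload["data"]["url"])  # direct link to the generated image
else:
    print("ImgFlip error:", payload.get("error_message"))
```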

ThemeParks.wiki API MCP Server

An MCP server for the ThemeParks.wiki API.

MCP SSE demo

A demo of MCP SSE server limitations using the Bun runtime.

MCP SSH Server for Windsurf

An MCP SSH server for Windsurf integration.

Textin MCP Server

A server that provides OCR capabilities for recognizing text in images, PDFs, and Word documents, converting them to Markdown, and extracting key information.

Binary Reader MCP

A Model Context Protocol server for reading and analyzing binary files, with initial support for Unreal Engine asset files (.uasset).

systemprompt-mcp-server

A production-ready Model Context Protocol server that integrates with Reddit, demonstrating complete MCP specification with OAuth 2.1, sampling, elicitation, structured data validation, and real-time notifications.

Basecamp MCP Server by CData

This read-only MCP Server allows you to connect to Basecamp data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

BuildAutomata Memory MCP Server

Provides AI agents with persistent, searchable memory that survives across conversations using semantic search, temporal versioning, and smart organization. Enables long-term context retention and cross-session continuity for AI assistants.

Testomatio MCP Server

A Model Context Protocol server that enables AI assistants like Cursor to interact with Testomatio test management platform, allowing users to query test cases, runs, and plans through natural language.

Japanese Text Analyzer

Counts characters and words in English and Japanese text files. Characters are counted with spaces and line breaks excluded, and Japanese word counts use MeCab morphological analysis when available (Japanese does not separate words with spaces), falling back to whitespace splitting otherwise.
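The listing originally carried a full sample script; a condensed version of its core counting logic is sketched below. MeCab is optional and only needed for accurate Japanese word segmentation (`pip install mecab-python3`; MeCab itself may also require a system-level install).

```python
import re
import sys

try:
    import MeCab  # optional morphological analyzer for Japanese
    MECAB_AVAILABLE = True
except ImportError:
    MECAB_AVAILABLE = False

def count_characters_words(filepath, is_japanese=False):
    """Return (character_count, word_count) for a UTF-8 text file."""
    with open(filepath, "r", encoding="utf-8") as f:
        text = f.read()

    # Count characters with all whitespace (spaces, tabs, newlines) removed.
    character_count = len(re.sub(r"\s", "", text))

    if is_japanese and MECAB_AVAILABLE:
        # Japanese has no spaces between words, so segment morphologically.
        tagger = MeCab.Tagger()
        node = tagger.parseToNode(text)
        word_count = 0
        while node:
            if node.feature.split(",")[0] != "BOS/EOS":  # skip sentence markers
                word_count += 1
            node = node.next
    else:
        # Fallback: whitespace splitting (inaccurate for Japanese).
        word_count = len(text.split())

    return character_count, word_count

if __name__ == "__main__":
    # Usage: python count_words.py english.txt -j japanese.txt
    japanese = False
    for arg in sys.argv[1:]:
        if arg == "-j":
            japanese = True  # files after -j are treated as Japanese
            continue
        chars, words = count_characters_words(arg, is_japanese=japanese)
        print(f"{arg}: {chars} characters (excluding whitespace), {words} words")
```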

MCP Tavily Search Server

Integrates Tavily's search API with LLMs (large language models) to provide advanced web search capabilities, including intelligent result summarization, domain filtering for quality control, and configurable search parameters.