Discover Awesome MCP Servers
Extend your agent's capabilities with MCP servers — 12,392 capabilities available.

Meme MCP Server
A simple Model Context Protocol server that allows AI models to generate meme images using the ImgFlip API, enabling users to create memes from text prompts.
ThemeParks.wiki API MCP Server
An MCP server for the ThemeParks.wiki API.

Japanese Text Analyzer
An MCP server that counts characters and words in English and Japanese text files. Files are read as UTF-8; whitespace is stripped before character counting, and Japanese word counts use MeCab morphological analysis when available, falling back to whitespace splitting otherwise. Multiple files can be processed from the command line, with a -j flag marking Japanese inputs.
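The language-aware counting can be sketched in a few lines of Python. This is a minimal illustration, not the server's actual code: it assumes the optional mecab-python3 package for Japanese segmentation and falls back to whitespace splitting when MeCab is unavailable.

```python
import re

try:
    import MeCab  # optional: morphological analysis for Japanese
    _tagger = MeCab.Tagger()
except ImportError:
    _tagger = None

def count_characters_words(text, is_japanese=False):
    """Return (character_count, word_count); characters exclude whitespace."""
    character_count = len(re.sub(r"\s", "", text))
    if is_japanese and _tagger is not None:
        node = _tagger.parseToNode(text)
        word_count = 0
        while node:
            # Skip MeCab's sentence-boundary markers, which are not words
            if node.feature.split(",")[0] != "BOS/EOS":
                word_count += 1
            node = node.next
    else:
        # Whitespace splitting: fine for English, crude for Japanese
        word_count = len(text.split())
    return character_count, word_count
```

For example, `count_characters_words("This is a test.")` returns `(12, 4)`.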
MCP SSH Server for Windsurf
An MCP SSH server for Windsurf integration.

MCP Tavily Search Server
Integrates Tavily's search API with LLMs (large language models) to provide advanced web search capabilities, including intelligent result summarization, domain filtering for quality control, and configurable search parameters.

S3 MCP Server
An Amazon S3 Model Context Protocol server that allows large language models like Claude to interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.

Oracle Eloqua MCP Server by CData
This read-only MCP Server allows you to connect to Oracle Eloqua data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp
Claude Web Scraper MCP
A simple MCP server that integrates the eGet web scraping tool with Claude for Desktop. This connector lets Claude scrape web content through your local eGet API, enabling website search, summarization, and analysis directly within a conversation.

MCP YouTube Server
A server for downloading, processing, and managing YouTube content, with features such as video quality selection, format conversion, and metadata extraction.
Gdb Mcp Server

Shortcut MCP Server
A Model Context Protocol server for interacting with the Shortcut (formerly Clubhouse) project management tool, allowing users to view and search projects, stories, epics, and objectives, and to create new items through natural language.

Perplexity AI MCP Server
This server provides access to the Perplexity AI API, enabling chat, search, and document retrieval within MCP-based systems.
ESA MCP Server
Enables interaction with the esa.io API through the Model Context Protocol, supporting article search and retrieval via a compatible MCP interface.

Software Planning Tool
Facilitates interactive software development planning through the Model Context Protocol, managing tasks, tracking progress, and creating detailed implementation plans.

MCP Server Enhanced SSH
A powerful SSH server that facilitates secure remote command execution, with tmux session management, multi-window support, and intelligent session recovery to improve AI-human interaction.

RevenueCat to Adapty Migration MCP
A Model Context Protocol server that helps users migrate subscription businesses from RevenueCat to Adapty through natural language interactions with LLMs like Claude Desktop.

Mysql_mcp_server_pro
Adds support for STDIO and SSE modes; execution of multiple SQL statements separated by ";"; querying database table names and fields from table comments; SQL execution plan analysis; and conversion of Chinese field names to pinyin.
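The multi-statement feature amounts to splitting the input script on ";" before executing each piece. A minimal sketch of that step (the function name is illustrative, not the server's actual API; a naive split like this does not handle ";" inside string literals):

```python
def split_sql(script: str) -> list[str]:
    """Split a script into individual SQL statements on ';',
    dropping empty fragments and surrounding whitespace."""
    return [stmt.strip() for stmt in script.split(";") if stmt.strip()]
```

For example, `split_sql("SELECT 1; SELECT 2;")` yields `["SELECT 1", "SELECT 2"]`, each of which can then be executed in turn.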

G-Search MCP
A powerful MCP server that runs parallel Google searches on multiple keywords simultaneously, handling CAPTCHA challenges and simulating user browsing patterns while delivering structured search results.
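Running several keyword searches in parallel is commonly done with a thread pool. A minimal sketch of the pattern, assuming a `search_fn` stand-in for the server's actual Google search routine (not its real API):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_search(keywords, search_fn, max_workers=4):
    """Run search_fn for every keyword concurrently and
    return a dict mapping each keyword to its results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(search_fn, keywords)
    return dict(zip(keywords, results))
```

Because the work is I/O-bound (network requests), threads overlap the waiting time even under Python's GIL.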
Serveur MCP pour n8n
Mirror.

MCP GameBoy Server
A Model Context Protocol server that enables LLMs to interact with a GameBoy emulator, providing tools for controlling the GameBoy, loading ROMs, and retrieving screen frames.
Google Custom Search Engine MCP Server
Enables search via a Google Custom Search Engine, letting users enter search terms and retrieve the titles, links, and snippets of results, while facilitating integration with other tools for content extraction and advanced search strategies.

Notion MCP Server
A Model Context Protocol server that provides AI models with a standardized interface for accessing, querying, and modifying content in a Notion workspace.
mcp-server-cli
A Model Context Protocol server for running shell scripts or commands.
MCP OpenMetadata
An MCP server that exposes the OpenMetadata API.
Backstage MCP
A simple Backstage MCP server built with quarkus-backstage.

ChatSum
Summarizes WeChat chat messages: paste a conversation and it returns a concise summary, with better results the more context you provide.
MCP Etherscan Server
Mirror.

AutoCAD LT AutoLISP MCP Server
Enables natural language control of AutoCAD LT through AutoLISP code generation and execution, allowing users to create engineering drawings with conversational prompts.
mcp-c
An MCP server framework written in C, for efficient and easy development.
Remote MCP Server on Cloudflare