Discover Great MCP Servers

Extend your agent with 25,902 capabilities through MCP servers.

Perplexity AI MCP Server

This server provides access to the Perplexity AI API, enabling chat, search, and document retrieval within MCP-based systems.

MCP Server Enhanced SSH

A robust SSH server that enables secure remote command execution with TMUX session management, multi-window support, and smart session recovery, improving AI-human interaction.

Company API MCP Server Template

A template for building Model Context Protocol servers that connect to company REST APIs using FastMCP, providing authentication handling, error management, and example tools for common API operations.

MCP J-Link Server

Enables AI to directly control SEGGER J-Link embedded debug probes via the Model Context Protocol for debugging and firmware management. Users can perform tasks like reading registers, analyzing memory, flashing firmware, and tracking RTT logs using natural language commands.

Japanese Text Analyzer

Counts characters and words in English and Japanese text files. Files are read as UTF-8, whitespace is excluded from character counts, and Japanese word segmentation uses MeCab morphological analysis when available, falling back to whitespace splitting otherwise.
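The core counting approach is compact enough to sketch. The helper below is a minimal illustration, not the server's actual code: it excludes all whitespace from the character count and splits words on whitespace, which undercounts Japanese (a morphological analyzer such as MeCab would be needed for accurate segmentation).

```python
import re

def count_characters_words(text: str) -> tuple[int, int]:
    """Count characters (excluding all whitespace) and whitespace-separated words.

    A Japanese-aware version would segment words with a morphological
    analyzer such as MeCab instead of splitting on spaces.
    """
    character_count = len(re.sub(r"\s", "", text))  # drop spaces, tabs, newlines
    word_count = len(text.split())
    return character_count, word_count

print(count_characters_words("This is a test.\nIt has some words."))  # (27, 8)
```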

Google Custom Search Engine MCP Server

Enables search via a Google Custom Search Engine, letting users submit queries and retrieve result titles, links, and snippets, while integrating with other tools for content extraction and advanced search strategies.
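Google's Custom Search JSON API returns an `items` array whose entries carry `title`, `link`, and `snippet` fields, which is presumably what such a server surfaces. A minimal extraction sketch (the sample response below is fabricated for illustration):

```python
def extract_results(response: dict) -> list[dict]:
    """Pull the title/link/snippet triple out of each Custom Search API result."""
    return [
        {"title": item.get("title"), "link": item.get("link"), "snippet": item.get("snippet")}
        for item in response.get("items", [])
    ]

sample = {  # abbreviated, fabricated response body
    "items": [
        {"title": "Example Domain", "link": "https://example.com", "snippet": "Example snippet."},
    ]
}
print(extract_results(sample))
```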

mcp-server-cli

A Model Context Protocol server for running shell scripts or commands.

mcp-c

An MCP server framework written in C for efficient, easy development.

Claude Telemetry MCP

Provides comprehensive telemetry and usage analytics for Claude Code sessions, including token usage tracking, cost monitoring, and tool usage patterns. Enables users to monitor their Claude usage with detailed metrics, warnings, and trend analysis.

MCP Kubernetes

Enables advanced management of Kubernetes clusters through natural language interactions. Supports querying, managing, and monitoring pods, deployments, nodes, and logs across multiple contexts and namespaces.

MCP Câmara BR

Enables interaction with the Brazilian Chamber of Deputies Open Data API, providing access to information about legislators, legislative proposals, voting records, events, committees, and parliamentary activities through 57 typed and validated tools.

Shortcut MCP Server

A Model Context Protocol server that interacts with the Shortcut (formerly Clubhouse) project management tool, allowing users to view and search projects, stories, epics, and objectives, and to create new items through natural language.

ESA MCP Server

Enables interaction with the esa.io API through the Model Context Protocol, supporting article search and retrieval via a compatible MCP interface.

Software Planning Tool

Facilitates interactive software development planning through the Model Context Protocol by managing tasks, tracking progress, and creating detailed implementation plans.

RevenueCat to Adapty Migration MCP

A Model Context Protocol server that helps users migrate subscription businesses from RevenueCat to Adapty through natural language interactions with LLMs like Claude Desktop.

Mysql_mcp_server_pro

Adds support for STDIO and SSE modes, execution of multiple SQL statements separated by ";", lookup of table names and fields by table comment, SQL execution-plan analysis, and conversion of Chinese field names to pinyin.

WordPress MCP Server

An MCP server that enables ChatGPT to manage WordPress content by creating, updating, retrieving, and deleting posts via the WordPress REST API. It uses FastAPI and Cloudflare Tunnels to provide a secure interface for natural language site administration.

EspoCRM-MCP

An open-source MCP server for EspoCRM.

Serveur MCP pour n8n

Mirror.

MCP GameBoy Server

A Model Context Protocol server that enables LLMs to interact with a GameBoy emulator, providing tools for controlling the GameBoy, loading ROMs, and retrieving screen frames.

ArtifactHub MCP Server

Enables interaction with Helm charts on ArtifactHub by providing tools to retrieve chart metadata, default values, and templates. It supports fuzzy searching within values and templates to simplify the discovery and analysis of Kubernetes packages.

Shopify MCP Server

Enables interaction with Shopify store data via the GraphQL Admin API for managing products, customers, and orders. It allows users to search, retrieve, create, and update store records through natural language commands.

Notion MCP Server

A Model Context Protocol server that gives AI models a standardized interface for accessing, querying, and modifying content in a Notion workspace.

Knowledge Retrieval Server

A BM25-based MCP server that enables document search and retrieval across structured domains of knowledge content, allowing Claude to search and reference documentation when answering questions.
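BM25 itself is compact enough to sketch. The scorer below is a minimal Okapi BM25 over pre-tokenized documents, an illustration of the ranking function named in the description rather than this server's code; `k1` and `b` are the usual defaults:

```python
import math
from collections import Counter

def bm25_scores(query: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for q in query:
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["mcp", "server", "docs"], ["kubernetes", "cluster", "docs"], ["mcp", "protocol"]]
print(bm25_scores(["mcp", "protocol"], docs))
```

Documents matching more (and rarer) query terms score higher; the length normalization controlled by `b` discounts long documents.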

OpenF1 MCP Server

Enables access to Formula 1 data from the openF1.org API, including driver information, race results, lap times, telemetry, pit stops, weather conditions, and live position data across multiple seasons.

Image MCP Server

An MCP server that provides AI image generation capabilities using OpenAI and Replicate APIs with support for customizable prompts and dimensions. It features specialized tools for generating square, landscape, and portrait images through simple natural language commands.

Gdb Mcp Server

FogBugz MCP Server

Enables interaction with FogBugz issue tracking system through its XML API, allowing users to search, create, update, and manage cases, comments, and attachments directly from their MCP client.

GeoFS MCP Server

A server that lets AI models control and interact with aircraft in the GeoFS browser-based flight simulator through a standardized interface.

MCP OpenMetadata

An MCP server that exposes the OpenMetadata API.