Discover Awesome MCP Servers

Extend your agent's capabilities through MCP servers, with 16,266 capabilities available.
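Most of the servers below are wired into a client the same way. For Claude Desktop, that means adding an entry to `claude_desktop_config.json`. A minimal sketch, assuming a Node-based server distributed via npm (the package name, server name, and environment variable here are placeholders, not any particular server's actual values):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": { "API_KEY": "your-key-here" }
    }
  }
}
```

After editing the file, restart the client so it launches the server and discovers its tools.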

Fabric MCP Agent

Enables natural language querying of Microsoft Fabric Data Warehouses with intelligent SQL generation, metadata exploration, and business-friendly result summarization. Features two-layer architecture with MCP-compliant server and agentic AI reasoning for production-ready enterprise data access.

Spreadsheet MCP Server

Provides a Model Context Protocol (MCP) server that lets large language models (LLMs) directly access and interact with Google Sheets data.

Binary Reader MCP

A Model Context Protocol server for reading and analyzing binary files, with initial support for Unreal Engine asset files (.uasset).

CCXT MCP Server

Mirror

Remote MCP Server

A Cloudflare Workers-based Model Context Protocol server that enables AI assistants like Claude to access external tools via OAuth authentication.

SingleStore MCP Server

A server for interacting with SingleStore databases, supporting table queries, schema descriptions, and ER diagram generation, with secure SSL support and TypeScript type safety.

systemprompt-mcp-server

A production-ready Model Context Protocol server that integrates with Reddit, demonstrating complete MCP specification with OAuth 2.1, sampling, elicitation, structured data validation, and real-time notifications.

Japanese Text Analyzer

A tool for counting characters and words in English and Japanese text files. Character counts exclude whitespace, and Japanese word counts use MeCab morphological analysis (falling back to whitespace splitting when MeCab is unavailable), since Japanese does not separate words with spaces.
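The analyzer's counting approach, whitespace-stripped character counts plus MeCab-based word segmentation for Japanese, can be sketched as follows (simplified to operate on an in-memory string; `count_chars_words` is an illustrative name, not the server's actual API):

```python
import re

def count_chars_words(text, is_japanese=False):
    """Count non-whitespace characters and words in a string.

    For Japanese, word segmentation uses MeCab when available, since
    Japanese does not separate words with spaces; otherwise the count
    falls back to naive whitespace splitting.
    """
    # Character count excludes all whitespace (spaces, tabs, newlines).
    char_count = len(re.sub(r"\s", "", text))

    if is_japanese:
        try:
            import MeCab  # optional: pip install mecab-python3
            node = MeCab.Tagger().parseToNode(text)
            word_count = 0
            while node:
                # Skip MeCab's sentence-boundary (BOS/EOS) markers.
                if node.feature.split(",")[0] != "BOS/EOS":
                    word_count += 1
                node = node.next
            return char_count, word_count
        except ImportError:
            pass  # MeCab not installed: fall through to whitespace splitting
    return char_count, len(text.split())
```

For example, `count_chars_words("This is a test.")` returns `(12, 4)`; the Japanese path is only as accurate as the installed MeCab dictionary.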

MCP_server_fastapi

MkDocs MCP Search Server

Enables Claude and other LLMs to search through any published MkDocs documentation site using the Lunr.js search engine, allowing the AI to find and summarize relevant documentation for users.

Basecamp MCP Server by CData

This read-only MCP Server allows you to connect to Basecamp data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

Testomatio MCP Server

A Model Context Protocol server that enables AI assistants like Cursor to interact with Testomatio test management platform, allowing users to query test cases, runs, and plans through natural language.

Meme MCP Server

A simple Model Context Protocol server that lets AI models generate meme images using the ImgFlip API, enabling users to create memes from text prompts.

Edgar MCP Service

Enables deep analysis of SEC EDGAR filings through universal company search, document content extraction, and advanced filing search capabilities. Provides AI-ready access to business descriptions, risk factors, financial statements, and full-text search across any public company's SEC documents.

ThemeParks.wiki API MCP Server

An MCP server for the ThemeParks.wiki API.

MCP SSH Server for Windsurf

An MCP SSH server for Windsurf integration.

S3 MCP Server

An Amazon S3 Model Context Protocol server that lets large language models like Claude interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.

MCP Etherscan Server

Mirror

Oracle Eloqua MCP Server by CData

This read-only MCP Server allows you to connect to Oracle Eloqua data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

Claude Telemetry MCP

Provides comprehensive telemetry and usage analytics for Claude Code sessions, including token usage tracking, cost monitoring, and tool usage patterns. Enables users to monitor their Claude usage with detailed metrics, warnings, and trend analysis.

MCP Kubernetes

Enables advanced management of Kubernetes clusters through natural language interactions. Supports querying, managing, and monitoring pods, deployments, nodes, and logs across multiple contexts and namespaces.

arXiv MCP Server

Enables querying and discovering the latest arXiv papers by category or keyword, providing structured metadata including titles, authors, summaries, and links for research assistance and literature review workflows.

Claude Web Scraper MCP

A simple MCP server that integrates the eGet web scraping tool with Claude for Desktop. This connector lets Claude scrape web content through your local eGet API, enabling search, summarization, and analysis of websites directly in conversation.

MCP YouTube Server

A server for downloading, processing, and managing YouTube content, with features such as video quality selection, format conversion, and metadata extraction.

Shortcut MCP Server

A Model Context Protocol server for interacting with the Shortcut (formerly Clubhouse) project management tool, letting users view and search projects, stories, epics, and objectives, and create new projects through natural language.

BuildAutomata Memory MCP Server

Provides AI agents with persistent, searchable memory that survives across conversations using semantic search, temporal versioning, and smart organization. Enables long-term context retention and cross-session continuity for AI assistants.

Perplexity AI MCP Server

This server provides access to the Perplexity AI API, enabling chat, search, and documentation retrieval within MCP-based systems.

ESA MCP Server

Enables interaction with the esa.io API through the Model Context Protocol, supporting article search and retrieval via a compliant MCP interface.

Software Planning Tool

Facilitates interactive software development planning through the Model Context Protocol, managing tasks, tracking progress, and creating detailed implementation plans.

MCP Server Enhanced SSH

A powerful SSH server that enables secure remote command execution with TMUX session management, multi-window support, and intelligent session recovery, improving AI-human interaction.