Discover Awesome MCP Servers
Extend your agent's capabilities through MCP servers, with 14,536 capabilities available.
A Cloud Automator MCP server
An unofficial MCP server for working with the Cloud Automator REST API.

code-to-tree

MCP Selenium Server
A Model Context Protocol server implementation that enables browser automation through standardized MCP clients, supporting features like navigation, element interaction, and screenshots across Chrome, Firefox, and Edge browsers.
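As a rough illustration of what such a server automates under the hood, here is a minimal sketch using the standard Selenium Python bindings (the MCP server exposes equivalent actions as tools; the URL and selector below are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Chrome shown here; Firefox and Edge work the same way via their drivers.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com")             # navigation
    link = driver.find_element(By.TAG_NAME, "a")  # element interaction
    print(link.text)
    driver.save_screenshot("page.png")            # screenshot
finally:
    driver.quit()
```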
CodeSynapse
An MCP (Model Context Protocol) server that integrates with the Language Server Protocol (LSP) to expose rich semantic information from a codebase to LLM coding agents.
AI MCP Portal
The portal site for aimcp.
uv-mcp
An MCP server for introspecting Python environments.
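A plausible sketch of such an environment-introspection tool, assuming the FastMCP helper from the official `mcp` Python SDK (the server and tool names here are hypothetical, not uv-mcp's actual API):

```python
import sys
from importlib.metadata import distributions

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("python-env")  # hypothetical server name

@mcp.tool()
def list_packages() -> list[str]:
    """Return name==version for every package in the current environment."""
    return sorted(f"{d.metadata['Name']}=={d.version}" for d in distributions())

@mcp.tool()
def python_version() -> str:
    """Return the running interpreter's version string."""
    return sys.version

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```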

EPICS MCP Server
A Python-based server that interacts with EPICS process variables, allowing users to retrieve PV values, set PV values, and fetch detailed information about PVs through a standardized interface.
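A hedged sketch of how PV access could look, assuming the pyepics library and the FastMCP helper from the `mcp` SDK (tool names are illustrative; an EPICS IOC must be reachable for `caget`/`caput` to succeed):

```python
from epics import caget, caput  # pyepics channel-access client
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("epics")  # hypothetical server name

@mcp.tool()
def get_pv(name: str) -> str:
    """Read the current value of a process variable."""
    return str(caget(name))

@mcp.tool()
def set_pv(name: str, value: str) -> str:
    """Write a value to a process variable."""
    caput(name, value)
    return f"{name} set to {value}"

if __name__ == "__main__":
    mcp.run()
```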

ABS MCP Server
An MCP server that provides AI assistants with access to Australian Bureau of Statistics data through the SDMX-ML API, enabling statistical data querying and analysis.
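For orientation, the kind of SDMX REST call such a server wraps might look like the sketch below; the base URL is the public ABS Data API, but the dataflow ID and parameters are illustrative assumptions:

```python
import requests

BASE = "https://api.data.abs.gov.au/rest"  # public ABS Data API (SDMX REST)

def query_abs(dataflow: str, key: str = "all", start_period: str = "2023") -> str:
    """Fetch an SDMX-ML dataset for a dataflow, e.g. 'CPI' (illustrative)."""
    resp = requests.get(
        f"{BASE}/data/{dataflow}/{key}",
        params={"startPeriod": start_period},
        headers={"Accept": "application/xml"},  # ask for SDMX-ML
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

print(query_abs("CPI")[:500])  # first part of the SDMX-ML document
```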

Vitest MCP Server
AI-optimized Vitest interface that provides structured test output, visual debugging context, and intelligent coverage analysis for more effective AI assistance with testing.
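Structured output from Vitest itself comes from its JSON reporter (a real Vitest flag); how this server post-processes it is not documented here, so the wrapper below is only a sketch, assuming Jest-style field names in the report:

```python
import json
import subprocess

def run_vitest(project_dir: str) -> dict:
    """Run the test suite once and return Vitest's machine-readable report."""
    result = subprocess.run(
        ["npx", "vitest", "run", "--reporter=json"],
        cwd=project_dir, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

report = run_vitest(".")
print(report["numTotalTests"], "tests,", report["numFailedTests"], "failed")
```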
Fetch MCP Server
An MCP server that fetches URLs from web pages using Playwright and streams the extracted links back to clients over SSE, built on Node.js and Express.
Needle MCP Server
Integrates Needle into the Model Context Protocol.
teable-mcp-server
An MCP server for interacting with Teable databases.

XRAY MCP
Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.
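Since the description names ast-grep, a minimal structural search like the one XRAY builds on can be sketched as below; the pattern, the `--json` output shape, and the wrapper are illustrative assumptions:

```python
import json
import subprocess

def find_functions(path: str) -> list[dict]:
    """Structurally match Python function definitions with ast-grep."""
    result = subprocess.run(
        ["ast-grep", "run",
         "--pattern", "def $NAME($$$ARGS): $$$BODY",
         "--lang", "python", "--json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)  # one record per structural match

for match in find_functions("src/"):
    print(match["file"], match["range"]["start"]["line"])
```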
MCP API Server Template
VNDB MCP Server
A Model Context Protocol (MCP) server for accessing the Visual Novel Database (VNDB) API, enabling Claude AI to search for and retrieve visual novel information.
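Such a server most likely wraps VNDB's public HTTPS API (the `kana` endpoint); a hedged search sketch, with illustrative fields:

```python
import requests

def search_vn(title: str) -> list[dict]:
    """Search VNDB for visual novels matching a title query."""
    resp = requests.post(
        "https://api.vndb.org/kana/vn",
        json={"filters": ["search", "=", title],
              "fields": "title, released, rating"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]

for vn in search_vn("Steins;Gate"):
    print(vn["title"], vn.get("released"), vn.get("rating"))
```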
MCPAgentAI 🚀
A Python SDK designed to simplify interaction with MCP (Model Context Protocol) servers. It provides an easy-to-use interface for connecting to MCP servers, reading resources, and calling tools.
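For comparison, the official `mcp` Python SDK already exposes these primitives directly; a minimal client session looks roughly like this (the server command and script name are placeholders):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn an MCP server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            resources = await session.list_resources()  # read resources
            tools = await session.list_tools()          # discover callable tools
            print(resources, tools)

asyncio.run(main())
```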
mcpDemo
An MCP server demo.
ynab-mcp-server
Mirror.
🚀 Memgraph MCP Server
mcp_server
An MCP server for LLM integration.
Remote MCP Server on Cloudflare
MCP Server Markup Language (MCPML)
MCP Server Markup Language (MCPML) is a Python framework for building MCP servers with CLI and OpenAI agent support.

MCP Server
A Python implementation of the Model Context Protocol (MCP) that connects client applications with AI models, primarily Anthropic's models, with setup instructions for local development and deployment.

Dash MCP Server
Provides tools to interact with the Dash documentation browser API, allowing users to list installed docsets, search across documentation, and enable full-text search.

Weather MCP Server
A Model Context Protocol server that provides weather information and forecasts based on user location or address input.
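A plausible shape for such a server, assuming FastMCP and the free Open-Meteo forecast API (this entry's actual data source is not stated, so both are assumptions):

```python
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # hypothetical server name

@mcp.tool()
def current_weather(latitude: float, longitude: float) -> dict:
    """Return current conditions for a coordinate (Open-Meteo, no API key)."""
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": latitude, "longitude": longitude,
                "current_weather": True},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["current_weather"]

if __name__ == "__main__":
    mcp.run()
```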

DDG MCP2
A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.
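Since FastMCP is named explicitly, the echo/server-info template is straightforward to reconstruct; tool names below are illustrative:

```python
import platform

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ddg-mcp2")  # illustrative server name

@mcp.tool()
def echo(message: str) -> str:
    """Echo a message back to the caller."""
    return message

@mcp.tool()
def server_info() -> dict:
    """Return basic information about the host running this server."""
    return {"python": platform.python_version(), "system": platform.system()}

if __name__ == "__main__":
    mcp.run()
```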

chromium-arm64
MCP server that enables browser automation and web testing on ARM64 devices like Raspberry Pi, allowing users to navigate websites, take screenshots, execute JavaScript, and perform UI testing via Claude.
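The underlying automation could be driven by any Chromium driver; as a stand-in, here is a Playwright (Python) sketch covering the listed actions, with a placeholder URL:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()            # ARM64 Chromium builds work the same way
    page = browser.new_page()
    page.goto("https://example.com")         # navigate
    title = page.evaluate("document.title")  # execute JavaScript
    page.screenshot(path="page.png")         # take a screenshot
    print(title)
    browser.close()
```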
mcd-demo
Testing building a simple MCP server and integrating it with LangChain agents.

Tambo Docs MCP Server
Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.
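The fetch-plus-cache pattern it describes can be sketched in a few lines; docs.tambo.co comes from the description, while the paths and helpers are illustrative:

```python
from functools import lru_cache

import requests

@lru_cache(maxsize=128)  # cache pages so repeated lookups skip the network
def fetch_doc(path: str) -> str:
    """Fetch a documentation page from docs.tambo.co and return its HTML."""
    resp = requests.get(f"https://docs.tambo.co/{path.lstrip('/')}", timeout=30)
    resp.raise_for_status()
    return resp.text

def search_docs(path: str, term: str) -> bool:
    """Naive search: does a term appear on a (cached) page?"""
    return term.lower() in fetch_doc(path).lower()
```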
Linear MCP Server
A Linear MCP implementation that handles all Linear resource types.