Discover Great MCP Servers

Extend your agent's capabilities with MCP servers: 18,836 capabilities available.

Developer Tools (3,065)
Deno Deploy MCP Template Repo

A template repository for hosting MCP servers on Deno Deploy.

GitHub MCP Server

JBAssist - Microsoft Graph MCP Server

A Windows-compatible MCP server for querying the Microsoft Graph API.

CLI MCP Server

Mirror.

DeepSeek MCP Server

Mirror.

MCP TypeScript Template

A beginner-friendly foundation for building Model Context Protocol (MCP) servers (and, in the future, clients) with TypeScript. This template provides a comprehensive starting point with production-ready utilities, well-structured code, and working examples for building MCP servers.

MCP Server Testing Web App

create-mcp-server

An MCP server for creating MCP servers.

uv-mcp

An MCP server for introspecting Python environments.
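
An environment-introspection server of this kind typically reports facts like the interpreter version and installed packages. Below is a minimal Python sketch of the data such a tool might collect; the function name and result shape are illustrative, not uv-mcp's actual API:

```python
import sys
from importlib import metadata

def inspect_environment() -> dict:
    """Collect basic facts about the current Python environment.

    The shape of this dict is hypothetical; a real MCP server would
    expose something like it as a tool result or resource.
    """
    return {
        "python_version": ".".join(map(str, sys.version_info[:3])),
        "executable": sys.executable,
        "packages": sorted(
            {dist.metadata["Name"] for dist in metadata.distributions()
             if dist.metadata["Name"]}
        ),
    }

info = inspect_environment()
```

An MCP client could then ask the server for this snapshot instead of shelling out to `pip list` itself.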

mcpDemo

An MCP server demo.

CodeSynapse

An MCP (Model Context Protocol) server that integrates with the Language Server Protocol (LSP) to expose rich semantic information from a codebase to LLM coding agents.
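
LSP servers exchange JSON-RPC messages framed with a `Content-Length` header, so a bridge like this has to encode and decode that framing when talking to its LSP backend. A minimal Python sketch of the LSP base-protocol framing (this shows the wire format only, not CodeSynapse's internals):

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Encode a JSON-RPC payload with the LSP Content-Length framing."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

def parse_lsp_message(data: bytes) -> dict:
    """Decode one framed LSP message back into a dict."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length])

# A hover request, as a code-aware MCP server might issue to an LSP backend.
# The file URI and position are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/hover",
    "params": {
        "textDocument": {"uri": "file:///src/main.py"},
        "position": {"line": 10, "character": 4},
    },
}
framed = frame_lsp_message(request)
```

The round trip `parse_lsp_message(frame_lsp_message(msg))` returns the original payload, which is the invariant any such bridge relies on.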

mcd-demo

Testing the creation of a simple Minecraft (MCP) server and its integration with a LangChain agent.

VNDB MCP Server

A Model Context Protocol (MCP) server for accessing the Visual Novel Database (VNDB) API, enabling Claude AI to search for and retrieve visual novel information.

🚀 Memgraph MCP Server

MCP API Server Template

mcp_server

An MCP server for LLM integration.

MCP Server Markup Language (MCPML)

MCP Server Markup Language (MCPML): a Python framework for building MCP servers with CLI and OpenAI agent support.

Linear MCP Server

A Linear MCP implementation that handles all Linear resource types.

UIThub MCP Server

A simple GitHub MCP server.

Fetch MCP Server

Fetches URLs from a webpage with Playwright and streams them to clients over Server-Sent Events (SSE), orchestrated by a Node.js/Express server.
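
Server-Sent Events, which this server uses to stream scraped URLs, is just a text protocol: each event is a `data:` line followed by a blank line. A minimal Python sketch of producing and consuming that format (illustrative only; the actual server is Node.js-based):

```python
import json

def to_sse(payload: dict) -> str:
    """Format one JSON payload as a Server-Sent Events message."""
    return f"data: {json.dumps(payload)}\n\n"

def from_sse(stream: str) -> list:
    """Parse a concatenated SSE stream back into JSON payloads."""
    events = []
    for block in stream.split("\n\n"):
        if block.startswith("data: "):
            events.append(json.loads(block[len("data: "):]))
    return events

# Two events as the server might emit them while scraping.
stream = to_sse({"url": "https://example.com/a"}) + to_sse({"url": "https://example.com/b"})
urls = [event["url"] for event in from_sse(stream)]
```

Because each event is self-delimiting, the client can render URLs as they arrive instead of polling for a finished result.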

teable-mcp-server

An MCP server for interacting with Teable databases.

MCPAgentAI 🚀

A Python SDK designed to simplify interaction with MCP (Model Context Protocol) servers. It provides an easy-to-use interface for connecting to MCP servers, reading resources, and calling tools.
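
Under the hood, MCP clients speak JSON-RPC 2.0: an `initialize` handshake, then methods such as `tools/call`. A Python sketch of the message shapes involved, built by hand for illustration (a real SDK constructs and transports these for you; the tool name below is hypothetical):

```python
import json

def make_request(req_id: int, method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request as used by the MCP protocol."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# The initialize handshake a client sends when it connects to a server.
init = make_request(1, "initialize", {
    "protocolVersion": "2024-11-05",  # an MCP spec revision; negotiate with the server
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# Invoking a server-side tool by name with arguments ("search" is made up).
call = make_request(2, "tools/call", {
    "name": "search",
    "arguments": {"query": "visual novels"},
})

wire = json.dumps(call)  # what actually travels over stdio or SSE
```

An SDK's value is hiding this plumbing behind calls like "connect", "list tools", and "call tool".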

CTF-MCP-Server

jira-mcp-server

Remote MCP Server on Cloudflare

github-mcp-cursor-project-rules

Cursor project rules and MCP server.

Mcp Server Code Analyzer

Database Schema MCP Server

GitLab MCP Server

🚀 Go-Tapd-SDK

Go-Tapd-SDK is a Go client library for accessing the Tapd API, and it also supports the latest MCP server.