Puppeteer Vision MCP Server

This Model Context Protocol (MCP) server provides a tool for scraping webpages and converting them to markdown format using Puppeteer, Readability, and Turndown. It features AI-driven interaction capabilities to handle cookies, captchas, and other interactive elements automatically.

Now easily runnable via npx!

Features

  • Scrapes webpages using Puppeteer with stealth mode
  • Uses AI-powered interaction to automatically handle:
    • Cookie consent banners
    • CAPTCHAs
    • Newsletter or subscription prompts
    • Paywalls and login walls
    • Age verification prompts
    • Interstitial ads
    • Any other interactive elements blocking content
  • Extracts main content with Mozilla's Readability
  • Converts HTML to well-formatted Markdown
  • Special handling for code blocks, tables, and other structured content
  • Accessible via the Model Context Protocol
  • Option to view browser interaction in real-time by disabling headless mode
  • Runnable directly as an npx package

Quick Start with NPX

The recommended way to use this server is via npx, which ensures you're running the latest version without needing to clone or manually install.

  1. Prerequisites: Ensure you have Node.js and npm installed.

  2. Environment Setup: The server requires an OPENAI_API_KEY. You can provide this and other optional configurations in two ways:

    • .env file: Create a .env file in the directory where you will run the npx command.
    • Shell Environment Variables: Export the variables in your terminal session.

    Example .env file or shell exports:

    # Required
    OPENAI_API_KEY=your_api_key_here
    
    # Optional (defaults shown)
    # VISION_MODEL=gpt-4.1
    # API_BASE_URL=https://api.openai.com/v1   # Uncomment to override
    # TRANSPORT_TYPE=stdio                     # Options: stdio, sse, http
    # USE_SSE=true                             # Deprecated: use TRANSPORT_TYPE=sse instead
    # PORT=3001                                # Only used in sse/http modes
    # DISABLE_HEADLESS=true                    # Uncomment to see the browser in action
    
  3. Run the Server: Open your terminal and run:

    npx -y puppeteer-vision-mcp-server
    
    • The -y flag automatically confirms any prompts from npx.
    • This command will download (if not already cached) and execute the server.
    • By default, it starts in stdio mode. Set TRANSPORT_TYPE=sse or TRANSPORT_TYPE=http for HTTP server modes.

Using as an MCP Tool with NPX

This server is designed to be integrated as a tool within an MCP-compatible LLM orchestrator. Here's an example configuration snippet:

{
  "mcpServers": {
    "web-scraper": {
      "command": "npx",
      "args": ["-y", "puppeteer-vision-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE",
        // Optional:
        // "VISION_MODEL": "gpt-4.1",
        // "API_BASE_URL": "https://api.example.com/v1",
        // "TRANSPORT_TYPE": "stdio", // or "sse" or "http"
        // "DISABLE_HEADLESS": "true" // To see the browser during operations
      }
    }
    // ... other MCP servers
  }
}

When configured this way, the MCP orchestrator will manage the lifecycle of the puppeteer-vision-mcp-server process.

Environment Configuration Details

Regardless of how you run the server (NPX or local development), it uses the following environment variables:

  • OPENAI_API_KEY: (Required) Your API key for accessing the vision model.
  • VISION_MODEL: (Optional) The model to use for vision analysis.
    • Default: gpt-4.1
    • Can be any model with vision capabilities.
  • API_BASE_URL: (Optional) Custom API endpoint URL.
    • Use this to connect to alternative OpenAI-compatible providers (e.g., Together.ai, Groq, Anthropic, local deployments).
  • TRANSPORT_TYPE: (Optional) The transport protocol to use.
    • Options: stdio (default), sse, http
    • stdio: Direct process communication (recommended for most use cases)
    • sse: Server-Sent Events over HTTP (legacy mode)
    • http: Streamable HTTP transport with session management
  • USE_SSE: (Optional, deprecated) Set to true to enable SSE mode over HTTP.
    • Deprecated: Use TRANSPORT_TYPE=sse instead.
  • PORT: (Optional) The port for the HTTP server in SSE or HTTP mode.
    • Default: 3001.
  • DISABLE_HEADLESS: (Optional) Set to true to run the browser in visible mode.
    • Default: false (browser runs in headless mode).

Communication Modes

The server supports three communication modes:

  1. stdio (Default): Communicates via standard input/output.
    • Perfect for direct integration with LLM tools that manage processes.
    • Ideal for command-line usage and scripting.
    • No HTTP server is started.
  2. SSE mode: Communicates via Server-Sent Events over HTTP.
    • Enable by setting TRANSPORT_TYPE=sse in your environment.
    • Starts an HTTP server on the specified PORT (default: 3001).
    • Use when you need to connect to the tool over a network.
    • Connect to: http://localhost:3001/sse
  3. HTTP mode: Communicates via Streamable HTTP transport with session management.
    • Enable by setting TRANSPORT_TYPE=http in your environment.
    • Starts an HTTP server on the specified PORT (default: 3001).
    • Supports full session management and resumable connections.
    • Connect to: http://localhost:3001/mcp
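In SSE and HTTP modes, clients exchange standard MCP JSON-RPC messages with the server over HTTP. As a sketch, a client in HTTP mode would POST an initialize request like the following to http://localhost:3001/mcp (the client name and protocol version shown here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

Subsequent requests in HTTP mode carry the session ID the server returns, which is what enables the resumable connections mentioned above.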

Tool Usage (MCP Invocation)

The server provides a scrape-webpage tool.

Tool Parameters:

  • url (string, required): The URL of the webpage to scrape.
  • autoInteract (boolean, optional, default: true): Whether to automatically handle interactive elements.
  • maxInteractionAttempts (number, optional, default: 3): Maximum number of AI interaction attempts.
  • waitForNetworkIdle (boolean, optional, default: true): Whether to wait for network to be idle before processing.
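Putting these parameters together, an MCP client would invoke the tool with a tools/call request whose params resemble the following (the URL is illustrative):

```json
{
  "name": "scrape-webpage",
  "arguments": {
    "url": "https://example.com/article",
    "autoInteract": true,
    "maxInteractionAttempts": 3,
    "waitForNetworkIdle": true
  }
}
```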

Response Format:

The tool returns its result in a structured format:

  • content: An array containing a single text object with the raw markdown of the scraped webpage.
  • metadata: Contains additional information:
    • message: Status message.
    • success: Boolean indicating success.
    • contentSize: Size of the content in characters (on success).

Example Success Response:

{
  "content": [
    {
      "type": "text",
      "text": "# Page Title\n\nThis is the content..."
    }
  ],
  "metadata": {
    "message": "Scraping successful",
    "success": true,
    "contentSize": 8734
  }
}

Example Error Response:

{
  "content": [
    {
      "type": "text",
      "text": ""
    }
  ],
  "metadata": {
    "message": "Error scraping webpage: Failed to load the URL",
    "success": false
  }
}

How It Works

AI-Driven Interaction

The system uses vision-capable AI models (configurable via VISION_MODEL and API_BASE_URL) to analyze screenshots of web pages and decide on actions like clicking, typing, or scrolling to bypass overlays and consent forms. This process repeats up to maxInteractionAttempts.
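The analyze-act-retry loop above can be sketched as follows. This is a simplified, hypothetical illustration (the real logic lives in src/ai/vision-analyzer.ts and src/ai/page-interactions.ts); `analyze` stands in for the vision-model call on a screenshot, and `execute` stands in for applying an action to the Puppeteer page:

```typescript
// Possible actions the vision model can suggest (illustrative subset).
type Action =
  | { kind: "click"; selector: string }
  | { kind: "type"; selector: string; text: string }
  | { kind: "scroll" }
  | { kind: "done" }; // nothing is blocking the content

// Repeatedly ask the analyzer for an action and apply it, up to
// maxInteractionAttempts times. Returns the number of actions executed.
async function interactUntilClear(
  analyze: () => Promise<Action>,
  execute: (a: Action) => Promise<void>,
  maxInteractionAttempts = 3
): Promise<number> {
  let attempts = 0;
  while (attempts < maxInteractionAttempts) {
    const action = await analyze();
    if (action.kind === "done") break; // page is ready for extraction
    await execute(action);
    attempts++;
  }
  return attempts;
}
```

For example, an analyzer that first dismisses a cookie banner and then reports the page as clear would cause exactly one action to be executed before the loop exits.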

Content Extraction

After interactions, Mozilla's Readability extracts the main content, which is then sanitized and converted to Markdown using Turndown with custom rules for code blocks and tables.

Installation & Development (for Modifying the Code)

If you wish to contribute, modify the server, or run a local development version:

  1. Clone the Repository:

    git clone https://github.com/djannot/puppeteer-vision-mcp.git
    cd puppeteer-vision-mcp
    
  2. Install Dependencies:

    npm install
    
  3. Build the Project:

    npm run build
    
  4. Set Up Environment: Create a .env file in the project's root directory with your OPENAI_API_KEY and any other desired configurations (see "Environment Configuration Details" above).

  5. Run for Development:

    npm start # Starts the server using the local build
    

    Or, for automatic rebuilding on changes:

    npm run dev
    

Customization (for Developers)

You can modify the behavior of the scraper by editing:

  • src/ai/vision-analyzer.ts (analyzePageWithAI function): Customize the AI prompt.
  • src/ai/page-interactions.ts (executeAction function): Add new action types.
  • src/scrapers/webpage-scraper.ts (visitWebPage function): Change Puppeteer options.
  • src/utils/markdown-formatters.ts: Adjust Turndown rules for Markdown conversion.

Dependencies

Key dependencies include:

  • @modelcontextprotocol/sdk
  • puppeteer, puppeteer-extra
  • @mozilla/readability, jsdom
  • turndown, sanitize-html
  • openai (or compatible API for vision models)
  • express (for SSE mode)
  • zod
