AnyCrawl MCP Server

Enables web scraping and crawling capabilities for LLM clients, supporting single-page scraping, multi-page website crawling, and web search with multiple engines (Playwright, Cheerio, Puppeteer) and flexible output formats including markdown, HTML, text, and screenshots.


🚀 AnyCrawl MCP Server — Powerful web scraping and crawling for Cursor, Claude, and other LLM clients via the Model Context Protocol (MCP).

Features

  • Web Scraping: Extract content from single URLs with multiple output formats
  • Website Crawling: Crawl entire websites with configurable depth and limits
  • Search Engine Integration: Search the web and optionally scrape results
  • Multiple Engines: Support for Playwright, Cheerio, and Puppeteer
  • Flexible Output: Markdown, HTML, text, screenshots, and structured JSON
  • Async Operations: Non-blocking crawl jobs with status monitoring
  • Error Handling: Robust error handling and logging
  • Multiple Modes: STDIO (default), MCP (HTTP), SSE; cloud-ready with Nginx proxy

Installation

Running with npx

ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

Manual installation

npm install -g anycrawl-mcp-server

ANYCRAWL_API_KEY=YOUR-API-KEY anycrawl-mcp

Configuration

Set the required environment variable:

export ANYCRAWL_API_KEY="your-api-key-here"

Optionally set a custom base URL:

export ANYCRAWL_BASE_URL="https://api.anycrawl.dev"  # Default
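A small launcher guard (a sketch, not part of the package) can verify this configuration before starting the server:

```shell
# Sketch: fail fast when ANYCRAWL_API_KEY is missing, and report the
# effective base URL (the public endpoint is the documented default).
anycrawl_env_summary() {
  if [ -z "${ANYCRAWL_API_KEY:-}" ]; then
    echo "error: ANYCRAWL_API_KEY is not set" >&2
    return 1
  fi
  echo "base_url=${ANYCRAWL_BASE_URL:-https://api.anycrawl.dev}"
}
```

Run it before `npx -y anycrawl-mcp` to avoid launching the server without credentials.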

Get your API key

  • Visit the AnyCrawl website (https://anycrawl.dev) and sign up or log in.
  • 🎉 Sign up for free to receive 1,500 credits — enough to crawl nearly 1,500 pages.
  • Open the dashboard → API Keys and copy your key.
  • Set it as the ANYCRAWL_API_KEY environment variable (see above).

Usage

Available Modes

AnyCrawl MCP Server supports the following deployment modes:

Default mode is STDIO (no env needed). Set ANYCRAWL_MODE to switch.

| Mode  | Description                       | Best For                                  | Transport   |
| ----- | --------------------------------- | ----------------------------------------- | ----------- |
| STDIO | Standard MCP over stdio (default) | Command-type MCP clients, local tooling   | stdio       |
| MCP   | Streamable HTTP (JSON, stateful)  | Cursor (streamable_http), API integration | HTTP + JSON |
| SSE   | Server-Sent Events                | Web apps, browser integrations            | HTTP + SSE  |

Quick Start Commands

# Development (local)
npm run dev            # STDIO (default)
npm run dev:mcp        # MCP mode (JSON /mcp)
npm run dev:sse        # SSE mode (/sse)

# Production (built output)
npm start              # STDIO (default)
npm run start:mcp
npm run start:sse

# Env examples
ANYCRAWL_MODE=MCP ANYCRAWL_API_KEY=YOUR-KEY npm run dev:mcp
ANYCRAWL_MODE=SSE ANYCRAWL_API_KEY=YOUR-KEY npm run dev:sse

Docker Compose (MCP + SSE with Nginx)

This repo ships a production-ready image that runs MCP (JSON) on port 3000 and SSE on port 3001 in the same container, fronted by Nginx. Nginx also supports API-key-prefixed paths /{API_KEY}/mcp and /{API_KEY}/sse and forwards the key via the x-anycrawl-api-key header.

docker compose build
docker compose up -d

Environment variables used in Docker image:

  • ANYCRAWL_MODE: MCP_AND_SSE (default in compose), or MCP, SSE
  • ANYCRAWL_MCP_PORT: default 3000
  • ANYCRAWL_SSE_PORT: default 3001
  • CLOUD_SERVICE: true to extract API key from /{API_KEY}/... or headers
  • ANYCRAWL_BASE_URL: default https://api.anycrawl.dev
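The defaults above can be collected into a .env file next to the compose file (a sketch; adjust values for your deployment) so the compose invocation stays clean:

```shell
# .env sketch for docker compose, mirroring the defaults listed above
ANYCRAWL_MODE=MCP_AND_SSE
ANYCRAWL_MCP_PORT=3000
ANYCRAWL_SSE_PORT=3001
CLOUD_SERVICE=true
ANYCRAWL_BASE_URL=https://api.anycrawl.dev
```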

Running on Cursor

Configuring Cursor requires v0.45.6 or newer.

For Cursor v0.48.6 and newer, add this to your MCP Servers settings:

{
  "mcpServers": {
    "anycrawl-mcp": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

For Cursor v0.45.6:

  1. Open Cursor Settings → Features → MCP Servers → "+ Add New MCP Server"
  2. Name: "anycrawl-mcp" (or your preferred name)
  3. Type: "command"
  4. Command:
env ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

On Windows, if you encounter issues:

cmd /c "set ANYCRAWL_API_KEY=YOUR-API-KEY && npx -y anycrawl-mcp"

Running on VS Code

For manual installation, add this JSON to your User Settings (JSON) in VS Code (Command Palette → Preferences: Open User Settings (JSON)):

{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "AnyCrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "anycrawl": {
        "command": "npx",
        "args": ["-y", "anycrawl-mcp"],
        "env": {
          "ANYCRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}

Optionally, place the following in .vscode/mcp.json in your workspace to share config:

{
  "inputs": [
    {
      "type": "promptString",
      "id": "apiKey",
      "description": "AnyCrawl API Key",
      "password": true
    }
  ],
  "servers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "${input:apiKey}"
      }
    }
  }
}

Running on Windsurf

Add this to ./codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Running with SSE Server Mode

The SSE (Server-Sent Events) mode provides a web-based interface for MCP communication, ideal for web applications, testing, and integration with web-based LLM clients.

Quick Start

# Development mode (npx; STDIO is the default, so select SSE explicitly)
ANYCRAWL_MODE=SSE ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

# Or using npm scripts
ANYCRAWL_API_KEY=YOUR-API-KEY npm run dev:sse

Server Configuration

Optional server settings (defaults shown):

export ANYCRAWL_PORT=3000
export ANYCRAWL_HOST=0.0.0.0

Health Check

curl -s http://localhost:${ANYCRAWL_PORT:-3000}/health
# Response: ok

Generic MCP/SSE Client Configuration

For other MCP/SSE clients that support SSE transport, use this configuration:

{
  "mcpServers": {
    "anycrawl": {
      "type": "sse",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/sse",
      "name": "AnyCrawl MCP Server",
      "description": "Web scraping and crawling tools"
    }
  }
}

or

{
  "mcpServers": {
    "AnyCrawl": {
      "type": "streamable_http",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/mcp"
    }
  }
}

Environment Setup:

# Start SSE server with API key
ANYCRAWL_API_KEY=your-api-key-here npm run dev:sse

Cursor configuration for HTTP modes (streamable_http)

Configure Cursor to connect to your HTTP MCP server.

Local HTTP Streamable Server:

{
  "mcpServers": {
    "anycrawl-http-local": {
      "type": "streamable_http",
      "url": "http://127.0.0.1:3000/mcp"
    }
  }
}

Cloud HTTP Streamable Server:

{
  "mcpServers": {
    "anycrawl-http-cloud": {
      "type": "streamable_http",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/mcp"
    }
  }
}

Note: For HTTP modes, set ANYCRAWL_API_KEY (and optional host/port) in the server process environment or in the URL. Cursor does not need your API key when using streamable_http.
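Outside Cursor, the local HTTP endpoint can be exercised with plain curl. This is a sketch: the JSON-RPC envelope and the dual Accept header follow the MCP streamable HTTP transport spec, and a stateful server may additionally require an initialize request (and session header) before tools/list succeeds.

```shell
# Sketch: probe a local streamable HTTP server with a tools/list request.
# A stateful server may require the MCP initialize handshake first.
MCP_URL="http://127.0.0.1:3000/mcp"
LIST_TOOLS='{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

list_tools() {
  curl -s -X POST "$MCP_URL" \
    -H "Content-Type: application/json" \
    -H "Accept: application/json, text/event-stream" \
    -d "$LIST_TOOLS"
}
```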

Available Tools

1. Scrape Tool (anycrawl_scrape)

Scrape a single URL and extract content in various formats.

Best for:

  • Extracting content from a single page
  • Quick data extraction
  • Testing specific URLs

Parameters:

  • url (required): The URL to scrape
  • engine (required): Scraping engine (playwright, cheerio, puppeteer)
  • formats (optional): Output formats (markdown, html, text, screenshot, screenshot@fullPage, rawHtml, json)
  • proxy (optional): Proxy URL
  • timeout (optional): Timeout in milliseconds (default: 300000)
  • retry (optional): Whether to retry on failure (default: false)
  • wait_for (optional): Wait time for page to load
  • include_tags (optional): HTML tags to include
  • exclude_tags (optional): HTML tags to exclude
  • json_options (optional): Options for JSON extraction

Example:

{
  "name": "anycrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "engine": "cheerio",
    "formats": ["markdown", "html"],
    "timeout": 30000
  }
}

2. Crawl Tool (anycrawl_crawl)

Start a crawl job to scrape multiple pages from a website. By default this waits for completion and returns aggregated results using the SDK's client.crawl (defaults: poll every 3 seconds, timeout after 60 seconds).

Best for:

  • Extracting content from multiple related pages
  • Comprehensive website analysis
  • Bulk data collection

Parameters:

  • url (required): The base URL to crawl
  • engine (required): Scraping engine
  • max_depth (optional): Maximum crawl depth (default: 10)
  • limit (optional): Maximum number of pages (default: 100)
  • strategy (optional): Crawling strategy (all, same-domain, same-hostname, same-origin)
  • exclude_paths (optional): URL patterns to exclude
  • include_paths (optional): URL patterns to include
  • scrape_options (optional): Options for individual page scraping
  • poll_seconds (optional): Poll interval seconds for waiting (default: 3)
  • timeout_ms (optional): Overall timeout milliseconds for waiting (default: 60000)

Example:

{
  "name": "anycrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog",
    "engine": "playwright",
    "max_depth": 2,
    "limit": 50,
    "strategy": "same-domain",
    "poll_seconds": 3,
    "timeout_ms": 60000
  }
}

Returns: { "job_id": "...", "status": "completed", "total": N, "completed": N, "creditsUsed": N, "data": [...] }.
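For jobs that outlive the default 60-second waiting window, a client can poll the status tool itself. A generic polling loop looks like this (a sketch; crawl_status is a hypothetical helper standing in for whatever client call fetches the job's status field):

```shell
# Sketch: poll until a crawl job reports "completed" or a deadline passes.
# crawl_status is a hypothetical helper that prints the job's status.
wait_for_crawl() {
  job_id="$1"
  poll_seconds="${2:-3}"
  deadline=$(( $(date +%s) + ${3:-60} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    status="$(crawl_status "$job_id")"
    if [ "$status" = "completed" ]; then
      echo "completed"
      return 0
    fi
    sleep "$poll_seconds"
  done
  echo "timeout"
  return 1
}
```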

3. Crawl Status Tool (anycrawl_crawl_status)

Check the status of a crawl job.

Parameters:

  • job_id (required): The crawl job ID

Example:

{
  "name": "anycrawl_crawl_status",
  "arguments": {
    "job_id": "7a2e165d-8f81-4be6-9ef7-23222330a396"
  }
}

4. Crawl Results Tool (anycrawl_crawl_results)

Get results from a crawl job.

Parameters:

  • job_id (required): The crawl job ID
  • skip (optional): Number of results to skip (for pagination)

Example:

{
  "name": "anycrawl_crawl_results",
  "arguments": {
    "job_id": "7a2e165d-8f81-4be6-9ef7-23222330a396",
    "skip": 0
  }
}

5. Cancel Crawl Tool (anycrawl_cancel_crawl)

Cancel a pending crawl job.

Parameters:

  • job_id (required): The crawl job ID to cancel

Example:

{
  "name": "anycrawl_cancel_crawl",
  "arguments": {
    "job_id": "7a2e165d-8f81-4be6-9ef7-23222330a396"
  }
}

6. Search Tool (anycrawl_search)

Search the web using AnyCrawl search engine.

Best for:

  • Finding specific information across multiple websites
  • Research and discovery
  • When you don't know which website has the information

Parameters:

  • query (required): Search query
  • engine (optional): Search engine (google)
  • limit (optional): Maximum number of results (default: 10)
  • offset (optional): Number of results to skip (default: 0)
  • pages (optional): Number of pages to search
  • lang (optional): Language code
  • country (optional): Country code
  • scrape_options (required): Options for scraping search results
  • safeSearch (optional): Safe search level (0=off, 1=moderate, 2=strict)

Example:

{
  "name": "anycrawl_search",
  "arguments": {
    "query": "latest AI research papers 2024",
    "engine": "google",
    "limit": 5,
    "scrape_options": {
      "engine": "cheerio",
      "formats": ["markdown"]
    }
  }
}

Output Formats

Markdown

Clean, structured markdown content perfect for LLM consumption.

HTML

Raw HTML content with all formatting preserved.

Text

Plain text content with minimal formatting.

Screenshot

Visual screenshot of the page.

Screenshot@fullPage

Full-page screenshot including content below the fold.

Raw HTML

Unprocessed HTML content.

JSON

Structured data extraction using custom schemas.

Engines

Cheerio

  • Fast and lightweight
  • Good for static content
  • Server-side rendering

Playwright

  • Full browser automation
  • JavaScript rendering
  • Best for dynamic content

Puppeteer

  • Chrome/Chromium automation
  • Good balance of features and performance

Error Handling

The server provides comprehensive error handling:

  • Validation Errors: Invalid parameters or missing required fields
  • API Errors: AnyCrawl API errors with detailed messages
  • Network Errors: Connection and timeout issues
  • Rate Limiting: Automatic retry with backoff
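The same retry-with-backoff pattern can be reproduced on the client side; a generic shell version (a sketch of the technique, not the server's internal code) is:

```shell
# Sketch: retry a command with exponential backoff (1s, 2s, 4s, ...).
retry_with_backoff() {
  max_attempts="$1"; shift
  delay=1
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}
```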

Logging

The server includes detailed logging:

  • Debug: Detailed operation information
  • Info: General operation status
  • Warn: Non-critical issues
  • Error: Critical errors and failures

Set log level with environment variable:

export LOG_LEVEL=debug  # debug, info, warn, error

Development

Prerequisites

  • Node.js 18+
  • npm

Setup

git clone <repository>
cd anycrawl-mcp
npm ci

Build

npm run build

Test

npm test

Lint

npm run lint

Format

npm run format

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

License

MIT License - see LICENSE file for details

Support

About AnyCrawl

AnyCrawl is a powerful Node.js/TypeScript crawler that turns websites into LLM-ready data and extracts structured SERP results from Google/Bing/Baidu/etc. It features native multi-threading for bulk processing and supports multiple output formats.

  • Website: https://anycrawl.dev
  • GitHub: https://github.com/any4ai/anycrawl
  • API: https://api.anycrawl.dev
