Site Crawler MCP

A powerful Model Context Protocol (MCP) server for crawling websites and extracting assets including images and SEO metadata. Built for e-commerce sites and general web crawling needs.

Features

  • Comprehensive website analysis: 12 different extraction modes for complete website insights
  • Multi-mode crawling: Extract multiple data types in a single pass
  • Smart extraction: Advanced pattern matching for accurate results
  • Performance optimized: Concurrent crawling with rate limiting
  • Security analysis: HTTPS, security headers, SSL/TLS information
  • SEO analysis: Complete SEO audit including meta tags, structured data, and more
  • Legal compliance: KVKK, GDPR, privacy policy detection
  • Business intelligence: Brand info, references, contact details extraction

Installation

From PyPI (when published)

pip install site-crawler-mcp

From Source (Development)

Using uv (Recommended)

# Clone the repository
git clone https://github.com/AndacGuven/site-crawler-mcp.git
cd site-crawler-mcp

# Create virtual environment with Python 3.12
uv venv --python 3.12
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies and package
uv sync

Using pip

# Clone the repository
git clone https://github.com/AndacGuven/site-crawler-mcp.git
cd site-crawler-mcp

# Create virtual environment (recommended)
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On Linux/Mac:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install package in development mode
pip install -e .

Usage

As an MCP Server

Add to your MCP configuration file:

  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Using uvx (Recommended)

{
  "mcpServers": {
    "site-crawler": {
      "command": "uvx",
      "args": ["--from", "/path/to/site-crawler-mcp", "site-crawler-mcp"]
    }
  }
}

Using uv run

{
  "mcpServers": {
    "site-crawler": {
      "command": "uv",
      "args": ["run", "site_crawler"],
      "cwd": "/path/to/site-crawler-mcp"
    }
  }
}

Using python directly

{
  "mcpServers": {
    "site-crawler": {
      "command": "python",
      "args": ["-m", "site_crawler.server"],
      "cwd": "/path/to/site-crawler-mcp/src",
      "env": {
        "PYTHONPATH": "/path/to/site-crawler-mcp/src"
      }
    }
  }
}

Note: Replace /path/to/site-crawler-mcp with your actual project path. On Windows, use backslashes and drive letters (e.g., C:\\Users\\YourName\\site-crawler-mcp).

Available Tools

site_crawlAssets

Crawl a website and extract various assets based on specified modes.

Parameters:

  • url (string, required): The URL to start crawling from
  • modes (array, required): Array of extraction modes (see below)
  • depth (number, optional): Crawling depth (default: 1)
  • max_pages (number, optional): Maximum pages to crawl (default: 50)

Available Modes:

  • images: Extract all images with metadata (alt text, dimensions, format)
  • meta: Basic SEO metadata (title, description, H1 tags)
  • brand: Company branding information (logo, name, about pages)
  • seo: Comprehensive SEO analysis (meta tags, structured data, open graph)
  • performance: Page load metrics and performance indicators
  • security: Security headers and HTTPS configuration
  • compliance: Accessibility and regulatory compliance checks
  • infrastructure: Server technology and CDN detection
  • legal: Privacy policies, terms, KVKK compliance
  • careers: Job opportunities and career pages
  • references: Client testimonials and case studies
  • contact: Contact information (email, phone, social media, address)

Example Requests:

  1. Basic image extraction:
{
  "tool": "site_crawlAssets",
  "arguments": {
    "url": "https://example.com",
    "modes": ["images"],
    "depth": 1
  }
}
  2. Full SEO and security audit:
{
  "tool": "site_crawlAssets",
  "arguments": {
    "url": "https://example.com",
    "modes": ["seo", "security", "performance"],
    "depth": 2
  }
}
  3. Business intelligence gathering:
{
  "tool": "site_crawlAssets",
  "arguments": {
    "url": "https://example.com",
    "modes": ["brand", "contact", "references", "careers"],
    "depth": 3
  }
}
  4. Legal compliance check:
{
  "tool": "site_crawlAssets",
  "arguments": {
    "url": "https://example.com",
    "modes": ["legal", "compliance"],
    "depth": 2
  }
}

Development

Requirements

  • Python 3.10+
  • BeautifulSoup4
  • aiohttp
  • MCP SDK
  • uv (recommended for development)

Setup Development Environment

Using uv (Recommended)

# Clone the repository
git clone https://github.com/AndacGuven/site-crawler-mcp.git
cd site-crawler-mcp

# Create virtual environment with Python 3.12
uv venv --python 3.12
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies and package
uv sync

Using pip

# Clone the repository
git clone https://github.com/AndacGuven/site-crawler-mcp.git
cd site-crawler-mcp

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .

Running the Server

Using uv

# Run the MCP server
uv run site_crawler
# or
uv run site-crawler-mcp
# or
uv run python -m site_crawler.server

Using python directly

python -m site_crawler.server

Running Tests

# Using uv
uv run pytest tests/

# Using pip
pytest tests/

Project Structure

site-crawler-mcp/
├── README.md
├── requirements.txt
├── pyproject.toml
├── src/
│   └── site_crawler/
│       ├── __init__.py
│       ├── server.py
│       ├── crawler.py
│       └── utils.py
└── tests/
    ├── __init__.py
    └── test_crawler.py

Configuration

Environment Variables

  • CRAWLER_MAX_CONCURRENT: Maximum concurrent requests (default: 5)
  • CRAWLER_TIMEOUT: Request timeout in seconds (default: 30)
  • CRAWLER_USER_AGENT: Custom user agent string
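
These variables can be exported in the shell before starting the server, or passed through the env block of the MCP configuration shown earlier. The values below are illustrative:

{
  "mcpServers": {
    "site-crawler": {
      "command": "uvx",
      "args": ["--from", "/path/to/site-crawler-mcp", "site-crawler-mcp"],
      "env": {
        "CRAWLER_MAX_CONCURRENT": "3",
        "CRAWLER_TIMEOUT": "60",
        "CRAWLER_USER_AGENT": "MySiteAuditBot/1.0"
      }
    }
  }
}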

Rate Limiting

The crawler respects robots.txt and implements polite crawling:

  • 1-2 second delay between requests to the same domain
  • Maximum 5 concurrent requests
  • Automatic retry with exponential backoff
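
A minimal sketch of this polite-crawling pattern with aiohttp and asyncio is shown below. It is illustrative only: the helper name fetch_politely and the exact constants are assumptions, not the server's actual internals.

# Polite crawling sketch: concurrency cap, per-request delay,
# and exponential backoff on 429 responses (illustrative only).
import asyncio
import aiohttp

MAX_CONCURRENT = 5   # at most 5 requests in flight
REQUEST_DELAY = 1.5  # seconds between requests to the same domain
MAX_RETRIES = 3

semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def fetch_politely(session: aiohttp.ClientSession, url: str) -> str:
    async with semaphore:  # enforce the concurrency limit
        for attempt in range(MAX_RETRIES):
            try:
                async with session.get(url) as resp:
                    resp.raise_for_status()
                    body = await resp.text()
                await asyncio.sleep(REQUEST_DELAY)  # politeness delay
                return body
            except aiohttp.ClientResponseError as exc:
                if exc.status != 429:
                    raise
                await asyncio.sleep(2 ** attempt)  # exponential backoff
        raise RuntimeError(f"gave up on {url} after {MAX_RETRIES} attempts")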

Use Cases

E-commerce Analysis

Extract product images, pricing, and brand information:

"Analyze the e-commerce site example.com for product images, brand info, and contact details"

SEO and Performance Audit

Comprehensive SEO and performance analysis:

"Perform a full SEO audit of example.com including performance metrics and structured data"

Security Assessment

Check security headers and HTTPS configuration:

"Analyze the security posture of example.com including headers and SSL configuration"

Legal Compliance Check

Verify KVKK/GDPR compliance and privacy policies:

"Check example.com for KVKK compliance, privacy policies, and data protection measures"

Business Intelligence

Gather company information and references:

"Extract business information from example.com including company details, references, and career opportunities"

Contact Information Extraction

Find all contact details:

"Find all contact information on example.com including emails, phones, social media, and addresses"

Performance Considerations

  • Images smaller than 50KB are filtered out by default
  • Concurrent crawling limited to 5 pages simultaneously
  • Memory-efficient streaming for large sites
  • Automatic deduplication of URLs
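
For illustration, the URL deduplication and image-size filtering described above can be expressed roughly as follows; should_visit and keep_image are hypothetical helpers, not the crawler's actual code:

# URL deduplication and image-size filtering sketch (illustrative only).
from urllib.parse import urldefrag

MIN_IMAGE_BYTES = 50 * 1024  # images below ~50KB are treated as icons/decoration
seen_urls: set[str] = set()

def should_visit(url: str) -> bool:
    # Drop fragments so /page and /page#section are counted once
    clean, _fragment = urldefrag(url)
    if clean in seen_urls:
        return False
    seen_urls.add(clean)
    return True

def keep_image(content_length: int | None) -> bool:
    # Filter by the reported Content-Length; unknown sizes are skipped
    return content_length is not None and content_length >= MIN_IMAGE_BYTES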

Error Handling

The crawler handles various error scenarios gracefully:

  • Network timeouts
  • Invalid URLs
  • Rate limiting (429 responses)
  • JavaScript-heavy sites (graceful degradation)
  • Memory limits
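
As a sketch, these cases map naturally onto aiohttp's exception hierarchy. The safe_fetch helper below is hypothetical, not the server's actual code:

# Defensive fetch sketch: timeouts, invalid URLs, and 429s all
# degrade to None instead of crashing the crawl (illustrative only).
import asyncio
import aiohttp

async def safe_fetch(session: aiohttp.ClientSession, url: str) -> str | None:
    try:
        timeout = aiohttp.ClientTimeout(total=30)
        async with session.get(url, timeout=timeout) as resp:
            if resp.status == 429:
                return None  # rate limited: let the caller retry later
            resp.raise_for_status()
            return await resp.text()
    except asyncio.TimeoutError:
        return None  # network timeout: skip this page
    except aiohttp.ClientError:
        return None  # invalid URL or other client-side failure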

Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built with MCP SDK
  • Inspired by the need for better e-commerce crawling tools
  • Thanks to the open-source community

Support

For issues and feature requests, please use the GitHub issue tracker.
