Enhanced Fetch MCP

Provides advanced web scraping with an HTTP client, smart content extraction to Markdown, browser automation via Playwright, screenshot/PDF generation, and Docker sandbox execution environments.

Category
Web Access Servers

README

Enhanced Fetch MCP

English | 简体中文

An AI-native web interaction layer that elevates Playwright into an intelligent, secure, and efficient service for AI agents.

| Feature | Claude's Native Fetch | Standard Playwright | Enhanced Fetch MCP |
| --- | --- | --- | --- |
| Content Extraction | Basic | Manual Parsing | Advanced: Extracts main content, metadata, links, and images into clean Markdown. |
| JavaScript Rendering | ❌ No | ✅ Yes | ✅ Yes: Full browser rendering for dynamic pages. |
| Security | ✅ Safe | ⚠️ Browser Sandbox | 🔒 Maximum Security: All operations are isolated in a Docker container. |
| Resource Efficiency | High | Low | Hybrid Engine: Intelligently switches between lightweight HTTP and a full browser. |
| Screenshots & PDFs | ❌ No | ✅ Yes | ✅ Yes: Captures screenshots and generates PDFs. |
| Control & Customization | Limited | High | Full Control: Customizable headers, timeouts, and more. |

Quick Start

Option 1: Local Installation

1. Install:

npm install -g enhanced-fetch-mcp

2. Configure Claude Code (~/.config/claude/config.json):

{
  "mcpServers": { "enhanced-fetch": { "command": "enhanced-fetch-mcp" } }
}

3. Use:

"Fetch the main content of https://example.com and take a screenshot."

Option 2: Install via Smithery

Alternatively, you can use Smithery to simplify the installation and configuration process.

Install via Smithery: https://smithery.ai/server/@Danielmelody/enhanced-fetch-mcp

Smithery automates the installation, dependency management, and MCP configuration for you. Important: The server runs entirely on your local machine—no data is sent to external servers. You maintain full control over your browsing activities, screenshots, and extracted content, ensuring maximum privacy and security.


Tools

Web Scraping Tools (3)

| Tool | Description |
| --- | --- |
| `fetch_url` | Makes a direct HTTP request to a URL to retrieve its raw HTML content. It supports various methods (GET, POST, etc.), custom headers, and other advanced options. |
| `extract_content` | Parses raw HTML to pull out structured information. It identifies the main article, cleans it, and returns it in multiple formats (text, Markdown, and HTML), along with metadata, links, and images. |
| `fetch_and_extract` | A convenient, all-in-one tool that first fetches a URL and then automatically extracts its content. It intelligently decides whether to use a simple HTTP request or a full browser. |
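If you are wiring these tools into your own MCP client rather than Claude Code, the sketch below chains `fetch_url` and `extract_content` over stdio with the official TypeScript SDK. It is a minimal sketch: the tool argument names (`url`, `html`) and the result shapes are assumptions, since the exact tool schemas are not documented here.

```typescript
// scrape-demo.ts — minimal sketch of a custom MCP client driving the
// web-scraping tools over stdio. Tool argument names (`url`, `html`) and
// result shapes are assumptions; inspect the server's tool schemas at runtime.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the server the same way Claude Code does: as a stdio subprocess.
  const transport = new StdioClientTransport({ command: "enhanced-fetch-mcp" });
  const client = new Client({ name: "scrape-demo", version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);

  // Step 1: fetch the raw HTML (assumed argument: url).
  const fetched: any = await client.callTool({
    name: "fetch_url",
    arguments: { url: "https://example.com" },
  });
  const html = fetched.content
    .filter((item: any) => item.type === "text")
    .map((item: any) => item.text)
    .join("\n");

  // Step 2: extract the main article, metadata, links, and images (assumed argument: html).
  const extracted: any = await client.callTool({
    name: "extract_content",
    arguments: { html },
  });
  console.log(extracted.content);

  await client.close();
}

main().catch(console.error);
```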

Browser Automation Tools (8)

| Tool | Description |
| --- | --- |
| `create_browser_context` | Initializes and launches a new, isolated browser instance (Chromium, Firefox, or WebKit) with a clean session, returning a unique ID for future operations. |
| `browser_navigate` | Instructs a specific browser instance to visit a URL and waits for the page to load, including executing any initial JavaScript. |
| `browser_get_content` | Retrieves the full HTML of a page after it has been rendered by the browser, ensuring all dynamic content is present. |
| `browser_screenshot` | Captures a visual snapshot of the current page in the browser, either full-page or a specific region, and returns the image data. |
| `browser_pdf` | Generates a PDF document from the current page's content, allowing for a printable, offline version of the web page. |
| `browser_execute_js` | Runs a custom JavaScript snippet within the context of the current page, enabling interaction with page elements or data retrieval. |
| `list_browser_contexts` | Returns a list of all currently active browser instances that have been created, along with their IDs and status. |
| `close_browser_context` | Terminates a browser instance and cleans up all associated resources, including closing all its pages and freeing up memory. |
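A typical browser session follows a create → navigate → capture → close lifecycle. The sketch below walks that cycle with the TypeScript SDK; the argument names (`browser`, `contextId`, `url`, `fullPage`) and the ID field in the result are assumptions about the tool schemas.

```typescript
// browser-demo.ts — sketch of the create → navigate → screenshot → close
// lifecycle. Argument names (`browser`, `contextId`, `url`, `fullPage`) and the
// ID field in the result are assumptions about the tool schemas.
import { writeFile } from "node:fs/promises";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const client = new Client({ name: "browser-demo", version: "0.1.0" }, { capabilities: {} });
  await client.connect(new StdioClientTransport({ command: "enhanced-fetch-mcp" }));

  // 1. Launch an isolated browser instance and keep its ID for the follow-up calls.
  const created: any = await client.callTool({
    name: "create_browser_context",
    arguments: { browser: "chromium" }, // assumed option
  });
  const contextId = JSON.parse(created.content[0].text).contextId; // assumed result shape

  // 2. Navigate and let the page's JavaScript finish executing.
  await client.callTool({
    name: "browser_navigate",
    arguments: { contextId, url: "https://example.com" },
  });

  // 3. Capture a full-page screenshot; image data comes back base64-encoded.
  const shot: any = await client.callTool({
    name: "browser_screenshot",
    arguments: { contextId, fullPage: true },
  });
  const image = shot.content.find((item: any) => item.type === "image");
  if (image) await writeFile("page.png", Buffer.from(image.data, "base64"));

  // 4. Always close the context so the browser process and its memory are released.
  await client.callTool({ name: "close_browser_context", arguments: { contextId } });
  await client.close();
}

main().catch(console.error);
```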

Docker Sandbox Tools (8)

| Tool | Description |
| --- | --- |
| `create_sandbox` | Provisions a new, secure Docker container with a specified image, providing an isolated environment for command execution. |
| `execute_in_sandbox` | Runs a shell command inside a designated Docker sandbox and returns its standard output, error, and exit code. |
| `list_sandboxes` | Provides a list of all currently running Docker sandboxes, including their IDs, names, and current status. |
| `get_sandbox` | Retrieves detailed information about a specific sandbox, such as its configuration, running state, and network settings. |
| `pause_sandbox` | Temporarily freezes a running sandbox, preserving its state while consuming minimal CPU resources. |
| `resume_sandbox` | Unpauses a previously paused sandbox, allowing it to continue execution from where it left off. |
| `cleanup_sandbox` | Stops and completely removes a sandbox container, deleting its file system and freeing up all associated system resources. |
| `get_sandbox_stats` | Fetches real-time resource usage metrics for a sandbox, including CPU, memory, and network I/O. |
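The sandbox tools follow a similar lifecycle: create a container, execute commands, optionally inspect stats, then clean up. Below is a minimal sketch under the assumption that the tools accept `image`, `sandboxId`, and `command` arguments; Docker must be running locally.

```typescript
// sandbox-demo.ts — sketch of the sandbox lifecycle: create, execute, inspect,
// clean up. Argument names (`image`, `sandboxId`, `command`) and result shapes
// are assumptions; a running Docker daemon is required.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const client = new Client({ name: "sandbox-demo", version: "0.1.0" }, { capabilities: {} });
  await client.connect(new StdioClientTransport({ command: "enhanced-fetch-mcp" }));

  // 1. Provision an isolated container from a specified image.
  const created: any = await client.callTool({
    name: "create_sandbox",
    arguments: { image: "node:18-alpine" }, // assumed option
  });
  const sandboxId = JSON.parse(created.content[0].text).sandboxId; // assumed result shape

  // 2. Run a shell command and inspect its stdout / stderr / exit code.
  const run: any = await client.callTool({
    name: "execute_in_sandbox",
    arguments: { sandboxId, command: "node --version" },
  });
  console.log(run.content);

  // 3. Check live resource usage, then stop and remove the container.
  const stats: any = await client.callTool({
    name: "get_sandbox_stats",
    arguments: { sandboxId },
  });
  console.log(stats.content);
  await client.callTool({ name: "cleanup_sandbox", arguments: { sandboxId } });

  await client.close();
}

main().catch(console.error);
```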

Usage Examples

Simple Web Scraping

User: Fetch content from https://example.com

Claude automatically calls fetch_and_extract:
→ Fetch HTML
→ Extract title, description, body
→ Convert to Markdown
→ Return structured content
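The same flow can be reproduced from a custom MCP client in a few lines. The fragment below (an ES module, so top-level `await` is allowed) assumes `fetch_and_extract` accepts a single `url` argument and returns its structured output as tool content.

```typescript
// One-call version from a custom client: the server decides internally whether
// plain HTTP is enough or a full browser render is required.
// Assumes `fetch_and_extract` accepts a single `url` argument.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "one-shot-demo", version: "0.1.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({ command: "enhanced-fetch-mcp" }));

const result: any = await client.callTool({
  name: "fetch_and_extract",
  arguments: { url: "https://example.com" },
});
console.log(result.content); // title, description, Markdown body, links, images

await client.close();
```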

JavaScript-Rendered Pages

User: This page requires browser rendering

Claude automatically uses browser tools:
→ Create browser context
→ Navigate to page
→ Wait for JavaScript execution
→ Get fully rendered content

Web Screenshots

User: Take a screenshot of this page

Claude automatically calls browser_screenshot:
→ Open page
→ Wait for loading completion
→ Capture full-page screenshot
→ Return PNG image

System Requirements

Required

  • Node.js >= 18.0.0
  • npm >= 8.0.0

Optional (for specific features)

  • Docker (for sandbox functionality)
  • Sufficient disk space (Playwright browsers ~300MB)

Verify Installation

# Check if command is available
enhanced-fetch-mcp --version
# Output: v1.0.0

# View help
enhanced-fetch-mcp --help

# Test run (Ctrl+C to exit)
enhanced-fetch-mcp
# Output: Enhanced Fetch MCP Server running on stdio

Troubleshooting

Command Not Found

# Check installation
npm list -g enhanced-fetch-mcp

# Reinstall
npm install -g enhanced-fetch-mcp

# Check path
which enhanced-fetch-mcp

Docker Not Running (affects sandbox functionality)

# macOS
open -a Docker

# Linux
sudo systemctl start docker

# Verify
docker ps

View Logs

# Server logs
tail -f ~/.local/share/enhanced-fetch-mcp/logs/browser-mcp.log

# Error logs
tail -f ~/.local/share/enhanced-fetch-mcp/logs/browser-mcp-error.log

Update

npm update -g enhanced-fetch-mcp

Development

Install from Source

# Clone project
git clone https://github.com/yourusername/enhanced-fetch-mcp.git
cd enhanced-fetch-mcp

# Install dependencies
npm install

# Build
npm run build

# Global link (development mode)
npm link

# Run tests
npm test

# Development mode (watch for changes)
npm run dev

Performance Metrics

| Operation | Average Time |
| --- | --- |
| HTTP Request | 200-300 ms |
| Content Extraction | 10-50 ms |
| Browser Launch | 300-500 ms |
| Page Navigation | 1.5-2 s |
| Screenshot | ~50 ms |
| JavaScript Execution | <10 ms |

Contributing

Contributions are welcome! Please submit a Pull Request or create an Issue.

License

MIT License

Acknowledgments
