
🌐 WebSurfer MCP
A powerful Model Context Protocol (MCP) server that enables Large Language Models (LLMs) to fetch and extract readable text content from web pages. This tool provides a secure, efficient, and feature-rich way for AI assistants to access web content through a standardized interface.
✨ Features
- 🔒 Secure URL Validation: Blocks dangerous schemes, private IPs, and localhost domains
- 📄 Smart Content Extraction: Extracts clean, readable text from HTML pages using advanced parsing
- ⚡ Rate Limiting: Built-in rate limiting to prevent abuse (60 requests/minute)
- 🛡️ Content Type Filtering: Only processes supported content types (HTML, plain text, XML)
- 📏 Size Limits: Configurable content size limits (default: 10MB)
- ⏱️ Timeout Management: Configurable request timeouts with validation
- 🔧 Comprehensive Error Handling: Detailed error messages for various failure scenarios
- 🧪 Full Test Coverage: 45+ unit tests covering all functionality
🏗️ Architecture
The project consists of several key components:
Core Components
- MCPURLSearchServer: Main MCP server implementation
- TextExtractor: Handles web content fetching and text extraction (see the sketch below)
- URLValidator: Validates and sanitizes URLs for security
- Config: Centralized configuration management
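As a rough illustration of the TextExtractor role, the snippet below combines trafilatura with a BeautifulSoup fallback (both libraries are credited in the acknowledgments). The function name and structure are illustrative only, not the project's actual code.

```python
# Illustrative sketch; the real logic lives in text_extractor.py and may differ.
import trafilatura
from bs4 import BeautifulSoup

def extract_readable_text(html: str) -> str:
    """Pull clean, readable text out of raw HTML."""
    # trafilatura handles article-style pages well
    text = trafilatura.extract(html)
    if text:
        return text
    # Fall back to a plain text dump if trafilatura finds nothing usable
    soup = BeautifulSoup(html, "html.parser")
    return soup.get_text(separator=" ", strip=True)
```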
Key Features
- Async/Await: Built with modern Python async patterns for high performance
- Resource Management: Proper cleanup of network connections and resources
- Context Managers: Safe resource handling with automatic cleanup
- Logging: Comprehensive logging for debugging and monitoring
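The context-manager and cleanup pattern mentioned above typically looks like the following minimal sketch (assuming an aiohttp.ClientSession is the resource being managed; the actual class layout in this project may differ):

```python
# Minimal sketch of the async context-manager pattern; not the project's actual code.
import aiohttp

class Fetcher:
    async def __aenter__(self):
        # Open the HTTP session when entering the context
        self._session = aiohttp.ClientSession()
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Always close the session, even if the body raised an exception
        await self._session.close()

    async def fetch(self, url: str, timeout: int = 10) -> str:
        async with self._session.get(url, timeout=aiohttp.ClientTimeout(total=timeout)) as resp:
            return await resp.text()

# Usage:
#   async with Fetcher() as f:
#       html = await f.fetch("https://example.com")
```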
🚀 Installation
Prerequisites
- Python 3.12 or higher
- uv package manager (recommended)
Quick Start
1. Clone the repository:
   git clone https://github.com/crybo-rybo/websurfer-mcp
   cd websurfer-mcp
2. Install dependencies:
   uv sync
3. Verify installation:
   uv run python -c "import mcp_url_search_server; print('Installation successful!')"
🎯 Usage
Starting the MCP Server
The server communicates via stdio (standard input/output) and can be integrated with any MCP-compatible client.
# Start the server
uv run run_server.py serve
# Start with custom log level
uv run run_server.py serve --log-level DEBUG
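To drive the server from Python instead of an MCP-enabled assistant, a stdio client session might look roughly like this. It is a sketch using the official mcp Python SDK; the tool name "search_url" and its argument names are assumptions, so check the server's list_tools output for the real names.

```python
# Sketch of talking to the server over stdio with the mcp Python SDK.
# The tool name "search_url" and its argument names are assumptions;
# inspect list_tools() to see what the server actually exposes.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="uv", args=["run", "run_server.py", "serve"])

async def main():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("search_url", arguments={"url": "https://example.com"})
            print(result)

asyncio.run(main())
```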
Testing URL Search Functionality
Test the URL search functionality directly:
# Test with a simple URL
uv run run_server.py test --url "https://example.com"
# Test with custom timeout
uv run run_server.py test --url "https://httpbin.org/html" --timeout 15
Example Test Output
{
  "success": true,
  "url": "https://example.com",
  "title": "Example Domain",
  "content_type": "text/html",
  "status_code": 200,
  "text_length": 1250,
  "text_preview": "Example Domain This domain is for use in illustrative examples in documents..."
}
🛠️ Configuration
The server can be configured using environment variables:
| Variable | Default | Description |
|---|---|---|
| MCP_DEFAULT_TIMEOUT | 10 | Default request timeout in seconds |
| MCP_MAX_TIMEOUT | 60 | Maximum allowed timeout in seconds |
| MCP_USER_AGENT | MCP-URL-Search-Server/1.0.0 | User agent string for requests |
| MCP_MAX_CONTENT_LENGTH | 10485760 | Maximum content size in bytes (10 MB) |
Example Configuration
export MCP_DEFAULT_TIMEOUT=15
export MCP_MAX_CONTENT_LENGTH=5242880 # 5MB
uv run run_server.py serve
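Internally, settings like these are typically resolved from the environment with fallbacks to the defaults in the table above. A minimal sketch of that idea (not the actual contents of config.py):

```python
# Illustrative only; see config.py for the real configuration handling.
import os

DEFAULT_TIMEOUT = int(os.getenv("MCP_DEFAULT_TIMEOUT", "10"))
MAX_TIMEOUT = int(os.getenv("MCP_MAX_TIMEOUT", "60"))
USER_AGENT = os.getenv("MCP_USER_AGENT", "MCP-URL-Search-Server/1.0.0")
MAX_CONTENT_LENGTH = int(os.getenv("MCP_MAX_CONTENT_LENGTH", str(10 * 1024 * 1024)))  # 10 MB
```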
🧪 Testing
Running All Tests
# Run all tests with verbose output
uv run python -m unittest discover tests -v
# Run tests with coverage (if coverage is installed)
uv run coverage run -m unittest discover tests
uv run coverage report
Running Specific Test Files
# Run only integration tests
uv run python -m unittest tests.test_integration -v
# Run only text extraction tests
uv run python -m unittest tests.test_text_extractor -v
# Run only URL validation tests
uv run python -m unittest tests.test_url_validator -v
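If you add your own tests, they follow the standard unittest layout. The sketch below is hypothetical: the URLValidator import and its is_valid() method are assumed names, so adjust them to the real interface in url_validator.py.

```python
# Hypothetical test sketch; URLValidator's real method names are defined in url_validator.py.
import unittest
from url_validator import URLValidator  # assumed import

class TestBlockedSchemes(unittest.TestCase):
    def test_file_scheme_is_rejected(self):
        validator = URLValidator()
        # is_valid() is an assumed method name; adjust to the real interface
        self.assertFalse(validator.is_valid("file:///etc/passwd"))

if __name__ == "__main__":
    unittest.main()
```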
Test Results
All 45 tests should pass successfully:
test_content_types_immutable (test_config.TestConfig.test_content_types_immutable) ... ok
test_default_configuration_values (test_config.TestConfig.test_default_configuration_values) ... ok
test_404_error_handling (test_integration.TestMCPURLSearchIntegration.test_404_error_handling) ... ok
...
----------------------------------------------------------------------
Ran 45 tests in 1.827s
OK
🔧 Development
Project Structure
websurfer-mcp/
├── mcp_url_search_server.py # Main MCP server implementation
├── text_extractor.py # Web content extraction logic
├── url_validator.py # URL validation and security
├── config.py # Configuration management
├── run_server.py # Command-line interface
├── run_tests.py # Test runner utilities
├── tests/ # Test suite
│ ├── test_integration.py # Integration tests
│ ├── test_text_extractor.py # Text extraction tests
│ ├── test_url_validator.py # URL validation tests
│ └── test_config.py # Configuration tests
├── pyproject.toml # Project configuration
└── README.md # This file
🔒 Security Features
URL Validation
- Scheme Blocking: Blocks file://, javascript:, and ftp:// schemes
- Private IP Protection: Blocks access to private IP ranges (10.x.x.x, 192.168.x.x, etc.)
- Localhost Protection: Blocks localhost and local domain access
- URL Length Limits: Prevents extremely long URLs
- Format Validation: Ensures proper URL structure
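The checks above can be approximated with the standard library alone. The following is a simplified sketch of the idea, not the actual contents of url_validator.py; the 2048-character length cap is an assumed value.

```python
# Simplified illustration of the validation rules; the real logic is in url_validator.py.
import ipaddress
from urllib.parse import urlparse

BLOCKED_SCHEMES = {"file", "javascript", "ftp"}
MAX_URL_LENGTH = 2048  # assumed limit, for illustration only

def is_url_allowed(url: str) -> bool:
    if len(url) > MAX_URL_LENGTH:
        return False
    parsed = urlparse(url)
    # Reject the explicit blocklist and anything else that is not HTTP(S)
    if parsed.scheme in BLOCKED_SCHEMES or parsed.scheme not in {"http", "https"}:
        return False
    host = (parsed.hostname or "").lower()
    if host == "localhost" or host.endswith(".local"):
        return False
    try:
        # Block private and loopback ranges (10.x.x.x, 192.168.x.x, 127.x.x.x, ...)
        ip = ipaddress.ip_address(host)
        if ip.is_private or ip.is_loopback:
            return False
    except ValueError:
        pass  # host is not a literal IP; resolver-level checks are out of scope here
    return True
```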
Content Safety
- Content Type Filtering: Only processes supported text-based content types
- Size Limits: Configurable maximum content size (default: 10MB)
- Rate Limiting: Prevents abuse with configurable limits
- Timeout Protection: Configurable request timeouts
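As one example, the default 60-requests-per-minute limit could be enforced with a sliding window like the minimal sketch below; the server's actual implementation may differ.

```python
# Minimal sliding-window rate limiter sketch; illustrative, not the server's actual code.
import time
from collections import deque

class RateLimiter:
    def __init__(self, max_requests: int = 60, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._timestamps = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop request timestamps that have aged out of the window
        while self._timestamps and now - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) >= self.max_requests:
            return False
        self._timestamps.append(now)
        return True
```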
📊 Performance
- Async Processing: Non-blocking I/O for high concurrency
- Connection Pooling: Efficient HTTP connection reuse
- DNS Caching: Reduces DNS lookup overhead
- Resource Cleanup: Automatic cleanup prevents memory leaks
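Connection pooling and DNS caching of this kind are usually configured on the aiohttp connector. The pool size and cache TTL below are illustrative values, not the server's actual settings:

```python
# Representative aiohttp setup for connection reuse and DNS caching (illustrative values).
import aiohttp

async def make_session() -> aiohttp.ClientSession:
    connector = aiohttp.TCPConnector(
        limit=20,           # cap on pooled connections
        ttl_dns_cache=300,  # cache DNS lookups for 5 minutes
    )
    # Reusing one ClientSession across requests enables connection pooling;
    # remember to await session.close() during shutdown.
    return aiohttp.ClientSession(connector=connector)
```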
🙏 Acknowledgments
- Built with the Model Context Protocol (MCP)
- Uses aiohttp for async HTTP requests
- Leverages trafilatura for content extraction
- Powered by BeautifulSoup for HTML parsing
Happy web surfing with your AI assistant! 🚀