SpiderFoot MCP Server
A Model Context Protocol (MCP) server that provides SpiderFoot scanning capabilities through a standardized interface.
Overview
This MCP server allows you to interact with SpiderFoot's OSINT scanning capabilities through Claude and other MCP-compatible tools. It provides comprehensive scan management, result retrieval, and export functionality.
Features
- Scan Management: Start, stop, delete, and monitor SpiderFoot scans
- Real-time Status: Get scan status, progress, and completion notifications
- Result Access: Retrieve scan results, summaries, and logs
- Export Capabilities: Export scan data in JSON, CSV, and Excel formats
- Search Functionality: Search across scan results
- Server Health: Ping server and check connectivity/version
- Module Management: Access available SpiderFoot modules
Prerequisites
- Python 3.8+
- Access to a SpiderFoot server (local or remote)
- Required Python packages (see Installation)
Installation
- Install required dependencies:
pip install requests python-dotenv mcp
- Set up environment variables in .env:
SPIDERFOOT_URL=https://your-spiderfoot-server.com
SPIDERFOOT_USERNAME=your-username
SPIDERFOOT_PASSWORD=your-password
Configuration
The server expects these environment variables:
- SPIDERFOOT_URL: Base URL of your SpiderFoot instance (default: http://localhost:5001)
- SPIDERFOOT_USERNAME: Username for HTTP Digest Authentication (default: admin)
- SPIDERFOOT_PASSWORD: Password for authentication (required)
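A minimal sketch of how these variables might be loaded with python-dotenv; the names and defaults mirror the list above, but the exact loading code in server.py may differ:
import os
from dotenv import load_dotenv

# Load variables from .env into the process environment
load_dotenv()

SPIDERFOOT_URL = os.getenv("SPIDERFOOT_URL", "http://localhost:5001")
SPIDERFOOT_USERNAME = os.getenv("SPIDERFOOT_USERNAME", "admin")
SPIDERFOOT_PASSWORD = os.getenv("SPIDERFOOT_PASSWORD")

if not SPIDERFOOT_PASSWORD:
    raise RuntimeError("SPIDERFOOT_PASSWORD must be set")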
Available MCP Tools
Core Scan Operations
- start_scan(target, scan_name, modules?, use_case?) - Start a new scan
- get_scan_status(scan_id) - Get current scan status
- list_scans() - List all scans on the server
- stop_scan(scan_id) - Stop a running scan
- delete_scan(scan_id) - Delete a scan and its data
Results and Analysis
- get_scan_results(scan_id, event_type?) - Get scan results
- get_scan_summary(scan_id, by?) - Get scan summary by module/type
- get_scan_log(scan_id, limit?, from_rowid?) - Get scan log entries
- export_scan_results(scan_id, export_format?) - Export results (JSON/CSV/Excel)
- search_scan_results(query, scan_id?) - Search across results
Utility Functions
- ping() - Test server connectivity and get version
- get_available_modules() - List available SpiderFoot modules
- wait_for_scan_completion(scan_id, poll_interval?, timeout?) - Wait for scan completion
- get_active_scans_summary() - Get summary of tracked scans
Usage Examples
Starting a Scan
# Passive scan with default modules
start_scan("example.com", "example-scan", use_case="passive")
# Custom scan with specific modules
start_scan("example.com", "custom-scan", modules=["sfp_dnsresolve", "sfp_dnscommonsrv"])
Monitoring Progress
# Check status
status = get_scan_status("scan-id")
# Wait for completion
wait_for_scan_completion("scan-id", poll_interval=5, timeout=300)
Retrieving Results
# Get all results
results = get_scan_results("scan-id")
# Get summary by module
summary = get_scan_summary("scan-id", by="module")
# Export to JSON
export_data = export_scan_results("scan-id", "json")
Running the Server
Development/Testing
python test_client.py # Test client functionality
python server.py # Run MCP server
Production
The server automatically validates environment variables and tests connectivity on startup.
API Implementation Details
Authentication
Uses HTTP Digest Authentication as required by SpiderFoot API v4.0.
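For reference, a minimal sketch of a digest-authenticated requests session against the SpiderFoot API (the endpoint is shown for illustration; the server's actual session setup may differ):
import requests
from requests.auth import HTTPDigestAuth

session = requests.Session()
session.auth = HTTPDigestAuth(SPIDERFOOT_USERNAME, SPIDERFOOT_PASSWORD)
# Ask for JSON explicitly; otherwise SpiderFoot may return HTML (see Key Fixes below)
session.headers.update({"Accept": "application/json"})

resp = session.get(f"{SPIDERFOOT_URL}/scanlist")
resp.raise_for_status()
scans = resp.json()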
Response Handling
The implementation properly handles SpiderFoot's unique response formats:
- List responses: ['SUCCESS', 'data'] or ['ERROR', 'message']
- Scan data: Arrays with positional fields [id, name, target, created, started, completed, status, ...]
- JSON responses: Standard dictionaries for modules and complex data
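A sketch of how a list-based response might be normalized into a consistent success/error result (the helper name and return shape are illustrative):
def parse_list_response(payload):
    # SpiderFoot list responses look like ['SUCCESS', data] or ['ERROR', 'message']
    if isinstance(payload, list) and len(payload) == 2 and payload[0] in ("SUCCESS", "ERROR"):
        if payload[0] == "SUCCESS":
            return {"success": True, "data": payload[1]}
        return {"success": False, "error": payload[1]}
    # Other endpoints return plain JSON dicts or arrays; pass those through unchanged
    return {"success": True, "data": payload}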
Key Fixes Applied
- JSON Accept Header: Required for JSON responses instead of HTML
- Module Specification: Even with use cases, specific modules must be provided
- Parameter Names: Correct parameter names for each endpoint (ids vs id, by parameter for summaries)
- Response Format Handling: Proper parsing of list-based responses
Supported SpiderFoot Versions
- Primary: SpiderFoot v4.0.0
- Compatibility: Should work with SpiderFoot v4.x series
Use Cases
Passive Reconnaissance
start_scan("target-domain.com", "recon-scan", use_case="passive")
Investigation
start_scan("suspicious-domain.com", "investigation", use_case="investigate")
Footprinting
start_scan("company-domain.com", "footprint", use_case="footprint")
Error Handling
The server provides comprehensive error handling with detailed messages:
- Connection failures
- Authentication errors
- Invalid scan parameters
- API endpoint errors
- Response parsing issues
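Tools are expected to return structured results rather than letting exceptions escape into the MCP layer; a hedged sketch of the pattern (field names are illustrative):
import requests

def safe_call(fn, *args, **kwargs):
    # Wrap an API call so failures surface as structured error messages
    try:
        return {"success": True, "data": fn(*args, **kwargs)}
    except requests.exceptions.ConnectionError as exc:
        return {"success": False, "error": f"Connection failure: {exc}"}
    except requests.exceptions.HTTPError as exc:
        return {"success": False, "error": f"API endpoint error: {exc}"}
    except ValueError as exc:
        return {"success": False, "error": f"Response parsing issue: {exc}"}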
Security Considerations
- Credentials are loaded from environment variables
- HTTP Digest Authentication for API security
- No secrets logged or exposed in responses
- Secure handling of scan data
Troubleshooting
Common Issues
- 404 Errors: Usually indicate incorrect endpoint or missing parameters
- Authentication Failures: Check username/password and server accessibility
- Module Errors: Ensure modules are specified even with use cases
- Connection Issues: Verify server URL and network connectivity
Testing Connectivity
ping_result = ping()
# Check ping_result['success'] and ping_result['server_version']
Debug Logging
Enable debug logging in the client for detailed API call information.
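If the client uses Python's standard logging module (an assumption), debug output can be enabled like this:
import logging

# Emit detailed request/response information from the client
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)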
Development Notes
- The implementation follows SpiderFoot's official sfcli patterns
- All endpoints tested against a live SpiderFoot v4.0.0 instance
- MCP tools provide structured responses with success/error handling
- Maintains compatibility with both single and batch operations
Contributing
When extending functionality:
- Follow existing error handling patterns
- Handle both list and dict response formats
- Test against actual SpiderFoot instance
- Update documentation for new tools
License
This project follows the same licensing as SpiderFoot for compatibility.