42crunch MCP Server
A Model Context Protocol (MCP) server that enables AI assistants to interact with the 42crunch API security platform. This server provides tools to search collections, retrieve API definitions, and access security assessments.
Features
- List Collections: Search and retrieve API collections from 42crunch
- Get Collection APIs: Retrieve all APIs within a specific collection
- Get API Details: Access detailed API information including OpenAPI definitions, assessments, and scan results
- Web UI: Streamlit-based chat interface with LangChain for natural language interaction
Prerequisites
- Python 3.11 or higher
- A 42crunch account and API token
- (Optional) LLM API key for UI (OpenAI, Claude, or Gemini)
Quick Start
1. Installation
# Clone or navigate to the project
cd mcp
# Install server dependencies
pip install -r requirements.txt
# Install UI dependencies
cd ui
pip install -r requirements.txt
cd ..
2. Configuration
Create a .env file in the project root:
# Required: 42crunch API token
42C_TOKEN=your_42crunch_api_token_here
Get your API token from the 42crunch platform.
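For reference, a minimal sketch of how the token lookup in `src/config.py` might work (illustrative only; the actual module may also parse the `.env` file via python-dotenv, and `load_token` is an assumed name):

```python
import os

def load_token(env_var: str = "42C_TOKEN") -> str:
    """Return the 42crunch API token from the environment.

    Sketch only; the real src/config.py may differ.
    """
    token = os.environ.get(env_var, "").strip()
    if not token:
        raise RuntimeError(
            f"{env_var} is not set; add it to .env or export it in the shell"
        )
    return token
```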
3. Start the MCP Server
Option A: Using the startup script (Recommended)
./.ci/start_mcp_server.sh
Option B: Manual start
python http_main.py
The server will start on http://localhost:8000 by default.
Verify server is running:
curl http://localhost:8000/health
4. Start the UI
Option A: From project root
cd ui
streamlit run app.py
Option B: From the project root, without changing directory
streamlit run ui/app.py
The UI will open in your browser at http://localhost:8501.
5. Configure UI API Key
The UI needs an LLM API key (choose one provider):
OpenAI:
# Option 1: Add to ~/.ai_tokens
echo "OPENAI_API_KEY=your_key" >> ~/.ai_tokens
# Option 2: Environment variable
export OPENAI_API_KEY=your_key
Claude (Anthropic):
echo "ANTHROPIC_API_KEY=your_key" >> ~/.ai_tokens
Gemini (Google):
echo "GOOGLE_API_KEY=your_key" >> ~/.ai_tokens
Or enter the API key directly in the UI sidebar.
Complete Setup Guide
Installation
Running the MCP Server
The server can run in two modes:
1. Stdio Mode (Default - for MCP clients)
Run the server directly in the foreground:
python main.py
Or use the module directly:
python -m src.server
2. HTTP Mode (for web/remote clients)
Start the HTTP server:
# Default (port 8000)
python http_main.py
# Or use the startup script
./.ci/start_mcp_server.sh
Custom host/port:
python http_main.py --host 0.0.0.0 --port 8000
Development mode (auto-reload):
python http_main.py --reload
HTTP Server Endpoints:
- `POST /jsonrpc` - JSON-RPC 2.0 endpoint
- `GET /health` - Health check
- `GET /tools` - List available tools
- `GET /docs` - Interactive API documentation (Swagger UI)
- `GET /redoc` - Alternative API documentation (ReDoc)
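As a quick sanity check of the JSON-RPC endpoint, a small stdlib-only helper along these lines can be used (a sketch assuming the standard JSON-RPC 2.0 envelope shown in the test-client output later in this README):

```python
import json
import urllib.request

def build_payload(method: str, params: dict, request_id: int = 1) -> dict:
    """Assemble a JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

def jsonrpc_call(method: str, params: dict,
                 url: str = "http://localhost:8000/jsonrpc") -> dict:
    """POST the request to the server and return the parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(method, params)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the HTTP server to be running):
# jsonrpc_call("tools/call",
#              {"name": "list_collections",
#               "arguments": {"page": 1, "per_page": 10}})
```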
Running the UI
The UI is a Streamlit application that uses LangChain to interact with the MCP server.
Start the UI:
# From project root
cd ui
streamlit run app.py
# Or directly, without changing directory
streamlit run ui/app.py
The UI will be available at http://localhost:8501
UI Features:
- 🤖 Multi-provider LLM support (OpenAI, Claude, Gemini)
- 🛠️ Automatic tool calling based on user queries
- 💬 Chat interface with tool call visibility
- ⚙️ Configurable MCP server URL and API keys
UI Configuration:
- MCP Server URL: Defaults to `http://localhost:8000` (configurable in the sidebar)
- LLM Provider: Choose from OpenAI, Claude, or Gemini
- API Keys: Set in `~/.ai_tokens`, environment variables, or the UI sidebar
See UI README for detailed UI documentation.
Background/Daemon Mode
Run the server as a daemon in the background:
# Using helper script (recommended)
./.ci/start_mcp_server.sh
# Or using command-line options
python http_main.py --daemon --pidfile ./42crunch-mcp.pid
Helper Scripts (in .ci/ directory):
- `./.ci/start_mcp_server.sh` - Start HTTP server in background
- `./.ci/stop_mcp_server.sh` - Stop running server
- `./.ci/status_mcp_server.sh` - Check server status
Legacy Scripts (in scripts/ directory):
- `scripts/start_daemon.sh` - Start stdio server as daemon
- `scripts/stop_daemon.sh` - Stop running daemon
- `scripts/status_daemon.sh` - Check daemon status
Systemd Service (Linux)
For production deployments, use the provided systemd service file:
# Copy service file
sudo cp 42crunch-mcp.service /etc/systemd/system/
# Edit the service file to match your installation paths
sudo nano /etc/systemd/system/42crunch-mcp.service
# Reload systemd
sudo systemctl daemon-reload
# Enable and start service
sudo systemctl enable 42crunch-mcp
sudo systemctl start 42crunch-mcp
# Check status
sudo systemctl status 42crunch-mcp
# View logs
sudo journalctl -u 42crunch-mcp -f
Note: MCP servers typically communicate via stdio (stdin/stdout). When running as a daemon, ensure your MCP client is configured to connect via the appropriate transport mechanism (named pipes, sockets, or HTTP if implemented).
MCP Tools
The server exposes three tools:
1. list_collections
List all API collections with pagination support.
Parameters:
- `page` (optional, int): Page number (default: 1)
- `per_page` (optional, int): Items per page (default: 10)
- `order` (optional, str): Sort order (default: "default")
- `sort` (optional, str): Sort field (default: "default")
Example:
result = list_collections(page=1, per_page=20)
2. get_collection_apis
Get all APIs within a specific collection.
Parameters:
- `collection_id` (required, str): Collection UUID
- `with_tags` (optional, bool): Include tags in response (default: True)
Example:
result = get_collection_apis(
    collection_id="3dae40d4-0f8b-42f9-bc62-2a2c8a3189ac",
    with_tags=True
)
3. get_api_details
Get detailed information about a specific API.
Parameters:
- `api_id` (required, str): API UUID
- `branch` (optional, str): Branch name (default: "main")
- `include_definition` (optional, bool): Include OpenAPI definition (default: True)
- `include_assessment` (optional, bool): Include assessment data (default: True)
- `include_scan` (optional, bool): Include scan results (default: True)
Example:
result = get_api_details(
    api_id="75ec1f35-8261-402f-8240-1a29fbcb7179",
    branch="main",
    include_definition=True,
    include_assessment=True,
    include_scan=True
)
Project Structure
mcp/
├── .env # Environment variables (42C_TOKEN)
├── requirements.txt # Python dependencies
├── README.md # This file
├── src/
│ ├── __init__.py
│ ├── server.py # Main MCP server implementation
│ ├── client.py # 42crunch API client
│ └── config.py # Configuration management
└── tests/
├── __init__.py
├── test_server.py # Server tests
└── test_client.py # API client tests
Configuration
The server uses environment variables for configuration:
42C_TOKEN: Your 42crunch API token (required)
The token is automatically loaded from the .env file or environment variables.
Error Handling
The server handles various error scenarios:
- Authentication failures: Returns error message if token is invalid
- Rate limiting: Handles 429 responses appropriately
- Invalid IDs: Validates collection and API IDs before making requests
- Network errors: Provides clear error messages for connection issues
All tools return a response dictionary with:
- `success`: Boolean indicating if the operation succeeded
- `data`: Response data (if successful)
- `error`: Error message (if failed)
- `error_type`: Type of error (if failed)
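Callers can unwrap this envelope uniformly; a minimal sketch (the `unwrap` helper is illustrative, not part of the server):

```python
def unwrap(response: dict):
    """Return the data payload of a tool response, or raise on failure.

    Assumes the success/data/error/error_type envelope described above.
    """
    if response.get("success"):
        return response.get("data")
    kind = response.get("error_type", "Error")
    raise RuntimeError(f"{kind}: {response.get('error', 'unknown error')}")
```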
Testing
Unit Tests
Run unit tests using pytest:
pytest tests/
For verbose output:
pytest tests/ -v
Integration Tests
Integration tests assume the servers are running. Start them first:
Start servers:
# Terminal 1: Start HTTP server
python http_main.py
# Terminal 2: Start MCP stdio server (for stdio tests)
python main.py
Run tests:
# Run all unit tests
pytest tests/unit/ -v
# Run all integration tests
pytest tests/integration/ -v
# Test HTTP server only
pytest tests/integration/test_http.py -v
# Test MCP stdio server only
pytest tests/integration/test_mcp.py -v
# Test both servers (comparison tests)
pytest tests/integration/test_combined.py -v
# Run all tests
pytest tests/ -v
# Run with specific options
pytest tests/integration/test_http.py --api-id <uuid> -v
pytest tests/integration/test_mcp.py --skip-collection-tests -v
Integration test options:
- `--skip-collection-tests` - Skip collection-related tests
- `--skip-api-tests` - Skip API-related tests
- `--api-id <uuid>` - Provide an API ID for testing get_api_details
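Custom pytest flags like these are typically declared in a `conftest.py`; a hedged sketch of what that wiring could look like (the names mirror the options above, but the project's actual conftest may differ):

```python
# conftest.py (sketch)
import pytest

def pytest_addoption(parser):
    parser.addoption("--api-id", action="store", default=None,
                     help="API UUID for get_api_details tests")
    parser.addoption("--skip-collection-tests", action="store_true",
                     help="Skip collection-related tests")
    parser.addoption("--skip-api-tests", action="store_true",
                     help="Skip API-related tests")

@pytest.fixture
def api_id(request):
    """Skip API tests gracefully when no --api-id was given."""
    value = request.config.getoption("--api-id")
    if value is None:
        pytest.skip("no --api-id provided")
    return value
```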
Integration Tests with Test Client
Use the provided test client to test the MCP server end-to-end:
Full test suite:
python test_client.py
Test with specific IDs:
# Test with a collection ID
python test_client.py --collection-id <collection-uuid>
# Test with an API ID
python test_client.py --api-id <api-uuid>
# Test with both
python test_client.py --collection-id <uuid> --api-id <uuid>
Simple quick test:
python test_simple.py
Test HTTP server:
# First, start the HTTP server in another terminal:
python http_main.py
# Then in another terminal, run the HTTP test client:
python tests/clients/test_http_client.py
# Or test with specific IDs:
python tests/clients/test_http_client.py --collection-id <uuid> --api-id <uuid>
Test MCP stdio server:
# Run the stdio test client (starts server automatically):
python tests/clients/test_client.py
# Or test with specific IDs:
python tests/clients/test_client.py --collection-id <uuid> --api-id <uuid>
The test clients will:
- Start/connect to the MCP server
- Send JSON-RPC requests (via stdio or HTTP)
- Read responses
- Validate the responses
- Report test results
Example output:
============================================================
42crunch MCP Server Test Client
============================================================
Starting MCP server: python main.py
Server started (PID: 12345)
============================================================
TEST: list_collections
============================================================
📤 Request: {
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_collections",
    "arguments": {
      "page": 1,
      "per_page": 10
    }
  }
}
📥 Response: {
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "success": true,
    "data": {...}
  }
}
✅ list_collections succeeded
Development
Adding New Tools
To add a new MCP tool:
1. Add the corresponding method to `src/client.py`
2. Create a tool function in `src/server.py` using the `@mcp.tool()` decorator
3. Wire up the client method to the tool
4. Add tests in `tests/test_server.py`
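A self-contained sketch of the wrapping pattern in step 2, using a stand-in decorator and client so the example runs on its own (`get_api_tags` is a hypothetical tool; the real server uses the MCP SDK's `@mcp.tool()` rather than the stub below):

```python
def tool(fn):
    """Stand-in for the MCP SDK's @mcp.tool() decorator."""
    fn.is_tool = True
    return fn

class FakeClient:
    """Stand-in for the 42crunch client in src/client.py."""
    def get_api_tags(self, api_id: str) -> list:
        # Hypothetical method; the real client would call the 42crunch API.
        return ["pci", "internal"]

client = FakeClient()

@tool
def get_api_tags(api_id: str) -> dict:
    """Hypothetical tool returning the tags attached to an API."""
    try:
        return {"success": True, "data": client.get_api_tags(api_id)}
    except Exception as exc:
        return {"success": False, "error": str(exc),
                "error_type": type(exc).__name__}
```

The try/except keeps the success/error envelope described under Error Handling consistent across tools.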
Code Style
This project follows PEP 8 style guidelines. Consider using a formatter like black or ruff for consistent code style.
License
[Add your license here]
Support
For issues related to:
- 42crunch API: Contact 42crunch support
- MCP Server: Open an issue in this repository