
<h1 align="center">Cyberbro MCP Server</h1>

<p align="center"> <img src="https://github.com/user-attachments/assets/5e5a4406-99c1-47f1-a726-de176baa824c" width="90" /><br /> <b><i>A simple application that extracts your IoCs from garbage input and checks their reputation using multiple services.</i></b> <br /> <b>🌐 <a href="https://demo.cyberbro.net/">demo.cyberbro.net</a></b><br />

</p>


A Model Context Protocol (MCP) server for Cyberbro that provides a comprehensive interface for extracting and analyzing Indicators of Compromise (IoCs) from unstructured input, and checking their reputation using multiple threat intelligence services.

Check out the Cyberbro repository for more information about the platform.

Overview

This MCP server enables interaction with the Cyberbro platform through the Model Context Protocol. MCP is a standard that allows applications to provide context and functionality to Large Language Models (LLMs) in a secure, standardized way—similar to a web API, but designed for LLM integrations.

MCP servers can:

  • Expose data through Resources (to load information into the LLM's context)
  • Provide functionality through Tools (to execute code or perform actions)
  • Define interaction patterns through Prompts (reusable templates for LLM interactions)

This server implements the Tools functionality of MCP, offering a suite of tools for extracting IoCs from text, analyzing them, and checking their reputation across various threat intelligence sources. It allows AI systems like Claude to retrieve, analyze, and act on threat intelligence in real-time.

Features

  • Multi-Service Reputation Checks: Query IPs, domains, hashes, URLs, and Chrome extension IDs across many threat intelligence sources.
  • Integrated Reporting: Get detailed, exportable reports and analysis history.
  • Platform Integrations: Supports Microsoft Defender for Endpoint, CrowdStrike, OpenCTI, and more.
  • Advanced Search & Visualization: Search with Grep.App, check for breaches, and visualize results.

Why Use Cyberbro with LLMs

  • LLM-Ready: Designed for seamless use via MCP with Claude or other LLMs—no manual UI needed.
  • Beginner-Friendly: Simple, accessible, and easy to deploy.
  • Unique Capabilities: Chrome extension ID lookups, advanced TLD handling, and pragmatic intelligence gathering.
  • Comprehensive CTI Access: Leverages multiple sources and integrates CTI reports for enriched context.

Installation

Option 1: Using Docker (Recommended)

  1. Export your Cyberbro config as an environment variable:

     export CYBERBRO_URL=http://localhost:5000
    
  2. Pull the Docker image from GitHub Container Registry (note: you must be logged in to ghcr.io):

    docker pull ghcr.io/stanfrbd/mcp-cyberbro:latest
    

Option 2: Local Installation

  1. Clone this repository:

    git clone https://github.com/stanfrbd/mcp-cyberbro.git
    cd mcp-cyberbro
    
  2. Install the required dependencies:

    uv run pip install -r requirements.txt
    
  3. Set environment variables for MCP configuration or provide them as CLI arguments:

    Option A: Using environment variables

    export CYBERBRO_URL=http://localhost:5000
    

    Option B: Using CLI arguments

    uv run mcp-cyberbro-server.py --cyberbro_url http://localhost:5000
    
  4. Start the MCP server:

    uv run mcp-cyberbro-server.py # env variables already set
    

    The server will listen for MCP protocol messages on stdin/stdout and use the environment variables as shown in the Claude Desktop configuration example.

Optional environment variables

  • SSL_VERIFY: Set to false to disable SSL verification for the Cyberbro URL. This is useful for self-signed certificates or local testing.
  • API_PREFIX: Set to a custom prefix for the Cyberbro API. This is useful if you have a custom API prefix in your Cyberbro instance.

Optional arguments

  • --no_ssl_verify: Disable SSL verification for the Cyberbro URL. This is useful for self-signed certificates or local testing.
  • --api_prefix: Set a custom prefix for the Cyberbro API. This is useful if you have a custom API prefix in your Cyberbro instance.
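Taken together, the URL, prefix, and SSL options determine the endpoint the server queries. As a rough sketch (a hypothetical helper for illustration, not code from the repository), the URL and prefix could combine like this:

```python
import os
from typing import Optional

def cyberbro_api_base(url: Optional[str] = None, api_prefix: Optional[str] = None) -> str:
    """Join the Cyberbro base URL and optional API prefix into one endpoint root.

    Defaults mirror the configuration options above: the CYBERBRO_URL and
    API_PREFIX environment variables, falling back to http://localhost:5000.
    (Illustrative helper, not code from the repository.)
    """
    url = (url or os.environ.get("CYBERBRO_URL", "http://localhost:5000")).rstrip("/")
    prefix = (api_prefix or os.environ.get("API_PREFIX", "")).strip("/")
    return f"{url}/{prefix}" if prefix else url
```

For example, `cyberbro_api_base("https://cyberbro.lab.local/", "api")` yields `https://cyberbro.lab.local/api`.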

Usage

Using with Claude Desktop (Docker) - Recommended

[!NOTE] In this configuration, make sure Docker is installed and running on your machine (e.g., Docker Desktop).

To use this MCP server with Claude Desktop, add the following to your Claude Desktop config file (claude_desktop_config.json):

"mcpServers": {
  "cyberbro": {
    "command": "docker",
    "args": [
      "run",
      "-i",
      "--rm",
      "-e",
      "CYBERBRO_URL",
      "-e",
      "API_PREFIX",
      "ghcr.io/stanfrbd/mcp-cyberbro:latest"
    ],
    "env": {
      "CYBERBRO_URL": "http://localhost:5000",
      "API_PREFIX": "api"
    }
  }
}

Using with Claude Desktop (Local)

[!WARNING] In this configuration, make sure to use venv or uv to avoid conflicts with other Python packages.

To use this MCP server with Claude Desktop locally, add the following to your Claude Desktop config file (claude_desktop_config.json):

"mcpServers": {
  "cyberbro": {
    "command": "uv",
    "args": [
      "run",
      "C:\\Users\\path\\to\\mcp-cyberbro-server.py"
    ],
    "env": {
      "CYBERBRO_URL": "http://localhost:5000",
      "API_PREFIX": "api"
    }
  }
}

[!IMPORTANT] Make sure you have exported your Cyberbro config as environment variables (e.g., CYBERBRO_URL and API_PREFIX) before starting Claude Desktop. This ensures the MCP server can connect to your Cyberbro instance correctly.

Using with other LLMs and MCP Clients

This MCP server can be used with any LLM or MCP client that supports the Model Context Protocol. The server listens for MCP protocol messages on stdin/stdout, making it compatible with a wide range of clients. Note, however, that results depend on how well the model interprets and executes MCP commands: in personal testing with OpenAI models (in Open Web UI), results were not as good as with Claude Desktop.

Documentation for other LLMs and MCP clients with Open Web UI: https://docs.openwebui.com/openapi-servers/mcp/

This approach uses an OpenAPI proxy (mcpo) to expose the MCP server as an OpenAPI server, allowing you to interact with it using standard HTTP requests. This makes it easy to integrate with other applications and services that support OpenAPI.

Example of usage with OpenAPI Proxy

[!TIP] Make sure to install mcpo via pip install mcpo or via uv

  1. Create a config.json file in the mcp folder with the following content:
{
    "mcpServers": {
        "cyberbro": {
            "command": "uv",
            "args": [
                "run",
                "./mcp-cyberbro-server.py"
            ],
            "env": {
                "CYBERBRO_URL": "https://cyberbro.lab.local",
                "API_PREFIX": "api"
            }
        }
    }
}
  2. Run the MCP server:
uvx mcpo --config config.json --port 8000
  3. The server starts and listens for requests on port 8000. You can access the OpenAPI documentation at http://localhost:8000/docs, for instance. Expected startup output:
Starting MCP OpenAPI Proxy with config file: config.json
2025-05-21 14:15:01,480 - INFO - Starting MCPO Server...
2025-05-21 14:15:01,480 - INFO -   Name: MCP OpenAPI Proxy
2025-05-21 14:15:01,480 - INFO -   Version: 1.0
2025-05-21 14:15:01,480 - INFO -   Description: Automatically generated API from MCP Tool Schemas
2025-05-21 14:15:01,480 - INFO -   Hostname: docker-services
2025-05-21 14:15:01,480 - INFO -   Port: 8000
2025-05-21 14:15:01,480 - INFO -   API Key: Not Provided
2025-05-21 14:15:01,480 - INFO -   CORS Allowed Origins: ['*']
2025-05-21 14:15:01,480 - INFO -   Path Prefix: /
2025-05-21 14:15:01,481 - INFO - Loading MCP server configurations from: config.json
2025-05-21 14:15:01,481 - INFO - Configured MCP Servers:
2025-05-21 14:15:01,481 - INFO -   Configuring Stdio MCP Server 'cyberbro' with command: uv with args: ['run', './mcp-cyberbro-server.py']
2025-05-21 14:15:01,481 - INFO - Uvicorn server starting...
INFO:     Started server process [7331]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

You must then choose the correct configuration for your LLM / desktop app.

You can configure your MCP client to connect to the server (for instance) at http://localhost:8000/cyberbro.

The OpenAPI specification will be available (for instance) at http://localhost:8000/cyberbro/openapi.json.
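With the proxy running, any HTTP client can call a tool. The sketch below assumes mcpo's generated route shape of POST /&lt;server&gt;/&lt;tool&gt; with the tool arguments as a JSON body; confirm the exact paths against the /docs page of your instance before relying on them.

```python
import json
import urllib.request

def build_tool_request(base: str, tool: str, arguments: dict) -> tuple:
    """Build the URL and JSON body for a proxied tool call (assumed route shape)."""
    return f"{base.rstrip('/')}/{tool}", json.dumps(arguments).encode()

def call_proxied_tool(base: str, tool: str, arguments: dict) -> dict:
    """POST the tool arguments to the mcpo proxy and return the parsed response."""
    url, body = build_tool_request(base, tool, arguments)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires a running proxy at http://localhost:8000 (see the steps above).
    print(call_proxied_tool(
        "http://localhost:8000/cyberbro",
        "analyze_observable",
        {"text": "Suspicious traffic to 1.1.1.1 and evil.example.com"},
    ))
```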

Example with Open Web UI

[!IMPORTANT] Make sure you use Native function calling and an MCP-compatible model (e.g., OpenAI gpt-4o)


Doc here: https://docs.openwebui.com/openapi-servers/open-webui#optional-step-4-use-native-function-calling-react-style-tool-use-

Available Tools

The MCP server provides the following tools:

Tool List

| Tool Name | Description | Arguments |
|---|---|---|
| analyze_observable | Extracts and analyzes IoCs from input text using selected engines. Returns analysis ID. | text (string), engines (list, optional) |
| is_analysis_complete | Checks if the analysis for a given ID is finished. Returns status. | analysis_id (string) |
| get_analysis_results | Retrieves the results of a completed analysis by ID. | analysis_id (string) |
| get_engines | Lists available analysis engines supported by Cyberbro. | (none) |
| get_web_url | Returns the web URL for the Cyberbro instance. | analysis_id (string) |

Tool Details

  • analyze_observable

    • Purpose: Extracts indicators from unstructured text and submits them for analysis.
    • Arguments:
      • text (required): The input text containing IoCs.
      • engines (optional): List of engines to use for analysis.
    • Returns: JSON with analysis ID and submission details.
  • is_analysis_complete

    • Purpose: Checks if the analysis for a given analysis_id is complete.
    • Arguments:
      • analysis_id (required): The ID returned by analyze_observable.
    • Returns: JSON with completion status.
  • get_analysis_results

    • Purpose: Retrieves the results of a completed analysis.
    • Arguments:
      • analysis_id (required): The ID of the analysis.
    • Returns: JSON with analysis results.
  • get_engines

    • Purpose: Lists all available analysis engines.
    • Arguments: None.
    • Returns: JSON with available engines.
  • get_web_url

    • Purpose: Returns the web URL for the Cyberbro instance.
    • Arguments:
      • analysis_id (required): The ID of the analysis.
    • Returns: JSON with the web URL.
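The first three tools compose into a simple submit, poll, fetch loop. Below is a sketch of that flow with the MCP transport abstracted behind a call_tool callable (your MCP client supplies the real one). The result field names used here (analysis_id, complete) are assumptions about the JSON shape; check them against real responses.

```python
import time
from typing import Callable, Optional

def run_analysis(
    call_tool: Callable[[str, dict], dict],
    text: str,
    engines: Optional[list] = None,
    poll_interval: float = 2.0,
    max_polls: int = 30,
) -> dict:
    """Submit text for IoC analysis and block until results are ready.

    call_tool(name, arguments) must invoke the named MCP tool and return its
    JSON result; "analysis_id" and "complete" are assumed field names.
    """
    args = {"text": text}
    if engines:
        args["engines"] = engines
    analysis_id = call_tool("analyze_observable", args)["analysis_id"]
    for _ in range(max_polls):
        if call_tool("is_analysis_complete", {"analysis_id": analysis_id}).get("complete"):
            return call_tool("get_analysis_results", {"analysis_id": analysis_id})
        time.sleep(poll_interval)
    raise TimeoutError(f"analysis {analysis_id} did not complete after {max_polls} polls")
```

An LLM driving the MCP tools performs essentially this loop on your behalf when you ask it to analyze an indicator.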

Example Queries

Here are some example queries you can run using the MCP server with an LLM like Claude:

Getting Indicator Details

Cyberbro: Check indicators for target.com
Can you check this IP reputation with Cyberbro? 192.168.1.1
Use github, google and virustotal engines.
I want to analyze the domain example.com. What can Cyberbro tell me about it?
Use max 3 engines.
Analyze these observables with Cyberbro: suspicious-domain.com, 8.8.8.8, and 44d88612fea8a8f36de82e1278abb02f. Use all available engines.

Observable Analysis

I found this (hash|domain|url|ip|extension). Can you submit it for analysis to Cyberbro and analyze the results?

These example queries show how Cyberbro leverages LLMs to interpret your intent and automatically select the right MCP tools, allowing you to interact with Cyberbro easily—without needing to make the analysis yourself.

OSINT investigation

Create an OSINT report for the domain example.com using Cyberbro.
Use all available engines and pivot on the results for more information.
Use a maximum of 10 analysis requests.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Acknowledgments
