Vectra AI MCP Server

This project implements an MCP server for the Vectra AI Platform.

What is Vectra AI MCP?

An MCP server that connects AI assistants to your Vectra AI security platform, enabling intelligent analysis of threat detection data, security insights, and automated incident response workflows. Compatible with Claude, ChatGPT, Cursor, VS Code and other MCP-enabled AI tools.

<p align="center"> <img src="assets/mcp-diagram.png" alt="mcp-diagram" width="60%" align="center"/> </p>

What can you do with Vectra AI MCP?

  • Investigate threats in natural language
  • Take response actions in Vectra directly from your AI agent
  • Correlate and analyze security data using prompts
  • Dynamically build advanced visualizations for analysis
  • Generate investigation reports from natural language

Setup - Host Locally

Prerequisites

  1. Install Python (see the .python-version file for the required version)

  2. Install uv, the Python package manager:

# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or via pip
pip install uv

Setup Steps

  1. Clone/Download the project to your local machine
  2. Navigate to the project directory:
cd your-project-directory
  3. Configure environment variables:
# Copy the example environment file
cp .env.example .env

Then edit the .env file with your actual Vectra AI Platform credentials. Required variables to update:

  • VECTRA_BASE_URL: Your Vectra portal URL
  • VECTRA_CLIENT_ID: Your client ID from Vectra
  • VECTRA_CLIENT_SECRET: Your client secret from Vectra
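
For reference, a filled-in .env might look like the following (placeholder values only; the URL format shown is an assumption, so copy the real portal URL and credentials from your Vectra console):

# Example .env (placeholder values, replace with your own)
VECTRA_BASE_URL=https://your-tenant.portal.vectra.ai
VECTRA_CLIENT_ID=your-client-id
VECTRA_CLIENT_SECRET=your-client-secret
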
  4. Create and activate a virtual environment:
uv venv

# Activate it:
# On macOS/Linux:
source .venv/bin/activate

# On Windows:
.venv\Scripts\activate
  5. Install dependencies:
uv sync

This will install all dependencies specified in pyproject.toml using the exact versions from uv.lock.
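
To sanity-check the environment, you can list the packages uv installed:

# Show the packages resolved into .venv
uv pip list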

  6. Run the application:

The server supports multiple transport protocols:

# Run with stdio transport (default, for Claude Desktop)
python server.py
python server.py --transport stdio

# Run with SSE transport (for HTTP-based MCP clients)
python server.py --transport sse --host 0.0.0.0 --port 8000

# Run with streamable-http transport (for production HTTP deployments)
python server.py --transport streamable-http --host 0.0.0.0 --port 8000

# Enable debug logging
python server.py --debug

Transport Options:

  • stdio: Standard input/output communication (default, used by Claude Desktop)
  • sse: Server-Sent Events over HTTP (good for web-based clients)
  • streamable-http: Streamable HTTP transport (recommended for production HTTP deployments)

Environment Variables: You can also configure the server using environment variables:

export VECTRA_MCP_TRANSPORT=streamable-http
export VECTRA_MCP_HOST=0.0.0.0
export VECTRA_MCP_PORT=8000
export VECTRA_MCP_DEBUG=true
python server.py

MCP Configuration for Claude Desktop

  1. Add MCP Server to Claude Desktop:
# On macOS:
# Open Claude Desktop configuration file
code ~/Library/Application\ Support/Claude/claude_desktop_config.json

# On Windows:
# Open Claude Desktop configuration file
notepad %APPDATA%\Claude\claude_desktop_config.json

Add the following configuration to the mcpServers section (update the paths to match your setup):

{
  "mcpServers": {
    "vectra-ai-mcp": {
      "command": "/path/to/your/uv/binary",
      "args": [
        "--directory",
        "/path/to/your/project/directory",
        "run",
        "server.py"
      ]
    }
  }
}

Example with actual paths:

{
  "mcpServers": {
    "vectra-ai-mcp": {
      "command": "/Users/yourusername/.local/bin/uv",
      "args": [
        "--directory",
        "/Users/yourusername/path/to/vectra-mcp-project",
        "run",
        "server.py"
      ]
    }
  }
}
  2. Debug: find your uv installation path:
# Find where uv is installed
# On macOS/Linux:
which uv
# On Windows:
where uv
  3. Debug: get your project's absolute path:
# From your project directory, run:
pwd
  4. Restart Claude Desktop to load the new MCP server configuration.

Other MCP Client Setup

Once configured, you should be able to use Vectra AI Platform capabilities directly within Claude Desktop or other MCP clients through this MCP server!

For other MCP clients besides Claude Desktop, refer to the documentation links below:

  • General MCP setup: https://modelcontextprotocol.io/quickstart/user
  • Cursor: https://docs.cursor.com/en/context/mcp#using-mcp-json
  • VS Code: https://code.visualstudio.com/docs/copilot/chat/mcp-servers#_add-an-mcp-server

For other MCP clients, refer to their respective documentation. The general pattern is similar: you specify the command and arguments to run the MCP server, using the same configuration structure.

Setup - Docker Deployment

For production deployments or easier setup, you can run the Vectra AI MCP Server using Docker. We provide two options:

Option 1: Using Pre-built Images (Recommended)

The easiest way to get started is using our pre-built Docker images from GitHub Container Registry.

Prerequisites

  1. Install Docker (Docker Desktop on macOS/Windows, or Docker Engine on Linux). Docker Compose is only needed for the compose alternative below.

Quick Start Steps

  1. Configure environment variables:
# Copy the example environment file
cp .env.example .env

Then edit the .env file with your actual Vectra AI Platform credentials.

  2. Run with the pre-built image:

Streamable HTTP Transport (Recommended for Production)

docker run -d \
  --name vectra-mcp-server-http \
  --env-file .env \
  -e VECTRA_MCP_TRANSPORT=streamable-http \
  -e VECTRA_MCP_HOST=0.0.0.0 \
  -e VECTRA_MCP_PORT=8000 \
  -p 8000:8000 \
  --restart unless-stopped \
  ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest

SSE Transport (Server-Sent Events)

docker run -d \
  --name vectra-mcp-server-sse \
  --env-file .env \
  -e VECTRA_MCP_TRANSPORT=sse \
  -e VECTRA_MCP_HOST=0.0.0.0 \
  -e VECTRA_MCP_PORT=8000 \
  -p 8000:8000 \
  --restart unless-stopped \
  ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest

Stdio Transport (For Local MCP Clients)

docker run -d \
  --name vectra-mcp-server-stdio \
  --env-file .env \
  -e VECTRA_MCP_TRANSPORT=stdio \
  --restart unless-stopped \
  ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest
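
Whichever transport you pick, you can confirm the container came up cleanly before pointing a client at it (container names as in the examples above):

# Check container status and startup logs
docker ps --filter "name=vectra-mcp-server"
docker logs vectra-mcp-server-http
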
  3. Or use Docker Compose (alternative):

Create a docker-compose.yml file:

version: '3.8'
services:
  vectra-mcp-server:
    image: ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest
    container_name: vectra-mcp-server
    env_file: .env
    environment:
      - VECTRA_MCP_TRANSPORT=streamable-http
      - VECTRA_MCP_HOST=0.0.0.0
      - VECTRA_MCP_PORT=8000
    ports:
      - "8000:8000"
    restart: unless-stopped

Then run:

docker-compose up -d
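
To follow the server logs or take the stack down later:

# Tail logs from the compose service
docker-compose logs -f vectra-mcp-server

# Stop and remove the container
docker-compose down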

Available Tags:

  • latest: Latest stable build from main branch
  • main: Latest build from main branch (same as latest)
  • v*: Specific version tags (e.g., v1.0.0)

💡 Tip: Pre-built images are automatically built and published via GitHub Actions whenever code is pushed to the main branch or when releases are tagged. This ensures you always get the latest tested version without needing to build locally.

Option 2: Build from Source

For development or customization, you can build the Docker image from source.

Prerequisites

  1. Install Docker and Docker Compose
    • Docker Desktop (includes Docker Compose)
    • Or install Docker Engine and Docker Compose separately on Linux

Build from Source Steps

  1. Clone/Download the project to your local machine
  2. Navigate to the project directory:
cd your-project-directory
  3. Configure environment variables:
# Copy the example environment file
cp .env.example .env

Then edit the .env file with your actual Vectra AI Platform credentials.

  4. Build the Docker image:
# Build the image
docker build -t vectra-mcp-server .
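
To confirm the build produced a local image:

# List the freshly built image
docker images vectra-mcp-server
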
  5. Run the locally built image:

Choose your transport mode and run with the locally built image:

Streamable HTTP Transport

docker run -d \
  --name vectra-mcp-server-http \
  --env-file .env \
  -e VECTRA_MCP_TRANSPORT=streamable-http \
  -e VECTRA_MCP_HOST=0.0.0.0 \
  -e VECTRA_MCP_PORT=8000 \
  -p 8000:8000 \
  --restart unless-stopped \
  vectra-mcp-server

SSE Transport

docker run -d \
  --name vectra-mcp-server-sse \
  --env-file .env \
  -e VECTRA_MCP_TRANSPORT=sse \
  -e VECTRA_MCP_HOST=0.0.0.0 \
  -e VECTRA_MCP_PORT=8000 \
  -p 8000:8000 \
  --restart unless-stopped \
  vectra-mcp-server

Stdio Transport

docker run -d \
  --name vectra-mcp-server-stdio \
  --env-file .env \
  -e VECTRA_MCP_TRANSPORT=stdio \
  --restart unless-stopped \
  vectra-mcp-server

Docker Environment Variables

The Docker container supports all the same environment variables as the local setup, plus additional MCP server configuration:

MCP Server Configuration

  • VECTRA_MCP_TRANSPORT: Transport protocol (stdio, sse, or streamable-http) - default: stdio
  • VECTRA_MCP_HOST: Host to bind to for HTTP transports - default: 0.0.0.0
  • VECTRA_MCP_PORT: Port for HTTP transports - default: 8000
  • VECTRA_MCP_DEBUG: Enable debug logging - default: false

Accessing the HTTP Server

When running with HTTP transports (sse or streamable-http), the MCP server will be available at:

  • Streamable HTTP: http://localhost:8000/mcp
  • SSE: http://localhost:8000/sse
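
As a quick smoke test, assuming the server follows the standard MCP streamable-HTTP handshake (the endpoint path comes from the list above; the request shape is from the MCP spec, not this project's docs), you can send a JSON-RPC initialize request with curl and expect a JSON response or an SSE stream back:

# Initialize handshake against the streamable-HTTP endpoint
curl -s -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.0"}}}'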

MCP Client Configuration for Docker

For HTTP-based MCP clients connecting to the Dockerized server, point the client at the endpoint for your chosen transport (streamable HTTP shown below; use http://localhost:8000/sse for SSE):

{
  "mcpServers": {
    "vectra-ai-mcp": {
      "transport": {
        "type": "http",
        "url": "http://localhost:8000/"
      }
    }
  }
}

Docker Health Checks

The Docker container includes health checks that will verify the server is running properly:

  • For stdio transport: Always reports healthy (no HTTP endpoint to check)
  • For HTTP transports: Checks HTTP endpoint availability
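
To see the health status Docker has recorded for a running container (container name as used in the examples above):

# Inspect the container's current health state
docker inspect --format '{{.State.Health.Status}}' vectra-mcp-server-http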

Note: MCP (Model Context Protocol) is an emerging and rapidly evolving technology. Exercise caution when using this server and follow security best practices, including proper credential management and network security measures.
