hive-mcp

Distributed compute MCP server — pool idle LAN machines into a compute cluster for AI agents.

The Problem

Running CPU-intensive agentic workloads (backtesting, simulations, hyperparameter sweeps) can peg your host machine at 100% with just 6-7 subagents. Meanwhile, other machines on your LAN sit idle with dozens of cores unused.

The Solution

hive-mcp turns idle machines on your LAN into a unified compute pool, accessible via MCP from Claude Code, Cursor, Copilot, or any MCP-compatible AI tool.


  Host                     Worker A            Worker B
  +-----------------+      +----------------+  +----------------+
  | Claude Code     |      | hive worker    |  | hive worker    |
  | hive-mcp broker |<---->| daemon         |  | daemon         |
  | (MCP + WS)      |  ws  | auto-discovered|  | auto-discovered|
  +-----------------+      +----------------+  +----------------+
        8 cores               14 cores             6 cores
                    = 28 total cores

Quick Start

1. Install

pip install hive-mcp

2. Start the Broker (host machine)

hive broker
# Prints the shared secret and starts listening

3. Join Workers (worker machines)

Worker machines are headless compute — they only need Python and hive-mcp. No Claude Code, no AI tools, no API keys. They just execute tasks and return results.

# Copy the secret from the broker, then:
hive join --secret <token>
# Auto-discovers broker via mDNS — no address needed!

4. Configure Claude Code

Register hive-mcp as an MCP server:

claude mcp add hive-mcp -- hive broker

This writes the config to ~/.claude.json scoped to your current project directory.
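The resulting entry should look roughly like this (shown for illustration; the exact field layout is Claude Code's and may differ by version):

```json
{
  "mcpServers": {
    "hive-mcp": {
      "command": "hive",
      "args": ["broker"]
    }
  }
}
```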

Now Claude Code can submit compute tasks to your cluster:

You: "Run backtests for these 20 parameter combinations"

Claude Code: I'll run 6 locally and submit 14 to hive...
  submit_task(code="run_backtest(params_7)", priority=1)
  submit_task(code="run_backtest(params_8)", priority=1)
  ...

MCP Tools

Tool                  Description
submit_task           Submit a Python or shell task to the cluster
get_task_status       Check if a task is queued, running, or complete
get_task_result       Retrieve the output of a completed task
pull_task             Pull a queued task back for local execution
report_local_result   Report the result of a locally-executed pulled task
cancel_task           Cancel a pending or running task
list_workers          See all connected workers and their capacity
get_cluster_status    Overview of the entire cluster

Features

  • Zero-config discovery — workers find the broker automatically via mDNS
  • Adaptive capacity — workers monitor CPU and reject tasks when overloaded (--max-cpu 80)
  • File transfer — send input files to workers, collect output files back
  • Local fallback — pull queued tasks back when local CPU frees up
  • Subprocess isolation — tasks can't crash the worker daemon
  • Priority queue — higher-priority tasks run first
  • Auto-reconnect — workers reconnect with exponential backoff
  • Claude Code hook — hive context injects cluster info into every prompt
  • Python SDK — programmatic access via HiveClient
  • Shell tasks — run shell commands, not just Python

CLI Reference

hive broker                     # Start broker + MCP server
hive join                       # Join as worker (auto-discover broker)
hive join --broker-addr IP:PORT # Join with explicit address
hive join --max-cpu 60          # Limit CPU usage to 60%
hive join --max-tasks 4         # Hard cap at 4 concurrent tasks
hive status                     # Show cluster status
hive secret                     # Show/generate shared secret
hive context                    # Output machine + cluster info (for hooks)
hive tls-setup                  # Generate self-signed TLS certificates

Claude Code Hook

Add automatic cluster awareness to every prompt:

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "command": "hive context",
        "timeout": 3000
      }
    ]
  }
}

This injects:

[hive-mcp] Local machine: 8 cores / 16 threads, CPU: 45%, RAM: 14GB free / 32GB total
[hive-mcp] Cluster: 2 workers online (20 cores), 0 queued, 3 active
[hive-mcp] Tip: 20 remote cores available via hive. Use submit_task() for overflow.

Python SDK

from hive_mcp.client.sdk import HiveClient

async with HiveClient("192.168.1.100", 7933, secret="...") as client:
    task = await client.submit("print('hello from hive')")
    result = await client.wait(task["task_id"])
    print(result["stdout"])  # "hello from hive"

How It Works

  1. Broker runs on the host machine alongside Claude Code. It's both an MCP server (stdio, for Claude Code) and a WebSocket server (for workers).
  2. Workers run on worker machines. They discover the broker via mDNS, authenticate with a shared secret, and wait for tasks.
  3. Tasks are Python code strings or shell commands. The broker serializes them with cloudpickle and dispatches to workers.
  4. Workers execute tasks in isolated subprocesses — a hung or crashing task can't affect the worker daemon.
  5. Results flow back through WebSocket, including stdout, stderr, return values, and output files.

Security

  • Shared secret — broker generates a 32-byte random token; workers must present it to connect
  • TLS (optional) — run hive tls-setup to generate self-signed certificates
  • Subprocess isolation — tasks run in separate processes, not in the worker daemon
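A worker-authentication check along these lines would compare the presented token in constant time (an illustrative sketch; the actual handshake format is hive-mcp's own):

```python
import hmac
import secrets

# Broker side: generate a 32-byte random secret once at startup
# (hex-encoded for easy copy-paste to workers).
SHARED_SECRET = secrets.token_hex(32)

def authenticate(presented: str) -> bool:
    # compare_digest avoids timing side channels when checking the token
    return hmac.compare_digest(presented, SHARED_SECRET)
```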

Troubleshooting

Windows: MCP tools not loading

There is a known Claude Code bug where Windows drive letter casing (c:/ vs C:/) creates duplicate project entries in ~/.claude.json. The MCP config ends up under one casing while Claude Code looks up the other.

Fix: Open ~/.claude.json, search for your project path in the "projects" object, and ensure both case variants have identical mcpServers config. Or re-run claude mcp add from the same type of terminal you use for Claude Code sessions.
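The manual fix can also be scripted. The helper below is hypothetical, not part of hive-mcp: it groups project paths that differ only in casing and gives every variant the union of their mcpServers entries:

```python
def sync_case_variants(projects: dict) -> dict:
    """Copy mcpServers config across drive-letter case variants.

    `projects` is the "projects" object from ~/.claude.json. Paths that
    differ only in casing (c:/ vs C:/) end up with identical mcpServers.
    """
    merged = {}
    groups = {}
    for path in projects:
        groups.setdefault(path.lower(), []).append(path)
    for variants in groups.values():
        combined = {}
        for path in variants:
            combined.update(projects[path].get("mcpServers", {}))
        for path in variants:
            entry = dict(projects[path])
            entry["mcpServers"] = dict(combined)
            merged[path] = entry
    return merged
```

Load ~/.claude.json, run its "projects" object through this, and write it back.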

Broker not starting

Check ~/.hive/broker.log for startup errors. Common causes:

  • Port 7933 already in use (another broker instance)
  • Python version mismatch between hive CLI and expected environment

Requirements

  • Python 3.10+
  • All machines on the same LAN (for mDNS discovery)
  • Same Python version on broker and workers (for cloudpickle compatibility)
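The last requirement exists because cloudpickle payloads are not guaranteed to load across interpreter versions. A handshake could verify this as follows (an illustrative sketch, not hive-mcp's actual protocol):

```python
import sys

def version_tag() -> str:
    """Major.minor of the running interpreter, e.g. '3.11'."""
    return f"{sys.version_info.major}.{sys.version_info.minor}"

def compatible(broker_tag: str, worker_tag: str) -> bool:
    # cloudpickle output from one minor version may not deserialize on
    # another, so require an exact major.minor match.
    return broker_tag == worker_tag
```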

License

MIT
