blend-ai

An MCP server that enables AI assistants to control Blender through 161 specialized tools for 3D modeling, animation, and rendering. It provides a secure, thread-safe interface for executing validated operations in Blender via natural language commands.

The most intuitive and efficient MCP Server for Blender. Control Blender entirely through AI assistants like Claude — create 3D models, set up scenes, animate, render, and more, all through natural language.

<small>This was created via Claude Code using the Haiku model and 20 random reference images. It took 5 minutes:</small>

blend-ai screenshot

Key Features

  • 161 tools covering every major Blender domain: modeling, mesh editing, materials, shader nodes, lighting, camera, animation, rendering, sculpting, UV mapping, physics, geometry nodes, rigging, curves, grease pencil, collections, file I/O, Bool Tool, and viewport control
  • Render-aware — detects when Blender is rendering and returns a busy status (which clients retry with backoff) instead of hanging on commands that would time out
  • Zero telemetry — no usage tracking, no analytics, no data collection. Everything runs locally.
  • Zero-dependency Blender addon — the addon uses only Python stdlib + bpy. Nothing to pip install inside Blender's bundled Python.
  • Thread-safe architecture — background TCP server with queue-based main-thread execution, respecting Blender's single-threaded API constraint
  • MCP resources — browse scene objects, materials, and scene info as structured context
  • Workflow prompts — pre-built prompt templates for common tasks (product shots, character base meshes, scene cleanup, turntable animations)
  • Best practices prompt — guides AI clients toward preferred tools (e.g., Bool Tool auto ops over manual boolean modifiers)
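
The render-aware behavior maps to a simple client-side loop: send the command, and if the addon reports it is busy rendering, back off and retry. A minimal sketch, assuming a `{"status": "busy"}` response shape (an assumption about the wire format, not taken from the source) and an injectable `sleep` for testability:

```python
import time

def call_with_busy_retry(send, max_retries=5, base_delay=0.5, sleep=time.sleep):
    """Retry `send` while Blender reports a busy (rendering) state.

    `send` is any zero-argument callable returning a response dict.
    The "busy" status key is an assumption about the protocol.
    """
    delay = base_delay
    for _ in range(max_retries):
        response = send()
        if response.get("status") != "busy":
            return response
        sleep(delay)   # wait before the next attempt
        delay *= 2     # exponential backoff
    raise TimeoutError("Blender is still rendering after all retries")
```

Passing a fake `sleep` makes the backoff schedule observable in tests without real delays.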

Quickstart

1. Install the MCP server

git clone https://github.com/jabberwock/blend-ai.git
cd blend-ai
uv pip install -e .

2. Install the Blender addon

  1. Download the latest addon zip from GitHub Releases
  2. Open Blender (4.0+)
  3. Go to Edit > Preferences > Add-ons > Install from Disk...
  4. Select the downloaded .zip file
  5. Enable "blend-ai" in the addon list

<details> <summary><strong>Developer install (symlink)</strong></summary>

If you're developing on blend-ai, symlink the addon folder instead (replace 5.0 in the paths below with your Blender version's config folder):

# macOS
ln -s "$(pwd)/addon" ~/Library/Application\ Support/Blender/5.0/scripts/addons/blend_ai

# Linux
ln -s "$(pwd)/addon" ~/.config/blender/5.0/scripts/addons/blend_ai

# Windows (run as admin)
mklink /D "%APPDATA%\Blender Foundation\Blender\5.0\scripts\addons\blend_ai" "%cd%\addon"

Then enable the addon in Blender preferences.

</details>

3. Start the server in Blender

In Blender's 3D Viewport, open the N-panel (press N), find the blend-ai tab, and click Start Server. The addon listens on 127.0.0.1:9876.
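
If a client fails to connect, it helps to confirm the addon's TCP server is actually listening. A stdlib-only sketch (probe before your MCP client connects, since the addon accepts a single client at a time):

```python
import socket

def addon_reachable(host="127.0.0.1", port=9876, timeout=1.0):
    """Return True if a TCP connection to the blend-ai addon succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```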

4. Connect your AI assistant

<details> <summary><strong>Claude Code</strong></summary>

claude mcp add blend-ai -- uv run --directory /path/to/blend-ai blend-ai

Replace /path/to/blend-ai with the actual path to your clone. Make sure Blender is running with the addon server started before using the tools.

Usage:

$ claude

> Create a red metallic sphere on a white plane with three-point lighting

> Add a subdivision surface modifier to the sphere and set it to level 3

> Set up a turntable animation and render it to /tmp/turntable/

</details>

<details> <summary><strong>Claude Desktop</strong></summary>

Add blend-ai to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "blend-ai": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/blend-ai", "blend-ai"]
    }
  }
}

Replace /path/to/blend-ai with the actual path to your clone. Or copy the contents of the bundled mcp.json into your config file.

Restart Claude Desktop. The Blender tools will appear in the tool list.

</details>

<details> <summary><strong>Other MCP Clients</strong></summary>

blend-ai is a standard MCP server using stdio transport. Any MCP-compatible client can connect using the mcp.json config or by running the server directly:

uv run --directory /path/to/blend-ai blend-ai
# or: python -m blend_ai.server

The server communicates over stdin/stdout using the MCP protocol. It connects to Blender's addon over TCP on 127.0.0.1:9876.

</details>

Tool Domains

<details> <summary><strong>All 161 tools across 24 modules</strong></summary>

| Domain | Tools | Highlights |
|--------|-------|------------|
| Scene | 5 | Get scene info, set frame range, manage scenes |
| Objects | 14 | Create primitives, duplicate, parent, join, visibility, origin, convert, auto-smooth |
| Transforms | 6 | Position, rotation (euler/quat), scale, apply, snap |
| Modeling | 13 | Modifiers, booleans, subdivide, extrude, bevel, loop cut, bridge edge loops |
| Mesh Editing | 16 | Inset, fill, grid fill, mark seam/sharp, normals, dissolve, knife project, spin, crease |
| Bool Tool | 4 | Auto union, difference, intersect, slice (via Blender's Bool Tool addon) |
| Materials | 15 | Principled BSDF, textures, blend modes, shader node graph (add/connect/remove nodes) |
| Lighting | 7 | Point/sun/spot/area lights, HDRIs, light rigs, shadows |
| Camera | 6 | Create, aim, DOF, viewport capture, active camera |
| Animation | 8 | Keyframes, interpolation, frame range, follow path |
| Rendering | 6 | Engine, resolution, samples, output format, render |
| Curves | 10 | Bezier/NURBS/path, 3D text, convert, reverse, handle types, cyclic, subdivide |
| Sculpting | 8 | Brushes, remesh, multires, symmetry, dynamic topology |
| UV Mapping | 4 | Smart project, unwrap, projection, pack islands |
| Physics | 9 | Rigid body, cloth, fluid, particles (velocity, rendering, delete), bake |
| Geometry Nodes | 5 | Create node trees, add/connect nodes, set inputs |
| Armature | 6 | Bones, constraints, auto weights, pose |
| Grease Pencil | 5 | Create GP objects, layers, strokes with pressure/strength |
| Collections | 4 | Create, move objects, visibility, delete |
| File I/O | 5 | Import/export (FBX, OBJ, glTF, USD, STL...), save/open |
| Viewport | 3 | Shading mode, overlays, focus on object |
| Screenshot | 1 | Render viewport to file |
| Code Exec | 1 | Execute Python code in Blender |

</details>

Architecture

AI Assistant <--stdio/MCP--> blend-ai server <--TCP socket--> Blender addon <--bpy--> Blender

<details> <summary><strong>How it works</strong></summary>

  • MCP Server (src/blend_ai/): Python process using the mcp SDK. Exposes tools, resources, and prompts over stdio. Validates all inputs before forwarding to Blender.
  • Blender Addon (addon/): Runs a TCP socket server inside Blender on a background thread. Commands are queued and executed on the main thread via bpy.app.timers to respect Blender's threading model.
  • Render Guard: Tracks render state via bpy.app.handlers. During renders, the server immediately returns a "busy" status instead of queueing commands that would time out. The MCP client auto-retries with backoff until the render completes.
  • Protocol: Length-prefixed JSON messages over TCP. Each message is a 4-byte big-endian length header followed by a UTF-8 JSON payload.
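
That framing is straightforward to reproduce. A minimal encoder/decoder sketch (the `"command"`/`"params"` keys in the test payload are an assumption about the message shape, not taken from the source):

```python
import json
import struct

def encode_message(obj):
    """Frame a message as a 4-byte big-endian length + UTF-8 JSON payload."""
    payload = json.dumps(obj).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode_message(stream):
    """Read one length-prefixed JSON message from a binary stream."""
    header = stream.read(4)
    if len(header) < 4:
        raise ConnectionError("stream closed before length header")
    (length,) = struct.unpack(">I", header)
    return json.loads(stream.read(length).decode("utf-8"))
```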

</details>
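
The queue-plus-timer pattern described above can be sketched without Blender present. In the real addon the drain callback would be registered with `bpy.app.timers.register`; here it is a plain function so the logic is runnable anywhere:

```python
import queue

# Work items produced by the background TCP thread.
_pending = queue.Queue()

def submit(func, *args):
    """Called from the socket thread: never touches bpy directly."""
    _pending.put((func, args))

def drain_pending():
    """Timer callback: runs on Blender's main thread.

    Executes every queued item, then returns the interval (in seconds)
    until the timer should fire again, as bpy.app.timers expects.
    """
    while True:
        try:
            func, args = _pending.get_nowait()
        except queue.Empty:
            break
        func(*args)   # safe: we are on the main thread
    return 0.1        # re-run the timer in 0.1 s
```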

Privacy & Security

<details> <summary><strong>Privacy</strong></summary>

  • Zero telemetry — blend-ai collects no usage data, sends no analytics, and makes no network requests beyond the local TCP connection to Blender on 127.0.0.1:9876.
  • Fully local — all communication stays on your machine. No cloud services, no external APIs, no phone-home behavior.
  • Open source — the entire codebase is auditable. What you see is what runs.

</details>

<details> <summary><strong>Security</strong></summary>

  • Localhost only: The TCP socket binds to 127.0.0.1 — never exposed to the network.
  • Input validation: All inputs pass through validators before reaching Blender — name sanitization, path traversal prevention, numeric range checks, enum allowlists.
  • File safety: Import operations disable use_scripts_auto_execute to prevent script injection from imported files. File extensions are checked against allowlists.
  • Command allowlist: The addon dispatcher only processes explicitly registered commands. Unknown commands are rejected.
  • Shader node allowlist: Only ~65 known shader node types can be created — prevents arbitrary type injection.
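
The first two checks can be illustrated with stdlib-only validators. These are hypothetical helpers in the spirit of the description, not the project's actual `validators.py`:

```python
import re
from pathlib import Path

# Hypothetical allowlist: letters, digits, spaces, dots, underscores, hyphens.
_NAME_RE = re.compile(r"^[A-Za-z0-9 ._-]{1,63}$")

def validate_name(name):
    """Reject object/material names containing path syntax or control chars."""
    if not _NAME_RE.match(name):
        raise ValueError(f"invalid name: {name!r}")
    return name

def validate_export_path(path, allowed_root):
    """Resolve the path and refuse anything escaping the allowed root."""
    resolved = Path(path).resolve()
    root = Path(allowed_root).resolve()
    if root not in resolved.parents and resolved != root:
        raise ValueError(f"path escapes {root}: {resolved}")
    return resolved
```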

</details>

Limitations

<details> <summary><strong>Known limitations</strong></summary>

  • Blender must be running: The MCP server communicates with Blender over TCP. Blender must be open with the addon enabled and server started.
  • Single connection: The addon accepts one client connection at a time. Multiple AI assistants cannot control the same Blender instance simultaneously.
  • Selection is all-or-nothing: Most mesh editing tools operate on all geometry. Fine-grained vertex/edge/face selection by index is not yet exposed, though select_linked is available.
  • Sculpt strokes cannot be simulated: You can configure brushes, symmetry, dyntopo, and remeshing, but actual brush strokes (bpy.ops.sculpt.brush_stroke) are not yet exposed — sculpting still requires manual interaction.
  • Node graphs require sequential calls: Both shader node trees and geometry node trees must be built one node/connection at a time. There's no "create full graph from description" tool.
  • No undo integration: Operations appear in Blender's undo history individually but there's no MCP-level undo/redo or transaction grouping.
  • Viewport capture: Requires a visible 3D viewport. Headless Blender may not support viewport screenshots.
  • No real-time feedback: The MCP protocol is request/response. There's no streaming of viewport updates or render progress.

</details>

Development

# Install with dev dependencies
uv pip install -e ".[dev]"

# Run tests (882 tests)
uv run --extra dev pytest

# Run tests with coverage
uv run --extra dev pytest --cov=blend_ai

# Lint
ruff check src/ tests/

# Format
ruff format src/ tests/

<details> <summary><strong>Project structure</strong></summary>

blend-ai/
├── src/blend_ai/          # MCP server
│   ├── server.py           # FastMCP entry point
│   ├── connection.py       # TCP client to Blender (with busy-retry)
│   ├── validators.py       # Input validation
│   ├── tools/              # 24 tool modules (161 tools)
│   ├── resources/          # MCP resources (scene, objects, materials)
│   └── prompts/            # Workflow prompt templates
├── addon/                  # Blender addon (zero external deps)
│   ├── __init__.py         # bl_info + register/unregister
│   ├── server.py           # TCP socket server
│   ├── dispatcher.py       # Command routing + allowlist
│   ├── thread_safety.py    # Main-thread execution queue
│   ├── render_guard.py     # Render state tracking
│   ├── ui_panel.py         # N-panel UI (start/stop)
│   └── handlers/           # 24 handler modules
└── tests/                  # 882 unit tests

</details>

License

MIT
