MCP ComfyUI Flux

Enables AI image generation with FLUX models through ComfyUI, supporting GPU-accelerated generation, 4x upscaling, and background removal in an optimized Docker deployment.


MCP ComfyUI Flux - Optimized Docker Solution

License: MIT | Docker | PyTorch 2.5.1 | CUDA 12.1

A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.

🌟 Features

  • 🚀 Optimized Performance: PyTorch 2.5.1 with native RMSNorm support
  • 📦 Efficient Images: 25% smaller Docker images (10.9GB vs 14.6GB)
  • ⚡ Fast Rebuilds: BuildKit cache mounts for rapid iterations
  • 🎨 FLUX Models: Supports schnell (4-step) and dev models with fp8 quantization
  • 🤖 MCP Integration: Works seamlessly with Claude Desktop
  • 💪 GPU Acceleration: Automatic NVIDIA GPU detection and CUDA 12.1
  • 🔄 Background Removal: Built-in RMBG-2.0 for transparent backgrounds
  • 📈 Image Upscaling: 4x upscaling with UltraSharp/AnimeSharp models
  • 🛡️ Production Ready: Health checks, auto-recovery, extensive logging

🚀 Quick Start

# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux

# Run the automated installer
./install.sh

# Or build manually with the optimized build script
./build.sh --start

# That's it! The installer will:
# - Check prerequisites
# - Configure environment
# - Download FLUX models
# - Build optimized Docker containers
# - Start all services

💻 System Requirements

Minimum Requirements

  • OS: Linux, macOS, Windows 10+ (WSL2)
  • CPU: 4 cores
  • RAM: 16GB (20GB for WSL2)
  • Storage: 50GB free space
  • Docker: 20.10+
  • Docker Compose: 2.0+ or 1.29+ (legacy)

Recommended Requirements

  • CPU: 8+ cores
  • RAM: 32GB
  • GPU: NVIDIA RTX 3090/4090 (12GB+ VRAM)
  • Storage: 100GB free space
  • CUDA: 12.1+ with NVIDIA Container Toolkit

WSL2 Specific (Windows)

# .wslconfig in Windows user directory
[wsl2]
memory=20GB
processors=8
localhostForwarding=true

📦 Installation

Prerequisites

  1. Install Docker:

    # Ubuntu/Debian
    curl -fsSL https://get.docker.com | bash
    
    # macOS (installs Docker Desktop)
    brew install --cask docker
    
    # Windows - Install Docker Desktop
    
  2. Install NVIDIA Container Toolkit (for GPU):

    # Ubuntu/Debian
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
      sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
    

Automated Installation

# Standard installation
./install.sh

# Non-interactive installation
./install.sh --yes

# CPU-only mode
./install.sh --cpu-only

# With specific models
./install.sh --models minimal  # or all/none/auto

# Debug mode
./install.sh --debug

Build Script Options

# Build only
./build.sh

# Build and start
./build.sh --start

# Build with cleanup
./build.sh --start --cleanup

# Rebuild without cache
./build.sh --no-cache

🎨 MCP Tools

Available Tools in Claude Desktop

1. generate_image

Generate images using FLUX schnell fp8 model (optimized defaults).

// Parameters
{
  "prompt": "a majestic mountain landscape, golden hour",  // Required
  "negative_prompt": "blurry, low quality",               // Optional
  "width": 1024,                                          // Default: 1024
  "height": 1024,                                         // Default: 1024
  "steps": 4,                                            // Default: 4 (schnell optimized)
  "cfg_scale": 1.0,                                      // Default: 1.0 (schnell optimized)
  "seed": -1,                                            // Default: -1 (random)
  "batch_size": 1                                        // Default: 1 (max: 8)
}

// Example usage
generate_image({
  prompt: "cyberpunk city at night, neon lights, detailed",
  steps: 4,
  seed: 42
})

2. upscale_image

Upscale images to 4x resolution using AI models.

// Parameters
{
  "image_path": "flux_output_00001_.png",  // Required
  "model": "ultrasharp",                   // Options: "ultrasharp", "animesharp"
  "scale_factor": 1.0,                     // Additional scaling (0.5-2.0)
  "content_type": "general"                // Auto-select model based on content
}

// Example usage
upscale_image({
  image_path: "output/my_image.png",
  model: "ultrasharp"
})
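The final resolution combines the model's fixed 4x factor with `scale_factor`. A minimal sketch of that arithmetic (the multiplicative interpretation of `scale_factor`, and the clamping to the documented 0.5–2.0 range, are assumptions based on the parameter description above):

```javascript
// Estimate the output resolution of upscale_image: the model upscales 4x,
// then scale_factor (clamped to the documented 0.5-2.0 range) applies on top.
function upscaledSize(width, height, scaleFactor = 1.0) {
  const s = Math.min(Math.max(scaleFactor, 0.5), 2.0); // clamp to 0.5-2.0
  return {
    width: Math.round(width * 4 * s),
    height: Math.round(height * 4 * s),
  };
}

console.log(upscaledSize(1024, 1024));      // { width: 4096, height: 4096 }
console.log(upscaledSize(1024, 1024, 0.5)); // { width: 2048, height: 2048 }
```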

3. remove_background

Remove background using RMBG-2.0 AI model.

// Parameters
{
  "image_path": "output/image.png",  // Required
  "alpha_matting": true,              // Better edge quality (default: true)
  "output_format": "png"              // Options: "png", "webp"
}

// Example usage
remove_background({
  image_path: "flux_output_00001_.png"
})

4. check_models

Verify available models in ComfyUI.

// No parameters required
check_models()

5. connect_comfyui / disconnect_comfyui

Manage ComfyUI connection (usually auto-connects).

MCP Configuration

Add to Claude Desktop config (%APPDATA%\Claude\claude_desktop_config.json on Windows):

{
  "mcpServers": {
    "comfyui-flux": {
      "command": "wsl.exe",
      "args": [
        "bash", "-c",
        "cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
      ]
    }
  }
}

For macOS/Linux:

{
  "mcpServers": {
    "comfyui-flux": {
      "command": "docker",
      "args": [
        "exec", "-i", "mcp-comfyui-flux-mcp-server-1",
        "node", "/app/src/index.js"
      ]
    }
  }
}

🐳 Docker Management

Service Commands

# Start services
docker-compose -p mcp-comfyui-flux up -d

# Stop services
docker-compose -p mcp-comfyui-flux down

# View logs
docker-compose -p mcp-comfyui-flux logs -f
docker-compose -p mcp-comfyui-flux logs -f comfyui

# Check status
docker-compose -p mcp-comfyui-flux ps

# Restart services
docker-compose -p mcp-comfyui-flux restart

Container Access

# Access ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 bash

# Access MCP server
docker exec -it mcp-comfyui-flux-mcp-server-1 sh

# Check GPU status
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi

# Test PyTorch
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"

Health Monitoring

# Full health check
./scripts/health-check.sh

# Check ComfyUI API
curl http://localhost:8188/system_stats

# Container health status
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'
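The `/system_stats` response can be summarized programmatically, for example to watch free VRAM. A small sketch, assuming the `devices[].vram_total` / `vram_free` field names of the public ComfyUI API (verify against your version's actual payload):

```javascript
// Summarize the JSON returned by ComfyUI's /system_stats endpoint.
// Field names (devices[].vram_total / vram_free, in bytes) are assumptions
// based on the public ComfyUI API.
function summarizeStats(stats) {
  return (stats.devices || []).map((d) => ({
    name: d.name,
    vramFreeGB: +(d.vram_free / 1024 ** 3).toFixed(1),
    vramTotalGB: +(d.vram_total / 1024 ** 3).toFixed(1),
  }));
}

// Sample payload shaped like a single-GPU response:
const sample = {
  devices: [
    { name: "NVIDIA GeForce RTX 4090", vram_total: 25769803776, vram_free: 12884901888 },
  ],
};
console.log(summarizeStats(sample));
// [ { name: 'NVIDIA GeForce RTX 4090', vramFreeGB: 12, vramTotalGB: 24 } ]
```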

🚀 Advanced Features

Performance Optimizations

The optimized build includes:

  • PyTorch 2.5.1: Latest stable with native RMSNorm support
  • BuildKit Cache Mounts: Reduces I/O operations in WSL2
  • FP8 Quantization: FLUX schnell fp8 uses ~10GB VRAM (vs 24GB fp16)
  • Multi-stage Builds: Separates build and runtime dependencies
  • Compiled Python: Pre-compiled bytecode for faster startup

FLUX Model Configurations

Schnell (Default - Fast)

  • Steps: 4 (optimized for schnell)
  • CFG Scale: 1.0 (works best with low guidance)
  • Scheduler: simple
  • Generation Time: ~2-4 seconds per image
  • VRAM Usage: ~10GB base + 1GB per batch

Dev (High Quality)

  • Steps: 20-50
  • CFG Scale: 7.0
  • Scheduler: normal/karras
  • Requires: Hugging Face authentication
  • VRAM Usage: ~12-16GB
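The two configurations can be captured as presets, together with a rough schnell VRAM estimate from the figures above (~10GB base plus ~1GB per batched image). The linear batch model and the concrete dev step count of 28 are illustrative assumptions, not values taken from the workflows:

```javascript
// Documented defaults for the two FLUX variants.
const FLUX_PRESETS = {
  schnell: { steps: 4, cfg_scale: 1.0, scheduler: "simple" },
  dev:     { steps: 28, cfg_scale: 7.0, scheduler: "karras" }, // any 20-50 steps
};

// Rough schnell VRAM estimate: ~10 GB base model + ~1 GB per image in the batch.
function schnellVramGB(batchSize = 1) {
  return 10 + batchSize;
}

console.log(schnellVramGB(1)); // 11
console.log(schnellVramGB(8)); // 18
```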

Batch Generation

Generate multiple images efficiently:

generate_image({
  prompt: "fantasy landscape",
  batch_size: 4  // Generates 4 variations in parallel
})

Custom Nodes

Included custom nodes:

  • ComfyUI-Manager: Node management and updates
  • ComfyUI-KJNodes: Advanced processing nodes
  • ComfyUI-RMBG: Background removal (31 nodes)

🔧 Troubleshooting

Common Issues

GPU Not Detected

# Verify NVIDIA driver
nvidia-smi

# Check Docker GPU support
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi

# Ensure NVIDIA Container Toolkit is installed
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

Out of Memory

# Reduce batch size
batch_size: 1

# Use CPU mode (in .env)
CUDA_VISIBLE_DEVICES=-1

# Adjust PyTorch memory
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256

WSL2 Specific Issues

# If Docker/WSL2 crashes with I/O errors
# Avoid recursive chown on large directories
# Use the optimized Dockerfile which handles this

# Increase WSL2 memory in .wslconfig
memory=20GB

# Reset WSL2 if needed
wsl --shutdown

Port Conflicts

# Check what's using port 8188
lsof -i :8188  # macOS/Linux
netstat -ano | findstr :8188  # Windows

# Use different port
PORT=8189 docker-compose -p mcp-comfyui-flux up -d

Log Locations

  • Installation: install.log
  • Docker builds: docker-compose logs
  • ComfyUI: Inside container at /app/ComfyUI/user/comfyui.log
  • MCP Server: docker logs mcp-comfyui-flux-mcp-server-1

🏗️ Architecture

System Overview

┌─────────────────────────────────────────┐
│      Claude Desktop (MCP Client)        │
└────────────┬────────────────────────────┘
             │ docker exec stdio
┌────────────▼────────────────────────────┐
│      MCP Server Container               │
│   • Node.js 20 Alpine (581MB)          │
│   • MCP Protocol Implementation        │
│   • Auto-connects to ComfyUI           │
└────────────┬────────────────────────────┘
             │ WebSocket (port 8188)
┌────────────▼────────────────────────────┐
│      ComfyUI Container                  │
│   • Ubuntu 22.04 + CUDA 12.1           │
│   • Python 3.11 + PyTorch 2.5.1        │
│   • FLUX schnell fp8 (4.5GB)           │
│   • Custom nodes (KJNodes, RMBG)       │
│   • Optimized image size: 10.9GB       │
└─────────────────────────────────────────┘

Key Improvements

  1. Docker Optimization

    • Multi-stage builds reduce image size by 25%
    • BuildKit cache mounts speed up rebuilds
    • No Python venv (Docker IS the isolation)
  2. Model Configuration

    • FLUX schnell fp8: 4.5GB (vs 11GB fp16)
    • T5-XXL fp8: 4.9GB text encoder
    • CLIP-L: 235MB text encoder
    • VAE: 320MB decoder
  3. Performance

    • 4-step generation in 2-4 seconds
    • Batch processing up to 8 images
    • Native RMSNorm in PyTorch 2.5.1
    • High VRAM mode for 24GB+ GPUs
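The component sizes listed under Model Configuration roughly account for the ~10GB schnell VRAM base figure; summing them as a sanity check:

```javascript
// File sizes in GB, taken from the Model Configuration list above.
const components = {
  "FLUX schnell fp8": 4.5,
  "T5-XXL fp8": 4.9,
  "CLIP-L": 0.235,
  "VAE": 0.32,
};

const totalGB = Object.values(components).reduce((a, b) => a + b, 0);
console.log(`Total: ${totalGB.toFixed(2)} GB`); // just under 10 GB
```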

Directory Structure

mcp-comfyui-flux/
├── src/                    # MCP server source
│   ├── index.js           # MCP protocol handler
│   ├── comfyui-client.js  # WebSocket client
│   └── workflows/         # ComfyUI workflows
├── models/                # Model storage
│   ├── unet/             # FLUX models (fp8)
│   ├── clip/             # Text encoders
│   ├── vae/              # VAE models
│   └── upscale_models/   # Upscaling models
├── output/               # Generated images
├── scripts/              # Utility scripts
├── docker-compose.yml    # Service orchestration
├── Dockerfile.comfyui    # Optimized ComfyUI
├── Dockerfile.mcp        # MCP server
├── requirements.txt      # Python dependencies
├── build.sh             # Build script
└── install.sh           # Automated installer

🔒 Security

  • Local Execution: All processing happens locally
  • No External APIs: Except model downloads from Hugging Face
  • Container Isolation: Services run in isolated containers
  • Non-root Execution: Containers run as non-root user
  • Token Security: Stored in .env (gitignored)

🤝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

📝 License

MIT License - see LICENSE file for details.


Made with ❤️ for efficient AI image generation
