Imagen MCP Server

A Model Context Protocol (MCP) server for image generation using Google's Imagen model and other models supported by the Nexos.ai platform.

Features

  • Simple Image Generation: Generate a single image from a text prompt
  • Batch Image Generation: Generate multiple images with background processing
    • First image is returned immediately
    • Remaining images are generated in the background
    • Query for additional images as they become available
  • Model Catalog: Access comprehensive information about all available models

Supported Models

Model            Provider   Description
imagen-4         Google     Flagship model with excellent prompt following and photorealistic output
imagen-4-fast    Google     Faster variant optimized for speed
imagen-4-ultra   Google     Highest quality for premium image generation
dall-e-3         OpenAI     High-quality model with excellent artistic capabilities
gpt-image-1      OpenAI     Strong prompt understanding and versatile output

Installation

Option 1: Install with pipx (Recommended for CLI usage)

# Install directly from the repository
pipx install git+https://github.com/your-username/Imagen-MCP.git

# Or install from local directory
cd Imagen-MCP
pipx install .

# Run the server
imagen-mcp

Option 2: Install with Poetry (Recommended for development)

# Clone the repository
git clone <repository-url>
cd Imagen-MCP

# Install dependencies with Poetry
poetry install

# Run the server
poetry run imagen-mcp
# Or
poetry run python -m Imagen_MCP.server

Option 3: Install with pip

# Install from the repository
pip install git+https://github.com/your-username/Imagen-MCP.git

# Or install from local directory
pip install .

# Run the server
imagen-mcp

Environment Variables

Set up your Nexos.ai API key:

export NEXOS_API_KEY=your-api-key-here

Or create a .env file:

NEXOS_API_KEY=your-api-key-here
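As an aside, a .env file of this shape is just KEY=value lines. The sketch below shows one way such a file could be parsed and exported; it is illustrative only, not the server's actual loading logic (the `load_env_file` helper is hypothetical).

```python
import tempfile

def load_env_file(path: str) -> dict:
    """Parse simple KEY=value lines from a .env file (illustrative only)."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Write a throwaway .env and read the key back
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("NEXOS_API_KEY=your-api-key-here\n")
    path = f.name

env = load_env_file(path)
print(env["NEXOS_API_KEY"])  # your-api-key-here
```

In practice a library such as python-dotenv handles edge cases (quoting, export prefixes) that this sketch ignores.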

Usage

Running the Server

# If installed with pipx or pip
imagen-mcp

# If using Poetry (development)
poetry run imagen-mcp

# Alternative: run as Python module
poetry run python -m Imagen_MCP.server

# With FastMCP CLI (more options)
poetry run fastmcp run Imagen_MCP/server.py --transport http --port 8000

CLI Options

When using the fastmcp run command, you have additional options:

Option            Description
--transport, -t   Transport protocol: stdio (default), http, sse, streamable-http
--host            Host to bind to (default: 127.0.0.1)
--port, -p        Port for HTTP/SSE transport (default: 8000)
--log-level, -l   Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
--no-banner       Don't show the server banner

MCP Client Configuration

To use this MCP server with an AI agent, add the following configuration to your MCP client.

Claude Desktop (pipx installation)

If you installed with pipx, add to your Claude Desktop configuration file (~/.config/claude/claude_desktop_config.json on Linux, ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "imagen": {
      "command": "imagen-mcp",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}

Claude Desktop (Poetry installation)

If you're using Poetry for development:

{
  "mcpServers": {
    "imagen": {
      "command": "poetry",
      "args": ["run", "imagen-mcp"],
      "cwd": "/path/to/Imagen-MCP",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}

Cline / Roo Code

Add to your VS Code settings or Cline MCP configuration:

{
  "mcpServers": {
    "imagen": {
      "command": "imagen-mcp",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}

Generic MCP Client (Copy-Paste Ready)

For pipx/pip installation:

{
  "imagen": {
    "command": "imagen-mcp",
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}

For Poetry installation:

{
  "imagen": {
    "command": "poetry",
    "args": ["run", "imagen-mcp"],
    "cwd": "/path/to/Imagen-MCP",
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}

Configuration Options:

Field     Description
command   The executable to run (imagen-mcp, or poetry for Poetry-managed setups)
args      Command arguments used to start the MCP server
cwd       Working directory; set to your Imagen-MCP installation path
env       Environment variables, including the required NEXOS_API_KEY

Important: Replace /path/to/Imagen-MCP with the actual path to your Imagen-MCP installation and your-nexos-api-key-here with your Nexos.ai API key.

Alternative: Using pip-installed package

If you install the package globally or in a virtual environment:

{
  "imagen": {
    "command": "python",
    "args": ["-m", "Imagen_MCP.server"],
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}

Tools

list_models

List all available image generation models with their descriptions, capabilities, and use cases.

Parameters: None

Returns:

  • models: List of all available models with details
  • total_count: Number of available models
  • default_model: The default model ID
  • usage_hint: How to use the model parameter

Example Response:

{
  "models": [
    {
      "id": "imagen-4",
      "name": "Imagen 4",
      "provider": "Google",
      "description": "Google's flagship image generation model...",
      "use_cases": ["Photorealistic image generation", ...],
      "strengths": ["Excellent prompt adherence", ...],
      "weaknesses": ["Slower generation time", ...],
      "supported_sizes": ["256x256", "512x512", "1024x1024", ...],
      "max_images_per_request": 4,
      "supports_hd_quality": true,
      "rate_limit": "100 messages per 3 hours"
    },
    ...
  ],
  "total_count": 5,
  "default_model": "imagen-4"
}
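A client consuming this response can filter the catalog before choosing a model. The snippet below uses a trimmed stand-in for the real response (the field values shown are placeholders, not authoritative model data):

```python
# Illustrative: filter a list_models-style response client-side.
# The catalog here is a trimmed stand-in with placeholder values.
catalog = {
    "models": [
        {"id": "imagen-4", "provider": "Google", "supports_hd_quality": True},
        {"id": "imagen-4-fast", "provider": "Google", "supports_hd_quality": False},
        {"id": "dall-e-3", "provider": "OpenAI", "supports_hd_quality": True},
    ],
    "default_model": "imagen-4",
}

# Pick out HD-capable models and models from a given provider
hd_models = [m["id"] for m in catalog["models"] if m["supports_hd_quality"]]
google_models = [m["id"] for m in catalog["models"] if m["provider"] == "Google"]
print(hd_models)      # ['imagen-4', 'dall-e-3']
print(google_models)  # ['imagen-4', 'imagen-4-fast']
```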

get_model_details

Get detailed information about a specific image generation model.

Parameters:

  • model_id (required): The model identifier (e.g., "imagen-4", "imagen-4-fast", "dall-e-3")

Returns:

  • Complete model details including capabilities, rate limits, use cases, strengths, and weaknesses
  • Error message if model not found

Example:

result = get_model_details(model_id="imagen-4-fast")

generate_image

Generate a single image from a text prompt. The image is saved to a file (temporary file if no path specified).

Parameters:

  • prompt (required): Text description of the image to generate
  • model (optional): Model to use (default: "imagen-4")
  • size (optional): Image size (default: "1024x1024")
  • quality (optional): Image quality - "standard" or "hd" (default: "standard")
  • style (optional): Image style - "vivid" or "natural" (default: "vivid")

Returns:

  • success: Whether the image was generated successfully
  • file_path: Absolute path to the saved image file
  • file_size_bytes: Size of the saved image file in bytes
  • model_used: The model that was used for generation
  • revised_prompt: The revised prompt (if the model modified it)
  • error: Error message if generation failed

Example:

result = await generate_image(
    prompt="A serene mountain landscape at sunset",
    model="imagen-4",
    size="1024x1024",
    quality="hd",
    style="natural"
)
if result.success:
    print(f"Image saved to: {result.file_path}")
    print(f"File size: {result.file_size_bytes} bytes")

start_image_batch

Start generating multiple images and return the first one immediately. Images are saved to files (in a temporary directory if no path specified).

Parameters:

  • prompt (required): Text description of the image to generate
  • count (optional): Number of images to generate, 2-10 (default: 4)
  • model (optional): Model to use (default: "imagen-4")
  • size (optional): Image size (default: "1024x1024")
  • quality (optional): Image quality (default: "standard")
  • style (optional): Image style (default: "vivid")

Returns:

  • success: Whether the batch was started successfully
  • session_id: ID for retrieving more images
  • first_image_path: Path to the first generated image file
  • first_image_size_bytes: Size of the first image file in bytes
  • pending_count: Number of images still being generated
  • error: Error message if batch failed to start

Example:

result = await start_image_batch(
    prompt="A futuristic cityscape",
    count=5,
    model="imagen-4"
)
if result.success:
    print(f"Session ID: {result.session_id}")
    print(f"First image: {result.first_image_path}")

get_next_image

Get the next available image from a batch generation session. The image is saved to a file (temporary file if no path specified).

Parameters:

  • session_id (required): Session ID from start_image_batch
  • timeout (optional): Maximum wait time in seconds (default: 60)

Returns:

  • success: Whether an image was retrieved
  • file_path: Path to the saved image file (or null if no image available)
  • file_size_bytes: Size of the saved image file in bytes
  • has_more: Whether more images are available or pending
  • pending_count: Number of images still being generated
  • error: Error message if retrieval failed

Example:

while True:
    result = await get_next_image(session_id=session_id)
    if result.file_path:
        print(f"Image saved to: {result.file_path}")
    if not result.has_more:
        break

get_batch_status

Get the current status of a batch generation session.

Parameters:

  • session_id (required): Session ID from start_image_batch

Returns:

  • status: Session status (created, generating, partial, completed, failed)
  • completed_count: Number of completed images
  • pending_count: Number of pending images
  • total_count: Total number of requested images
  • errors: List of any errors encountered
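The batch lifecycle described above (first image returned immediately, the rest produced in the background and drained one at a time) can be sketched as a local asyncio simulation. Nothing here reflects the server's internals; `simulate_batch` and its queue are purely illustrative stand-ins for start_image_batch / get_next_image:

```python
import asyncio

async def simulate_batch(prompt: str, count: int = 4) -> list[str]:
    """Local stand-in for the batch tools: first image returned
    immediately, the rest pushed to a queue by a background task."""
    queue: asyncio.Queue[str] = asyncio.Queue()

    async def generate(i: int) -> str:
        await asyncio.sleep(0.01)  # stand-in for the real API call
        return f"/tmp/{prompt}_{i}.png"

    # First image generated up front, like start_image_batch
    first = await generate(0)

    async def background() -> None:
        for i in range(1, count):
            await queue.put(await generate(i))

    task = asyncio.create_task(background())

    # Drain the remaining images, like repeated get_next_image calls
    images = [first]
    while len(images) < count:
        images.append(await queue.get())
    await task
    return images

images = asyncio.run(simulate_batch("cityscape", count=4))
print(len(images))  # 4
```

The real tools add what the simulation omits: session IDs, timeouts, per-image error tracking, and the status values reported by get_batch_status.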

Resources

models://image-generation

Get the complete catalog of available image generation models with their capabilities, rate limits, use cases, strengths, and weaknesses.

models://image-generation/{model_id}

Get detailed information about a specific model.

Development

Running Tests

# Run all tests
poetry run pytest

# Run with verbose output
poetry run pytest -v

# Run specific test file
poetry run pytest tests/unit/test_generate_image.py

Project Structure

Imagen_MCP/
├── __init__.py              # Package exports
├── server.py                # FastMCP server definition
├── config.py                # Configuration management
├── constants.py             # Constants and type definitions
├── exceptions.py            # Custom exceptions
├── tools/
│   ├── generate_image.py    # Simple image generation tool
│   └── batch_generate.py    # Batch generation tools
├── resources/
│   └── models.py            # Model catalog resource
├── services/
│   ├── nexos_client.py      # Nexos.ai API client
│   ├── session_manager.py   # Background generation session manager
│   └── model_registry.py    # Model information registry
└── models/
    ├── image.py             # Image data models
    ├── generation.py        # Generation request/response models
    └── session.py           # Session state models

Rate Limits

All models are in Category 3 on Nexos.ai:

  • 100 messages per 3 hours

License

MIT License
