Nano Banana MCP Server (CLIProxyAPI Edition) 🍌
A production-ready Model Context Protocol (MCP) server that provides AI-powered image generation capabilities through Google's Gemini models with intelligent model selection.
Downstream fork of zhongweili/nanobanana-mcp-server: https://github.com/zhongweili/nanobanana-mcp-server
CLIProxyAPI docs: https://help.router-for.me/cn/introduction/what-is-cliproxyapi.html
Updates in this fork: CLIProxyAPI backend support (Gemini-compatible proxy), updated docs/config examples.
⭐ NEW: Gemini 3 Pro Image Support! 🚀
Now featuring Nano Banana Pro - Google's latest and most powerful image generation model:
- 🏆 Professional 4K Quality: Generate stunning images up to 3840px resolution
- 🌐 Google Search Grounding: Access real-world knowledge for factually accurate images
- 🧠 Advanced Reasoning: Configurable thinking levels for complex compositions
- 🎯 Superior Text Rendering: Crystal-clear text in images at high resolution
- 🎨 Enhanced Understanding: Better context comprehension for complex prompts
Upstream MCP registry listing: <a href="https://glama.ai/mcp/servers/@zhongweili/nanobanana-mcp-server"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@zhongweili/nanobanana-mcp-server/badge" alt="nanobanana-mcp-server MCP server" /> </a>
✨ Features
- 🎨 Multi-Model AI Image Generation: Intelligent selection between Flash (speed) and Pro (quality) models
- ⚡ Gemini 2.5 Flash Image: Fast generation (1024px) for rapid prototyping
- 🏆 Gemini 3 Pro Image: High-quality up to 4K with Google Search grounding
- 🤖 Smart Model Selection: Automatically chooses optimal model based on your prompt
- 📐 Aspect Ratio Control ⭐ NEW: Specify output dimensions (1:1, 16:9, 9:16, 21:9, and more)
- 📋 Smart Templates: Pre-built prompt templates for photography, design, and editing
- 📁 File Management: Upload and manage files via Gemini Files API
- 🔍 Resource Discovery: Browse templates and file metadata through MCP resources
- 🛡️ Production Ready: Comprehensive error handling, logging, and validation
- ⚡ High Performance: Optimized architecture with intelligent caching
🚀 Quick Start
Prerequisites
- One of the following:
- Google Gemini API Key - get one free at Google AI Studio
- CLIProxyAPI running locally (Gemini-compatible proxy)
- Python 3.11+ (for development only)
Installation
Option 1: Upstream MCP Registry (Gemini direct only)
The upstream package is listed in the Model Context Protocol Registry.
mcp-name: io.github.zhongweili/nanobanana-mcp-server
Option 2: From Source (this fork, recommended for CLIProxyAPI)
git clone https://github.com/ion-aluminium/nanobanana-mcp-cliproxyapi.git
cd nanobanana-mcp-cliproxyapi
uv sync
Option 3: Install from Git (this fork)
pip install git+https://github.com/ion-aluminium/nanobanana-mcp-cliproxyapi.git
🔧 Configuration
Authentication Methods
Nano Banana supports these authentication options:
- API Key (api_key): uses GEMINI_API_KEY. Best for local development and simple deployments.
- Vertex AI ADC (vertex_ai): uses Google Cloud Application Default Credentials. Best for production on Google Cloud (Cloud Run, GKE, GCE).
- Automatic (auto): defaults to API Key if present, otherwise tries Vertex AI.
- CLIProxyAPI: set CLIPROXY_BASE_URL to route requests through a local Gemini-compatible proxy (bypasses Google SDK auth).
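The resolution order above can be sketched as follows (an illustrative helper; the function name is an assumption, not the server's actual code):

```python
import os

def resolve_auth_method(env=os.environ):
    """Sketch of the authentication resolution order described above.

    Returns one of: "cliproxy", "api_key", "vertex_ai".
    """
    # CLIPROXY_BASE_URL short-circuits Google SDK auth entirely.
    if env.get("CLIPROXY_BASE_URL"):
        return "cliproxy"

    # An explicit method wins over auto-detection.
    method = env.get("NANOBANANA_AUTH_METHOD", "auto")
    if method in ("api_key", "vertex_ai"):
        return method

    # "auto": prefer an API key if present, otherwise fall back to Vertex AI ADC.
    return "api_key" if env.get("GEMINI_API_KEY") else "vertex_ai"
```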
1. API Key Authentication (Default)
Set GEMINI_API_KEY environment variable.
2. Vertex AI Authentication (Google Cloud)
Required environment variables:
- NANOBANANA_AUTH_METHOD=vertex_ai (or auto)
- GCP_PROJECT_ID=your-project-id
- GCP_REGION=us-central1 (default)
Prerequisites:
- Enable the Vertex AI API: gcloud services enable aiplatform.googleapis.com
- Grant the IAM role roles/aiplatform.user to the service account.
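Assuming the gcloud CLI is installed, the prerequisites above can be applied like this (the project ID and service account email are placeholders):

# Enable the Vertex AI API on your project
gcloud services enable aiplatform.googleapis.com --project=your-project-id

# Grant the Vertex AI user role to the service account the server runs as
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:nanobanana@your-project-id.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

# Set up Application Default Credentials for local testing
gcloud auth application-default login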
Claude Desktop
Option 1: CLIProxyAPI (this fork)
Add to your claude_desktop_config.json:
{
"mcpServers": {
"nanobanana": {
"command": "python3",
"args": ["-m", "nanobanana_mcp_server.server"],
"env": {
"CLIPROXY_BASE_URL": "http://127.0.0.1:8318",
"CLIPROXY_API_KEY": "sk-your-cli-proxy-key",
"NANOBANANA_MODEL": "pro",
"NO_PROXY": "127.0.0.1,localhost"
}
}
}
}
If you are running from source, you can use uv run python -m nanobanana_mcp_server.server and set cwd to your local repository.
Option 2: Upstream PyPI (Gemini direct only)
{
"mcpServers": {
"nanobanana": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Option 3: Using Vertex AI (ADC) (Upstream only)
To authenticate with Google Cloud Application Default Credentials (instead of an API Key):
{
"mcpServers": {
"nanobanana-adc": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"NANOBANANA_AUTH_METHOD": "vertex_ai",
"GCP_PROJECT_ID": "your-project-id",
"GCP_REGION": "us-central1"
}
}
}
}
Configuration file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Claude Code (VS Code Extension)
Install and configure in VS Code:
1. Install the Claude Code extension
2. Open the Command Palette (Cmd/Ctrl + Shift + P)
3. Run "Claude Code: Add MCP Server"
4. Configure:
{
  "name": "nanobanana",
  "command": "python3",
  "args": ["-m", "nanobanana_mcp_server.server"],
  "env": {
    "CLIPROXY_BASE_URL": "http://127.0.0.1:8318",
    "CLIPROXY_API_KEY": "sk-your-cli-proxy-key",
    "NANOBANANA_MODEL": "pro",
    "NO_PROXY": "127.0.0.1,localhost"
  }
}
Upstream PyPI (Gemini direct) uses uvx nanobanana-mcp-server@latest and GEMINI_API_KEY.
Cursor
Add to Cursor's MCP configuration:
{
"mcpServers": {
"nanobanana": {
"command": "python3",
"args": ["-m", "nanobanana_mcp_server.server"],
"env": {
"CLIPROXY_BASE_URL": "http://127.0.0.1:8318",
"CLIPROXY_API_KEY": "sk-your-cli-proxy-key",
"NANOBANANA_MODEL": "pro",
"NO_PROXY": "127.0.0.1,localhost"
}
}
}
}
Upstream PyPI (Gemini direct) uses uvx nanobanana-mcp-server@latest and GEMINI_API_KEY.
Continue.dev (VS Code/JetBrains)
Add to your config.json:
{
"mcpServers": [
{
"name": "nanobanana",
"command": "python3",
"args": ["-m", "nanobanana_mcp_server.server"],
"env": {
"CLIPROXY_BASE_URL": "http://127.0.0.1:8318",
"CLIPROXY_API_KEY": "sk-your-cli-proxy-key",
"NANOBANANA_MODEL": "pro",
"NO_PROXY": "127.0.0.1,localhost"
}
}
]
}
Upstream PyPI (Gemini direct) uses uvx nanobanana-mcp-server@latest and GEMINI_API_KEY.
Open WebUI
Configure in Open WebUI settings:
{
"mcp_servers": {
"nanobanana": {
"command": ["python3", "-m", "nanobanana_mcp_server.server"],
"env": {
"CLIPROXY_BASE_URL": "http://127.0.0.1:8318",
"CLIPROXY_API_KEY": "sk-your-cli-proxy-key",
"NANOBANANA_MODEL": "pro",
"NO_PROXY": "127.0.0.1,localhost"
}
}
}
}
Upstream PyPI (Gemini direct) uses ["uvx", "nanobanana-mcp-server@latest"] and GEMINI_API_KEY.
Gemini CLI / Generic MCP Client
# CLIProxyAPI (this fork)
export CLIPROXY_BASE_URL="http://127.0.0.1:8318"
export CLIPROXY_API_KEY="sk-your-cli-proxy-key"
export NANOBANANA_MODEL="pro"
export NO_PROXY="127.0.0.1,localhost"
python3 -m nanobanana_mcp_server.server
# Upstream PyPI (Gemini direct)
export GEMINI_API_KEY="your-gemini-api-key-here"
uvx nanobanana-mcp-server@latest
CLIProxyAPI Mode (Gemini-Compatible Proxy) ⭐ NEW
Set the following environment variables and run as usual:
export CLIPROXY_BASE_URL="http://127.0.0.1:8318"
export CLIPROXY_API_KEY="sk-your-cli-proxy-key"
export NANOBANANA_MODEL="pro"
export NO_PROXY="127.0.0.1,localhost"
uv run python -m nanobanana_mcp_server.server
See the full guide in docs/CLIPROXYAPI.md.
🤖 Model Selection
Nano Banana supports two Gemini models with intelligent automatic selection:
🏆 Pro Model - Nano Banana Pro (Gemini 3 Pro Image) ⭐ NEW!
Google's latest and most advanced image generation model
- Quality: Professional-grade, production-ready
- Resolution: Up to 4K (3840px) - highest available
- Speed: ~5-8 seconds per image
- Special Features:
- 🌐 Google Search Grounding: Leverages real-world knowledge for accurate, contextual images
- 🧠 Advanced Reasoning: Configurable thinking levels (LOW/HIGH) for complex compositions
- 📐 Media Resolution Control: Fine-tune vision processing detail (LOW/MEDIUM/HIGH/AUTO)
- 📝 Superior Text Rendering: Exceptional clarity for text-in-image generation
- 🎨 Enhanced Context Understanding: Better interpretation of complex, narrative prompts
- Best for: Production assets, marketing materials, professional photography, high-fidelity outputs, images requiring text, factual accuracy
- Cost: Higher per image (premium quality)
⚡ Flash Model (Gemini 2.5 Flash Image)
Fast, reliable model for rapid iteration
- Speed: Very fast (2-3 seconds)
- Resolution: Up to 1024px
- Quality: High quality for everyday use
- Best for: Rapid prototyping, iterations, high-volume generation, drafts, sketches
- Cost: Lower per image
🤖 Automatic Selection (Recommended)
By default, the server uses AUTO mode which intelligently analyzes your prompt and requirements:
Pro Model Selected When:
- Quality keywords detected: "4K", "professional", "production", "high-res", "HD"
- High resolution requested: resolution="4k" or resolution="high"
- Google Search grounding enabled: enable_grounding=True
- High thinking level requested: thinking_level="HIGH"
- Multi-image conditioning with multiple input images
Flash Model Selected When:
- Speed keywords detected: "quick", "draft", "sketch", "rapid"
- High-volume batch generation: n > 2
- Standard or lower resolution requested
- No special Pro features required
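The AUTO heuristic above can be sketched as a simple keyword-and-flags check (illustrative only; the keyword sets, function name, and default are assumptions, not the server's actual implementation):

```python
# Hypothetical keyword sets, mirroring the selection rules listed above.
QUALITY_KEYWORDS = {"4k", "professional", "production", "high-res", "hd"}
SPEED_KEYWORDS = {"quick", "draft", "sketch", "rapid"}

def choose_model(prompt, n=1, resolution="standard",
                 enable_grounding=False, thinking_level=None):
    """Pick "pro" or "flash" following the rules listed above."""
    words = set(prompt.lower().split())
    # Pro triggers: quality keywords, high resolution, or Pro-only features.
    if (words & QUALITY_KEYWORDS
            or resolution in ("4k", "high")
            or enable_grounding
            or thinking_level == "HIGH"):
        return "pro"
    # Flash triggers: speed keywords or high-volume batches.
    if words & SPEED_KEYWORDS or n > 2:
        return "flash"
    # Default to Flash for everyday requests.
    return "flash"
```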
Usage Examples
# Automatic selection (recommended)
"Generate a professional 4K product photo" # → Pro model (quality keywords + 4K)
"Quick sketch of a cat" # → Flash model (speed keyword)
"Create a diagram with clear text labels" # → Pro model (text rendering)
"Draft mockup for website hero section" # → Flash model (draft keyword)
# Explicit model selection
generate_image(
prompt="A scenic landscape",
model_tier="flash" # Force Flash model for speed
)
# Leverage Nano Banana Pro features
generate_image(
prompt="Professional product photo of vintage camera on wooden desk",
model_tier="pro", # Use Pro model
resolution="4k", # 4K resolution (Pro-only)
thinking_level="HIGH", # Enhanced reasoning
enable_grounding=True, # Use Google Search for accuracy
media_resolution="HIGH" # High-detail vision processing
)
# Pro model for high-quality text rendering
generate_image(
prompt="Infographic showing 2024 market statistics with clear labels",
model_tier="pro", # Pro excels at text rendering
resolution="4k" # Maximum clarity for text
)
# Control aspect ratio for different formats ⭐ NEW!
generate_image(
prompt="Cinematic landscape at sunset",
aspect_ratio="21:9" # Ultra-wide cinematic format
)
generate_image(
prompt="Instagram post about coffee",
aspect_ratio="1:1" # Square format for social media
)
generate_image(
prompt="YouTube thumbnail design",
aspect_ratio="16:9" # Standard video format
)
generate_image(
prompt="Mobile wallpaper of mountain vista",
aspect_ratio="9:16" # Portrait format for phones
)
📐 Aspect Ratio Control ⭐ NEW!
Control the output image dimensions with the aspect_ratio parameter:
Supported Aspect Ratios:
- 1:1 - Square (Instagram, profile pictures)
- 4:3 - Classic photo format
- 3:4 - Portrait orientation
- 16:9 - Widescreen (YouTube thumbnails, presentations)
- 9:16 - Mobile portrait (phone wallpapers, stories)
- 21:9 - Ultra-wide cinematic
- 2:3, 3:2, 4:5, 5:4 - Various photo formats
# Examples for different use cases
generate_image(
prompt="Product showcase for e-commerce",
aspect_ratio="3:4", # Portrait format, good for product pages
model_tier="pro"
)
generate_image(
prompt="Social media banner for Facebook",
aspect_ratio="16:9" # Landscape banner format
)
Note: Aspect ratio works with both Flash and Pro models. For best results with specific aspect ratios at high resolution, use the Pro model with resolution="4k".
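To see how a ratio string maps to pixel dimensions at a given output size, a small helper like this can be used (an illustrative sketch, not part of the server's API; 1024 and 3840 correspond to the Flash and Pro long sides mentioned above):

```python
def dims_for_ratio(aspect_ratio, long_side=1024):
    """Map an aspect ratio string like "16:9" to (width, height) pixels."""
    w, h = map(int, aspect_ratio.split(":"))
    if w >= h:
        # Landscape or square: the long side is the width.
        return long_side, round(long_side * h / w)
    # Portrait: the long side is the height.
    return round(long_side * w / h), long_side
```

For example, 16:9 at the Pro model's 4K long side yields 3840x2160, while 9:16 yields 2160x3840.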
⚙️ Environment Variables
Configuration options:
# Authentication (Required)
# Method 1: API Key
GEMINI_API_KEY=your-gemini-api-key-here
# Method 2: Vertex AI (Google Cloud)
NANOBANANA_AUTH_METHOD=vertex_ai
GCP_PROJECT_ID=your-project-id
GCP_REGION=us-central1
# Method 3: CLIProxyAPI (local OAuth/proxy)
# When CLIPROXY_BASE_URL is set, the server uses CLIProxyAPI instead of Google SDK.
CLIPROXY_BASE_URL=http://127.0.0.1:8318
# Provide API key directly or point to CLIProxyAPI config.yaml
CLIPROXY_API_KEY=sk-your-cli-proxy-key
# or:
CLIPROXY_CONFIG=/root/cliproxyapi/config.yaml
# Model Selection (optional)
NANOBANANA_MODEL=auto # Options: flash, pro, auto (default: auto)
# Optional
IMAGE_OUTPUT_DIR=/path/to/image/directory # Default: ~/nanobanana-images
LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
LOG_FORMAT=standard # standard, json, detailed
CLIProxyAPI mode notes
- Uses CLIProxyAPI's Gemini-compatible /v1beta/models/{model}:generateContent endpoint.
- Gemini Files API features are not available (file_id upload / retrieval disabled).
- Local file editing (input_image_path_*) still works.
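In proxy mode, requests follow the public Gemini REST API shape against the endpoint above. A minimal sketch of what such a request could look like (the function, model name, and header choice are illustrative assumptions, not the server's actual code):

```python
import json
from urllib import request

def build_generate_request(base_url, model, prompt, api_key):
    """Assemble a Gemini-style generateContent request for the proxy."""
    url = f"{base_url}/v1beta/models/{model}:generateContent"
    # Minimal text-only payload per the public Gemini REST API.
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    headers = {
        "Content-Type": "application/json",
        # Gemini-style API key header; CLIProxyAPI key goes here in this sketch.
        "x-goog-api-key": api_key,
    }
    return request.Request(url, data=json.dumps(body).encode(),
                           headers=headers, method="POST")

# Example (built but not sent; model name is a placeholder):
req = build_generate_request("http://127.0.0.1:8318",
                             "gemini-3-pro-image", "A red bicycle", "sk-key")
```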
🐛 Troubleshooting
Common Issues
"GEMINI_API_KEY not set"
- Add your API key to the MCP server configuration in your client
- Get a free API key at Google AI Studio
"Server failed to start"
- Ensure you're using the latest version: uvx nanobanana-mcp-server@latest
- Check that your client supports MCP (Claude Desktop 0.10.0+)
"Permission denied" errors
- The server creates images in ~/nanobanana-images by default
- Ensure write permissions to your home directory
Development Setup
For local development:
# Clone repository
git clone https://github.com/ion-aluminium/nanobanana-mcp-cliproxyapi.git
cd nanobanana-mcp-cliproxyapi
# Install with uv
uv sync
# Set environment
export GEMINI_API_KEY=your-api-key-here
# Run locally
uv run python -m nanobanana_mcp_server.server
📄 License
MIT License - see LICENSE for details.
🆘 Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Upstream: zhongweili/nanobanana-mcp-server