# genai-mcp: GenAI MCP Server for Image Generation (e.g. Nano Banana)
This project implements a Model Context Protocol (MCP) server for image generation and image editing using Google Gemini (via google.golang.org/genai) and Tongyi Wanxiang (Ali Bailian) image APIs, plus optional automatic upload of generated images to S3‑compatible object storage (AWS S3, Aliyun OSS, etc.).
The server exposes a streamable HTTP MCP endpoint and provides tools for Gemini and Wan:
- `gemini_generate_image` – text → image
- `gemini_edit_image` – image + text → edited image
## Gemini / Nano Banana backend support
This MCP server currently supports the following Gemini‑compatible backends:

- Google official Gemini API
  - Use the default `GENAI_BASE_URL=https://generativelanguage.googleapis.com`
  - `GENAI_API_KEY` is a Google Gemini API key
- dmxapi (Gemini‑compatible third‑party gateway)
  - Set `GENAI_BASE_URL` to the dmxapi Gemini endpoint (for example `https://www.dmxapi.cn`)
  - `GENAI_API_KEY` is the key issued by dmxapi
  - As long as the endpoint implements the `google.golang.org/genai`‑compatible Gemini API, no code changes are needed
## Tongyi Wanxiang (Ali Bailian) backend support
When `GENAI_PROVIDER=wan`, the server uses the Ali Bailian Tongyi Wanxiang image APIs (via DashScope) instead of Gemini:
- Set:
  - `GENAI_PROVIDER=wan`
  - `GENAI_BASE_URL=https://dashscope.aliyuncs.com`
  - `GENAI_API_KEY=<your DashScope API key>`
  - `GENAI_GEN_MODEL_NAME=wan2.5-t2i-preview` (text → image)
  - `GENAI_EDIT_MODEL_NAME=wan2.5-i2i-preview` (image → image)
- Wan provides a separate MCP tool set (see `internal/tools/wan.go`):
  - `wan_create_generate_image_task`
  - `wan_query_generate_image_task`
  - `wan_create_edit_image_task`
  - `wan_query_edit_image_task`
The Python test client in `tests/mcp_client.py` automatically routes calls to Gemini or Wan based on `GENAI_PROVIDER` (`gemini` by default, `wan` for Tongyi Wanxiang).
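That provider-based dispatch can be sketched as follows. This is a minimal illustration in Python, not the actual client code; the `pick_tools` helper is hypothetical, but the tool names are the ones listed above.

```python
# Sketch of provider-based tool routing, mirroring the behavior described
# for tests/mcp_client.py. The pick_tools helper is hypothetical.

def pick_tools(provider: str = "gemini") -> dict:
    """Return the MCP tool names to use for a given GENAI_PROVIDER."""
    if provider == "wan":
        # Wan uses an async task model: create a task, then poll it.
        return {
            "generate": ("wan_create_generate_image_task",
                         "wan_query_generate_image_task"),
            "edit": ("wan_create_edit_image_task",
                     "wan_query_edit_image_task"),
        }
    # Default: synchronous Gemini tools.
    return {
        "generate": ("gemini_generate_image",),
        "edit": ("gemini_edit_image",),
    }

print(pick_tools("wan")["generate"][0])
```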
## 1. Prerequisites
- Go 1.21+ (recommended; `go.mod` uses module mode)
- A valid Gemini API key
- Optional: S3 / OSS bucket for storing images
## 2. Configuration (.env)
Copy `env.example` to `.env`, then fill in real values.
### GenAI configuration
```
# GenAI provider:
# - gemini: Google Gemini / compatible backend
# - wan: Ali Bailian Tongyi Wanxiang image APIs
GENAI_PROVIDER=gemini

# Shared GenAI endpoint / key for both providers
GENAI_BASE_URL=https://generativelanguage.googleapis.com
GENAI_API_KEY=your_api_key_here

# Model names:
# - When GENAI_PROVIDER=gemini: Gemini model names, e.g. gemini-3-pro-image-preview
# - When GENAI_PROVIDER=wan: Wanxiang model names, e.g. wan2.5-t2i-preview / wan2.5-i2i-preview
GENAI_GEN_MODEL_NAME=gemini-3-pro-image-preview
GENAI_EDIT_MODEL_NAME=gemini-3-pro-image-preview

# Request timeout in seconds for each GenAI call (generate / edit)
GENAI_TIMEOUT_SECONDS=120

# Image output format:
# - base64: return image as data URI (base64 encoded)
# - url: upload image to OSS and return plain URL
GENAI_IMAGE_FORMAT=base64
```
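For reference, the `.env` format above is plain `KEY=VALUE` lines with `#` comments. A minimal parser sketch, purely to illustrate the format (the Go server loads its configuration with its own dotenv-style code, not this function):

```python
# Sketch: parse the KEY=VALUE .env format shown above.
# Illustrative only; not the server's actual config loader.

def parse_env(text: str) -> dict:
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """
# GenAI provider
GENAI_PROVIDER=gemini
GENAI_TIMEOUT_SECONDS=120
"""
print(parse_env(sample)["GENAI_PROVIDER"])  # gemini
```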
### HTTP server

```
SERVER_ADDRESS=0.0.0.0
SERVER_PORT=8080
```

The MCP endpoint will listen on `http://SERVER_ADDRESS:SERVER_PORT/mcp`.
### OSS / S3 configuration (optional, required when `GENAI_IMAGE_FORMAT=url`)

```
# For AWS S3: leave OSS_ENDPOINT empty or set to s3.amazonaws.com
# For Aliyun OSS: set to oss-cn-hangzhou.aliyuncs.com or your region
# For Tencent COS: set to cos.ap-guangzhou.myqcloud.com
# For MinIO: set to your MinIO endpoint
OSS_ENDPOINT=
OSS_REGION=us-east-1
OSS_ACCESS_KEY=your_access_key_here
OSS_SECRET_KEY=your_secret_key_here
OSS_BUCKET=your_bucket_name
```
When `GENAI_IMAGE_FORMAT=url`:

- For Aliyun OSS: make sure `OSS_ENDPOINT` is like `oss-cn-beijing.aliyuncs.com`
- Make sure the bucket policy allows read access if you expect the returned URL to be publicly accessible
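For S3-compatible storage, public object URLs typically take the virtual-hosted style `https://{bucket}.{endpoint}/{key}`. A small sketch; whether the server returns exactly this form is an assumption, so verify against the URLs it actually emits:

```python
# Sketch: typical virtual-hosted-style public URL for S3-compatible storage.
# Assumption: the server builds URLs of this shape; verify with real output.

def public_url(endpoint: str, bucket: str, key: str) -> str:
    return f"https://{bucket}.{endpoint}/{key}"

print(public_url("oss-cn-beijing.aliyuncs.com", "my-bucket",
                 "images/2024-01-01/abc.png"))
# https://my-bucket.oss-cn-beijing.aliyuncs.com/images/2024-01-01/abc.png
```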
## 3. Running the MCP Server
You can run the MCP server in two ways:

1. Clone & build from source
   1. Clone this repo and enter the project root
   2. Copy `env.example` to `.env` and fill in your configuration
   3. Run:

      ```shell
      go build .
      ./genai-mcp
      ```

2. Download a release binary
   1. Download the appropriate binary from the Releases page
   2. Place it in a directory of your choice
   3. Copy `env.example` from this repo (or from the release asset) to `.env` in the same directory and update the configuration
   4. Run (binary name may vary by platform):

      ```shell
      ./genai-mcp
      ```
By default the MCP HTTP endpoint will be `http://127.0.0.1:8080/mcp`.
You can connect to this MCP endpoint from any MCP‑compatible client (e.g. code editors or tools that support the streamable HTTP MCP transport).
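Under the streamable HTTP transport, a tool invocation is a JSON-RPC 2.0 message POSTed to the `/mcp` endpoint. A sketch of the request body for `gemini_generate_image` (payload shape per the MCP specification; no network call is made here, and a real client must first complete the MCP initialize handshake):

```python
import json

# Sketch: the JSON-RPC 2.0 body an MCP client POSTs to
# http://127.0.0.1:8080/mcp to invoke a tool (after initialization).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "gemini_generate_image",
        "arguments": {"prompt": "a banana wearing sunglasses"},
    },
}
print(json.dumps(request, indent=2))
```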
## 4. MCP Tools
The server registers two tools in `internal/tools/gemini.go`:

- `gemini_generate_image`
  - Input:
    - `prompt` (string, required): text prompt describing the image
  - Output:
    - When `GENAI_IMAGE_FORMAT=base64`: a `data:image/...;base64,...` string
    - When `GENAI_IMAGE_FORMAT=url`: an OSS/S3 URL generated by the server
- `gemini_edit_image`
  - Input:
    - `prompt` (string, required): how to edit the image
    - `image_url` (string, required): original image URL or data URI
  - Output:
    - Same format as above (`base64` or `url`), depending on configuration
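Client code can tell the two output formats apart by prefix: a data URI starts with `data:image/`, otherwise the result is a plain URL. A minimal sketch (`decode_result` is a hypothetical helper, not part of this repo):

```python
import base64

# Sketch: distinguish the two result formats a tool call may return.
# decode_result is a hypothetical helper, not code from this repo.

def decode_result(result: str):
    if result.startswith("data:image/"):
        # base64 mode: data:image/png;base64,<payload>
        header, _, payload = result.partition(",")
        return ("bytes", base64.b64decode(payload))
    # url mode: plain OSS/S3 URL
    return ("url", result)

kind, value = decode_result("data:image/png;base64," +
                            base64.b64encode(b"\x89PNG").decode())
print(kind)  # bytes
```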
When `GENAI_IMAGE_FORMAT=url`, generated / edited images are:

- Downloaded (if Gemini returns a URL), or decoded (if it returns inline data)
- Re‑uploaded to OSS / S3
- Stored under the key pattern `images/yyyy-MM-dd/{uuid_timestamp_random}.ext`
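A key matching that pattern can be produced along these lines; the exact uuid/timestamp/random composition used by the server is an assumption:

```python
import uuid
from datetime import datetime, timezone

# Sketch: build an object key like images/yyyy-MM-dd/{uuid_timestamp_random}.ext.
# The server's exact name composition is an assumption.

def object_key(ext: str = "png") -> str:
    now = datetime.now(timezone.utc)
    day = now.strftime("%Y-%m-%d")
    name = f"{uuid.uuid4().hex}_{int(now.timestamp())}"
    return f"images/{day}/{name}.{ext}"

print(object_key())
```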
## 5. Contact
- WeChat: scan the QR code below to add as a friend

  

- Discord: username `adamydwang`
## Star History