MCP OpenAI Image Generation Server
Enables AI assistants to generate and edit images through OpenAI's DALL-E models via MCP tools. Supports text-to-image generation and image-to-image editing with configurable parameters for size, quality, and style.
🚀 Zero-install setup! Use it directly from your MCP client, no pre-installation steps required.
```json
{
  "mcpServers": {
    "imagegen-mcp": {
      "command": "npx",
      "args": ["@lupinlin1/imagegen-mcp", "--models", "dall-e-3"],
      "env": { "OPENAI_API_KEY": "your_api_key" }
    }
  }
}
```
This project provides a server implementation based on the Model Context Protocol (MCP) that acts as a wrapper around OpenAI's Image Generation and Editing APIs (see OpenAI documentation).
Features
- Exposes OpenAI image generation capabilities through MCP tools.
- Supports text-to-image generation using models like DALL-E 2, DALL-E 3, and gpt-image-1 (if available/enabled).
- Supports image-to-image editing using DALL-E 2 and gpt-image-1 (if available/enabled).
- Configurable via environment variables and command-line arguments.
- Handles various parameters like size, quality, style, format, etc.
- Saves generated/edited images to temporary files and returns the path along with the base64 data.
Here's an example of generating an image directly in Cursor using the text-to-image tool integrated via MCP:
<div align="center"> <img src="https://raw.githubusercontent.com/spartanz51/imagegen-mcp/refs/heads/main/cursor.gif" alt="Example usage in Cursor" width="600"/> </div>
🚀 Installation
🎯 Zero-Install Setup (Recommended)
Option 1: Automatic download via npx (requires the package to be published to npm)
```bash
npm install -g @lupinlin1/imagegen-mcp
```
Option 2: Remote execution from GitHub (works right away)
```bash
# One-line install script
curl -fsSL https://raw.githubusercontent.com/LupinLin1/imagegen-mcp/main/scripts/install.sh | bash
```
Option 3: Local script (developer-friendly)
```bash
git clone https://github.com/LupinLin1/imagegen-mcp.git
cd imagegen-mcp
npm install && npm run build
```
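With the local build in place, your MCP client entry can launch the compiled server directly instead of going through npx. A minimal sketch, assuming the repository was cloned to `/path/to/imagegen-mcp` (adjust the path to your checkout):

```json
{
  "mcpServers": {
    "imagegen-mcp": {
      "command": "node",
      "args": ["/path/to/imagegen-mcp/dist/index.js", "--models", "dall-e-3"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here"
      }
    }
  }
}
```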
📊 Comparison of Options
| Option | Install steps | Network dependency | Startup speed | Best suited for |
|---|---|---|---|---|
| npx automatic download | None | First run only | Fast | Production |
| GitHub remote execution | None | Every run | Medium | Quick trial |
| Local script | One (clone) | None | Fastest | Development and testing |
📁 More configurations: see examples/mcp-configs/ for all configuration examples.
Prerequisites
- Node.js (v18 or later recommended)
- npm or yarn
- An OpenAI API key
🎯 Zero-Install Setup (Recommended)
**No pre-installation steps required!** Just add the configuration and you're ready to go:
Cursor Editor
```json
{
  "mcpServers": {
    "imagegen-mcp": {
      "command": "npx",
      "args": ["@lupinlin1/imagegen-mcp", "--models", "dall-e-3"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here"
      }
    }
  }
}
```
Claude Desktop
```json
{
  "mcpServers": {
    "imagegen-mcp": {
      "command": "npx",
      "args": ["@lupinlin1/imagegen-mcp"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here"
      }
    }
  }
}
```
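If you edit the configuration by hand, Claude Desktop reads its MCP settings from `claude_desktop_config.json`; the usual locations (these may vary between versions) are:

```
macOS:   ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
```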
💡 How Zero-Install Works
- ✅ First run: npx automatically downloads and caches the package
- ✅ Subsequent runs: the cached copy is used, so startup is fast
- ✅ Automatic updates: always runs the latest published version
- ✅ No pollution: nothing is installed globally
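To confirm that npx can resolve and launch the package before adding it to a client configuration, you can start it once from a terminal (substitute a real API key; the server then waits for MCP requests on stdin, so stop it with Ctrl+C):

```bash
OPENAI_API_KEY=your_openai_api_key_here npx @lupinlin1/imagegen-mcp --models dall-e-3
```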
📁 More configuration examples: see the examples/mcp-configs/ directory.
Setup
1. Clone the repository:

   ```bash
   git clone <your-repository-url>
   cd <repository-directory>
   ```

2. Install dependencies:

   ```bash
   npm install
   # or
   yarn install
   ```

3. Configure environment variables: create a `.env` file in the project root by copying the example:

   ```bash
   cp .env.example .env
   ```

   Edit the `.env` file and add your OpenAI API key:

   ```
   OPENAI_API_KEY=your_openai_api_key_here
   ```
Building
To build the TypeScript code into JavaScript:
```bash
npm run build
# or
yarn build
```
This will compile the code into the dist directory.
Running the Server
This section provides details on running the server locally after cloning and setup. For a quick start without cloning, see the Zero-Install Setup section above.
Using ts-node (for development):

```bash
npx ts-node src/index.ts [options]
```

Using the compiled code:

```bash
node dist/index.js [options]
```
Options:
- `--models <model1> <model2> ...`: Specify which OpenAI models the server should allow. If not provided, it defaults to allowing all models defined in `src/libs/openaiImageClient.ts` (currently gpt-image-1, dall-e-2, dall-e-3).
  - Example using npx (also works for local runs): `... --models gpt-image-1 dall-e-3`
  - Example after cloning: `node dist/index.js --models dall-e-3 dall-e-2`
The server will start and listen for MCP requests via standard input/output (using StdioServerTransport).
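For orientation, here is a minimal sketch (not this project's actual source) of how an MCP stdio server is typically wired with the official TypeScript SDK; it assumes `@modelcontextprotocol/sdk` and `zod` as dependencies and elides the OpenAI call:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server and a single tool; the real server registers
// more tools and many more parameters (see the MCP Tools section below).
const server = new McpServer({ name: "imagegen-mcp", version: "1.0.0" });

server.tool(
  "text-to-image",
  { text: z.string() }, // prompt text
  async ({ text }) => {
    // ...call the OpenAI Images API here and write the result to a temp file...
    const tmpPath = "/tmp/example.png"; // placeholder path for illustration
    return { content: [{ type: "text", text: tmpPath }] };
  }
);

// Listen for MCP requests on stdin/stdout.
await server.connect(new StdioServerTransport());
```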
MCP Tools
The server exposes the following MCP tools:
text-to-image
Generates an image based on a text prompt.
Parameters:
- `text` (string, required): The prompt to generate an image from.
- `model` (enum, optional): The model to use (e.g., `gpt-image-1`, `dall-e-2`, `dall-e-3`). Defaults to the first allowed model.
- `size` (enum, optional): Size of the generated image (e.g., `1024x1024`, `1792x1024`). Defaults to `1024x1024`. Check OpenAI documentation for model-specific size support.
- `style` (enum, optional): Style of the image (`vivid` or `natural`). Only applicable to `dall-e-3`. Defaults to `vivid`.
- `output_format` (enum, optional): Format (`png`, `jpeg`, `webp`). Defaults to `png`.
- `output_compression` (number, optional): Compression level (0-100). Defaults to 100.
- `moderation` (enum, optional): Moderation level (`low`, `auto`). Defaults to `low`.
- `background` (enum, optional): Background (`transparent`, `opaque`, `auto`). Defaults to `auto`. `transparent` requires `output_format` to be `png` or `webp`.
- `quality` (enum, optional): Quality (`standard`, `hd`, `auto`, ...). Defaults to `auto`. `hd` is only applicable to `dall-e-3`.
- `n` (number, optional): Number of images to generate. Defaults to 1. Note: `dall-e-3` only supports `n=1`.
Returns:
- `content`: An array containing:
  - A `text` object containing the path to the saved temporary image file (e.g., `/tmp/uuid.png`).
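For reference, an MCP client invokes this tool with a `tools/call` request along these lines (the prompt and argument values are purely illustrative; clients such as Cursor build this payload for you):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "text-to-image",
    "arguments": {
      "text": "A watercolor illustration of a lighthouse at dusk",
      "model": "dall-e-3",
      "size": "1024x1024",
      "quality": "hd",
      "style": "natural"
    }
  }
}
```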
image-to-image
Edits an existing image based on a text prompt and optional mask.
Parameters:
- `images` (string, required): An array of file paths to local images.
- `prompt` (string, required): A text description of the desired edits.
- `mask` (string, optional): A file path to a mask image (PNG). Transparent areas indicate where the image should be edited.
- `model` (enum, optional): The model to use. Only `gpt-image-1` and `dall-e-2` are supported for editing. Defaults to the first allowed model.
- `size` (enum, optional): Size of the generated image (e.g., `1024x1024`). Defaults to `1024x1024`. `dall-e-2` only supports `256x256`, `512x512`, and `1024x1024`.
- `output_format` (enum, optional): Format (`png`, `jpeg`, `webp`). Defaults to `png`.
- `output_compression` (number, optional): Compression level (0-100). Defaults to 100.
- `quality` (enum, optional): Quality (`standard`, `hd`, `auto`, ...). Defaults to `auto`.
- `n` (number, optional): Number of images to generate. Defaults to 1.
Returns:
- `content`: An array containing:
  - A `text` object containing the path to the saved temporary image file (e.g., `/tmp/uuid.png`).
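An edit request follows the same `tools/call` shape; the file paths and prompt below are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "image-to-image",
    "arguments": {
      "images": ["/path/to/photo.png"],
      "prompt": "Replace the background with a clear night sky",
      "mask": "/path/to/mask.png",
      "model": "gpt-image-1",
      "size": "1024x1024"
    }
  }
}
```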
Development
- Linting: `npm run lint` or `yarn lint`
- Formatting: `npm run format` or `yarn format` (if configured in `package.json`)
Contributing
Pull Requests (PRs) are welcome! Please feel free to submit improvements or bug fixes.