# Gemini MCP Server
A Model Context Protocol (MCP) server that provides Google Gemini AI capabilities to MCP-compatible clients like Claude Desktop and Claude Code.
## Overview
This MCP server acts as a bridge between MCP clients and Google Gemini models, enabling:
- Multi-turn conversations with session management
- File and image analysis with glob pattern support
- Automatic model selection based on content length
- Deep thinking mode with reasoning output
- Google Search integration for up-to-date information
## Prerequisites

### 1. AIStudioProxyAPI Backend
This MCP server requires AIStudioProxyAPI as the backend service.
```bash
# Clone and set up AIStudioProxyAPI
git clone https://github.com/CJackHwang/AIstudioProxyAPI.git
cd AIstudioProxyAPI
poetry install
poetry run python launch_camoufox.py --headless
```

The API will be available at `http://127.0.0.1:2048` by default.
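To confirm the backend is reachable before wiring up the MCP server, you can query the proxy's model listing; this is a minimal sketch, assuming the proxy exposes an OpenAI-compatible `/v1/models` route and that the `requests` package is installed:

```python
# Quick sanity check that the proxy is up.
# Assumes an OpenAI-compatible /v1/models route on the default port.
import requests

resp = requests.get("http://127.0.0.1:2048/v1/models", timeout=5)
resp.raise_for_status()
print([m["id"] for m in resp.json().get("data", [])])
```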
### 2. uv Package Manager
```bash
# Install uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
```
## Installation

```bash
# Clone this repository
git clone https://github.com/YOUR_USERNAME/aistudio-gemini-mcp.git
cd aistudio-gemini-mcp

# Install dependencies
uv sync
```
## Configuration

### Environment Variables
| Variable | Default | Description |
|---|---|---|
| `GEMINI_API_BASE_URL` | `http://127.0.0.1:2048` | AIStudioProxyAPI endpoint |
| `GEMINI_API_KEY` | (empty) | Optional API key |
| `GEMINI_PROJECT_ROOT` | `$PWD` | Root directory for file resolution |
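A sketch of how `server.py` might read these variables, with defaults matching the table (illustrative, not the actual implementation):

```python
import os
from pathlib import Path

# Defaults mirror the table above.
API_BASE_URL = os.environ.get("GEMINI_API_BASE_URL", "http://127.0.0.1:2048")
API_KEY = os.environ.get("GEMINI_API_KEY", "")  # optional
PROJECT_ROOT = Path(os.environ.get("GEMINI_PROJECT_ROOT", os.getcwd()))
```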
### Claude Desktop / Claude Code

Add to `~/.claude/mcp.json`:
```json
{
  "mcpServers": {
    "gemini": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/aistudio-gemini-mcp", "python", "server.py"],
      "env": {
        "GEMINI_API_BASE_URL": "http://127.0.0.1:2048"
      }
    }
  }
}
```
## Tools

### gemini_chat
Send a message to Google Gemini with optional file attachments.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Message to send (1-100,000 chars) |
| `file` | list[string] | No | File paths or glob patterns |
| `session_id` | string | No | Session ID (`"last"` for the most recent) |
| `model` | string | No | Override model selection |
| `system_prompt` | string | No | System context |
| `temperature` | float | No | Sampling temperature (0.0-2.0) |
| `max_tokens` | int | No | Max response tokens |
| `response_format` | enum | No | `"markdown"` or `"json"` |
Examples:

```python
# Simple query
gemini_chat(prompt="Explain quantum computing")

# With a file
gemini_chat(prompt="Review this code", file=["main.py"])

# With an image
gemini_chat(prompt="Describe this", file=["photo.png"])

# Continue the conversation
gemini_chat(prompt="Tell me more", session_id="last")

# Multiple files via glob pattern
gemini_chat(prompt="Analyze", file=["src/**/*.py"])
```
### gemini_list_models
List available Gemini models.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `filter_text` | string | No | Filter models by name |
| `response_format` | enum | No | `"markdown"` or `"json"` |
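Example calls, in the same style as `gemini_chat` above (the filter value is illustrative):

```python
# List all available models
gemini_list_models()

# Filter by name, returning JSON
gemini_list_models(filter_text="flash", response_format="json")
```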
## Model Selection

The server auto-selects a model based on content length:
| Content Size | Model |
|---|---|
| ≤ 8,000 chars | `gemini-3-pro-preview` |
| > 8,000 chars | `gemini-2.5-pro` |
| Fallback | `gemini-2.5-flash` |
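The rule in the table reduces to a simple length threshold; a sketch of the logic (the model names and 8,000-char cutoff come from the table, the function itself is illustrative):

```python
def pick_model(content: str) -> str:
    """Choose a model by content length, per the table above."""
    if len(content) <= 8_000:
        return "gemini-3-pro-preview"
    return "gemini-2.5-pro"

FALLBACK_MODEL = "gemini-2.5-flash"  # used when the chosen model fails
```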
## Features

### Session Management

- Automatic session creation
- Use `"last"` to continue the most recent conversation
- LRU eviction (max 50 sessions; see the sketch below)
### File Support

- Images: PNG, JPG, JPEG, GIF, WebP, BMP
- Text: any text-based file, with automatic encoding detection
- Glob patterns: `*.py`, `src/**/*.ts`, etc. (resolved as sketched below)
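A sketch of how glob expansion against `GEMINI_PROJECT_ROOT` could work, using `pathlib` (the helper name and logic are illustrative):

```python
from pathlib import Path

def resolve_files(patterns, root):
    """Expand literal paths and glob patterns relative to the project root."""
    resolved = []
    for pattern in patterns:
        if any(ch in pattern for ch in "*?["):
            resolved.extend(sorted(root.glob(pattern)))  # e.g. "src/**/*.py"
        else:
            resolved.append(root / pattern)
    return resolved

print(resolve_files(["main.py", "src/**/*.py"], Path.cwd()))
```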
### Built-in Capabilities

- `reasoning_effort: high` - deep thinking mode
- `google_search` - web search integration
- Automatic retry with model fallback (sketched below)
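The retry behavior can be pictured as trying the selected model first and the fallback second; a hedged sketch, since the server's exact retry policy is not documented here (`send` is a hypothetical wrapper around the backend request):

```python
def chat_with_fallback(send, prompt, model):
    """Try the selected model, then fall back to gemini-2.5-flash."""
    last_error = None
    for candidate in (model, "gemini-2.5-flash"):
        try:
            return send(prompt, model=candidate)
        except Exception as exc:
            last_error = exc  # remember the failure and try the next model
    raise last_error
```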
## Running Standalone

```bash
# Start the MCP server
uv run python server.py
```
## Project Structure

```
aistudio-gemini-mcp/
├── server.py                 # MCP server implementation
├── pyproject.toml            # Project configuration
├── uv.lock                   # Dependency lock file
├── README.md                 # This file
├── LICENSE                   # MIT License
└── mcp_config_example.json   # Example client configuration
```
## Related Projects
- AIStudioProxyAPI - Backend API service (required)
- Model Context Protocol - MCP specification
## License
MIT License - see LICENSE for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.