Gemini MCP Server for Claude Code

Integrates Google's Gemini AI models into Claude Code and other MCP clients to provide second opinions, code comparisons, and token counting. It supports streaming responses and multi-turn conversations directly within your existing AI development workflow.


Give Claude Code access to Google's Gemini AI models. Get second opinions, compare approaches, and leverage Gemini's capabilities—all from within your Claude Code session.


Quick Start

# 1. Clone and build
git clone https://github.com/Raydius/gemini-for-claude-mcp.git
cd gemini-for-claude-mcp
npm install && npm run build

# 2. Add to Claude Code (replace with your actual path, API key, and preferred model)
claude mcp add gemini -e GEMINI_API_KEY=your-api-key -e GEMINI_DEFAULT_MODEL=gemini-2.5-flash -- node $(pwd)/dist/app.js

# 3. Start Claude Code and try it out
claude

Then ask Claude:

"Ask Gemini to explain the tradeoffs between microservices and monoliths"


Features

  • Built for Claude Code - Seamlessly integrates with your Claude Code workflow
  • Streaming Responses - Enabled by default for real-time output
  • Multi-turn Conversations - Maintain context across multiple Gemini queries
  • Configurable Model - Set your preferred Gemini model via environment variable
  • Token Counting - Estimate costs before making queries
  • Type-Safe - Built with strict TypeScript
  • Well-Tested - 100% domain layer test coverage

Prerequisites

  • Node.js 20 or later (check with node --version)
  • A Gemini API key from Google AI Studio

Installation & Setup

Step 1: Clone and Build

git clone https://github.com/Raydius/gemini-for-claude-mcp.git
cd gemini-for-claude-mcp
npm install
npm run build

Step 2: Add to Claude Code

Option A: Using the CLI (Recommended)

# Basic setup (replace the default model with any valid Gemini model name)
claude mcp add gemini \
  -e GEMINI_API_KEY=your-api-key \
  -e GEMINI_DEFAULT_MODEL=gemini-3-pro-preview \
  -- node /absolute/path/to/gemini-for-claude-mcp/dist/app.js

Option B: Manual Configuration

Edit your Claude Code settings file (~/.claude.json):

{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/absolute/path/to/gemini-for-claude-mcp/dist/app.js"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here",
        "GEMINI_DEFAULT_MODEL": "gemini-2.5-flash"
      }
    }
  }
}

See Configuration for all available options and supported models.

Step 3: Verify Installation

Start Claude Code and verify the server is connected:

claude

Then ask:

"What Gemini models are available?"

If configured correctly, Claude will use the list_gemini_models tool and show you the available models.


Using with Claude Code

Once installed, you can ask Claude to use Gemini in natural language. Here are some examples:

Get a Second Opinion

You: I'm implementing a rate limiter. Can you ask Gemini for its approach?

Claude: I'll query Gemini for an alternative perspective on rate limiting...
[Uses query_gemini tool]

Gemini suggests using a token bucket algorithm. Here's the comparison:
- My approach: Sliding window...
- Gemini's approach: Token bucket with...

Compare Solutions

You: Here's my sorting algorithm. Have Gemini review it and compare approaches.

Claude: Let me get Gemini's analysis of your sorting implementation...
[Uses query_gemini tool]

Gemini's feedback: ...

Leverage Gemini's Strengths

You: Ask Gemini to analyze this mathematical proof for logical errors.

Claude: I'll have Gemini examine the proof...
[Uses query_gemini tool]

Check Token Usage Before Querying

You: How many tokens would this prompt use with Gemini?

Claude: Let me count the tokens...
[Uses count_gemini_tokens tool]

This text would use approximately 1,250 tokens.

Multi-turn Conversations

You: Start a conversation with Gemini about Rust's ownership model.

Claude: [Uses query_gemini tool]
Gemini explains: Rust's ownership model is based on three rules...

You: Ask Gemini to give an example of borrowing.

Claude: [Uses query_gemini with history from previous turn]
Gemini continues: Here's an example of borrowing...

Configuration

Configure the server using environment variables:

  • GEMINI_API_KEY (required): Your Gemini API key from Google AI Studio
  • GEMINI_DEFAULT_MODEL (required): Gemini model to use for queries
  • GEMINI_TIMEOUT_MS (optional, default: 120000): Request timeout in milliseconds
  • LOG_LEVEL (optional, default: info): Log level (fatal, error, warn, info, debug, trace)
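As an illustration, startup validation of these variables can be sketched as follows. This is a hypothetical, dependency-free stand-in; the real server validates its environment in src/config/ (the project uses Zod for this, per its code standards):

```typescript
// Hypothetical sketch of environment validation for the variables above.
// The real server's src/config/ uses Zod; this stand-in avoids dependencies.
interface ServerConfig {
  apiKey: string;
  defaultModel: string;
  timeoutMs: number;
  logLevel: "fatal" | "error" | "warn" | "info" | "debug" | "trace";
}

const LOG_LEVELS = ["fatal", "error", "warn", "info", "debug", "trace"] as const;

function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  const apiKey = env.GEMINI_API_KEY;
  if (!apiKey) throw new Error("GEMINI_API_KEY is required");

  const defaultModel = env.GEMINI_DEFAULT_MODEL;
  if (!defaultModel) throw new Error("GEMINI_DEFAULT_MODEL is required");

  // Fall back to the documented default timeout of 120000 ms.
  const timeoutMs = env.GEMINI_TIMEOUT_MS ? Number(env.GEMINI_TIMEOUT_MS) : 120000;
  if (!Number.isFinite(timeoutMs) || timeoutMs <= 0) {
    throw new Error("GEMINI_TIMEOUT_MS must be a positive number");
  }

  const logLevel = (env.LOG_LEVEL ?? "info") as ServerConfig["logLevel"];
  if (!LOG_LEVELS.includes(logLevel)) {
    throw new Error(`LOG_LEVEL must be one of: ${LOG_LEVELS.join(", ")}`);
  }

  return { apiKey, defaultModel, timeoutMs, logLevel };
}
```

Failing fast at startup like this is what produces the "GEMINI_API_KEY is required" error described in Troubleshooting.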

Other MCP Clients

While this server is optimized for Claude Code, it works with any MCP-compatible client.

Claude Desktop

Add to your Claude Desktop configuration:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/absolute/path/to/gemini-for-claude-mcp/dist/app.js"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here",
        "GEMINI_DEFAULT_MODEL": "gemini-2.5-flash"
      }
    }
  }
}

Generic stdio Clients

This server uses stdio transport. Start it with:

GEMINI_API_KEY=your-key GEMINI_DEFAULT_MODEL=gemini-2.5-flash node dist/app.js

The server communicates via stdin/stdout using the MCP protocol.
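For reference, an MCP client opens the session by writing a JSON-RPC initialize request to the server's stdin. A representative first message looks like the following (pretty-printed here for readability; on the wire each message is a single line, and the clientInfo values are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```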


Development

Available Scripts

  • npm run build: Compile TypeScript to JavaScript
  • npm run dev: Run in development mode with hot reload
  • npm start: Run the compiled server
  • npm test: Run all tests
  • npm run test:coverage: Run tests with a coverage report
  • npm run lint: Check code with ESLint
  • npm run lint:fix: Fix ESLint issues automatically
  • npm run typecheck: Type-check without emitting files

Project Structure

src/
├── domain/           # Business logic (zero external dependencies)
│   ├── entities/     # Business objects (GeminiModel, GeminiPrompt, etc.)
│   ├── ports/        # Interfaces for external services
│   ├── use-cases/    # Application logic
│   └── errors/       # Domain-specific errors
├── infrastructure/   # External integrations
│   ├── adapters/     # Port implementations
│   ├── controllers/  # MCP request handlers
│   ├── schemas/      # Zod validation schemas
│   └── mcp/          # MCP server and tool definitions
├── config/           # Environment validation
├── shared/           # Cross-cutting utilities
└── app.ts            # Entry point

Tool Reference

Technical details for developers integrating with or extending the MCP tools.

query_gemini

Query Google's Gemini AI models for text generation, reasoning, and analysis tasks. The model is configured via the GEMINI_DEFAULT_MODEL environment variable.

Parameters:

  • prompt (string, required): The prompt to send to Gemini (1-100,000 chars)
  • history (array, optional): Previous conversation turns for multi-turn conversations
  • stream (boolean, optional, default: true): Stream the response progressively

History Array Item Schema:

{
  "role": "user" | "model",
  "content": "string"
}
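Gemini's chat format expects user and model turns to alternate. A small helper for folding a completed exchange into the history array might look like this (a hypothetical sketch, not part of the server's API):

```typescript
// Hypothetical helper for building the history array passed to query_gemini.
type Role = "user" | "model";

interface HistoryItem {
  role: Role;
  content: string;
}

// Appends a completed user/model exchange, enforcing role alternation.
function appendExchange(
  history: HistoryItem[],
  userPrompt: string,
  modelReply: string
): HistoryItem[] {
  const last = history[history.length - 1];
  if (last && last.role === "user") {
    throw new Error("history ends with an unanswered user turn");
  }
  return [
    ...history,
    { role: "user", content: userPrompt },
    { role: "model", content: modelReply },
  ];
}
```

Each call to query_gemini would then receive the accumulated array as its history parameter alongside the new prompt.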

Example Response:

{
  "success": true,
  "data": {
    "response": "Recursion is a programming technique where a function calls itself...",
    "model": "gemini-3-pro-preview",
    "finishReason": "STOP",
    "tokenUsage": {
      "prompt": 12,
      "completion": 150,
      "total": 162
    }
  }
}
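Callers consuming the tool result can narrow the envelope with a small type guard. The types below are a hypothetical sketch mirroring the example payload above, not the server's exported definitions:

```typescript
// Hypothetical types mirroring the example response envelope above.
interface TokenUsage {
  prompt: number;
  completion: number;
  total: number;
}

interface QueryData {
  response: string;
  model: string;
  finishReason: string;
  tokenUsage: TokenUsage;
}

interface QueryResult {
  success: boolean;
  data?: QueryData;
}

// Narrows an unknown parsed payload to a successful query result.
function isSuccessfulQuery(value: unknown): value is QueryResult & { data: QueryData } {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (v.success !== true) return false;
  const data = v.data as Record<string, unknown> | undefined;
  return (
    typeof data === "object" && data !== null &&
    typeof data.response === "string" &&
    typeof data.model === "string"
  );
}
```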

list_gemini_models

List popular Gemini AI models that can be configured via the GEMINI_DEFAULT_MODEL environment variable.

Parameters: None

Example Response:

{
  "success": true,
  "data": {
    "count": 4,
    "models": [
      {
        "name": "gemini-3-pro-preview",
        "displayName": "Gemini 3 Pro Preview",
        "description": "Most advanced reasoning model with 1M context - best for complex tasks"
      },
      {
        "name": "gemini-2.5-pro",
        "displayName": "Gemini 2.5 Pro",
        "description": "Capable thinking model for complex reasoning, code, math, and STEM"
      },
      {
        "name": "gemini-2.5-flash",
        "displayName": "Gemini 2.5 Flash",
        "description": "Fast and efficient for most tasks with excellent performance"
      },
      {
        "name": "gemini-2.0-flash",
        "displayName": "Gemini 2.0 Flash",
        "description": "Multimodal model optimized for speed and cost-efficiency"
      }
    ]
  }
}

count_gemini_tokens

Count the number of tokens in a text string for the configured Gemini model.

Parameters:

  • text (string, required): The text to count tokens for (1-1,000,000 chars)

Example Response:

{
  "success": true,
  "data": {
    "totalTokens": 10,
    "model": "gemini-3-pro-preview"
  }
}

Architecture

This project follows Clean Architecture (Ports and Adapters) principles:

  • Domain Layer - Core business logic with zero external dependencies
  • Infrastructure Layer - External integrations (Gemini SDK, MCP SDK)
  • Strict Dependency Rule - Dependencies always point inward
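In this style, the domain defines a port interface and the infrastructure layer supplies the adapter, with all dependencies pointing inward. The sketch below is purely illustrative; the actual names in src/domain/ports/ and src/infrastructure/adapters/ may differ:

```typescript
// Illustrative ports-and-adapters sketch; real names in src/domain/ may differ.

// Domain layer: a port describing what the business logic needs, no SDK imports.
interface TextGenerationPort {
  generate(prompt: string): Promise<string>;
}

// Domain layer: a use case that depends only on the port.
class GetSecondOpinion {
  constructor(private readonly gateway: TextGenerationPort) {}

  async execute(question: string): Promise<string> {
    return this.gateway.generate(`Provide a second opinion: ${question}`);
  }
}

// Infrastructure layer: an adapter implementing the port.
// A real adapter would call the Gemini SDK; this stub just echoes.
class FakeGeminiAdapter implements TextGenerationPort {
  async generate(prompt: string): Promise<string> {
    return `stubbed reply to: ${prompt}`;
  }
}

// Composition root: the adapter is injected into the domain, never imported by it.
const useCase = new GetSecondOpinion(new FakeGeminiAdapter());
```

Because the domain sees only the port, the 100% domain-layer test coverage can be achieved with stubs like FakeGeminiAdapter, without touching the Gemini API.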

For detailed architectural documentation, see ARCHITECTURE.md.


Troubleshooting

Claude Code Issues

"Server not found" or tools not appearing

  • Verify the MCP server is added: claude mcp list
  • Check the path to dist/app.js is absolute and correct
  • Ensure the project has been built: npm run build

"GEMINI_API_KEY is required"

  • Verify your API key is set in the MCP configuration
  • Check with: claude mcp list to see environment variables

Server crashes on startup

  • Check Node.js version: node --version (must be 20+)
  • Verify dependencies are installed: npm install

API Issues

"Rate limit exceeded"

  • Gemini API has rate limits; wait and retry
  • Consider using gemini-2.0-flash for higher rate limits

"Content filtered" error

  • Gemini has content safety filters
  • Rephrase your prompt to avoid triggering filters

Streaming not working

  • Streaming is enabled by default
  • Set stream: false in your query if needed

Debug Mode

Enable debug logging for troubleshooting:

claude mcp add gemini -e GEMINI_API_KEY=your-key -e LOG_LEVEL=debug -- node /path/to/dist/app.js

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feat/your-feature
  3. Make your changes following the code standards in CLAUDE.md
  4. Run tests: npm test
  5. Run linting: npm run lint
  6. Commit with conventional commits: git commit -m "feat: add new feature"
  7. Push and create a Pull Request

Code Standards

  • TypeScript strict mode required
  • All exported functions need explicit return types
  • Use neverthrow Result pattern for error handling
  • Validate inputs with Zod at boundaries
  • 100% test coverage for domain layer

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Acknowledgments
