Qwen3-Coder MCP Server for Claude Code

This setup integrates Qwen3-Coder (a 30B-parameter model) with Claude Code via the Model Context Protocol (MCP), optimized for 64GB RAM systems.

Features

  • Qwen3-Coder 30B: Latest and most powerful Qwen Coder model with exceptional coding capabilities
  • 64GB RAM Optimized: Configuration tuned for maximum performance on high-memory systems
  • MCP Integration: Seamless integration with Claude Code through 5 specialized tools
  • Advanced Settings: Flash attention, optimized KV cache, and parallel processing

Optimization Settings

The setup includes these optimizations for your 64GB RAM:

  • OLLAMA_NUM_PARALLEL=8: Handle up to 8 requests in parallel
  • OLLAMA_MAX_LOADED_MODELS=4: Keep up to 4 models loaded simultaneously
  • OLLAMA_FLASH_ATTENTION=1: Enable the memory-efficient flash attention kernel
  • OLLAMA_KV_CACHE_TYPE=q8_0: Use an 8-bit quantized KV cache (near-f16 quality at roughly half the memory)
  • OLLAMA_KEEP_ALIVE=24h: Keep models in memory for 24 hours after their last use
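
The start-qwen3-optimized.sh script used in the Quick Start below applies these settings. It isn't reproduced in this README, but a minimal version consistent with the list above might look like this (the shipped script may differ):

#!/bin/bash
# Hypothetical sketch of start-qwen3-optimized.sh; the actual script may differ.
export OLLAMA_NUM_PARALLEL=8        # up to 8 requests in parallel
export OLLAMA_MAX_LOADED_MODELS=4   # up to 4 models resident at once
export OLLAMA_FLASH_ATTENTION=1     # memory-efficient attention kernel
export OLLAMA_KV_CACHE_TYPE=q8_0    # 8-bit quantized KV cache
export OLLAMA_KEEP_ALIVE=24h        # keep models loaded for 24 hours
ollama serve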

Available Tools

1. qwen3_code_review

Reviews code for quality, bugs, and best practices.

Parameters:

  • code (required): The code to review
  • language (optional): Programming language

2. qwen3_code_explain

Provides detailed explanations of how code works.

Parameters:

  • code (required): The code to explain
  • language (optional): Programming language

3. qwen3_code_generate

Generates new code based on requirements.

Parameters:

  • prompt (required): Description of what to generate
  • language (optional): Target programming language

4. qwen3_code_fix

Fixes bugs and issues in existing code.

Parameters:

  • code (required): The buggy code
  • error (optional): Error message or description
  • language (optional): Programming language

5. qwen3_code_optimize

Optimizes code for performance, memory, or readability.

Parameters:

  • code (required): The code to optimize
  • criteria (optional): Optimization criteria
  • language (optional): Programming language
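
Each tool boils down to the same pattern: advertise a JSON schema to Claude Code, then forward the call to the local model. As an illustration only (not the project's actual code), registering qwen3_code_review with the official @modelcontextprotocol/sdk package and Ollama's standard /api/generate endpoint could look roughly like this:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListToolsRequestSchema, CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

// Hypothetical sketch; the real qwen3-mcp-server.js may be structured differently.
const server = new Server(
  { name: "qwen3-coder", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tool and its parameters (code is required, language is optional).
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "qwen3_code_review",
      description: "Reviews code for quality, bugs, and best practices.",
      inputSchema: {
        type: "object",
        properties: {
          code: { type: "string", description: "The code to review" },
          language: { type: "string", description: "Programming language" },
        },
        required: ["code"],
      },
    },
  ],
}));

// Forward tool calls to the local Ollama HTTP API (default port 11434).
// Node 18+ provides a global fetch.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "qwen3_code_review") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
  const { code, language } = request.params.arguments;
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3-coder:30b",
      prompt: `Review this ${language ?? ""} code for quality, bugs, and best practices:\n\n${code}`,
      stream: false,
    }),
  });
  const data = await res.json();
  return { content: [{ type: "text", text: data.response }] };
});

await server.connect(new StdioServerTransport());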

Quick Start

1. Start the Optimized Server

cd /Users/keith/qwencoder
./start-qwen3-optimized.sh

2. Restart Claude Code

Close and reopen Claude Code to load the MCP server configuration.

3. Use in Claude Code

The tools will be automatically available in your Claude Code sessions. You can invoke them by referencing the tool names in conversation, for example: "Use qwen3_code_review to check this function for bugs."

Manual Commands

Start Ollama with optimizations:

OLLAMA_NUM_PARALLEL=8 OLLAMA_MAX_LOADED_MODELS=4 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=24h ollama serve

Test the model directly:

ollama run qwen3-coder:30b "Write a Python function to calculate factorial"

Test the MCP server:

node qwen3-mcp-server.js
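
The server speaks JSON-RPC 2.0 over stdio, and tools/list is the standard MCP method for enumerating tools. As a rough smoke test (depending on the SDK version, a full initialize handshake may be required first), you can try:

echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | node qwen3-mcp-server.js

If the response lists the five qwen3_* tools, the server side is wired up correctly.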

Troubleshooting

If Claude Code doesn't see the MCP server:

  1. Check that config.json points to the correct path for qwen3-mcp-server.js
  2. Restart Claude Code completely
  3. Verify Ollama is running: ollama list

If the model is slow:

  1. Ensure you have enough RAM available
  2. Check that OLLAMA_FLASH_ATTENTION=1 is set
  3. Monitor system resources with Activity Monitor

If tools aren't working:

  1. Test Ollama directly: ollama run qwen3-coder:30b "test"
  2. Check MCP server logs in Console.app
  3. Verify the Node.js dependencies are installed
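
For the last point, installing from the package.json in the project directory should suffice:

cd /Users/keith/qwencoder
npm install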

File Structure

/Users/keith/qwencoder/
├── qwen3-mcp-server.js          # MCP server implementation
├── package.json                 # Node.js dependencies
├── start-qwen3-optimized.sh     # Optimized startup script
└── README.md                    # This file

Configuration Files

  • Claude Config: /Users/keith/Library/Application Support/Claude/config.json
  • MCP Server: /Users/keith/qwencoder/qwen3-mcp-server.js
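
MCP clients launch stdio servers from a command plus arguments, so the relevant entry in config.json should look something like the following (the "qwen3-coder" key name is illustrative; check the actual file):

{
  "mcpServers": {
    "qwen3-coder": {
      "command": "node",
      "args": ["/Users/keith/qwencoder/qwen3-mcp-server.js"]
    }
  }
}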

Performance Notes

With 64GB RAM, you can:

  • Keep multiple large models loaded simultaneously
  • Handle numerous parallel requests
  • Use high-quality cache settings for better performance
  • Run for extended periods without memory issues

The Qwen3-Coder 30B model uses approximately 18GB of RAM when loaded, leaving roughly 46GB on a 64GB system for other applications and additional models.
