DeepSeek MCP Server

Allows seamless integration of DeepSeek's language models with MCP-compatible applications like Claude Desktop, supporting features such as model selection, temperature control, and multi-turn conversations with automatic model fallback.

Tools

chat_completion

multi_turn_chat

README

DeepSeek MCP Server

A Model Context Protocol (MCP) server for the DeepSeek API, allowing seamless integration of DeepSeek's powerful language models with MCP-compatible applications like Claude Desktop.

Anonymously use DeepSeek API -- Only a proxy is seen on the other side

<a href="https://glama.ai/mcp/servers/asht4rqltn"><img width="380" height="200" src="https://glama.ai/mcp/servers/asht4rqltn/badge" alt="DeepSeek Server MCP server" /></a>

Installation

Installing via Smithery

To install DeepSeek MCP Server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @dmontgomery40/deepseek-mcp-server --client claude

Manual Installation

npm install -g deepseek-mcp-server

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-mcp-server"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}

Features

Note: The server understands natural-language requests for configuration and maps them to the appropriate settings changes. You can also query the current settings and available models:

  • User: "What models are available?"
    • Response: Shows list of available models and their capabilities via the models resource.
  • User: "What configuration options do I have?"
    • Response: Lists all available configuration options via the model-config resource.
  • User: "What is the current temperature setting?"
    • Response: Displays the current temperature setting.
  • User: "Start a multi-turn conversation. With the following settings: model: 'deepseek-chat', make it not too creative, and allow 8000 tokens."
    • Response: Starts a multi-turn conversation with the specified settings.
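
At the protocol level, such a request ends up as an MCP tool call. The following is a minimal sketch using the official MCP TypeScript SDK client over stdio; the argument names (message, model, temperature, max_tokens) are assumptions for illustration, and the tool's input schema reported by the server is authoritative:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, the same way Claude Desktop does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "deepseek-mcp-server"],
  env: { DEEPSEEK_API_KEY: "your-api-key" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Argument names are illustrative; check the schema from tools/list.
const result = await client.callTool({
  name: "multi_turn_chat",
  arguments: {
    message: "Explain MCP in one paragraph.",
    model: "deepseek-chat",
    temperature: 0.3, // "not too creative" maps to a low temperature
    max_tokens: 8000,
  },
});

console.log(result.content);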

Automatic Model Fallback if R1 is down

  • If the primary model R1 (called deepseek-reasoner in the server) is down, the server will automatically fall back to V3 (called deepseek-chat in the server); see the sketch after this list.

Note: You can switch back and forth anytime as well, by just giving your prompt and saying "use deepseek-reasoner" or "use deepseek-chat"

  • V3 is recommended for general-purpose use, while R1 is recommended for more technical and complex queries, primarily because of differences in speed and token usage.
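
The fallback can be pictured as a simple try/retry around DeepSeek's OpenAI-compatible endpoint. This is an illustrative sketch of the pattern, not the server's actual implementation; it assumes the official openai npm package pointed at DeepSeek's base URL:

import OpenAI from "openai";

// DeepSeek exposes an OpenAI-compatible API, so the openai SDK works
// with a custom baseURL.
const deepseek = new OpenAI({
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com",
});

// Try R1 first; fall back to V3 if the call fails.
async function chatWithFallback(messages: OpenAI.Chat.ChatCompletionMessageParam[]) {
  try {
    return await deepseek.chat.completions.create({
      model: "deepseek-reasoner", // R1
      messages,
    });
  } catch (err) {
    console.warn("deepseek-reasoner unavailable, falling back to V3:", err);
    return await deepseek.chat.completions.create({
      model: "deepseek-chat", // V3
      messages,
    });
  }
}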

Resource discovery for available models and configurations, plus fine-grained control over the following parameters (an example request using them appears after this list):

  • Custom model selection
  • Temperature control (0.0 to 2.0)
  • Max tokens limit
  • Top-p sampling (0.0 to 1.0)
  • Presence penalty (-2.0 to 2.0)
  • Frequency penalty (-2.0 to 2.0)
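
Continuing the earlier multi_turn_chat sketch, a chat_completion call exercising these parameters might look like the following. The argument names mirror DeepSeek's OpenAI-compatible parameter names but are assumptions here; the tool's reported input schema is authoritative:

// Continues the `client` from the earlier multi_turn_chat sketch.
const completion = await client.callTool({
  name: "chat_completion",
  arguments: {
    message: "Summarize the MCP handshake in three bullet points.",
    model: "deepseek-chat",  // custom model selection
    temperature: 0.7,        // 0.0 to 2.0
    max_tokens: 1024,        // max tokens limit
    top_p: 0.95,             // 0.0 to 1.0
    presence_penalty: 0.0,   // -2.0 to 2.0
    frequency_penalty: 0.5,  // -2.0 to 2.0
  },
});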

Enhanced Conversation Features

Multi-turn conversation support:

  • Maintains complete message history and context across exchanges
  • Preserves configuration settings throughout the conversation
  • Handles complex dialogue flows and follow-up chains automatically

This feature is particularly valuable for two key use cases:

  1. Training & Fine-tuning: Since DeepSeek is open source, many users are training their own versions. The multi-turn support provides properly formatted conversation data that's essential for training high-quality dialogue models.

  2. Complex Interactions: For production use, this helps manage longer conversations where context is crucial:

    • Multi-step reasoning problems
    • Interactive troubleshooting sessions
    • Detailed technical discussions
    • Any scenario where context from earlier messages impacts later responses

The implementation handles all context management and message formatting behind the scenes, letting you focus on the actual interaction rather than the technical details of maintaining conversation state.
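
Conceptually, "maintaining context" means carrying the full message history forward on every request, in the role/content shape DeepSeek's OpenAI-compatible API expects. A minimal sketch of that bookkeeping (again illustrative, not the server's actual code):

import OpenAI from "openai";

const deepseek = new OpenAI({
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com",
});

// Each turn appends to the history, and the whole history is replayed
// so the model sees the full conversation context.
const history: OpenAI.Chat.ChatCompletionMessageParam[] = [];

async function takeTurn(userInput: string): Promise<string> {
  history.push({ role: "user", content: userInput });
  const response = await deepseek.chat.completions.create({
    model: "deepseek-chat",
    messages: history, // full context goes out on every request
  });
  const reply = response.choices[0].message.content ?? "";
  history.push({ role: "assistant", content: reply });
  return reply;
}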

Testing with MCP Inspector

You can test the server locally using the MCP Inspector tool:

  1. Build the server:

    npm run build
    
  2. Run the server with MCP Inspector:

    # Make sure to specify the full path to the built server
    npx @modelcontextprotocol/inspector node ./build/index.js
    

The inspector will open in your browser and connect to the server via stdio transport. You can:

  • View available tools
  • Test chat completions with different parameters
  • Debug server responses
  • Monitor server performance

Note: The server uses DeepSeek's R1 model (deepseek-reasoner) by default, which provides state-of-the-art performance for reasoning and general tasks.

License

MIT
