DeepSeek MCP Server

Model Context Protocol server for DeepSeek's advanced language models (DMontgomery40/deepseek-mcp-server)

A Model Context Protocol (MCP) server for the DeepSeek API, allowing seamless integration of DeepSeek's powerful language models with MCP-compatible applications like Claude Desktop.

Use the DeepSeek API anonymously -- only a proxy is visible on the other side

<a href="https://glama.ai/mcp/servers/asht4rqltn"><img width="380" height="200" src="https://glama.ai/mcp/servers/asht4rqltn/badge" alt="DeepSeek Server MCP server" /></a>


Installation

Installing via Smithery

To install DeepSeek MCP Server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @dmontgomery40/deepseek-mcp-server --client claude

Manual Installation

npm install -g deepseek-mcp-server

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-mcp-server"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}

Features

Note: The server intelligently handles natural language requests by mapping them to the appropriate configuration changes. You can also query the current settings and available models:

  • User: "What models are available?"
    • Response: Shows list of available models and their capabilities via the models resource.
  • User: "What configuration options do I have?"
    • Response: Lists all available configuration options via the model-config resource.
  • User: "What is the current temperature setting?"
    • Response: Displays the current temperature setting.
  • User: "Start a multi-turn conversation with the following settings: model 'deepseek-chat', make it not too creative, and allow 8000 tokens."
    • Response: Starts a multi-turn conversation with the specified settings.
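To make the mapping concrete, here is a minimal sketch of how natural-language phrases could be translated into configuration values. The phrases, thresholds, and regexes below are illustrative assumptions for demonstration, not the server's actual mapping table:

```javascript
// Illustrative sketch only: maps a few natural-language phrases to
// configuration values. The real server's mapping logic is internal;
// the phrases and values below are assumptions for demonstration.
function interpretRequest(text) {
  const config = {};
  const lower = text.toLowerCase();
  // "not too creative" -> low randomness (assumed threshold)
  if (lower.includes("not too creative")) config.temperature = 0.3;
  if (lower.includes("very creative")) config.temperature = 1.5;
  // "allow N tokens" -> max_tokens
  const tokens = lower.match(/allow (\d+) tokens/);
  if (tokens) config.max_tokens = Number(tokens[1]);
  // "model: 'name'" -> model selection
  const model = lower.match(/model:\s*'([\w-]+)'/);
  if (model) config.model = model[1];
  return config;
}

console.log(interpretRequest(
  "model: 'deepseek-chat', make it not too creative, and allow 8000 tokens"
));
// → { temperature: 0.3, max_tokens: 8000, model: 'deepseek-chat' }
```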

Automatic Model Fallback if R1 is down

  • If the primary model R1 (called deepseek-reasoner in the server) is down, the server will automatically retry the request with V3 (called deepseek-chat in the server)

Note: You can also switch back and forth at any time by adding "use deepseek-reasoner" or "use deepseek-chat" to your prompt

  • V3 is recommended for general-purpose use, while R1 is recommended for more technical and complex queries, primarily because of differences in speed and token usage
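The fallback behavior follows a common try/retry pattern. The sketch below is not the server's actual code; `callModel` is a hypothetical stand-in for whatever function performs the API request:

```javascript
// Sketch (assumed, not the server's actual implementation) of the
// fallback pattern: try deepseek-reasoner first, and fall back to
// deepseek-chat if the request fails.
async function completeWithFallback(callModel, messages) {
  try {
    return await callModel("deepseek-reasoner", messages);
  } catch (err) {
    // R1 unavailable or errored -- retry once with V3
    return await callModel("deepseek-chat", messages);
  }
}
```

Retrying with a different model rather than surfacing the error keeps the conversation usable even during an R1 outage.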

Resource discovery for available models and configurations:

  • Custom model selection
  • Temperature control (0.0 to 2.0)
  • Max tokens limit
  • Top P sampling (0.0 to 1.0)
  • Presence penalty (-2.0 to 2.0)
  • Frequency penalty (-2.0 to 2.0)
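A request body using these options might look like the sketch below. The field names follow the OpenAI-compatible schema that the DeepSeek API uses; the defaults and the `clamp` helper are assumptions for illustration, keeping each value inside its documented range:

```javascript
// Sketch of building an OpenAI-compatible request body from the
// configuration options above. Defaults are illustrative assumptions.
function clamp(value, min, max) {
  return Math.min(max, Math.max(min, value));
}

function buildRequest(options) {
  return {
    model: options.model ?? "deepseek-reasoner",
    messages: options.messages,
    temperature: clamp(options.temperature ?? 1.0, 0.0, 2.0),
    max_tokens: options.maxTokens ?? 8000,
    top_p: clamp(options.topP ?? 1.0, 0.0, 1.0),
    presence_penalty: clamp(options.presencePenalty ?? 0, -2.0, 2.0),
    frequency_penalty: clamp(options.frequencyPenalty ?? 0, -2.0, 2.0),
  };
}
```

Clamping out-of-range values (instead of rejecting them) means a request like "temperature 5" degrades gracefully to the maximum of 2.0.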

Enhanced Conversation Features

Multi-turn conversation support:

  • Maintains complete message history and context across exchanges
  • Preserves configuration settings throughout the conversation
  • Handles complex dialogue flows and follow-up chains automatically

This feature is particularly valuable for two key use cases:

  1. Training & Fine-tuning: Since DeepSeek is open source, many users are training their own versions. The multi-turn support provides properly formatted conversation data that's essential for training high-quality dialogue models.

  2. Complex Interactions: For production use, this helps manage longer conversations where context is crucial:

    • Multi-step reasoning problems
    • Interactive troubleshooting sessions
    • Detailed technical discussions
    • Any scenario where context from earlier messages impacts later responses

The implementation handles all context management and message formatting behind the scenes, letting you focus on the actual interaction rather than the technical details of maintaining conversation state.
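The bookkeeping described above can be sketched as follows, using the standard role/content message format. The server performs this internally; this class is an illustration under assumed names, not its actual implementation:

```javascript
// Illustrative sketch of multi-turn conversation state: settings are
// preserved across turns, and every request carries the full history.
class Conversation {
  constructor(settings = {}) {
    this.settings = settings; // configuration preserved across turns
    this.messages = [];       // complete history, in order
  }
  addUser(content) {
    this.messages.push({ role: "user", content });
  }
  addAssistant(content) {
    this.messages.push({ role: "assistant", content });
  }
  // The payload for the next turn includes the entire history,
  // so context from earlier messages can shape later responses.
  nextPayload() {
    return { ...this.settings, messages: [...this.messages] };
  }
}
```

Because each payload replays the whole history, follow-up questions ("why did that fail?") resolve correctly without the caller tracking any state.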

Testing with MCP Inspector

You can test the server locally using the MCP Inspector tool:

  1. Build the server:

    npm run build
    
  2. Run the server with MCP Inspector:

    # Make sure to specify the full path to the built server
    npx @modelcontextprotocol/inspector node ./build/index.js
    

The inspector will open in your browser and connect to the server via stdio transport. You can:

  • View available tools
  • Test chat completions with different parameters
  • Debug server responses
  • Monitor server performance

Note: The server uses DeepSeek's R1 model (deepseek-reasoner) by default, which provides state-of-the-art performance for reasoning and general tasks.

License

MIT
