mcp-linkedin

MCP server for LinkedIn publishing from AI coding assistants. Dry run by default, SKILL.md for safe workflows, 28 unit tests. MIT licensed.

An MCP server that lets AI assistants publish to LinkedIn on your behalf.

mcp-linkedin MCP server

What it does

This is a Model Context Protocol (MCP) server that wraps the Unipile API to give AI assistants (Claude Code, Claude Desktop, or any MCP-compatible client) the ability to create posts, comments, and reactions on LinkedIn. The AI writes the content; this tool handles the publishing. Posts and comments default to preview mode; nothing goes live without explicit confirmation.

Features

  • 3 tools: publish, comment, react
  • Dry run by default (preview before publishing)
  • Auto-likes posts immediately after publishing
  • Media attachments (local files or URLs — images and video)
  • Company @mentions (auto-resolved via Unipile)
  • Works with Claude Code, Claude Desktop, and any MCP client

Prerequisites

  • Node.js 18+ — uses ES modules, node:test, and top-level await
  • Unipile account — Unipile is the service that connects to LinkedIn's API. Sign up, connect your LinkedIn account, and get your API key and DSN from the dashboard.

Installation

git clone https://github.com/timkulbaev/mcp-linkedin.git
cd mcp-linkedin
npm install

Configuration

Claude Code

Add to ~/.claude/mcp.json:

{
  "mcpServers": {
    "linkedin": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-linkedin/index.js"],
      "env": {
        "UNIPILE_API_KEY": "your-unipile-api-key",
        "UNIPILE_DSN": "apiXX.unipile.com:XXXXX"
      }
    }
  }
}

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):

{
  "mcpServers": {
    "linkedin": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-linkedin/index.js"],
      "env": {
        "UNIPILE_API_KEY": "your-unipile-api-key",
        "UNIPILE_DSN": "apiXX.unipile.com:XXXXX"
      }
    }
  }
}

Restart Claude Code or Claude Desktop after editing the config.

Environment variables

Variable         Required  Description
UNIPILE_API_KEY  Yes       Your Unipile API key (from the Unipile dashboard)
UNIPILE_DSN      Yes       Your Unipile DSN (e.g. api16.unipile.com:14648)

These are passed via the MCP config, not a .env file. The server reads them from process.env at startup.
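A minimal sketch of such a startup check (hypothetical; `checkEnv` is not a real export of this project, and the derived base URL assumes Unipile's conventional `https://{DSN}/api/v1` pattern):

```javascript
// Hypothetical startup check (the real index.js may differ): fail fast
// with a clear message when a required variable is missing, and derive
// the Unipile API base URL from the DSN.
function checkEnv(env) {
  const missing = ["UNIPILE_API_KEY", "UNIPILE_DSN"].filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  // The DSN is host:port, e.g. api16.unipile.com:14648
  return `https://${env.UNIPILE_DSN}/api/v1`;
}
```

At startup the server would call something like `checkEnv(process.env)` before registering any tools, so a misconfigured client fails immediately rather than on the first tool call.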

Tools

linkedin_publish

Creates an original LinkedIn post.

dry_run defaults to true, so the first call returns a preview. Call again with dry_run: false to actually publish.

Parameter  Type      Required  Default  Description
text       string    yes       -        Post body, max 3000 characters
media      string[]  no        []       Local file paths or URLs (jpg, png, gif, webp, mp4)
mentions   string[]  no        []       Company names to @mention (auto-resolved)
dry_run    boolean   no        true     Preview without publishing

Preview response (dry_run: true):

{
  "status": "preview",
  "post_text": "Hello LinkedIn!",
  "character_count": 16,
  "character_limit": 3000,
  "media": [],
  "mentions": [],
  "warnings": [],
  "ready_to_publish": true
}

Publish response (dry_run: false):

{
  "status": "published",
  "post_id": "7437514186450104320",
  "post_text": "Hello LinkedIn!",
  "posted_at": "2026-03-11T15:06:04.849Z",
  "auto_like": "liked"
}

After publishing, save the post_id and construct the post URL:

https://www.linkedin.com/feed/update/urn:li:activity:{post_id}/
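A tiny helper (hypothetical, not part of the server) makes the URL construction concrete:

```javascript
// Hypothetical helper that builds the shareable URL from the returned post_id.
function buildPostUrl(postId) {
  return `https://www.linkedin.com/feed/update/urn:li:activity:${postId}/`;
}
```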

linkedin_comment

Posts a comment on an existing LinkedIn post.

dry_run defaults to true.

Parameter  Type     Required  Default  Description
post_url   string   yes       -        LinkedIn post URL or raw URN (urn:li:activity:... or urn:li:ugcPost:...)
text       string   yes       -        Comment text
dry_run    boolean  no        true     Preview without posting

linkedin_react

Reacts to a LinkedIn post. This action is immediate — there is no dry_run.

Parameter      Type    Required  Default  Description
post_url       string  yes       -        LinkedIn post URL or raw URN
reaction_type  string  no        "like"   One of: like, celebrate, support, love, insightful, funny
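Under the hood, any MCP client invokes these tools with the protocol's standard tools/call request. A reaction request might look like this (the arguments shown are illustrative, not from a real session):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "linkedin_react",
    "arguments": {
      "post_url": "https://www.linkedin.com/feed/update/urn:li:activity:7437514186450104320/",
      "reaction_type": "celebrate"
    }
  }
}
```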

How it works

AI Assistant  ──►  mcp-linkedin  ──►  Unipile API  ──►  LinkedIn
(MCP stdio)        (posts, comments, reactions)

  • The AI assistant calls tools via MCP's JSON-RPC protocol over stdio
  • The server forwards each call to the Unipile API, which handles LinkedIn OAuth — no token management needed

Safe publishing workflow

The dry_run default exists to prevent accidental publishing. The intended flow:

  1. AI calls the tool with dry_run: true (the default)
  2. You see the preview: final text, character count, media validation, resolved mentions, warnings
  3. You confirm or ask for changes
  4. AI calls again with dry_run: false
  5. Post goes live

Because dry_run defaults to true, nothing is published unless the AI explicitly sets it to false, and the intended workflow surfaces a preview first.
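The gate can be sketched in a few lines (hypothetical; `handlePublish` and `publishFn` are illustrative names, and the real handler in src/tools/publish.js may be structured differently):

```javascript
// Minimal sketch of the dry-run gate: preview unless dry_run is
// explicitly false, and enforce the 3000-character limit either way.
function handlePublish({ text, dry_run = true }, publishFn) {
  if (text.length > 3000) {
    return { status: "error", error: "Post exceeds 3000 characters" };
  }
  if (dry_run) {
    // Nothing is sent; the caller sees exactly what would be posted.
    return {
      status: "preview",
      post_text: text,
      character_count: text.length,
      character_limit: 3000,
      ready_to_publish: true,
    };
  }
  return publishFn(text); // only reached with an explicit dry_run: false
}
```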

Media handling

  • Pass local file paths (/path/to/image.jpg) or URLs (https://example.com/img.png)
  • URLs are downloaded to /tmp/mcp-linkedin-media/ and cleaned up after publish (whether it succeeds or fails)
  • Supported formats: jpg, jpeg, png, gif, webp (images), mp4 (video)
  • Each file is validated before upload: must exist, be non-empty, and be a supported type
  • Failed files appear in the preview's media array with "valid": false and an error message
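The per-file checks above can be sketched as follows (hypothetical; the real logic lives in src/media-handler.js and may differ, and `validateMediaFile` is an illustrative name):

```javascript
import { statSync } from "node:fs";
import { extname } from "node:path";

// Extensions the README lists as supported.
const SUPPORTED = new Set([".jpg", ".jpeg", ".png", ".gif", ".webp", ".mp4"]);

// Sketch of per-file validation: supported type, exists, non-empty.
function validateMediaFile(path) {
  const ext = extname(path).toLowerCase();
  if (!SUPPORTED.has(ext)) {
    return { path, valid: false, error: `Unsupported type: ${ext || "none"}` };
  }
  let stats;
  try {
    stats = statSync(path);
  } catch {
    return { path, valid: false, error: "File not found" };
  }
  if (stats.size === 0) {
    return { path, valid: false, error: "File is empty" };
  }
  return { path, valid: true };
}
```

Each result object maps directly onto an entry in the preview's media array, so a bad file shows up as "valid": false with its error message instead of aborting the whole preview.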

Company @mentions

  • Pass company names as strings: mentions: ["Microsoft", "OpenAI"]
  • The server slugifies each name and looks it up via Unipile's LinkedIn company search
  • Resolved companies are injected as {{0}}, {{1}} placeholders in the post text — LinkedIn renders these as clickable @mentions
  • If a company name appears in the post text, it gets replaced in place; if not, the placeholder is appended
  • Unresolved names appear as warnings in the preview. The post can still be published without them.
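The placeholder injection described above can be sketched like this (hypothetical helpers; the real resolution step goes through Unipile's company search, which is omitted here):

```javascript
// Turn a company name into a lookup slug, e.g. "Coca-Cola" -> "coca-cola".
function slugify(name) {
  return name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/(^-|-$)/g, "");
}

// Inject {{0}}, {{1}}, ... placeholders for resolved companies:
// replace the name in place if it appears in the text, append otherwise.
function injectMentions(text, resolvedNames) {
  let out = text;
  resolvedNames.forEach((name, i) => {
    const placeholder = `{{${i}}}`;
    out = out.includes(name)
      ? out.replace(name, placeholder)
      : `${out} ${placeholder}`;
  });
  return out;
}
```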

Testing

npm test       # 28 unit tests, zero extra dependencies (Node.js built-in test runner)
npm run lint   # Biome linter

Project structure

mcp-linkedin/
  index.js                    Entry point (stdio transport)
  package.json
  src/
    server.js                 MCP server and tool registration
    unipile-client.js         Unipile API wrapper (posts, comments, reactions)
    media-handler.js          URL download and file validation
    tools/
      publish.js              linkedin_publish handler
      comment.js              linkedin_comment handler
      react.js                linkedin_react handler
  tests/
    unit.test.js              28 unit tests

Getting a Unipile account

  1. Sign up for a Unipile account
  2. In the dashboard, connect your LinkedIn account
  3. Copy your API key and DSN from the dashboard settings
  4. Paste them into the MCP config (see Configuration above)

Unipile has a free tier that covers basic usage.

License

MIT — see LICENSE.

Credits

Built by Timur Kulbaev. Uses the Model Context Protocol by Anthropic and the Unipile API.
