# ProductPlan MCP Server
Talk to your roadmaps using AI. Ask questions, create ideas, check OKR progress, and manage launches through natural conversation with Claude, Cursor, or other AI assistants.
## What can you do with this?
Instead of clicking through ProductPlan's interface, just ask:
"What's on our Q1 roadmap?"
"Show me all objectives that are behind schedule"
"Create a new idea for mobile app improvements"
"What launches are coming up this month?"
"List all ideas tagged 'customer-request'"
The AI fetches your real ProductPlan data and responds in seconds.
## Who is this for?
- Product Managers who want faster access to roadmap data
- Team leads who need quick status updates without context-switching
- Anyone using AI assistants (Claude, Cursor, etc.) who wants ProductPlan integrated into their workflow
No coding required. You'll copy a file and paste some settings.
## Quick start (5 minutes)
### Step 1: Get your ProductPlan API token
- Log into ProductPlan
- Go to Settings → API (or visit this link directly)
- Copy your API token
### Step 2: Download the app
Go to the Releases page and download the right file for your computer:
| Your Computer | Download This |
|---|---|
| Mac (M1, M2, M3, M4) | productplan-darwin-arm64 |
| Mac (Intel) | productplan-darwin-amd64 |
| Windows | productplan-windows-amd64.exe |
| Linux | productplan-linux-amd64 |
On Mac/Linux, open Terminal and run these two commands (replace the filename with what you downloaded):

```bash
chmod +x ~/Downloads/productplan-darwin-arm64
sudo mv ~/Downloads/productplan-darwin-arm64 /usr/local/bin/productplan
```
You'll be asked for your password. This is normal.
On Windows:

1. Create a folder for the binary (if it doesn't exist):

   ```cmd
   mkdir C:\Tools
   ```

2. Move the downloaded `.exe` to that folder and rename it:

   ```cmd
   move %USERPROFILE%\Downloads\productplan-windows-amd64.exe C:\Tools\productplan.exe
   ```

3. Use the full path `C:\Tools\productplan.exe` in your AI assistant config (shown in Step 3).

Note: You can skip adding to PATH. Just use the full file path in your configuration.
### Step 3: Connect to your AI assistant
Pick the tool you use:
<details> <summary><strong>Claude Desktop</strong> (click to expand)</summary>
1. Find your config file:
   - Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`

2. Open it in any text editor and add this (replace `your-token` with your actual API token):
**Mac/Linux:**

```json
{
  "mcpServers": {
    "productplan": {
      "command": "/usr/local/bin/productplan",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  }
}
```

**Windows:**

```json
{
  "mcpServers": {
    "productplan": {
      "command": "C:\\Tools\\productplan.exe",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  }
}
```
- Restart Claude Desktop
</details>
<details> <summary><strong>Claude Code (Terminal)</strong></summary>
Add to your config file:
- Mac/Linux: `~/.claude.json`
- Windows: `%USERPROFILE%\.claude.json`
**Mac/Linux:**

```json
{
  "mcpServers": {
    "productplan": {
      "command": "/usr/local/bin/productplan",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  }
}
```

**Windows:**

```json
{
  "mcpServers": {
    "productplan": {
      "command": "C:\\Tools\\productplan.exe",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  }
}
```
</details>
<details> <summary><strong>Cursor</strong></summary>
- Open Cursor
- Go to Settings → MCP Servers
- Add this configuration:
**Mac/Linux:**

```json
{
  "productplan": {
    "command": "/usr/local/bin/productplan",
    "env": {
      "PRODUCTPLAN_API_TOKEN": "your-token"
    }
  }
}
```

**Windows:**

```json
{
  "productplan": {
    "command": "C:\\Tools\\productplan.exe",
    "env": {
      "PRODUCTPLAN_API_TOKEN": "your-token"
    }
  }
}
```
Windows users: Use double backslashes (`\\`) in the path. This is required because backslash is an escape character in JSON.
</details>
<details> <summary><strong>VS Code + Cline</strong></summary>
- Install the Cline extension
- Open VS Code settings (JSON) and add:
**Mac/Linux:**

```json
{
  "cline.mcpServers": {
    "productplan": {
      "command": "/usr/local/bin/productplan",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  }
}
```

**Windows:**

```json
{
  "cline.mcpServers": {
    "productplan": {
      "command": "C:\\Tools\\productplan.exe",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  }
}
```
</details>
<details> <summary><strong>VS Code + Continue</strong></summary>
- Install the Continue extension
- Add to your config file:
  - Mac/Linux: `~/.continue/config.json`
  - Windows: `%USERPROFILE%\.continue\config.json`
**Mac/Linux:**

```json
{
  "mcpServers": [
    {
      "name": "productplan",
      "command": "/usr/local/bin/productplan",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  ]
}
```

**Windows:**

```json
{
  "mcpServers": [
    {
      "name": "productplan",
      "command": "C:\\Tools\\productplan.exe",
      "env": {
        "PRODUCTPLAN_API_TOKEN": "your-token"
      }
    }
  ]
}
```
</details>
<details> <summary><strong>n8n (Workflow Automation)</strong></summary>
- Set this environment variable on your n8n instance: `N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true`
- Add an MCP Client node to your workflow
- Configure:
  - Command:
    - Mac/Linux: `/usr/local/bin/productplan`
    - Windows: `C:\Tools\productplan.exe`
  - Environment Variables: `PRODUCTPLAN_API_TOKEN=your-token`
- Connect to an AI Agent node
Example workflow: Slack Trigger → AI Agent (with MCP Client) → Slack Response
</details>
### Step 4: Start asking questions
Open your AI assistant and try:
- "List my ProductPlan roadmaps"
- "What bars are on roadmap [name]?"
- "Show me our OKRs"
- "What ideas are in discovery?"
## Real-world use cases
### Morning standup prep
"Summarize what changed on our Product Roadmap in the last week"
### Stakeholder updates
"List all Q1 objectives and their progress"
### Idea triage
"Show me all ideas tagged 'enterprise' that don't have a priority set"
### Launch coordination
"What tasks are still incomplete for the January launch?"
### Quick lookups
"When is the 'Mobile App v2' bar scheduled to start?"
## What ProductPlan data can you access?
| Feature | View | Create | Edit | Delete |
|---|---|---|---|---|
| Roadmaps | Yes | - | - | - |
| Roadmap Comments | Yes | - | - | - |
| Bars (roadmap items) | Yes | Yes | Yes | Yes |
| Bar Comments | Yes | - | - | - |
| Bar Connections | Yes | Yes | - | Yes |
| Bar Links | Yes | Yes | - | Yes |
| Lanes (categories) | Yes | Yes | Yes | Yes |
| Legends (bar colors) | Yes | - | - | - |
| Milestones | Yes | Yes | Yes | Yes |
| Ideas (Discovery) | Yes | Yes | Yes | - |
| Idea Customers | Yes | - | - | - |
| Idea Tags | Yes | - | - | - |
| Opportunities | Yes | Yes | Yes | - |
| Idea Forms | Yes | - | - | - |
| Objectives (OKRs) | Yes | Yes | Yes | Yes |
| Key Results | Yes | Yes | Yes | Yes |
| Launches | Yes | Yes | Yes | Yes |
| Launch Sections | Yes | Yes | Yes | Yes |
| Launch Tasks | Yes | Yes | Yes | Yes |
| Users | Yes | - | - | - |
| Teams | Yes | - | - | - |
## How it works
```text
┌─────────────────┐ spawns ┌─────────────────┐ API calls ┌─────────────────┐
│ AI Assistant │ ───────────────── │ MCP Server │ ─────────────────▶ │ ProductPlan │
│ (Claude, Cursor)│ ◀───────────────▶ │ (this binary) │ ◀───────────────── │ API │
└─────────────────┘ stdin/stdout └─────────────────┘ JSON data └─────────────────┘
your computer your computer cloud
```
### Why does this need to run on your computer?
MCP (Model Context Protocol) works through a subprocess model. Your AI assistant doesn't connect to a remote server; it spawns the binary as a local process and communicates via stdin/stdout. This architecture means:
- The binary must exist locally because your AI assistant runs it as a child process
- Your API token stays on your machine, never passing through third-party servers
- Real-time, synchronous communication without network latency between AI and the MCP server
- Works offline for cached data (though ProductPlan API calls still need internet)
When you ask "What's on our Q1 roadmap?", here's what happens:

1. Your AI assistant recognizes it needs ProductPlan data
2. It sends a structured request to the MCP server process
3. The binary translates this into ProductPlan API calls
4. ProductPlan returns JSON data
5. The binary formats and returns results to your AI
6. Your AI presents the answer in natural language
## Agent Skills
Pre-built workflow guides that teach AI assistants how to use ProductPlan tools effectively. Each skill targets a specific persona with tailored workflows.
| Skill | Audience | Focus |
|---|---|---|
| productplan-workflows | General | Core patterns and tool reference |
| productplan-pm | Product Managers | Full toolkit: roadmaps, OKRs, ideas, launches |
| productplan-leadership | Executives | Portfolio health, cross-roadmap views |
| productplan-customer-facing | Sales & CS | Customer-ready roadmap timelines |
### Shared Principles
All skills follow these output conventions:
- No raw JSON - Format responses as readable text and tables
- Human-readable dates - Use "March 2025" or "Q1 2025", not "2025-03-15"
- Summarize large lists - Don't overwhelm with 50 items; offer to expand
Persona-specific variations:
- PM includes `bar_id` for follow-up actions
- Leadership leads with an executive summary, hides implementation details
- Customer-facing omits internal IDs, lane names, and OKRs entirely
To use a skill, copy the SKILL.md file to your Claude Code skills directory:
```bash
# Copy a skill (example: PM skill)
cp skills/productplan-pm/SKILL.md ~/.claude/skills/productplan-pm.md
```
Or reference skills directly in your prompts:
"Use the productplan-pm workflow to show me our Q1 roadmap"
## Troubleshooting
"Command not found" or "spawn ENOENT"
Your AI assistant can't find the binary. This means:

- Mac/Linux: The file isn't at `/usr/local/bin/productplan`, or you forgot to run `chmod +x`
- Windows: The path in your config doesn't match where you saved the `.exe`

Fix: Verify the binary exists at the path in your config. Run `ls -la /usr/local/bin/productplan` (Mac/Linux) or check that `C:\Tools\productplan.exe` exists (Windows).
### Windows path issues
Common mistakes on Windows:
| Wrong | Correct |
|---|---|
| `/usr/local/bin/productplan` | `C:\\Tools\\productplan.exe` |
| `C:\Tools\productplan.exe` (single backslash in JSON) | `C:\\Tools\\productplan.exe` |
| `productplan` (no path) | `C:\\Tools\\productplan.exe` |
| Missing `.exe` extension | Include `.exe` in the path |
Windows uses backslashes (`\`) for paths, but JSON treats backslash as an escape character. You must double them (`\\`) in your config file.
"Invalid API token"
Double-check your token at ProductPlan Settings → API. Tokens can expire or be regenerated. Make sure you copied the full token without extra spaces.
"No roadmaps found"
Your API token only accesses data you have permission to see in ProductPlan. Check that your account has access to the roadmaps you're looking for.
### AI assistant doesn't see ProductPlan tools
MCP servers load when your AI assistant starts, not when configs change. After editing your config file, fully quit and restart the application. On Mac, use Cmd+Q (not just closing the window).
"Permission denied" on Mac/Linux
The binary needs execute permission. Run:

```bash
chmod +x /usr/local/bin/productplan
```
## Command line (optional)
You can also use this tool directly in Terminal without an AI assistant:
```bash
# First, set your token
export PRODUCTPLAN_API_TOKEN="your-token"

# Then run commands
productplan status          # Check connection
productplan roadmaps        # List all roadmaps
productplan bars 12345      # List bars in roadmap #12345
productplan objectives      # List all OKRs
productplan ideas           # List all ideas
productplan opportunities   # List all opportunities
productplan launches        # List all launches
```
## Background info
### What is MCP?
Model Context Protocol (MCP) is an open standard that lets AI assistants connect to external tools. Anthropic created it; other AI providers are adopting it. This server implements MCP so your AI assistant can read and write ProductPlan data.
### What is ProductPlan?
ProductPlan is roadmap software used by 4,000+ product teams. It handles roadmaps, OKRs, idea discovery, and launch coordination.
## For Developers
<details> <summary>Project structure</summary>
```text
productplan-mcp-server/
├── cmd/productplan/main.go    # Entry point (~100 lines)
├── internal/
│   ├── api/                   # ProductPlan API client
│   │   ├── client.go          # HTTP client with caching, retry, rate limiting
│   │   ├── endpoints.go       # 40+ API endpoint methods
│   │   └── formatters.go      # Response enrichment for AI
│   ├── mcp/                   # MCP protocol implementation
│   │   ├── server.go          # JSON-RPC server, stdio I/O
│   │   ├── handler.go         # Tool dispatch via registry
│   │   └── types.go           # Protocol types
│   ├── tools/                 # Tool definitions and handlers
│   │   ├── registry.go        # Tool registration and dispatch
│   │   └── types.go           # Typed argument structs for handlers
│   ├── cli/                   # CLI commands (status, roadmaps, etc.)
│   │   └── cli.go
│   └── logging/               # Structured JSON logging
│       └── logger.go
├── pkg/productplan/           # Reusable utilities
│   ├── cache.go               # LRU cache with TTL
│   ├── retry.go               # Exponential backoff with jitter
│   ├── ratelimit.go           # Adaptive rate limiting
│   ├── registry.go            # ToolBuilder for schema generation
│   ├── requestid.go           # Request tracing
│   └── errors.go              # Error suggestions
└── evals/                     # LLM evaluation test suite
    ├── tool_selection.json
    ├── confusion_pairs.json
    └── argument_correctness.json
```
</details>
<details> <summary>Build from source</summary>
```bash
git clone https://github.com/olgasafonova/productplan-mcp-server.git
cd productplan-mcp-server
go build -o productplan ./cmd/productplan
```

Build for all platforms:

```bash
# macOS Apple Silicon
GOOS=darwin GOARCH=arm64 go build -o dist/productplan-darwin-arm64 ./cmd/productplan

# macOS Intel
GOOS=darwin GOARCH=amd64 go build -o dist/productplan-darwin-amd64 ./cmd/productplan

# Linux
GOOS=linux GOARCH=amd64 go build -o dist/productplan-linux-amd64 ./cmd/productplan

# Windows
GOOS=windows GOARCH=amd64 go build -o dist/productplan-windows-amd64.exe ./cmd/productplan
```
</details>
<details> <summary>Testing</summary>
Run all tests:

```bash
go test ./...
```

Run with coverage:

```bash
go test ./... -cover
```

Run benchmarks:

```bash
go test ./internal/... -bench=. -benchmem
```

Run the evaluation suite:

```bash
./scripts/run-evals.sh
```
Coverage targets:
| Package | Coverage |
|---|---|
| internal/mcp | 97% |
| internal/logging | 97% |
| internal/api | 95% |
| internal/cli | 95% |
| internal/tools | 90% |
</details>
<details> <summary>MCP tool reference</summary>
47 tools available: 35 READ tools and 12 WRITE tools (action-based).

Read tools:

- Roadmaps: `list_roadmaps`, `get_roadmap`, `get_roadmap_bars`, `get_roadmap_lanes`, `get_roadmap_milestones`, `get_roadmap_legends`, `get_roadmap_comments`, `get_roadmap_complete`
- Bars: `get_bar`, `get_bar_children`, `get_bar_comments`, `get_bar_connections`, `get_bar_links`
- OKRs: `list_objectives`, `get_objective`, `list_key_results`, `get_key_result`
- Discovery: `list_ideas`, `get_idea`, `list_all_customers`, `list_all_tags`, `list_opportunities`, `get_opportunity`, `list_idea_forms`, `get_idea_form`
- Launches: `list_launches`, `get_launch`, `get_launch_sections`, `get_launch_section`, `get_launch_tasks`, `get_launch_task`
- Admin: `check_status`, `health_check`, `list_users`, `list_teams`
Write tools:

- Roadmaps: `manage_bar`, `manage_lane`, `manage_milestone`
- Bar relationships: `manage_bar_connection`, `manage_bar_link`
- OKRs: `manage_objective`, `manage_key_result`
- Discovery: `manage_idea`, `manage_opportunity`
- Launches: `manage_launch`, `manage_launch_section`, `manage_launch_task`
Examples:

```json
{"tool": "list_roadmaps", "arguments": {}}
{"tool": "manage_bar", "arguments": {"action": "create", "roadmap_id": "123", "lane_id": "456", "name": "New feature"}}
{"tool": "manage_idea", "arguments": {"action": "create", "name": "Mobile app improvements"}}
```
</details>
<details> <summary>Architecture</summary>
The server uses a clean layered architecture:
```text
┌──────────────────────────────────────────────────────────────┐
│                       cmd/productplan                        │
│                      (entry point, DI)                       │
└──────────────────────────────────────────────────────────────┘
                              │
        ┌─────────────────────┼─────────────────────┐
        ▼                     ▼                     ▼
┌───────────────┐     ┌───────────────┐     ┌───────────────┐
│ internal/cli  │     │ internal/mcp  │     │internal/tools │
│  (CLI cmds)   │     │ (JSON-RPC IO) │     │  (handlers)   │
└───────────────┘     └───────────────┘     └───────────────┘
        │                        │
        └──────────┬─────────────┘
                   ▼
        ┌───────────────────┐
        │   internal/api    │
        │   (HTTP client)   │
        └───────────────────┘
                   │
                   ▼
        ┌───────────────────┐
        │  ProductPlan API  │
        └───────────────────┘
```
Key interfaces:
```go
// Tool handler interface (internal/mcp)
type Handler interface {
	Handle(ctx context.Context, args map[string]any) (json.RawMessage, error)
}

// Logger interface (internal/logging)
type Logger interface {
	Debug(msg string, fields ...Field)
	Info(msg string, fields ...Field)
	Warn(msg string, fields ...Field)
	Error(msg string, fields ...Field)
}
```
Logging format:

```json
{"ts":"2024-12-26T10:30:00Z","level":"info","req_id":"ab12","op":"get_roadmap_bars","dur_ms":245}
```
</details>
## Changelog
See CHANGELOG.md for release history and detailed changes.
## Like This Project?
If this server saved you time, consider giving it a ⭐ on GitHub. It helps others discover the project.
## More MCP Servers
Check out my other MCP servers:
| Server | Description |
|---|---|
| gleif-mcp-server | Access the GLEIF LEI database. Look up company identities, verify legal entities. |
| mediawiki-mcp-server | Connect AI to any MediaWiki wiki. Search, read, edit wiki content. |
| miro-mcp-server | Control Miro whiteboards with AI. Boards, diagrams, mindmaps, and more. |
| nordic-registry-mcp-server | Access Nordic business registries. Look up companies across Norway, Denmark, Finland, Sweden. |
## License
MIT License - see LICENSE