Backlog Manager MCP Server

A simple task tracking and backlog management MCP server for AI assistants (hack project)

<p align="center"> <img src="https://img.shields.io/badge/Status-Beta-yellow" alt="Status: Beta"> <img src="https://img.shields.io/badge/Python-3.12%2B-green" alt="Python: 3.12+"> </p>

Table of Contents

  • Overview
  • Features
  • Prerequisites
  • Installation
  • Configuration
  • Running the Server
  • MCP Tools
  • Integration with MCP Clients
  • Example

Overview

Backlog Manager is an MCP (Model Context Protocol) server for issue and task management with a file-based approach. It provides tools for AI agents and other clients to create issues, add tasks to them, and track task status. Issues represent high-level feature requests or bugs, while tasks represent the specific work items needed to resolve them.

Built on Anthropic's MCP, it supports both SSE and stdio transports for flexible integration with AI assistants like Claude and other MCP-compatible clients.

Features

  • Issue Management: Create, list, select, and track issues with descriptions
  • Task Tracking: Add tasks to issues with titles, descriptions, and status tracking
  • Status Workflow: Track task progress through New, InWork, and Done states
  • File-Based Storage: Portable JSON storage format for easy backup and version control (see the sketch after this list)
  • Flexible Transport: Support for both SSE (HTTP) and stdio communication
  • Docker Support: Run in containers for easy deployment and isolation
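
Since storage is a single JSON file, backlogs are easy to inspect, back up, and diff. The exact schema isn't documented in this README; the sketch below is an assumption of what a TASKS_FILE might contain (field names are illustrative, not the server's actual format):

{
  "issues": {
    "user-auth": {
      "description": "Add login and session handling",
      "status": "InWork",
      "tasks": [
        {"id": "1", "title": "Create login form", "status": "Done"},
        {"id": "2", "title": "Add session middleware", "status": "New"}
      ]
    }
  }
}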

Prerequisites

  • Python: 3.12 or higher
  • Package Manager: uv (recommended) or pip
  • Docker: (Optional) For containerized deployment
  • MCP Client: Claude Code, Windsurf, or any other MCP-compatible client

Installation

Using uv (Recommended)

# Clone the repository
git clone https://github.com/username/backlog-manager-mcp.git
cd backlog-manager-mcp

# Install dependencies
uv pip install -e .

# Verify installation
uv run backlog-manager  # This should start the server

Using Docker

# Build the Docker image
docker build -t backlog/manager --build-arg PORT=8050 .

# Run the container
docker run -p 8050:8050 backlog/manager

# Verify container is running
docker ps | grep backlog/manager

Configuration

Configure the server behavior using environment variables in a .env file:

# Create environment file from example
cp .env.example .env

Example .env file content:

# Transport mode: 'sse' or 'stdio'
TRANSPORT=sse

# Server configuration (for SSE transport)
HOST=0.0.0.0
PORT=8050

# Data storage
TASKS_FILE=tasks.json

| Variable   | Description                                | Default    | Required |
|------------|--------------------------------------------|------------|----------|
| TRANSPORT  | Transport protocol (sse or stdio)          | sse        | No       |
| HOST       | Host to bind to when using SSE transport   | 0.0.0.0    | No       |
| PORT       | Port to listen on when using SSE transport | 8050       | No       |
| TASKS_FILE | Path to the tasks storage file             | tasks.json | No       |
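
For reference, here is a minimal sketch of how these variables are typically read at startup, assuming python-dotenv is available (illustrative only, not the server's actual code):

import os

from dotenv import load_dotenv

# Load variables from .env into the process environment
load_dotenv()

TRANSPORT = os.getenv("TRANSPORT", "sse")           # 'sse' or 'stdio'
HOST = os.getenv("HOST", "0.0.0.0")                 # SSE bind address
PORT = int(os.getenv("PORT", "8050"))               # SSE port
TASKS_FILE = os.getenv("TASKS_FILE", "tasks.json")  # storage path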

Running the Server

Start the Server (SSE Mode)

# Using the CLI command
uv run backlog-manager

# Or directly with Python
uv run src/backlog_manager/main.py

You should see output similar to:

INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8050 (Press CTRL+C to quit)
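
To sanity-check the SSE endpoint, you can attach to it with curl. The -N flag disables buffering so the event stream prints as it arrives; the exact initial event depends on the MCP SSE implementation, but the connection should stay open:

# Attach to the SSE stream (Ctrl+C to disconnect)
curl -N http://localhost:8050/sse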

Note: The server does not support the --help flag since it's designed as an MCP server, not a traditional CLI application.

Using stdio Mode

When using stdio mode, you don't need to start the server separately: the MCP client starts it automatically when configured properly (see Integration with MCP Clients).

MCP Tools

The Backlog Manager exposes the following tools via MCP:

Issue Management

| Tool                | Description               | Parameters                                                              |
|---------------------|---------------------------|-------------------------------------------------------------------------|
| create_issue        | Create a new issue        | name (string), description (string, optional), status (string, optional) |
| list_issues         | Show all available issues | None                                                                    |
| select_issue        | Set the active issue      | name (string)                                                           |
| initialize_issue    | Create or reset an issue  | name (string), description (string, optional), status (string, optional) |
| update_issue_status | Update issue status       | name (string), status (string)                                          |

Task Management

| Tool               | Description                   | Parameters                                    |
|--------------------|-------------------------------|-----------------------------------------------|
| add_task           | Add task to the active issue  | title (string), description (string, optional) |
| list_tasks         | List tasks in the active issue | status (string, optional)                     |
| update_task_status | Update task status            | task_id (string), status (string)             |

Status Values

Tasks and issues can have one of the following statuses:

  • New (default for new tasks/issues)
  • InWork (in progress)
  • Done (completed)
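
As an illustration, an MCP client can drive these tools programmatically. Below is a minimal sketch using the official mcp Python SDK over SSE; tool names and arguments follow the tables above, while the issue name and task_id value are invented for the example:

import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to the running server's SSE endpoint
    async with sse_client("http://localhost:8050/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Create an issue and make it the active one
            await session.call_tool("create_issue", {"name": "user-auth"})
            await session.call_tool("select_issue", {"name": "user-auth"})
            # Add a task, then mark it in progress
            await session.call_tool("add_task", {"title": "Create login form"})
            # task_id "1" is assumed; real IDs come from list_tasks
            await session.call_tool("update_task_status", {"task_id": "1", "status": "InWork"})

asyncio.run(main())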

Integration with MCP Clients

SSE Configuration

Once you have the server running with SSE transport, connect to it using this configuration:

{
  "mcpServers": {
    "backlog-manager": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}

Windsurf Configuration:

{
  "mcpServers": {
    "backlog-manager": {
      "transport": "sse",
      "serverUrl": "http://localhost:8050/sse"
    }
  }
}

n8n Configuration:

Use host.docker.internal instead of localhost to reach the host machine from the n8n container:

http://host.docker.internal:8050/sse

Python with Stdio Configuration

{
  "mcpServers": {
    "backlog-manager": {
      "command": "python",
      "args": ["path/to/backlog-manager/src/backlog_manager/main.py"],
      "env": {
        "TRANSPORT": "stdio",
        "TASKS_FILE": "tasks.json"
      }
    }
  }
}

Docker with Stdio Configuration

{
  "mcpServers": {
    "backlog-manager": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "-e", "TRANSPORT=stdio", "backlog/manager"],
      "env": {
        "TRANSPORT": "stdio"
      }
    }
  }
}

Example

Backlog Manager is designed to work seamlessly with AI assistants to help you organize your project work. The most powerful use case is having the AI read specifications and automatically create a structured backlog.

Simply ask your AI assistant:

Read the spec and create a backlog for features not completed.

The AI assistant will:

  1. Read and analyze the specification document
  2. Identify key features and components
  3. Create issues for main functional areas
  4. Break down each issue into specific tasks
  5. Organize everything in a structured backlog
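
Under the hood, that request becomes a sequence of tool calls. A rough sketch, reusing the client session from the MCP Tools section (issue and task names are invented; in practice the assistant derives them from the spec):

async def build_backlog(session):
    # One issue per functional area found in the spec
    await session.call_tool("create_issue", {"name": "csv-export"})
    await session.call_tool("select_issue", {"name": "csv-export"})
    # Break the issue down into concrete tasks
    await session.call_tool("add_task", {"title": "Stream rows to a CSV writer"})
    await session.call_tool("add_task", {"title": "Expose a download endpoint"})
    # Review the resulting backlog
    return await session.call_tool("list_tasks", {})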
