Valkey MCP Task Management Server
Enables AI agents to create, manage, and track tasks within plans using Valkey as the persistence layer. Supports plan and task management with Markdown notes, status tracking, and prioritization through multiple transport protocols (SSE, Streamable HTTP, STDIO).
A task management system that implements the Model Context Protocol (MCP) for seamless integration with agentic AI tools. This system allows AI agents to create, manage, and track tasks within plans using Valkey as the persistence layer.
Features
- Plan management (create, read, update, delete)
- Task management (create, read, update, delete)
- Task ordering and prioritization
- Status tracking for tasks
- Notes support with Markdown formatting for both plans and tasks
- MCP server for AI agent integration
- Supports STDIO, SSE and Streamable HTTP transport protocols
- Docker container support for easy deployment
Architecture
The system is built using:
- Go: For the backend implementation
- Valkey: For data persistence
- Valkey-Glide v2: Official Go client for Valkey
- Model Context Protocol: For AI agent integration
Quick Start
Docker Deployment
The MCP server is designed to run one protocol at a time for simplicity. By default, all protocols are disabled and you need to explicitly enable the one you want to use.
Prerequisites
- Create a named volume for Valkey data persistence:
docker volume create valkey-data
Running with SSE (Recommended for most use cases)
docker run -d --name valkey-mcp \
-p 8080:8080 \
-p 6379:6379 \
-v valkey-data:/data \
-e ENABLE_SSE=true \
ghcr.io/jbrinkman/valkey-ai-tasks:latest
Running with Streamable HTTP
docker run -d --name valkey-mcp \
-p 8080:8080 \
-p 6379:6379 \
-v valkey-data:/data \
-e ENABLE_STREAMABLE_HTTP=true \
ghcr.io/jbrinkman/valkey-ai-tasks:latest
Running with STDIO (For direct process communication)
docker run -i --rm --name valkey-mcp \
-v valkey-data:/data \
-e ENABLE_STDIO=true \
ghcr.io/jbrinkman/valkey-ai-tasks:latest
Using the Container Images
The container images are published to GitHub Container Registry and can be pulled using:
docker pull ghcr.io/jbrinkman/valkey-ai-tasks:latest
# or a specific version
docker pull ghcr.io/jbrinkman/valkey-ai-tasks:1.1.0
MCP API Reference
The MCP server supports two transport protocols: Server-Sent Events (SSE) and Streamable HTTP. Each protocol exposes similar endpoints but with different interaction patterns.
Server-Sent Events (SSE) Endpoints
- GET /sse/list_functions: Lists all available functions
- POST /sse/invoke/{function_name}: Invokes a function with the given parameters
Streamable HTTP Endpoints
- POST /mcp: Handles all MCP requests using JSON format
  - For function listing: {"method": "list_functions", "params": {}}
  - For function invocation: {"method": "invoke", "params": {"function": "function_name", "params": {...}}}
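The two request shapes above can be built and sent with a few lines of client code. A minimal sketch, assuming the server from the Docker example is reachable at http://localhost:8080 (the helper names here are illustrative, not part of the protocol):

```python
import json
import urllib.request

MCP_URL = "http://localhost:8080/mcp"  # assumed from the Docker example above


def list_functions_payload() -> dict:
    # Request shape for function listing on the Streamable HTTP endpoint
    return {"method": "list_functions", "params": {}}


def invoke_payload(function: str, params: dict) -> dict:
    # Request shape for function invocation
    return {"method": "invoke", "params": {"function": function, "params": params}}


def post_mcp(payload: dict) -> dict:
    # POST the JSON payload to /mcp and decode the JSON response
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Example payload for fetching a plan by ID (not sent here)
    print(json.dumps(invoke_payload("get_plan", {"id": "plan-123"})))
```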
Transport Selection
The server automatically selects the appropriate transport based on:
- URL Path: Connect to the specific endpoint for your preferred transport
- Content Type: When connecting to the root path (/), the server redirects based on content type:
  - application/json → Streamable HTTP
  - Other content types → SSE
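The routing rule above can be summarized in a short sketch. The server itself is written in Go; this is only an illustration of the selection logic described here, not the actual implementation:

```python
def select_transport(path: str, content_type: str) -> str:
    """Illustrates the transport selection described above: explicit
    endpoint paths win, and requests to the root path are routed by
    the request's Content-Type header."""
    if path == "/sse":
        return "sse"
    if path == "/mcp":
        return "streamable-http"
    if path == "/":
        # application/json -> Streamable HTTP; anything else -> SSE
        if content_type.split(";")[0].strip() == "application/json":
            return "streamable-http"
        return "sse"
    return "unknown"
```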
Health Check
GET /health: Returns server health status
Available Functions
Plan Management
- create_plan: Create a new plan
- get_plan: Get a plan by ID
- list_plans: List all plans
- list_plans_by_application: List all plans for a specific application
- update_plan: Update an existing plan
- delete_plan: Delete a plan by ID
- update_plan_notes: Update notes for a plan
- get_plan_notes: Get notes for a plan
Task Management
- create_task: Create a new task in a plan
- get_task: Get a task by ID
- list_tasks_by_plan: List all tasks in a plan
- list_tasks_by_status: List all tasks with a specific status
- update_task: Update an existing task
- delete_task: Delete a task by ID
- reorder_task: Change the order of a task within its plan
- update_task_notes: Update notes for a task
- get_task_notes: Get notes for a task
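Over the SSE transport, each of the functions above is invoked at its own /sse/invoke/{function_name} endpoint. A minimal sketch of building such a request, with the base URL assumed from the Docker example earlier:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed from the Docker example


def sse_invoke_request(function: str, params: dict) -> urllib.request.Request:
    # Build a POST to /sse/invoke/{function_name} with JSON parameters
    return urllib.request.Request(
        f"{BASE_URL}/sse/invoke/{function}",
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Example: create a task in an existing plan (plan ID is illustrative)
req = sse_invoke_request("create_task", {
    "plan_id": "plan-123",
    "title": "Set up database schema",
    "priority": "high",
    "status": "pending",
})
```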
MCP Configuration
Local MCP Configuration
To configure an AI agent to use the local MCP server, add the following to your MCP configuration file (the exact file location depends on your AI Agent):
Using SSE Transport (Default)
Note: The Docker container should already be running.
{
"mcpServers": {
"valkey-tasks": {
"serverUrl": "http://localhost:8080/sse"
}
}
}
Using Streamable HTTP Transport
Note: The Docker container should already be running.
{
"mcpServers": {
"valkey-tasks": {
"serverUrl": "http://localhost:8080/mcp"
}
}
}
Using STDIO Transport
STDIO transport allows the MCP server to communicate via standard input/output, which is useful for legacy AI tools that rely on stdin/stdout for communication.
For agentic tools that need to start and manage the MCP server process, use a configuration like this:
{
"mcpServers": {
"valkey-tasks": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-v", "valkey-data:/data",
"-e", "ENABLE_STDIO=true",
"ghcr.io/jbrinkman/valkey-ai-tasks:latest"
]
}
}
}
Docker MCP Configuration
When running in Docker, use the container name as the hostname:
Using SSE Transport (Default)
{
"mcpServers": {
"valkey-tasks": {
"serverUrl": "http://valkey-mcp-server:8080/sse"
}
}
}
Notes Functionality
The system supports rich Markdown-formatted notes for both plans and tasks. This feature is particularly useful for AI agents to maintain context between sessions and document important information.
Notes Features
- Full Markdown support including:
- Headings, lists, and tables
- Code blocks with syntax highlighting
- Links and images
- Emphasis and formatting
- Separate notes for plans and tasks
- Dedicated MCP tools for managing notes
- Notes are included in all relevant API responses
Best Practices for Notes
- Maintain Context: Use notes to document important context that should persist between sessions
- Document Decisions: Record key decisions and their rationale
- Track Progress: Use notes to track progress and next steps
- Organize Information: Use Markdown formatting to structure information clearly
- Code Examples: Include code snippets with proper syntax highlighting
Notes Security
Notes content is sanitized to prevent XSS and other security issues while preserving Markdown formatting.
MCP Resources
In addition to MCP tools, the system provides MCP resources that allow AI agents to access structured data directly. These resources provide a complete view of plans and tasks in a single request, which is more efficient than making multiple tool calls.
Available Resources
Plan Resource
The Plan Resource provides a complete view of a plan, including its tasks and notes. It supports the following URI patterns:
- Single Plan: ai-tasks://plans/{id}/full - Returns a specific plan with its tasks
- All Plans: ai-tasks://plans/full - Returns all plans with their tasks
- Application Plans: ai-tasks://applications/{app_id}/plans/full - Returns all plans for a specific application
Each resource returns a JSON object or array with the following structure:
{
"id": "plan-123",
"application_id": "my-app",
"name": "New Feature Development",
"description": "Implement new features for the application",
"status": "new",
"notes": "# Project Notes\n\nThis project aims to implement the following features...",
"created_at": "2025-06-27T14:00:21Z",
"updated_at": "2025-07-01T13:04:01Z",
"tasks": [
{
"id": "task-456",
"plan_id": "plan-123",
"title": "Task 1",
"description": "Description for task 1",
"status": "pending",
"priority": "high",
"order": 0,
"notes": "# Task Notes\n\nThis task requires the following steps...",
"created_at": "2025-06-27T14:00:50Z",
"updated_at": "2025-07-01T12:04:27Z"
},
// Additional tasks...
]
}
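Because the resource is plain JSON, consumers can post-process it directly. A sketch that pulls pending task titles out of the structure above, using an abbreviated copy of the sample (the second task, its ID, and its "completed" status are hypothetical additions for illustration):

```python
import json

# Abbreviated form of the plan resource shown above; task-789 and the
# "completed" status are illustrative assumptions, not from the server.
plan_json = """
{
  "id": "plan-123",
  "name": "New Feature Development",
  "status": "new",
  "tasks": [
    {"id": "task-456", "title": "Task 1", "status": "pending", "priority": "high", "order": 0},
    {"id": "task-789", "title": "Task 2", "status": "completed", "priority": "low", "order": 1}
  ]
}
"""

plan = json.loads(plan_json)
# Collect titles of tasks still pending, in plan order
pending = [t["title"]
           for t in sorted(plan["tasks"], key=lambda t: t["order"])
           if t["status"] == "pending"]
print(pending)  # ['Task 1']
```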
Using MCP Resources
AI agents can access these resources using the MCP resource API. Here's an example of how to read a resource:
{
"action": "read_resource",
"params": {
"uri": "ai-tasks://plans/123/full"
}
}
This will return the complete plan resource including all tasks, which is more efficient than making separate calls to get the plan and then its tasks.
Using with AI Agents
AI agents can interact with this task management system through the MCP API using either SSE or Streamable HTTP transport. Here are examples for both transport protocols:
Using SSE Transport
1. The agent calls /sse/list_functions to discover available functions
2. The agent calls /sse/invoke/create_plan with parameters:
   {
     "application_id": "my-app",
     "name": "New Feature Development",
     "description": "Implement new features for the application",
     "notes": "# Project Notes\n\nThis project aims to implement the following features:\n\n- Feature A\n- Feature B\n- Feature C"
   }
3. The agent can add tasks to the plan using either:
   - Individual task creation with /sse/invoke/create_task
   - Bulk task creation with /sse/invoke/bulk_create_tasks for multiple tasks at once:
     {
       "plan_id": "plan-123",
       "tasks_json": "[{\"title\": \"Task 1\", \"description\": \"Description for task 1\", \"priority\": \"high\", \"status\": \"pending\", \"notes\": \"# Task Notes\\n\\nThis task requires the following steps:\\n\\n1. Step one\\n2. Step two\\n3. Step three\"}, {\"title\": \"Task 2\", \"description\": \"Description for task 2\", \"priority\": \"medium\", \"status\": \"pending\"}]"
     }
4. The agent calls /sse/invoke/update_task to update task status as work progresses
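Note that bulk_create_tasks takes its tasks as a JSON string in the tasks_json field rather than a nested array. Building that string with a serializer avoids the hand-escaping shown above; a sketch:

```python
import json

# Task list as ordinary Python data (field values taken from the example above)
tasks = [
    {"title": "Task 1", "description": "Description for task 1",
     "priority": "high", "status": "pending"},
    {"title": "Task 2", "description": "Description for task 2",
     "priority": "medium", "status": "pending"},
]

# json.dumps produces the escaped string the tasks_json field expects
params = {
    "plan_id": "plan-123",           # ID returned by create_plan
    "tasks_json": json.dumps(tasks),  # serialized list, not a nested array
}
```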
Sample Agent Prompt
Here's a sample prompt that would trigger an AI agent to use the MCP task management system:
I need to organize work for my new application called "inventory-manager".
Create a plan for this application with the following plan notes:
"# Inventory Manager Project
This project aims to create a comprehensive inventory management system with the following goals:
- Track inventory levels in real-time
- Generate reports on inventory movement
- Provide alerts for low stock items"
Add the following tasks:
1. Set up database schema
2. Implement REST API endpoints
3. Create user authentication system
4. Design frontend dashboard
5. Implement inventory tracking features
For the database schema task, add these notes:
"# Database Schema Notes
The schema should include the following tables:
- Products
- Categories
- Inventory Transactions
- Users
- Roles"
Prioritize the tasks appropriately and set the first two tasks as "in_progress".
With this prompt, an AI agent with access to the Valkey MCP Task Management Server would:
- Create a new plan with application_id "inventory-manager" and the specified Markdown-formatted notes
- Add the five specified tasks to the plan
- Add detailed Markdown-formatted notes to the database schema task
- Set appropriate priorities for each task
- Update the status of the first two tasks to "in_progress"
- Return a summary of the created plan and tasks
Developer Documentation
For information on how to set up a development environment, contribute to the project, and understand the codebase structure, please refer to the Developer Guide.
For contribution guidelines, including commit message format and pull request process, see Contributing Guidelines.
License
This project is licensed under the BSD-3-Clause License.