MCP Task

@just-every/mcp-task

Async MCP server for running long-running AI tasks with real-time progress monitoring, built on @just-every/task. Start, monitor, and manage complex AI workflows across multiple models.

Quick Start

1. Create or use an environment file

Option A: Create a new .llm.env file in your home directory:

# Download example env file
curl -o ~/.llm.env https://raw.githubusercontent.com/just-every/mcp-task/main/.env.example

# Edit with your API keys
nano ~/.llm.env

Option B: Use an existing .env file (must use absolute path):

# Example: /Users/yourname/projects/myproject/.env
# Example: /home/yourname/workspace/.env

2. Install

Claude Code

# Using ~/.llm.env
claude mcp add task -s user -e ENV_FILE=$HOME/.llm.env -- npx -y @just-every/mcp-task

# Using existing .env file (absolute path required)
claude mcp add task -s user -e ENV_FILE=/absolute/path/to/your/.env -- npx -y @just-every/mcp-task

# For debugging, check if ENV_FILE is being passed correctly:
claude mcp list

Other MCP Clients

Add to your MCP configuration:

{
  "mcpServers": {
    "task": {
      "command": "npx",
      "args": ["-y", "@just-every/mcp-task"],
      "env": {
        "ENV_FILE": "/path/to/.llm.env"
      }
    }
  }
}

Available Tools

run_task

Start a long-running AI task asynchronously. Returns a task ID immediately.

Parameters:

  • task (required): The task prompt describing what to perform
  • model (optional): Model class or specific model name
  • context (optional): Background context for the task
  • output (optional): The desired output or success state

Returns: Task ID for monitoring progress

check_task_status

Check the status of a running task with real-time progress updates.

Parameters:

  • task_id (required): The task ID returned from run_task

Returns: Current status, progress summary, recent events, and tool calls

get_task_result

Get the final result of a completed task.

Parameters:

  • task_id (required): The task ID returned from run_task

Returns: The complete output from the task

cancel_task

Cancel a pending or running task.

Parameters:

  • task_id (required): The task ID to cancel

Returns: Cancellation status

list_tasks

List all tasks with their current status.

Parameters:

  • status_filter (optional): Filter by status (pending, running, completed, failed, cancelled)

Returns: Task statistics and summaries

Example Workflow

// 1. Start a task
const startResponse = await callTool('run_task', {
  "model": "standard",
  "task": "Search for the latest AI news and summarize",
  "output": "A bullet-point summary of 5 recent AI developments"
});
// Returns: { "task_id": "abc-123", "status": "pending", ... }

// 2. Check progress
const statusResponse = await callTool('check_task_status', {
  "task_id": "abc-123"
});
// Returns: { "status": "running", "progress": "Searching for AI news...", ... }

// 3. Get result when complete
const resultResponse = await callTool('get_task_result', {
  "task_id": "abc-123"
});
// Returns: The complete summary
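
The remaining tools follow the same call shape. Below is a hypothetical sketch of cancelling and listing tasks; callTool is stubbed with an in-memory task table so the example runs standalone — in practice it is whatever tool-call method your MCP client exposes, and the task IDs and statuses are illustrative.

```javascript
// Stub of the server's task table; a real client never sees this state,
// it only issues cancel_task / list_tasks calls.
const tasks = [
  { task_id: 'abc-123', status: 'running' },
  { task_id: 'def-456', status: 'completed' },
];

async function callTool(name, args) {
  if (name === 'cancel_task') {
    const task = tasks.find(t => t.task_id === args.task_id);
    // Only pending or running tasks can be cancelled
    if (task && ['pending', 'running'].includes(task.status)) {
      task.status = 'cancelled';
    }
    return { task_id: args.task_id, status: task ? task.status : 'unknown' };
  }
  if (name === 'list_tasks') {
    const matching = args.status_filter
      ? tasks.filter(t => t.status === args.status_filter)
      : tasks;
    return { total: tasks.length, tasks: matching };
  }
  throw new Error(`Unknown tool: ${name}`);
}

async function demo() {
  // 4. Cancel the running task
  const cancelled = await callTool('cancel_task', { task_id: 'abc-123' });
  // 5. List only the cancelled tasks
  const listed = await callTool('list_tasks', { status_filter: 'cancelled' });
  return { cancelled, listed };
}
```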

Supported Models

Model Classes

  • reasoning: Complex reasoning and analysis
  • vision: Image and visual processing
  • standard: General purpose tasks
  • mini: Lightweight, fast responses
  • reasoning_mini: Lightweight reasoning
  • code: Code generation and analysis
  • writing: Creative and professional writing
  • summary: Text summarization
  • vision_mini: Lightweight vision processing
  • long: Long-form content generation

Popular Models

  • claude-opus-4: Anthropic's most powerful model
  • grok-4: xAI's latest Grok model
  • gemini-2.5-pro: Google's Gemini Pro
  • o3, o3-pro: OpenAI's o3 models
  • And any other model name supported by @just-every/ensemble

Integrated Tools

Tasks have access to:

  • Web Search: Search the web for information using @just-every/search
  • Command Execution: Run shell commands via the run_command tool

API Keys

The task runner requires API keys for the AI models you want to use. Add them to your .llm.env file:

# Core AI Models
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key  
XAI_API_KEY=your-xai-key           # For Grok models
GOOGLE_API_KEY=your-google-key     # For Gemini models

# Search Providers (optional, for web_search tool)
BRAVE_API_KEY=your-brave-key
SERPER_API_KEY=your-serper-key
PERPLEXITY_API_KEY=your-perplexity-key
OPENROUTER_API_KEY=your-openrouter-key

Task Lifecycle

  1. Pending: Task created and queued
  2. Running: Task is being executed with live progress via taskStatus()
  3. Completed: Task finished successfully
  4. Failed: Task encountered an error
  5. Cancelled: Task was cancelled by user

Tasks are automatically cleaned up after 24 hours.
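
One way for a client to consume this lifecycle is a simple polling loop: check the status until it reaches a terminal state, then fetch the result. A minimal sketch — callTool is stubbed here (reporting "running" once, then "completed") so the example is self-contained; a real client would forward these calls to the server.

```javascript
// Stub standing in for an MCP client's tool-call method.
let statusChecks = 0;
async function callTool(name, args) {
  if (name === 'check_task_status') {
    statusChecks += 1;
    return { status: statusChecks < 2 ? 'running' : 'completed' };
  }
  if (name === 'get_task_result') {
    return { result: 'A bullet-point summary of 5 recent AI developments' };
  }
  throw new Error(`Unknown tool: ${name}`);
}

// Poll check_task_status until the task reaches a terminal state,
// then fetch the final output with get_task_result.
async function waitForTask(taskId, intervalMs = 5000) {
  for (;;) {
    const { status } = await callTool('check_task_status', { task_id: taskId });
    if (status === 'completed') {
      return callTool('get_task_result', { task_id: taskId });
    }
    if (status === 'failed' || status === 'cancelled') {
      throw new Error(`Task ${taskId} ended with status: ${status}`);
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```

In practice the polling interval would be seconds rather than milliseconds, and a caller may want its own deadline in addition to the server-side TASK_TIMEOUT.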

CLI Usage

The task runner can also be used directly from the command line:

# Run as MCP server (for debugging)
ENV_FILE=~/.llm.env npx @just-every/mcp-task

# Or if installed globally
npm install -g @just-every/mcp-task
ENV_FILE=~/.llm.env mcp-task serve

Configuration

Task Timeout Settings

The server includes robust safety mechanisms to prevent tasks from getting stuck. All timeouts are configurable via environment variables:

# Default production settings (optimized for long-running tasks)
TASK_TIMEOUT=18000000             # 5 hours max runtime (default)
TASK_STUCK_THRESHOLD=300000       # 5 minutes inactivity = stuck (default)
TASK_HEALTH_CHECK_INTERVAL=60000  # Check every 1 minute (default)

# For shorter tasks, you might prefer:
TASK_TIMEOUT=300000               # 5 minutes max runtime
TASK_STUCK_THRESHOLD=60000        # 1 minute inactivity
TASK_HEALTH_CHECK_INTERVAL=15000  # Check every 15 seconds

# Add to your .llm.env or pass as environment variables

Safety Features:

  • Automatic timeout: Tasks exceeding TASK_TIMEOUT are automatically failed
  • Inactivity detection: Tasks with no activity for TASK_STUCK_THRESHOLD are marked as stuck
  • Health monitoring: Regular checks every TASK_HEALTH_CHECK_INTERVAL ensure tasks are progressing
  • Error recovery: Uncaught exceptions and promise rejections are handled gracefully

Development

Setup

# Clone the repository
git clone https://github.com/just-every/mcp-task.git
cd mcp-task

# Install dependencies
npm install

# Build for production
npm run build

Development Mode

# Run in development mode with your env file
ENV_FILE=~/.llm.env npm run serve:dev

Testing

# Run tests
npm test

# Type checking
npm run typecheck

# Linting
npm run lint

Architecture

mcp-task/
├── src/
│   ├── serve.ts            # MCP server implementation
│   ├── index.ts            # CLI entry point
│   └── utils/
│       ├── task-manager.ts # Async task lifecycle management
│       └── logger.ts       # Logging utilities
├── bin/
│   └── mcp-task.js         # Executable entry
└── package.json

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

Troubleshooting

MCP Server Shows "Failed" in Claude

If you see "task ✘ failed" in Claude, check these common issues:

  1. Missing API Keys: The most common issue is missing API keys. Check that your ENV_FILE is properly configured:

    # Test if ENV_FILE is working
    ENV_FILE=/path/to/your/.llm.env npx @just-every/mcp-task
    
  2. Incorrect Installation Command: Make sure you're using -e for environment variables:

    # Correct - environment variable passed with -e flag before --
    claude mcp add task -s user -e ENV_FILE=$HOME/.llm.env -- npx -y @just-every/mcp-task
    
    # Incorrect - trying to pass as argument
    claude mcp add task -s user -- npx -y @just-every/mcp-task --env ENV_FILE=$HOME/.llm.env
    
  3. Path Issues: ENV_FILE must use absolute paths:

    # Good
    ENV_FILE=/Users/yourname/.llm.env
    ENV_FILE=$HOME/.llm.env
    
    # Bad
    ENV_FILE=.env
    ENV_FILE=~/.llm.env  # ~ not expanded in some contexts
    
  4. Verify Installation: Check your MCP configuration:

    claude mcp list
    
  5. Debug Mode: For detailed error messages, run manually:

    ENV_FILE=/path/to/.llm.env npx @just-every/mcp-task
    

Task Not Progressing

  • Check task status with check_task_status to see live progress
  • Look for error messages prefixed with "ERROR:" in the output
  • Verify API keys are properly configured

Model Not Found

  • Ensure model name is correctly spelled
  • Check that required API keys are set for the model provider
  • Popular models: claude-opus-4, grok-4, gemini-2.5-pro, o3

Task Cleanup

  • Completed tasks are automatically cleaned up after 24 hours
  • Use list_tasks to see all active and recent tasks
  • Cancel stuck tasks with cancel_task

License

MIT

Author

Created by Just Every - Building powerful AI tools for developers.
