MCP Background Job Server

An MCP (Model Context Protocol) server that enables coding agents to execute long-running shell commands asynchronously with full process management capabilities.

Overview

The MCP Background Job Server provides a robust solution for running shell commands in the background, allowing agents to start processes, monitor their status, interact with them, and manage their lifecycle. This is particularly useful for development workflows involving build processes, test suites, servers, or any long-running operations.

Features

  • Asynchronous Process Execution: Execute shell commands as background jobs with unique job IDs
  • Process Lifecycle Management: Start, monitor, interact with, and terminate background processes
  • Real-time Output Monitoring: Capture and retrieve stdout/stderr with buffering and tailing capabilities
  • Interactive Process Support: Send input to running processes via stdin
  • Resource Management: Configurable job limits and automatic cleanup of completed processes
  • MCP Protocol Integration: Full integration with Model Context Protocol for agent interactions

Installation

Quick Install (Recommended)

Install directly from PyPI using uvx:

# Install and run the MCP server
uvx mcp-background-job

Claude Code Integration

Add the server to your Claude Code configuration:

  1. Option A: Using Claude Code Desktop

    • Open Claude Code settings/preferences
    • Navigate to MCP Servers section
    • Add a new server:
      • Name: background-job
      • Command: uvx
      • Args: ["mcp-background-job"]
  2. Option B: Configuration file. Add the following to your Claude Code configuration file:

    {
      "mcpServers": {
        "background-job": {
          "command": "uvx",
          "args": ["mcp-background-job"]
        }
      }
    }
    
  3. Restart Claude Code to load the new MCP server.

Development Setup

For local development or contributing:

Prerequisites

  • Python 3.12 or higher
  • uv package manager

Setup Steps

  1. Clone and navigate to the project directory:

    git clone https://github.com/dylan-gluck/mcp-background-job.git
    cd mcp-background-job
    
  2. Install dependencies:

    uv sync
    
  3. Install in development mode:

    uv add -e .
    

Quick Start

Using with Claude Code

Once configured, ask Claude to help you with background tasks:

You: "Start my development server in the background and monitor it"

Claude: I'll start your development server using the background job server.

[Uses the execute tool to run your dev server]
[Shows job ID and monitors startup progress]
[Provides status updates]

Claude: "Your development server is now running on http://localhost:3000. 
The job ID is abc123-def456 if you need to control it later."

Manual Server Usage

For development or direct usage:

# Run with stdio transport (most common)
uvx mcp-background-job

# Or for development:
uv run python -m mcp_background_job

Basic Usage Example
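
The example below is conceptual: the helper function names stand in for the MCP tool calls (execute, status, tail, interact, kill) rather than a direct Python API.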

# 1. Execute a long-running command
execute_result = await execute_command("npm run dev")
job_id = execute_result.job_id

# 2. Check job status
status = await get_job_status(job_id)
print(f"Job status: {status.status}")

# 3. Get recent output
output = await tail_job_output(job_id, lines=20)
print("Recent output:", output.stdout)

# 4. Interact with the process
interaction = await interact_with_job(job_id, "some input\n")
print("Process response:", interaction.stdout)

# 5. Kill the job when done
result = await kill_job(job_id)
print(f"Kill result: {result.status}")

MCP Tools Reference

The server exposes 7 MCP tools for process management:

Read-only Tools

Tool      Description                Parameters               Returns
list      List all background jobs   None                     {jobs: [JobSummary]}
status    Get job status             job_id: str              {status: JobStatus}
output    Get complete job output    job_id: str              {stdout: str, stderr: str}
tail      Get recent output lines    job_id: str, lines: int  {stdout: str, stderr: str}

Interactive Tools

Tool      Description                Parameters               Returns
execute   Start new background job   command: str             {job_id: str}
interact  Send input to job stdin    job_id: str, input: str  {stdout: str, stderr: str}
kill      Terminate running job      job_id: str              {status: str}
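
These tools are invoked through an MCP client rather than imported as Python functions. As a minimal sketch, the snippet below calls them from the official mcp Python SDK over stdio; the client setup follows the SDK's documented stdio client, while unpacking each result as JSON text is an assumption about this server's response format.

# Minimal sketch: calling the background-job tools via the official `mcp` Python SDK.
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="uvx", args=["mcp-background-job"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Start a long-running command and capture its job ID
            # (parsing the result payload as JSON text is assumed, not guaranteed).
            result = await session.call_tool("execute", {"command": "npm run dev"})
            job_id = json.loads(result.content[0].text)["job_id"]

            # Poll status and read the last few lines of output.
            status = await session.call_tool("status", {"job_id": job_id})
            tail = await session.call_tool("tail", {"job_id": job_id, "lines": 20})
            print(status.content[0].text, tail.content[0].text)

            # Terminate the job when finished with it.
            await session.call_tool("kill", {"job_id": job_id})


if __name__ == "__main__":
    asyncio.run(main())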

Job Status Values

  • running - Process is currently executing
  • completed - Process finished successfully
  • failed - Process terminated with error
  • killed - Process was terminated by user

Configuration

Environment Variables

Configure the server behavior using these environment variables:

# Maximum concurrent jobs (default: 10)
export MCP_BG_MAX_JOBS=20

# Maximum output buffer per job (default: 10MB)
export MCP_BG_MAX_OUTPUT_SIZE=20MB
# or in bytes:
export MCP_BG_MAX_OUTPUT_SIZE=20971520

# Default job timeout in seconds (default: no timeout)
export MCP_BG_JOB_TIMEOUT=3600

# Cleanup interval for completed jobs in seconds (default: 300)
export MCP_BG_CLEANUP_INTERVAL=600

# Working directory for jobs (default: current directory)
export MCP_BG_WORKING_DIR=/path/to/project

# Allowed command patterns (optional security restriction)
export MCP_BG_ALLOWED_COMMANDS="^npm ,^python ,^echo ,^ls"

Claude Code Configuration with Environment Variables

{
  "mcpServers": {
    "background-job": {
      "command": "uvx",
      "args": ["mcp-background-job"],
      "env": {
        "MCP_BG_MAX_JOBS": "20",
        "MCP_BG_MAX_OUTPUT_SIZE": "20MB"
      }
    }
  }
}

Programmatic Configuration

from mcp_background_job.config import BackgroundJobConfig

config = BackgroundJobConfig(
    max_concurrent_jobs=20,
    max_output_size_bytes=20 * 1024 * 1024,  # 20MB
    default_job_timeout=7200,  # 2 hours
    cleanup_interval_seconds=600  # 10 minutes
)

Architecture

The server is built with a modular architecture:

  • JobManager: Central service for job lifecycle management
  • ProcessWrapper: Abstraction layer for subprocess handling with I/O buffering (sketched below)
  • FastMCP Server: MCP protocol implementation with tool definitions
  • Pydantic Models: Type-safe data validation and serialization
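
To make the split of responsibilities more concrete, the sketch below shows roughly how a ProcessWrapper-style component can buffer subprocess output with asyncio. The class and method names are illustrative only and do not mirror the actual process.py implementation.

# Illustrative only: a minimal asyncio subprocess wrapper with bounded output
# buffering, in the spirit of the ProcessWrapper component (names are hypothetical).
import asyncio
from collections import deque


class BufferedProcess:
    def __init__(self, command: str, max_lines: int = 1000) -> None:
        self.command = command
        self.stdout_lines: deque[str] = deque(maxlen=max_lines)
        self.proc: asyncio.subprocess.Process | None = None

    async def start(self) -> None:
        # Spawn the subprocess with piped streams and begin buffering stdout.
        self.proc = await asyncio.create_subprocess_shell(
            self.command,
            stdin=asyncio.subprocess.PIPE,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        asyncio.create_task(self._pump(self.proc.stdout, self.stdout_lines))

    async def _pump(self, stream: asyncio.StreamReader, buffer: deque[str]) -> None:
        # StreamReader is async-iterable line by line until EOF.
        async for line in stream:
            buffer.append(line.decode(errors="replace"))

    def tail(self, lines: int = 20) -> str:
        # Return the most recent buffered stdout lines.
        return "".join(list(self.stdout_lines)[-lines:])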

Key Components

src/mcp_background_job/
├── server.py          # FastMCP server and tool definitions
├── service.py         # JobManager service implementation  
├── process.py         # ProcessWrapper for subprocess management
├── models.py          # Pydantic data models
├── config.py          # Configuration management
└── logging_config.py  # Logging setup

Development

Running Tests

# Run all tests
uv run pytest tests/

# Run unit tests only
uv run pytest tests/unit/ -v

# Run integration tests only  
uv run pytest tests/integration/ -v

Code Formatting

# Format code with ruff
uv run ruff format

# Run type checking
uv run mypy src/

Development Workflow

  1. Make your changes
  2. Run tests: uv run pytest tests/
  3. Format code: uv run ruff format
  4. Commit changes

Examples

Development Server Workflow

# Start a development server
job_id=$(echo '{"command": "npm run dev"}' | mcp-tool execute)

# Monitor the startup
mcp-tool tail --job_id "$job_id" --lines 10

# Check if server is ready
mcp-tool status --job_id "$job_id"

# Stop the server
mcp-tool kill --job_id "$job_id"

Long-running Build Process

# Start a build process
job_id=$(echo '{"command": "docker build -t myapp ."}' | mcp-tool execute)

# Monitor build progress
while true; do
  status=$(mcp-tool status --job_id "$job_id")
  if [[ "$status" != "running" ]]; then break; fi
  mcp-tool tail --job_id "$job_id" --lines 5
  sleep 10
done

# Get final build output
mcp-tool output --job_id "$job_id"
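
The same monitoring loop can also be written programmatically against an MCP ClientSession (set up as in the sketch under the tools reference). The wait_for_job helper below is hypothetical, and parsing the status result as JSON text is an assumption about the response format.

# Hypothetical helper: poll the status tool until the job leaves the "running"
# state, printing a short tail of output on each pass.
import asyncio
import json

from mcp import ClientSession


async def wait_for_job(session: ClientSession, job_id: str, poll_seconds: float = 10.0) -> str:
    while True:
        result = await session.call_tool("status", {"job_id": job_id})
        status = json.loads(result.content[0].text)["status"]  # payload shape assumed
        if status != "running":
            return status
        tail = await session.call_tool("tail", {"job_id": job_id, "lines": 5})
        print(tail.content[0].text)
        await asyncio.sleep(poll_seconds)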

Interactive Process Example

# Start Python REPL
job_id=$(echo '{"command": "python -i"}' | mcp-tool execute)

# Send Python code
mcp-tool interact --job_id "$job_id" --input "print('Hello, World!')\n"

# Send more commands
mcp-tool interact --job_id "$job_id" --input "import sys; print(sys.version)\n"

# Exit REPL
mcp-tool interact --job_id "$job_id" --input "exit()\n"

Security Considerations

  • Process Isolation: Each job runs as a separate subprocess
  • Resource Limits: Configurable limits on concurrent jobs and memory usage
  • Input Validation: All parameters are validated using Pydantic models
  • Command Restrictions: Consider implementing command allowlists in production (see the sketch after this list)
  • Output Sanitization: Be aware that process output may contain sensitive information
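
As a reference for the command-restriction point above, the snippet below sketches how an allowlist in the MCP_BG_ALLOWED_COMMANDS format could be enforced. It is illustrative only; the server's own validation logic may differ.

# Illustrative allowlist check: a command is permitted if it matches any of the
# comma-separated regex prefixes in MCP_BG_ALLOWED_COMMANDS (empty = no restriction).
import os
import re


def command_allowed(command: str) -> bool:
    raw = os.environ.get("MCP_BG_ALLOWED_COMMANDS", "")
    patterns = [p for p in raw.split(",") if p.strip()]
    if not patterns:
        return True  # no allowlist configured
    return any(re.match(pattern, command) for pattern in patterns)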

Transport Support

The server supports multiple MCP transports:

  • stdio: Default transport for local development and agent integration
  • HTTP: For remote access (requires additional setup)

For stdio transport, ensure logging goes to stderr only to avoid protocol conflicts.
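
If you embed or extend the server, the standard-library configuration below keeps all logging on stderr so stdout carries only MCP protocol traffic. This is a generic sketch, not the package's own logging_config.py.

# Route all application logs to stderr; stdout must remain reserved for MCP messages
# when using the stdio transport.
import logging
import sys

logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)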

Troubleshooting

Common Issues

Import Errors: Ensure the package is installed in development mode:

uv add -e .

Tests Not Running: Install the package first, then run tests:

uv sync
uv add -e .
uv run pytest tests/

Permission Errors: Ensure proper permissions for the commands you're trying to execute.

Memory Issues: Adjust MCP_BG_MAX_OUTPUT_SIZE if dealing with processes that generate large amounts of output.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with tests
  4. Run the test suite and formatting
  5. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Changelog

v0.1.1

  • Published to PyPI for easy installation via uvx
  • Added console script entry point (mcp-background-job)
  • Updated documentation with installation and usage instructions
  • Fixed linting issues and improved code quality

v0.1.0

  • Initial implementation with full MCP tool support
  • Process lifecycle management
  • Configurable resource limits
  • Comprehensive test suite

Built with ❤️ using FastMCP and Python 3.12+
