MCP System Monitor Server

Enables real-time monitoring of system resources including CPU, GPU (NVIDIA, Apple Silicon, AMD/Intel), memory, disk, network, and processes across Windows, macOS, and Linux platforms through natural language queries.

A cross-platform MCP (Model Context Protocol) server that provides comprehensive real-time system monitoring capabilities for LLMs. Built with FastMCP for easy integration with Claude Desktop and other MCP-compatible clients.

Features

System Monitoring

Basic System Monitoring:

  • CPU Monitoring: Real-time usage, per-core statistics, frequency, temperature, detailed processor information (model, vendor, architecture, cache sizes)
  • GPU Monitoring: Multi-vendor GPU support (NVIDIA with full metrics, Apple Silicon with comprehensive support including unified memory and core count, AMD/Intel with basic info)
  • Memory Monitoring: RAM and swap usage, availability statistics
  • Disk Monitoring: Space usage, filesystem information for all mounted drives
  • Network Statistics: Interface-level traffic and error counters
  • Process Monitoring: Top processes by CPU/memory usage
  • System Information: OS details, hostname, uptime, architecture

Phase 1 Performance Monitoring:

  • I/O Performance: Detailed disk I/O metrics, read/write rates, per-disk statistics, busy time analysis
  • System Load: Load averages (1m, 5m, 15m), context switches, interrupts, running/blocked processes
  • Enhanced Memory: Detailed memory statistics including buffers, cache, active/inactive memory, page faults, swap activity
  • Enhanced Network: Network performance metrics with transfer rates, errors, drops, interface speed and MTU
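
The load averages reported by the system-load tooling can be read directly from the standard library on Unix systems; a minimal sketch (note that os.getloadavg() is Unix-only and raises OSError on Windows, where a psutil-based fallback would be needed):

```python
import os

# 1-, 5-, and 15-minute load averages; Unix-only (raises OSError on Windows).
load_1m, load_5m, load_15m = os.getloadavg()
print(f"load averages: {load_1m:.2f} {load_5m:.2f} {load_15m:.2f}")
```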

MCP Tools Available

Basic System Monitoring (9 tools):

  • get_current_datetime: Get the current local datetime in ISO format
  • get_cpu_info: Get current CPU usage and statistics
  • get_gpu_info: Get GPU information for all detected GPUs
  • get_memory_info: Get RAM and swap usage
  • get_disk_info: Get disk usage for all mounted drives
  • get_system_snapshot: Get complete system state in one call
  • monitor_cpu_usage: Monitor CPU usage over a specified duration
  • get_top_processes: Get top processes by CPU or memory usage
  • get_network_stats: Get network interface statistics

Phase 1 Performance Monitoring (6 tools):

  • get_io_performance: Get detailed I/O performance metrics and rates
  • get_system_load: Get system load averages and process statistics
  • get_enhanced_memory_info: Get detailed memory statistics with caches/buffers
  • get_enhanced_network_stats: Get enhanced network performance metrics
  • get_performance_snapshot: Get complete performance monitoring snapshot
  • monitor_io_performance: Monitor I/O performance over specified duration with trend analysis
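
Tools like get_cpu_info and get_memory_info typically map onto psutil calls underneath; the following is an illustrative sketch (the field names are assumptions, not the server's actual response schema):

```python
import psutil

def collect_cpu_info() -> dict:
    """Gather the kind of data a get_cpu_info-style tool returns."""
    freq = psutil.cpu_freq()  # may be None on some platforms
    return {
        "usage_percent": psutil.cpu_percent(interval=0.1),  # blocking 100 ms sample
        "per_core": psutil.cpu_percent(interval=0.1, percpu=True),
        "core_count": psutil.cpu_count(logical=True),
        "frequency_mhz": freq.current if freq else None,
    }

def collect_memory_info() -> dict:
    """Gather the kind of data a get_memory_info-style tool returns."""
    vm = psutil.virtual_memory()
    swap = psutil.swap_memory()
    return {
        "total_bytes": vm.total,
        "available_bytes": vm.available,
        "used_percent": vm.percent,
        "swap_used_percent": swap.percent,
    }
```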

MCP Resources

Basic System Resources (3 resources):

  • system://live/cpu: Live CPU usage data
  • system://live/memory: Live memory usage data
  • system://config: System configuration and hardware information

Phase 1 Performance Resources (3 resources):

  • system://performance/io: Live I/O performance data
  • system://performance/load: Live system load data
  • system://performance/network: Live network performance data

GPU Support Details

NVIDIA GPUs:

  • Full metrics: usage percentage, memory (used/total), temperature, power consumption
  • Supports multiple NVIDIA GPUs
  • Requires NVIDIA drivers and NVML libraries

Apple Silicon GPUs:

  • Comprehensive support for M1, M2, and M3 chips
  • GPU core count detection
  • Unified memory reporting (shares system RAM)
  • Metal API support detection
  • Temperature monitoring (when available)

AMD/Intel GPUs:

  • Basic detection and identification
  • Limited metrics depending on platform and drivers
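
The graceful-degradation behaviour described here can be sketched with pynvml (the NVML Python bindings): when the package, driver, or GPU is absent, the function simply returns an empty list. The returned field names are illustrative, not this server's actual schema:

```python
def detect_nvidia_gpus() -> list[dict]:
    """Return basic metrics for each NVIDIA GPU, or [] if NVML is unavailable."""
    try:
        import pynvml
        pynvml.nvmlInit()
    except Exception:  # no pynvml package, no driver, or no NVIDIA GPU
        return []
    gpus = []
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            name = pynvml.nvmlDeviceGetName(handle)
            gpus.append({
                "name": name.decode() if isinstance(name, bytes) else name,
                "usage_percent": util.gpu,
                "memory_used_bytes": mem.used,
                "memory_total_bytes": mem.total,
            })
    finally:
        pynvml.nvmlShutdown()
    return gpus
```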

Requirements

  • Python 3.10+
  • Windows, macOS, or Linux
  • GPU (optional): NVIDIA GPUs provide full metrics; Apple Silicon GPUs are fully supported on macOS

Installation

From GitHub

  1. Clone the repository:

    git clone https://github.com/huhabla/mcp-system-monitor.git
    cd mcp-system-monitor
    
  2. Install dependencies using uv (recommended):

    uv pip install -e .
    

    Or using pip:

    pip install -e .
    

Optional Dependencies

For Windows-specific features:

pip install mcp-system-monitor[win32]

Usage

Development Mode

Test the server with the MCP Inspector:

uv run mcp dev mcp_system_monitor_server.py

Claude Desktop Integration

Install the server in Claude Desktop:

uv run mcp install mcp_system_monitor_server.py --name "System Monitor"

Direct Execution

Run the server directly:

python mcp_system_monitor_server.py

MCP Servers Json Config

Modify the following JSON template to set the path to the MCP server in your MCP client for Windows:

{
  "mcpServers": {
    "mpc-system-monitor": {
      "command": "cmd",
      "args": [
        "/c",
        "C:/Users/Sören Gebbert/Documents/GitHub/mcp-system-monitor/start_mpc_system_monitor.bat"
      ]
    }
  }
}

Modify the following JSON template to set the path to the MCP server in your MCP client for macOS:

{
  "mcpServers": {
    "mpc-system-monitor": {
      "command": "/bin/zsh",
      "args": [
        "/Users/holistech/Documents/GitHub/mcp-system-monitor/start_mcp_system_monitor.sh"
      ]
    }
  }
}

Example Tool Usage

Once connected to Claude Desktop or another MCP client, you can use natural language to interact with the system monitor:

Basic System Monitoring:

  • "Show me the current CPU usage"
  • "What's my GPU temperature?"
  • "How many GPU cores does my Apple M1 Max have?"
  • "Show me GPU memory usage and whether it's unified memory"
  • "How much disk space is available?"
  • "Monitor CPU usage for the next 10 seconds"
  • "Show me the top 5 processes by memory usage"
  • "Get a complete system snapshot"

Phase 1 Performance Monitoring:

  • "Show me detailed I/O performance metrics"
  • "What's the current system load average?"
  • "Monitor I/O performance for the next 30 seconds"
  • "Show me enhanced memory statistics with cache information"
  • "Get detailed network performance metrics"
  • "Give me a complete performance snapshot"

Architecture

The server uses a modular collector-based architecture:

  • BaseCollector: Abstract base class providing caching and async data collection
  • Specialized Collectors: CPU, GPU, Memory, Disk, Network, Process, and System collectors
  • Phase 1 Performance Collectors: IOPerformance, SystemLoad, EnhancedMemory, and EnhancedNetwork collectors
  • Pydantic Models: Type-safe data models for all system information
  • FastMCP Integration: Simple decorators for exposing tools and resources

Caching Strategy

All collectors implement intelligent caching to:

  • Reduce system overhead from frequent polling
  • Provide consistent data within time windows
  • Allow configurable cache expiration

Testing

Comprehensive Test Suite

The project includes a comprehensive test suite with 100% coverage of all MCP tools, resources, and collectors:

Test Organization:

  • test_mcp_system_monitor_server.py - Original basic collector tests
  • test_mcp_system_monitor_server_comprehensive.py - Comprehensive MCP tools/resources tests
  • test_mcp_server_integration.py - Integration tests for MCP server protocol compliance
  • test_architecture_agnostic.py - Cross-platform tests focusing on data contracts
  • conftest.py - Test configuration, fixtures, and mocking utilities

Running Tests

Run all tests:

pytest

Run tests by category:

pytest -m unit              # Fast unit tests only
pytest -m integration       # Integration tests only
pytest -m agnostic          # Architecture/OS agnostic tests
pytest -m "not slow"        # Exclude slow tests
pytest -m "unit and not slow"  # Fast unit tests for CI

Run specific test suites:

pytest tests/test_mcp_system_monitor_server_comprehensive.py  # All MCP endpoints
pytest tests/test_mcp_server_integration.py                  # Integration tests
pytest tests/test_architecture_agnostic.py                   # Cross-platform tests

Run with coverage:

pytest --cov=mcp_system_monitor_server --cov-report=html

Test Coverage

Complete Coverage:

  • 15 MCP Tools (9 basic + 6 Phase 1 performance)
  • 6 MCP Resources (3 basic + 3 Phase 1 performance)
  • 11 Collectors (7 basic + 4 Phase 1 performance)
  • Cross-platform compatibility testing
  • Performance benchmarking and stress testing
  • Error handling and edge case validation

Performance Benchmarks:

  • System snapshot collection: < 5 seconds
  • Individual tool calls: < 1 second each
  • Concurrent operations: 20 parallel calls < 10 seconds

Platform Support

Feature                    Windows   macOS   Linux
CPU Monitoring             ✅         ✅       ✅
GPU Monitoring (NVIDIA)    ✅         ❌       ✅
GPU Monitoring (AMD)       ⚠️         ❌       ⚠️
GPU Monitoring (Intel)     ⚠️         ❌       ⚠️
GPU Monitoring (Apple)     ❌         ✅       ❌
Memory Monitoring          ✅         ✅       ✅
Disk Monitoring            ✅         ✅       ✅
Network Statistics         ✅         ✅       ✅
Process Monitoring         ✅         ✅       ✅
CPU Temperature            ⚠️         ⚠️       ✅

⚠️ = Limited support, depends on hardware/drivers

Troubleshooting

GPU Monitoring Not Working

NVIDIA GPUs:

  • Ensure NVIDIA drivers are installed
  • Check if nvidia-smi command works
  • The server will gracefully handle missing GPU libraries

Apple Silicon GPUs:

  • Supported on macOS with M1, M2, and M3 chips
  • Provides comprehensive information including unified memory and GPU core count
  • Uses system_profiler command (available by default on macOS)

Permission Errors

  • Some system information may require elevated privileges
  • The server handles permission errors gracefully and skips inaccessible resources

High CPU Usage

  • Adjust the monitoring frequency by modifying collector update intervals
  • Use cached data methods to reduce system calls
  • Default cache expiration is 2 seconds for most collectors
  • Consider increasing max_age parameter in get_cached_data() calls for less frequent updates

Performance Considerations

  • The server uses intelligent caching to minimize system calls
  • Each collector maintains its own cache with configurable expiration
  • Continuous monitoring tools (like monitor_cpu_usage) bypass caching for real-time data
  • For high-frequency polling, consider using the resource endpoints which leverage caching

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
