Logstash MCP Server
A Model Context Protocol server that provides comprehensive tools for monitoring and identifying performance bottlenecks in Logstash instances through an interactive web UI and JSON-RPC interface.
IMPORTANT
This repository is vibe coded and AI generated, and has not been properly tested. Use it at your own risk.
Web UI
The project includes a web-based user interface for easy interaction with your Logstash instance.
Running the Web UI
- Start the web interface:
python3 web_ui.py
- Open your browser and navigate to:
http://localhost:5001

Web UI Features
The web interface provides:
- Interactive Dashboard: Visual interface to access all Logstash monitoring tools
- Real-time Monitoring: Check connectivity, node stats, and pipeline performance
- Health Analysis: Comprehensive health checks with visual feedback
- Pipeline Management: View statistics for individual or all pipelines
- Performance Debugging: Hot threads analysis and JVM statistics
- Plugin Management: Browse installed Logstash plugins
Web UI Configuration
The web UI uses the same configuration as the MCP server:
- Default Logstash URL: http://localhost:9600
- Override with the LOGSTASH_API_BASE environment variable
- Web interface runs on: http://localhost:5001
Example with custom Logstash URL:
export LOGSTASH_API_BASE="http://your-logstash-host:9600"
python3 web_ui.py
Features
Monitoring Tools
- Node Information: Get Logstash version, build info, and settings
- Node Statistics: JVM, process, and pipeline metrics
- Pipeline Statistics: Monitor individual or all pipeline performance
- Hot Threads: Debug performance issues with thread analysis
- Health Check: Comprehensive health assessment with recommendations
- Connectivity Check: Verify connection to Logstash with detailed diagnostics
Management Tools
- Pipeline Reload: Reload specific pipeline configurations
- Plugin Listing: View all installed Logstash plugins
- JVM Statistics: Detailed memory and garbage collection metrics
- Grok Patterns: List available Grok patterns for log parsing
Installation
- Install dependencies:
pip install -r requirements.txt
- Set up environment variables (optional):
export LOGSTASH_API_BASE="http://your-logstash-host:9600"
Configuration
The server uses the following default configuration:
- Logstash Host: localhost
- Logstash Port: 9600
- API Base URL: http://localhost:9600
You can override the API base URL using the LOGSTASH_API_BASE environment variable.
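This override can be sketched as a small helper, assuming the server reads the variable once at startup (the function name here is illustrative, not part of the actual codebase):

```python
import os

def resolve_api_base(env=os.environ) -> str:
    """Return the Logstash API base URL, honoring LOGSTASH_API_BASE.

    Falls back to the default monitoring API address on port 9600.
    """
    return env.get("LOGSTASH_API_BASE", "http://localhost:9600")
```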
Available Tools (12 Total)
logstash_check_connectivity
Check connectivity to the Logstash instance with detailed connection status, response times, and error handling.
- Returns: Connection status, URL, version, host, response time, error details, and troubleshooting suggestions
logstash_node_info
Get Logstash node information including version, build info, and settings.
logstash_node_stats
Get comprehensive node statistics including JVM, process, and pipeline metrics.
- Parameters:
human(boolean, default: true)
logstash_pipelines_stats
Get statistics for all Logstash pipelines.
- Parameters:
human(boolean, default: true)
logstash_pipeline_stats
Get statistics for a specific pipeline.
- Parameters:
id(string, required),human(boolean, default: true)
logstash_hot_threads
Get hot threads information for debugging performance issues.
- Parameters:
threads(integer, default: 3),human(boolean, default: true)
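Under the hood this tool maps onto Logstash's `/_node/hot_threads` monitoring endpoint, which accepts `threads` and `human` as query parameters. A minimal sketch of building that request URL (the helper name is illustrative):

```python
from urllib.parse import urlencode

def hot_threads_url(threads: int = 3, human: bool = True,
                    base: str = "http://localhost:9600") -> str:
    """Build the /_node/hot_threads request URL with the tool's defaults."""
    query = urlencode({"threads": threads, "human": str(human).lower()})
    return f"{base}/_node/hot_threads?{query}"
```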
logstash_plugins
List all installed Logstash plugins.
check_backpressure
Check queue backpressure metrics to monitor pipeline performance and congestion.
- Parameters:
human(boolean, default: true)
logstash_health_check
Perform comprehensive health check with analysis and recommendations.
logstash_jvm_stats
Get detailed JVM statistics for memory analysis.
- Parameters:
human(boolean, default: true)
logstash_health_report
Get detailed health report from Logstash.
flow_metrics
Get detailed flow metrics including throughput, backpressure, and worker concurrency.
- Parameters:
human(boolean, default: true)
Health Check Analysis
The health check tool analyzes:
- Connectivity Verification: Tests connection to Logstash before other checks
- JVM Memory Usage: Warns if heap usage exceeds 80%
- Pipeline Performance: Detects pipelines that filter events but produce no output
- Queue Usage: Identifies large queue sizes that may impact performance
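The heap-usage check above can be sketched as follows, assuming the analyzer reads the `jvm.mem.heap_used_percent` field from a parsed `/_node/stats` response (the function name is illustrative):

```python
def analyze_jvm_heap(node_stats: dict, threshold: float = 0.80) -> list:
    """Flag high heap usage from a parsed /_node/stats response.

    The 80% threshold mirrors the warning level described above.
    """
    warnings = []
    used_pct = node_stats.get("jvm", {}).get("mem", {}).get("heap_used_percent")
    if used_pct is not None and used_pct / 100 > threshold:
        warnings.append(f"JVM heap usage high: {used_pct}%")
    return warnings
```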
Quick Start Commands
After starting the server with python3 logstash_mcp_server.py, use these JSON-RPC commands:
1. Initialize (Required First)
{"jsonrpc": "2.0", "id": 0, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "test-client", "version": "1.0.0"}}}
2. Check Connectivity
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "logstash_check_connectivity", "arguments": {}}}
3. Health Check
{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "logstash_health_check", "arguments": {}}}
4. List All Tools
{"jsonrpc": "2.0", "id": 3, "method": "tools/list"}
5. Get Node Info
{"jsonrpc": "2.0", "id": 4, "method": "tools/call", "params": {"name": "logstash_node_info", "arguments": {}}}
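The requests above can also be built programmatically. A minimal sketch, assuming the server speaks newline-delimited JSON-RPC 2.0 on stdin/stdout as the examples suggest (the helper name is illustrative):

```python
import json

def jsonrpc_request(req_id: int, method: str, params=None) -> str:
    """Serialize one newline-delimited JSON-RPC 2.0 request line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# Build the connectivity check from step 2 above.
line = jsonrpc_request(1, "tools/call",
                       {"name": "logstash_check_connectivity", "arguments": {}})
```

Write each line to the server's stdin and read one response line per request.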
Usage Examples
Basic Health Check
# The MCP server will automatically analyze:
# - JVM memory usage
# - Pipeline performance
# - Queue statistics
# and provide recommendations for optimization
Pipeline Monitoring
# Monitor specific pipeline performance
# Get detailed statistics for troubleshooting
# Track event processing rates and errors
Performance Debugging
# Use hot threads analysis to identify bottlenecks
# Monitor JVM statistics for memory issues
# Track pipeline queue usage
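If you want to query Logstash directly rather than through the MCP tools, the monitoring API exposes per-pipeline stats at `/_node/stats/pipelines/<id>`. A minimal sketch (the helper names are illustrative; `fetch_pipeline_stats` requires a running Logstash):

```python
import json
import urllib.request

BASE = "http://localhost:9600"  # or your LOGSTASH_API_BASE

def pipeline_stats_url(pipeline_id: str, base: str = BASE) -> str:
    """URL for a single pipeline's stats on the Logstash monitoring API."""
    return f"{base}/_node/stats/pipelines/{pipeline_id}"

def fetch_pipeline_stats(pipeline_id: str) -> dict:
    """GET the stats document for one pipeline."""
    with urllib.request.urlopen(pipeline_stats_url(pipeline_id), timeout=10) as resp:
        return json.load(resp)
```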
Integration with ELK Stack
This MCP server is designed to work alongside Elasticsearch diagnostics and can help:
- Monitor Logstash performance feeding into your Elasticsearch cluster
- Identify pipeline bottlenecks that may contribute to indexing delays
- Optimize Logstash configuration for better cluster performance
If your Elasticsearch cluster shows high shard counts, ensure your Logstash pipelines are optimized for efficient indexing patterns.
Error Handling
The server includes comprehensive error handling for:
- Connection failures to Logstash API
- Invalid pipeline IDs
- API response errors
- Network timeouts
All errors include detailed messages with troubleshooting suggestions.
Testing
Run the test suite to verify everything works:
python3 test_mcp_server.py
The test suite includes:
- Server initialization tests
- Tool listing verification
- Mocked health check tests
- Error handling validation
Security Considerations
- The server connects to Logstash API endpoints
- Ensure proper network security between MCP server and Logstash
- Consider authentication if your Logstash instance requires it
- Monitor API access logs for security auditing