🔍 Kibana MCP (Model Context Protocol) Server
A powerful, high-performance server that provides seamless access to Kibana and Periscope logs through a unified API. Built with modular architecture, in-memory caching, HTTP/2 support, and OpenTelemetry tracing.
📋 Table of Contents
- Overview
- Features
- What's New in v2.0.0
- Setup
- Authentication
- Running the Server
- API Reference
- Available Indexes
- Example Usage
- Troubleshooting
- Performance Features
- Architecture
- AI Integration
- License
🌟 Overview
This project bridges the gap between your applications and Kibana/Periscope logs by providing:
- Modular Architecture: Clean separation of concerns with dedicated modules for clients, services, and API layers
- Dual Interface Support: Both Kibana (KQL) and Periscope (SQL) querying
- Multi-Index Access: Query across 9 different log indexes (1.3+ billion logs)
- Performance Optimized: In-memory caching, HTTP/2, and connection pooling
- Timezone-Aware: Full support for international timezones (IST, UTC, PST, etc.)
- Production-Ready: Comprehensive error handling, retry logic, and observability
✨ Features
Core Features
- Simple API: Easy-to-use RESTful endpoints for log searching and analysis
- Dual Log System Support:
- Kibana: KQL-based querying for application logs
- Periscope: SQL-based querying for HTTP access logs
- Multi-Index Support: Access to 9 indexes with 1.3+ billion logs
- Flexible Authentication: API-based token management for both Kibana and Periscope
- Time-Based Searching: Absolute and relative time ranges with full timezone support
- Real-Time Streaming: Monitor logs as they arrive
Performance Features (New in v2.0.0)
- ⚡ In-Memory Caching:
- Schema cache: 1 hour TTL
- Search cache: 5 minutes TTL
- 🚀 HTTP/2 Support: Multiplexed connections for faster requests
- 🔄 Connection Pooling: 200 max connections, 50 keepalive
- 📊 OpenTelemetry Tracing: Distributed tracing for monitoring and debugging
- 🌍 Timezone-Aware: Support for any IANA timezone without manual UTC conversion
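The two cache TTLs above (1 hour for schemas, 5 minutes for searches) can be sketched as a minimal in-memory TTL cache. This is an illustrative sketch only, not the server's actual utils/cache.py implementation:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry (illustrative sketch,
    not the server's actual cache implementation)."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable clock makes expiry testable
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

# TTLs matching this README: schemas cached 1 hour, searches 5 minutes
schema_cache = TTLCache(ttl_seconds=3600)
search_cache = TTLCache(ttl_seconds=300)
```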
AI & Analysis Features
- 🧠 AI-Powered Analysis: Intelligent log summarization using Neurolink
- Smart Chunking: Automatic handling of large log sets
- Pattern Analysis: Tools to identify log patterns and extract errors
- Cross-Index Correlation: Track requests across multiple log sources
🆕 What's New in v2.0.0
Modular Architecture
- ✅ Clean separation: clients/, services/, api/, models/, utils/
- ✅ Improved testability and maintainability
- ✅ Better error handling and logging
- ✅ Type-safe with Pydantic models
Performance Enhancements
- ✅ In-memory caching reduces API calls
- ✅ HTTP/2 support for better throughput
- ✅ Connection pooling for efficiency
- ✅ OpenTelemetry tracing for observability
Multi-Index Support
- ✅ 9 indexes accessible (7 with active data)
- ✅ 1.3+ billion logs available
- ✅ Index discovery and selection API
- ✅ Universal timestamp field compatibility
Enhanced Timezone Support
- ✅ Periscope queries with timezone parameter
- ✅ No manual UTC conversion needed
- ✅ Support for IST, UTC, PST, and all IANA timezones
Configuration Improvements
- ✅ Optimized config.yaml (36% smaller)
- ✅ Dynamic configuration via API
- ✅ Only essential parameters included
🚀 Setup
Prerequisites
- Python 3.8+
- Access to Kibana instance (for Kibana features)
- Access to Periscope instance (optional, for Periscope features)
- Authentication tokens for the services you want to use
Installation
1. Clone this repository:

   git clone https://github.com/gaharivatsa/KIBANA_SERVER.git
   cd KIBANA_SERVER

2. Create a virtual environment:

   python -m venv KIBANA_E
   # On macOS/Linux
   source KIBANA_E/bin/activate
   # On Windows
   KIBANA_E\Scripts\activate

3. Install dependencies:

   pip install -r requirements.txt

4. Make the start script executable:

   chmod +x ./run_kibana_mcp.sh

5. Optional: Set up AI-powered log analysis:

   # Install Node.js if not already installed (required for Neurolink)
   # Visit https://nodejs.org/ or use your package manager
   # Set your AI provider API key
   export GOOGLE_AI_API_KEY="your-google-ai-api-key"  # Recommended (free tier)
   # OR
   export OPENAI_API_KEY="your-openai-key"
   # Neurolink will be automatically set up when you start the server
Configuration
The server comes with an optimized config.yaml that works out of the box. Key settings:
elasticsearch:
  host: ""                      # Set via API or environment
  timestamp_field: "timestamp"  # ✅ Works for ALL 9 indexes
  verify_ssl: true

mcp_server:
  host: "0.0.0.0"
  port: 8000
  log_level: "info"

periscope:
  host: ""  # Default: periscope.breezesdk.store

timeouts:
  kibana_request_timeout: 30
Dynamic Configuration (optional):
curl -X POST http://localhost:8000/api/set_config \
-H "Content-Type: application/json" \
-d '{
"configs_to_set": {
"elasticsearch.host": "your-kibana.example.com",
"mcp_server.log_level": "debug"
}
}'
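The same request can be made from Python using only the standard library. The post_json helper below is a hypothetical convenience wrapper, not part of the server's codebase; it mirrors the curl command above:

```python
import json
import urllib.request

def post_json(url, payload, timeout=10):
    """POST a JSON payload and return the decoded JSON response
    (hypothetical helper mirroring the curl examples in this README)."""
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Same payload as the curl example above
config_payload = {
    "configs_to_set": {
        "elasticsearch.host": "your-kibana.example.com",
        "mcp_server.log_level": "debug",
    }
}

# Usage (requires the server running locally):
#   post_json("http://localhost:8000/api/set_config", config_payload)
```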
🔐 Authentication
Kibana Authentication
Set via API (Recommended):
curl -X POST http://localhost:8000/api/set_auth_token \
-H "Content-Type: application/json" \
-d '{"auth_token":"YOUR_KIBANA_JWT_TOKEN"}'
How to Get Your Token:
- Log in to Kibana in your browser
- Open developer tools (F12)
- Go to Application → Cookies
- Find the authentication cookie (e.g., JWT token)
- Copy the complete value
Periscope Authentication
curl -X POST http://localhost:8000/api/set_periscope_auth_token \
-H "Content-Type: application/json" \
-d '{"auth_token":"YOUR_PERISCOPE_AUTH_TOKEN"}'
How to Get Periscope Token:
- Log in to Periscope in your browser
- Open developer tools (F12)
- Go to Application → Cookies
- Find the auth_tokens cookie
- Copy its value (base64 encoded)
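Since the cookie value is base64 encoded, you can sanity-check what you copied before sending it to the server. This is a generic stdlib sketch; the exact contents of the cookie vary per deployment:

```python
import base64

def decode_cookie_value(raw: str) -> str:
    """Decode a base64-encoded cookie value, tolerating stripped padding
    (the exact contents of the auth_tokens cookie vary per deployment)."""
    padded = raw + "=" * (-len(raw) % 4)  # base64 requires length % 4 == 0
    return base64.b64decode(padded).decode("utf-8")

# Round-trip demo with a made-up value:
example = base64.b64encode(b'{"auth":"example-token"}').decode().rstrip("=")
print(decode_cookie_value(example))  # {"auth":"example-token"}
```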
🖥️ Running the Server
Start the server:
./run_kibana_mcp.sh
The server will be available at http://localhost:8000
Health Check:
curl http://localhost:8000/api/health
Response:
{
"success": true,
"message": "Server is healthy",
"version": "2.0.0",
"status": "ok"
}
📡 API Reference
Kibana Endpoints
| Endpoint | Description | Method |
|---|---|---|
| /api/health | Health check | GET |
| /api/set_auth_token | Set Kibana authentication | POST |
| /api/discover_indexes | List available indexes | GET |
| /api/set_current_index | Select index for searches | POST |
| /api/search_logs | MAIN - Search logs with KQL | POST |
| /api/get_recent_logs | Get most recent logs | POST |
| /api/extract_errors | Extract error logs | POST |
| /api/summarize_logs | 🧠 AI-powered analysis | POST |
Periscope Endpoints
| Endpoint | Description | Method |
|---|---|---|
| /api/set_periscope_auth_token | Set Periscope authentication | POST |
| /api/get_periscope_streams | List available streams | GET |
| /api/get_periscope_stream_schema | Get stream schema | POST |
| /api/get_all_periscope_schemas | Get all schemas | GET |
| /api/search_periscope_logs | MAIN - Search with SQL | POST |
| /api/search_periscope_errors | Find HTTP errors | POST |
Utility Endpoints
| Endpoint | Description | Method |
|---|---|---|
| /api/set_config | Dynamic configuration | POST |
🗂️ Available Indexes
The server provides access to 9 log indexes (7 with active data):
Active Indexes
| Index Pattern | Total Logs | Use Case | Key Fields |
|---|---|---|---|
| breeze-v2* | 1B+ (73.5%) | Backend API, payments | session_id, message, level |
| envoy-edge* | 137M+ (10%) | HTTP traffic, errors | response_code, path, duration |
| istio-logs-v2* | 137M+ (10%) | Service mesh | level, message |
| squid-logs* | 7M+ (0.5%) | Proxy traffic | level, message |
| wallet-lrw* | 887K+ (0.1%) | Wallet transactions | order_id, txn_uuid |
| analytics-dashboard-v2* | 336K+ | Analytics API | auth, headers |
| rewards-engine-v2* | 7.5K+ | Rewards system | level, message |
Empty Indexes
- wallet-product-v2* - No data
- core-ledger-v2* - No data
Total: ~1.3 Billion logs across all indexes
📝 Example Usage
1. Discover and Set Index
# Discover available indexes
curl -X GET http://localhost:8000/api/discover_indexes
# Response:
{
"success": true,
"indexes": ["breeze-v2*", "envoy-edge*", "istio-logs-v2*", ...],
"count": 9
}
# Set the index to use
curl -X POST http://localhost:8000/api/set_current_index \
-H "Content-Type: application/json" \
-d '{"index_pattern": "breeze-v2*"}'
2. Search Logs (Kibana)
Basic Search:
curl -X POST http://localhost:8000/api/search_logs \
-H "Content-Type: application/json" \
-d '{
"query_text": "error OR exception",
"max_results": 50,
"sort_by": "timestamp",
"sort_order": "desc"
}'
Search with Time Range (Timezone-Aware):
curl -X POST http://localhost:8000/api/search_logs \
-H "Content-Type: application/json" \
-d '{
"query_text": "payment AND failed",
"start_time": "2025-10-14T09:00:00+05:30",
"end_time": "2025-10-14T17:00:00+05:30",
"max_results": 100
}'
Session-Based Search:
curl -X POST http://localhost:8000/api/search_logs \
-H "Content-Type: application/json" \
-d '{
"query_text": "PcuUFbLIPLlTbBMwQXl9Y",
"max_results": 200,
"sort_by": "timestamp",
"sort_order": "asc"
}'
3. Search Periscope Logs (SQL)
Find 5XX Errors:
curl -X POST http://localhost:8000/api/search_periscope_logs \
-H "Content-Type: application/json" \
-d '{
"sql_query": "SELECT * FROM \"envoy_logs\" WHERE status_code >= '\''500'\'' AND status_code < '\''600'\''",
"start_time": "1h",
"max_results": 50
}'
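The nested shell quoting in the SQL above is easy to get wrong. If you are scripting queries, a small builder avoids the escaping entirely. This is a hypothetical helper, not a server API; note that the README's examples compare status_code against string literals:

```python
def build_status_range_sql(stream: str, low: int, high: int) -> str:
    """Build the status-code-range SQL used above (hypothetical helper;
    this README's examples compare status_code as a string)."""
    return (
        f'SELECT * FROM "{stream}" '
        f"WHERE status_code >= '{low}' AND status_code < '{high}'"
    )

sql = build_status_range_sql("envoy_logs", 500, 600)
print(sql)
```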
Search with Timezone (NEW!):
curl -X POST http://localhost:8000/api/search_periscope_logs \
-H "Content-Type: application/json" \
-d '{
"sql_query": "SELECT * FROM \"envoy_logs\" WHERE status_code >= '\''500'\''",
"start_time": "2025-10-14 09:00:00",
"end_time": "2025-10-14 13:00:00",
"timezone": "Asia/Kolkata",
"max_results": 100
}'
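The timezone parameter saves you from converting timestamps to UTC by hand. For reference, the conversion it performs is equivalent to this stdlib sketch (zoneinfo is standard in Python 3.9+; the server's internal implementation may differ):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

def to_utc(local_str: str, tz_name: str) -> str:
    """Convert a naive local timestamp to UTC — the manual conversion
    that the server's timezone parameter does for you."""
    local = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
    aware = local.replace(tzinfo=ZoneInfo(tz_name))
    return aware.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# 09:00 IST (UTC+05:30) is 03:30 UTC
print(to_utc("2025-10-14 09:00:00", "Asia/Kolkata"))  # 2025-10-14 03:30:00
```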
Quick Error Search:
curl -X POST http://localhost:8000/api/search_periscope_errors \
-H "Content-Type: application/json" \
-d '{
"hours": 1,
"stream": "envoy_logs",
"error_codes": "5%",
"timezone": "Asia/Kolkata"
}'
4. AI-Powered Analysis
curl -X POST http://localhost:8000/api/summarize_logs \
-H "Content-Type: application/json" \
-d '{
"query_text": "error",
"max_results": 50,
"start_time": "1h"
}'
Response (example):
{
"success": true,
"analysis": {
"summary": "Analysis of 42 error logs showing payment processing failures",
"key_insights": [
"Payment gateway returned 503 errors for 8 transactions",
"Retry mechanism activated in 67% of failed cases"
],
"errors": [
"PaymentGatewayError: Service temporarily unavailable (503)"
],
"function_calls": ["processPayment()", "retryTransaction()"],
"recommendations": [
"Implement circuit breaker for payment gateway",
"Add monitoring alerts for gateway health"
]
}
}
5. Cross-Index Correlation
Track a request across multiple indexes:
# Step 1: Check HTTP layer (envoy-edge)
curl -X POST http://localhost:8000/api/set_current_index \
-H "Content-Type: application/json" \
-d '{"index_pattern": "envoy-edge*"}'
curl -X POST http://localhost:8000/api/search_logs \
-H "Content-Type: application/json" \
-d '{
"query_text": "x_session_id:abc123",
"max_results": 50
}'
# Step 2: Check backend processing (breeze-v2)
curl -X POST http://localhost:8000/api/set_current_index \
-H "Content-Type: application/json" \
-d '{"index_pattern": "breeze-v2*"}'
curl -X POST http://localhost:8000/api/search_logs \
-H "Content-Type: application/json" \
-d '{
"query_text": "abc123",
"max_results": 200,
"sort_order": "asc"
}'
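The two-step flow above can be expressed as an ordered request plan and replayed against any HTTP client. correlation_plan is a hypothetical helper mirroring the curl sequence, not part of the server:

```python
def correlation_plan(session_id: str):
    """Return the ordered (endpoint, payload) requests for tracking one
    session across the HTTP edge and backend indexes (hypothetical helper
    mirroring the curl sequence above)."""
    return [
        ("/api/set_current_index", {"index_pattern": "envoy-edge*"}),
        ("/api/search_logs",
         {"query_text": f"x_session_id:{session_id}", "max_results": 50}),
        ("/api/set_current_index", {"index_pattern": "breeze-v2*"}),
        ("/api/search_logs",
         {"query_text": session_id, "max_results": 200, "sort_order": "asc"}),
    ]

for endpoint, payload in correlation_plan("abc123"):
    print(endpoint, payload)
```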
🔧 Troubleshooting
Common Issues
1. Timestamp Field Errors
Problem: "No mapping found for [timestamp] in order to sort on"
Solution: The server uses the timestamp field, which works across all indexes, so this error should not occur in v2.0.0.
If you see it:
curl -X POST http://localhost:8000/api/set_config \
-H "Content-Type: application/json" \
-d '{
"configs_to_set": {
"elasticsearch.timestamp_field": "@timestamp"
}
}'
2. Authentication Errors (401)
Problem: "Unauthorized" or "Invalid token"
Solution:
- Token expired: get a fresh token from your browser
- Re-authenticate using /api/set_auth_token
3. No Results Returned
Checklist:
- ✅ Is the correct index set?
- ✅ Is the time range correct?
- ✅ Try a broader query ("*")
- ✅ Check timezone offset
4. Slow Queries
Solutions:
- Reduce max_results
- Narrow the time range
- Add specific query terms
- Check if caching is working (should be faster on repeated queries)
Testing
# Test Kibana connectivity
curl -X POST http://localhost:8000/api/search_logs \
-H "Content-Type: application/json" \
-d '{"query_text": "*", "max_results": 1}'
# Test Periscope connectivity
curl -X GET http://localhost:8000/api/get_periscope_streams
⚡ Performance Features
In-Memory Caching
Automatic caching reduces load on backend systems:
- Schema Cache: 1 hour TTL (Periscope stream schemas)
- Search Cache: 5 minutes TTL (recent queries)
Benefits:
- Faster repeated queries
- Reduced API calls
- Lower backend load
HTTP/2 Support
- Multiplexed connections
- Faster concurrent requests
- Better throughput for parallel queries
Connection Pooling
- Max connections: 200
- Keepalive connections: 50
- Efficient connection reuse
- Reduced latency
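The keepalive behavior above can be illustrated with a minimal pool: idle connections are parked for reuse up to the keepalive limit, and anything beyond it is dropped. This is an illustrative sketch only; the server delegates actual pooling to its HTTP client:

```python
import queue

class ConnectionPool:
    """Sketch of keepalive reuse (illustrative only; the server's real
    pooling is handled by its HTTP client)."""

    def __init__(self, factory, keepalive=50):
        self.factory = factory
        self.idle = queue.LifoQueue(maxsize=keepalive)
        self.created = 0  # counts connections actually opened

    def acquire(self):
        try:
            return self.idle.get_nowait()  # reuse a kept-alive connection
        except queue.Empty:
            self.created += 1
            return self.factory()

    def release(self, conn):
        try:
            self.idle.put_nowait(conn)  # park for reuse
        except queue.Full:
            pass  # over the keepalive limit: drop (a real pool would close it)

pool = ConnectionPool(factory=lambda: object(), keepalive=50)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
print(c1 is c2)  # True: the idle connection was reused, not reopened
```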
OpenTelemetry Tracing
- Distributed request tracing
- Performance monitoring
- Debug distributed issues
- Track request flow across components
🏗️ Architecture
Modular Structure
KIBANA_SERVER/
├── main.py # Server entry point
├── config.yaml # Configuration
├── requirements.txt # Dependencies
├── src/
│ ├── api/
│ │ ├── app.py # FastAPI application
│ │ └── http/
│ │ └── routes.py # API endpoints
│ ├── clients/
│ │ ├── kibana_client.py # Kibana API client
│ │ ├── periscope_client.py # Periscope API client
│ │ ├── http_manager.py # HTTP/2 + pooling
│ │ └── retry_manager.py # Retry logic
│ ├── services/
│ │ └── log_service.py # Business logic
│ ├── models/
│ │ ├── requests.py # Request models
│ │ └── responses.py # Response models
│ ├── utils/
│ │ └── cache.py # Caching utilities
│ ├── observability/
│ │ └── tracing.py # OpenTelemetry
│ ├── security/
│ │ └── sanitizers.py # Input validation
│ └── core/
│ ├── config.py # Configuration
│ ├── constants.py # Constants
│ └── logging_config.py # Logging
└── AI_rules.txt # Generic AI guide
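The retry_manager.py module in the tree above handles transient failures. A common shape for such logic is exponential backoff; the sketch below is an assumption about the approach, not the module's actual code:

```python
import time

def retry(fn, attempts=3, base_delay=0.5,
          retriable=(ConnectionError, TimeoutError), sleep=time.sleep):
    """Call fn, retrying transient failures with exponential backoff
    (sketch only; the real clients/retry_manager.py may differ)."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(retry(flaky, sleep=lambda _: None))  # ok
```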
Legacy vs Modular
| Feature | Legacy (v1.x) | Modular (v2.0) |
|---|---|---|
| Architecture | Monolithic | Modular |
| Caching | ❌ None | ✅ In-memory |
| HTTP | HTTP/1.1 | ✅ HTTP/2 |
| Tracing | ❌ None | ✅ OpenTelemetry |
| Connection Pool | ❌ Basic | ✅ Advanced |
| Timezone Support | ⚠️ Manual | ✅ Automatic |
| Config Management | ⚠️ Static | ✅ Dynamic |
| Error Handling | ⚠️ Basic | ✅ Comprehensive |
🤖 AI Integration
For AI Assistants
Use the provided AI_rules.txt for generic product documentation or AI_rules_file.txt for company-specific usage.
Key Requirements:
- ✅ Always authenticate first
- ✅ Discover and set index before searching
- ✅ Use the timestamp field for sorting
- ✅ Include session_id in queries when tracking sessions
- ✅ Use ISO timestamps with timezone
Example AI Workflow
1. Authenticate: POST /api/set_auth_token
2. Discover Indexes: GET /api/discover_indexes
3. Set Index: POST /api/set_current_index
4. Search Logs: POST /api/search_logs
5. Analyze (Optional): POST /api/summarize_logs
For complete AI integration instructions, refer to AI_rules.txt (generic) or AI_rules_file.txt (company-specific).
📚 Documentation
- AI_rules.txt - Generic product usage guide
- AI_rules_file.txt - Company-specific usage (internal)
- CONFIG_USAGE_ANALYSIS.md - Configuration reference (deleted, info in this README)
- KIBANA_INDEXES_COMPLETE_ANALYSIS.md - Index details (deleted, info in this README)
🔄 Migration from v1.x
If upgrading from v1.x:
- Update imports: Change from kibana_mcp_server.py to main.py
- Update config: Remove unused parameters (see config.yaml)
- Update queries: Use the timestamp field instead of @timestamp or start_time
- Test endpoints: All endpoints remain compatible
- Enjoy performance: Automatic caching and HTTP/2 benefits
📊 Performance Benchmarks
- Cache Hit Rate: ~80% for repeated queries
- Response Time: 30-50% faster with HTTP/2
- Connection Reuse: 90%+ with pooling
- Memory Usage: <200MB with full cache
🤝 Contributing
This is a proprietary project. For issues or feature requests, contact the maintainers.
📜 License
This project is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
This license requires that reusers:
- ✅ Give appropriate credit (Attribution)
- ❌ Do not use for commercial purposes (NonCommercial)
- ❌ Do not distribute modified versions (NoDerivatives)
For more information, see the LICENSE file.
Version: 2.0.0 (Modular)
Last Updated: October 2025
Total Logs: 1.3+ Billion
Indexes: 9 (7 active)
Status: Production Ready ✅