Keboola Explorer MCP Server
This server facilitates interaction with Keboola's Storage API, enabling users to browse and manage project buckets, tables, and components efficiently through Claude Desktop.
Tools
query_table
Executes an SQL SELECT query to get data from the underlying Snowflake database.
- When constructing the SQL SELECT query, make sure to use fully qualified table names that include the database name, schema name and table name.
- The fully qualified table name can be found in the table information; use a tool to get the information about tables, and read the fully qualified name from that tool's response.
- Snowflake is case-sensitive, so always wrap column names in double quotes.
- Example: SQL queries must include the fully qualified table names including the database name, e.g.: SELECT * FROM "db_name"."db_schema_name"."table_name";
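For instance, a query against a hypothetical table orders in schema db_schema_name of database db_name might look like this (all identifiers are placeholders; note the double-quoted, case-sensitive column names):
SELECT "id", "created_at", "total_amount"
FROM "db_name"."db_schema_name"."orders"
WHERE "total_amount" > 100
ORDER BY "created_at" DESC
LIMIT 10;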
list_bucket_tables
List all tables in a specific bucket with their basic information.
get_table_metadata
Get detailed information about a specific table including its DB identifier and column information.
get_bucket_metadata
Get detailed information about a specific bucket.
list_bucket_info
List information about all buckets in the project.
list_components
List all available components and their configurations.
list_component_configs
List all configurations for a specific component.
README
Keboola MCP Server
<a href="https://glama.ai/mcp/servers/72mwt1x862"><img width="380" height="200" src="https://glama.ai/mcp/servers/72mwt1x862/badge" alt="Keboola Explorer Server MCP server" /></a>
A Model Context Protocol (MCP) server for interacting with Keboola Connection. This server provides tools for listing and accessing data from Keboola Storage API.
Requirements
- Python 3.10 or newer
- Keboola Storage API token
- Snowflake or BigQuery Read Only Workspace
Installation
Installing via Pip
First, create a virtual environment and then install the keboola_mcp_server package:
python3 -m venv --upgrade-deps .venv
source .venv/bin/activate
pip3 install keboola_mcp_server
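To check that the package is runnable from the virtual environment, you can invoke the module directly; assuming the CLI exposes the standard --help flag, this prints the supported options:
python -m keboola_mcp_server --help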
Installing via Smithery
To install Keboola MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install keboola-mcp-server --client claude
Claude Desktop Setup
To use this server with Claude Desktop, follow these steps:
- Create or edit the Claude Desktop configuration file:
  - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  - Windows: %APPDATA%\Claude\claude_desktop_config.json
- Add the following configuration (adjust paths according to your setup):
{
"mcpServers": {
"keboola": {
"command": "/path/to/keboola-mcp-server/.venv/bin/python",
"args": [
"-m",
"keboola_mcp_server",
"--api-url",
"https://connection.YOUR_REGION.keboola.com"
],
"env": {
"KBC_STORAGE_TOKEN": "your-keboola-storage-token",
"KBC_WORKSPACE_SCHEMA": "your-workspace-schema"
}
}
}
}
Replace:
- /path/to/keboola-mcp-server with your actual path to the cloned repository
- YOUR_REGION with your Keboola region (e.g., north-europe.azure). You can remove it if your region is just connection, i.e. the API URL is https://connection.keboola.com
- your-keboola-storage-token with your Keboola Storage API token
- your-workspace-schema with your Snowflake schema or BigQuery dataset of your workspace
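As a purely illustrative example, assuming the repository was cloned to /Users/jane/keboola-mcp-server and the project lives in the default connection.keboola.com region (so the region suffix is dropped), the entry might look like:
{
  "mcpServers": {
    "keboola": {
      "command": "/Users/jane/keboola-mcp-server/.venv/bin/python",
      "args": ["-m", "keboola_mcp_server", "--api-url", "https://connection.keboola.com"],
      "env": {
        "KBC_STORAGE_TOKEN": "my-storage-token",
        "KBC_WORKSPACE_SCHEMA": "WORKSPACE_123456"
      }
    }
  }
}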
Note: If you are using a specific version of Python (e.g. 3.11 due to package compatibility issues), update the command to point to that specific interpreter, e.g. /path/to/keboola-mcp-server/.venv/bin/python3.11
Note: The workspace can be created in your Keboola project (the same project where you obtained your Storage API token). The workspace provides all the necessary connection parameters, including the schema or dataset name.
- After updating the configuration:
  - Completely quit Claude Desktop (don't just close the window)
  - Restart Claude Desktop
  - Look for the hammer icon in the bottom right corner, indicating the server is connected
Troubleshooting
If you encounter connection issues:
- Check the logs in Claude Desktop for any error messages
- Verify your Keboola Storage API token is correct
- Ensure all paths in the configuration are absolute paths
- Confirm the virtual environment is properly activated and all dependencies are installed
Cursor AI Setup
To use this server with Cursor AI, you have two options for configuring the transport method: Server-Sent Events (SSE) or Standard I/O (stdio).
- Create or edit the Cursor AI configuration file:
  - Location: ~/.cursor/mcp.json
- Add one of the following configurations (or all of them) based on your preferred transport method:
Option 1: Using Server-Sent Events (SSE)
{
"mcpServers": {
"keboola": {
"url": "http://localhost:8000/sse?storage_token=YOUR-KEBOOLA-STORAGE-TOKEN&workspace_schema=YOUR-WORKSPACE-SCHEMA"
}
}
}
Option 2a: Using Standard I/O (stdio)
{
"mcpServers": {
"keboola": {
"command": "/path/to/keboola-mcp-server/.venv/bin/python",
"args": [
"-m",
"keboola_mcp_server",
"--transport",
"stdio",
"--api-url",
"https://connection.YOUR_REGION.keboola.com"
],
"env": {
"KBC_STORAGE_TOKEN": "your-keboola-storage-token",
"KBC_WORKSPACE_SCHEMA": "your-workspace-schema"
}
}
}
}
Option 2b: Using WSL Standard I/O (wsl stdio)
Use this configuration when running the MCP server from the Windows Subsystem for Linux with Cursor AI.
{
"mcpServers": {
"keboola": {
"command": "wsl.exe",
"args": [
"bash",
"-c",
"'source /wsl_path/to/keboola-mcp-server/.env",
"&&",
"/wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --transport stdio'"
]
}
}
}
- where the /wsl_path/to/keboola-mcp-server/.env file contains the environment variables:
export KBC_STORAGE_TOKEN="your-keboola-storage-token"
export KBC_WORKSPACE_SCHEMA="your-workspace-schema"
Replace:
- /path/to/keboola-mcp-server with your actual path to the cloned repository
- YOUR_REGION with your Keboola region (e.g., north-europe.azure). You can remove it if your region is just connection, i.e. the API URL is https://connection.keboola.com
- your-keboola-storage-token with your Keboola Storage API token
- your-workspace-schema with your Snowflake schema or BigQuery dataset of your workspace
After updating the configuration:
- Restart Cursor AI
- If you use the sse transport, make sure to start your MCP server first. You can do so by running this in the activated virtual environment where you installed the server: /path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server --transport sse --api-url https://connection.YOUR_REGION.keboola.com
- Cursor AI should automatically detect your MCP server and enable it.
BigQuery support
If your Keboola project uses a BigQuery backend, you will need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable in addition to KBC_STORAGE_TOKEN and KBC_WORKSPACE_SCHEMA.
- Go to your Keboola BigQuery workspace and display its credentials (click the Connect button).
- Download the credentials file to your local disk. It is a plain JSON file.
- Set the full path of the downloaded JSON credentials file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
This will give your MCP server instance permissions to access your BigQuery workspace in Google Cloud.
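For example, when launching the server via the stdio configurations above, the credentials path can be passed through the env block; the file path below is a placeholder for wherever you saved the downloaded JSON key:
"env": {
  "KBC_STORAGE_TOKEN": "your-keboola-storage-token",
  "KBC_WORKSPACE_SCHEMA": "your-workspace-schema",
  "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/bigquery-credentials.json"
}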
Available Tools
The server provides the following tools for interacting with Keboola Connection:
- List buckets and tables
- Get bucket and table information
- Preview table data
- Export table data to CSV
- List components and configurations
Development
Run tests:
pytest
Format code:
black .
isort .
Type checking:
mypy .
License
MIT License - see LICENSE file for details.