MCP StoneX UDP Genie

Enables AI assistants to interact with Databricks Genie Spaces through MCP tools. Provides secure OAuth-based access to query and interact with Databricks data catalogs and schemas via custom Genie interfaces.

A simple, production-ready template for building Model Context Protocol (MCP) servers using FastMCP and FastAPI. This project demonstrates how to create custom tools that AI assistants can discover and invoke.

🚀 Quickstart Setup Guide

Follow these steps to deploy and connect your MCP StoneX UDP Genie App:


0. Update the Repository

  • Add or update tools as needed.
  • Note: You currently need to define a separate tool for each Genie.
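Because each Genie Space currently needs its own tool, one workable pattern is to generate the tool functions in a loop. Below is a minimal, hypothetical sketch in plain Python: the `GENIE_SPACES` mapping, `registry`, and `register` are stand-ins invented for illustration; in this project the real registration is the `@mcp_server.tool` decorator inside `load_tools()` in server/tools.py, and the real body would call the Genie API rather than echo its inputs.

```python
# Hypothetical sketch: generate one query tool per Genie Space.
# Replace `registry`/`register` with @mcp_server.tool in server/tools.py.

GENIE_SPACES = {  # hypothetical space-name -> space-ID mapping
    "sales_genie": "01ef-sales-space-id",
    "finance_genie": "01ef-finance-space-id",
}

registry: dict = {}  # stand-in for the MCP server's tool registry


def register(fn):
    """Stand-in for the @mcp_server.tool decorator."""
    registry[fn.__name__] = fn
    return fn


def make_genie_tool(name: str, space_id: str):
    def tool(question: str) -> dict:
        # A real implementation would call the Genie conversation API
        # with `space_id` here; this sketch just echoes the routing.
        return {"space_id": space_id, "question": question}

    tool.__name__ = f"query_{name}"
    tool.__doc__ = f"Ask the '{name}' Genie Space a natural-language question."
    return register(tool)


for name, space_id in GENIE_SPACES.items():
    make_genie_tool(name, space_id)
```

Each generated function gets a distinct name and docstring, so MCP clients see one clearly described tool per Genie Space.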

1. Launch as a Databricks App

Run the following commands in your project directory:

databricks sync --watch . /Workspace/Users/{your_databricks_username}/mcp-stonex-udp-genie

databricks apps deploy mcp-stonex-udp-genie \
  --source-code-path /Workspace/Users/{your_databricks_username}/mcp-stonex-udp-genie

2. Assign App Permissions

For the auto-created App Service Principal, grant ALL of the following permissions in Databricks:

  • CAN USE on the App
  • CAN RUN on the Genie Space
  • USE CATALOG on the Genie Space's catalog target
  • USE SCHEMA on the Genie Space's schema target
  • SELECT on all tables in the Genie Space's schema target

3. Create a Databricks App Connection

  1. Go to Databricks Account Console
  2. Go to Settings → App Connections.
  3. Click Add Connection.
  4. Set up as follows:
    • REDIRECT_URL:
      {databricks_app_url}/.auth/callback
    • CLIENT_SECRET:
      Yes (copy and save the generated secret for step 4)
    • OAUTH_ACCESS_SCOPE:
      all-apis

4. Configure an External UC Connection

Fill in the following values:

Field               Value
CONNECTION_TYPE     HTTP
AUTH_TYPE           OAuth Machine-to-Machine (M2M)
HOST                {databricks_app_url}
CLIENT_ID           (from previous step)
CLIENT_SECRET       (from previous step)
OAUTH_SCOPE         all-apis
TOKEN_ENDPOINT      {workspace_url}/oidc/v1/token
BASE_PATH           /mcp
IS_MCP_CONNECTION   True
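Under the hood, a connection configured this way authenticates with a standard OAuth 2.0 client-credentials grant against TOKEN_ENDPOINT. A stdlib-only sketch of the request it issues (the URL and credentials below are placeholders; the request is deliberately not sent, since that requires a real workspace):

```python
import urllib.parse
import urllib.request

# Placeholder values -- substitute your workspace URL and the
# client ID/secret from the App Connection created in step 3.
workspace_url = "https://your-workspace.cloud.databricks.com"
client_id = "your-client-id"
client_secret = "your-client-secret"

# OAuth 2.0 client-credentials (M2M) grant: form-encoded POST body.
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": "all-apis",
}).encode()

request = urllib.request.Request(
    f"{workspace_url}/oidc/v1/token",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# urllib.request.urlopen(request) would return JSON containing an
# access_token; not executed here because it needs a live workspace.
```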

5. Add to Playground or Supervisor

  • Register the External MCP Server in Playground or Multi-agent Supervisor for testing and multi-agent experiments.

Key Concepts

  • Tools: Callable functions that AI assistants can invoke (e.g., search databases, process data, call APIs)
  • Server: Exposes tools via the MCP protocol over HTTP
  • Client: Applications (like Claude, AI assistants) that discover and call tools

Features

  • ✅ FastMCP-based server with HTTP streaming support
  • ✅ FastAPI integration for additional REST endpoints
  • ✅ Example tools: health check and user information
  • ✅ Production-ready project structure
  • ✅ Ready for Databricks Apps deployment

Project Structure

mcp-stonex-udp-genie/
├── server/
│   ├── app.py                    # FastAPI application and MCP server setup
│   ├── main.py                   # Entry point for running the server
│   ├── tools.py                  # MCP tool definitions
│   └── utils.py                  # Databricks authentication helpers
├── scripts/
│   └── dev/
│       ├── start_server.sh           # Start the MCP server locally
│       ├── query_remote.sh           # Interactive script for testing deployed app with OAuth
│       ├── query_remote.py           # Query MCP client (deployed app) with health and user auth
│       └── generate_oauth_token.py   # Generate OAuth tokens for Databricks
├── tests/
│   └── test_integration_server.py   # Integration tests for MCP server
├── pyproject.toml                # Project metadata and dependencies
├── requirements.txt              # Python dependencies (for pip)
├── app.yaml                      # Databricks Apps configuration
├── Claude.md                     # AI assistant context and documentation
└── README.md                 # Project documentation (this file)

Prerequisites

  • Python 3.11 or higher
  • uv (recommended) or pip

Installation

Option 1: Using uv (Recommended)

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync

Option 2: Using pip

# Create a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Running the Server

Development Mode

# Quick start with script (syncs dependencies and starts server)
./scripts/dev/start_server.sh

# Or manually using uv (default port 8000)
uv run mcp-stonex-udp-genie

# Or specify a custom port
uv run mcp-stonex-udp-genie --port 8080

# Or using the installed command (after pip install -e .)
mcp-stonex-udp-genie --port 3000

The server will start on http://localhost:8000 by default (or your specified port).

Accessing the Server

  • MCP Endpoints: http://localhost:8000/mcp
  • Available Tools:
    • health: Check server status
    • get_current_user: Get authenticated user information

Testing the MCP Server

This project includes test scripts to verify your MCP server is working correctly in both local and deployed environments.

Integration Tests

The project includes automated integration tests that validate the MCP server functionality:

# Run integration tests
uv run pytest tests/

What the tests do:

  • Automatically start the MCP server
  • Test that list_tools() works correctly
  • Test that every registered tool can be invoked without errors
  • Automatically clean up the server after tests complete

Manual Testing

End-to-end test your locally-running MCP server

Start the server:

./scripts/dev/start_server.sh

Then connect from a Python session:

from databricks_mcp import DatabricksMCPClient

mcp_client = DatabricksMCPClient(
    server_url="http://localhost:8000"
)

# List available MCP tools
print(mcp_client.list_tools())

The client connects to your local MCP server without authentication and lists the available tools.

End-to-end test your deployed MCP server

After deploying to Databricks Apps, use the interactive shell script to test with user-level OAuth authentication:

chmod +x scripts/dev/query_remote.sh
./scripts/dev/query_remote.sh

The script will guide you through:

  1. Profile selection: Choose your Databricks CLI profile
  2. App name: Enter your deployed app name
  3. Automatic configuration: Extracts app scopes and URLs automatically
  4. OAuth flow: Generates user OAuth token via browser
  5. End-to-end test: Tests list_tools() and then invokes each tool it returns

What it does:

  • Retrieves app configuration using databricks apps get
  • Extracts user authorization scopes from effective_user_api_scopes
  • Gets workspace host from your Databricks profile
  • Generates OAuth token with the correct scopes
  • Tests MCP client with user-level authentication
  • Verifies both the health check and get_current_user tool work correctly

This test simulates the real end-user experience when they authorize your app and use it with their credentials.

Alternatively, test manually with command-line arguments:

python scripts/dev/query_remote.py \
    --host "https://your-workspace.cloud.databricks.com" \
    --token "eyJr...Dkag" \
    --app-url "https://your-workspace.cloud.databricks.com/serving-endpoints/your-app"

The scripts/dev/query_remote.py script connects to your deployed MCP server with OAuth authentication and tests both the health check and user authorization functionality.

Adding New Tools

To add a new tool to your MCP server:

  1. Open server/tools.py
  2. Add a new function inside load_tools() with the @mcp_server.tool decorator:
@mcp_server.tool
def calculate_sum(a: int, b: int) -> dict:
    """
    Calculate the sum of two numbers.
    
    Args:
        a: First number
        b: Second number
    
    Returns:
        dict: Contains the sum result
    """
    return {"result": a + b}
  3. Restart the server - the new tool will be automatically available to clients

Tool Best Practices

  • Clear naming: Use descriptive, action-oriented names
  • Comprehensive docstrings: AI uses these to understand when to call your tool
  • Type hints: Help with validation and documentation
  • Structured returns: Return dicts or Pydantic models for consistent data
  • Error handling: Use try-except blocks and return error information
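To illustrate the last two points together, here is a hypothetical tool (not part of this project's server/tools.py) that validates input and returns a structured error instead of raising:

```python
# Hypothetical example -- not one of this project's tools. In
# server/tools.py it would sit inside load_tools() under @mcp_server.tool.
def divide(numerator: float, denominator: float) -> dict:
    """
    Divide two numbers.

    Args:
        numerator: Value to divide
        denominator: Value to divide by

    Returns:
        dict: {"result": ...} on success, {"error": ...} on failure
    """
    try:
        return {"result": numerator / denominator}
    except ZeroDivisionError:
        # Structured error: clients get a consistent dict shape either way.
        return {"error": "denominator must be non-zero"}
```

Returning errors as data keeps the tool's output schema predictable, which helps AI clients recover gracefully instead of handling opaque exceptions.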

Connecting to Databricks

The utils.py module provides two helper methods for interacting with Databricks resources via the Databricks SDK Workspace Client:

When deployed as a Databricks App:

  • get_workspace_client() - Returns a client authenticated as the service principal associated with the app. See App Authorization for more details.
  • get_user_authenticated_workspace_client() - Returns a client authenticated as the end user with scopes specified by the app creator. See User Authorization for more details.

When running locally:

  • Both methods return a client authenticated as the current developer, since no service principal identity exists in the local environment.

Example usage in tools:

from server import utils

# Get current user information (user-authenticated)
w = utils.get_user_authenticated_workspace_client()
user = w.current_user.me()
display_name = user.display_name

See the get_current_user tool in server/tools.py for a complete example.

Generating OAuth Tokens

For advanced use cases, you can manually generate OAuth tokens for Databricks workspace access using the provided script. This implements the OAuth U2M (User-to-Machine) flow.

Generate Workspace-Level OAuth Token

python scripts/dev/generate_oauth_token.py \
    --host https://your-workspace.cloud.databricks.com \
    --scopes "all-apis offline_access"

Parameters:

  • --host: Databricks workspace URL (required)
  • --scopes: Space-separated OAuth scopes (default: all-apis offline_access)
  • --redirect-uri: Callback URI (default: http://localhost:8020)

Note: The script uses the databricks-cli OAuth client ID by default.

The script will:

  1. Generate a PKCE code verifier and challenge
  2. Open your browser for authorization
  3. Capture the authorization code via local HTTP server
  4. Exchange the code for an access token
  5. Display the token response as JSON (token is valid for 1 hour)
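Steps 1 and 4 follow RFC 7636 (PKCE). The verifier/challenge pair can be produced with the standard library alone; this sketch shows the idea behind step 1 (the actual code in scripts/dev/generate_oauth_token.py may differ in details):

```python
import base64
import hashlib
import secrets

# Step 1: PKCE code verifier -- a high-entropy, URL-safe random string.
code_verifier = secrets.token_urlsafe(64)  # 86 URL-safe characters

# Code challenge: BASE64URL(SHA-256(verifier)) with padding stripped,
# per RFC 7636 (the "S256" method).
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Only the challenge is sent in the authorize URL
# (code_challenge=<challenge>&code_challenge_method=S256); the verifier
# stays local and is sent in the step-4 token exchange to prove possession.
```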

Example with custom scopes:

python scripts/dev/generate_oauth_token.py \
    --host https://your-workspace.cloud.databricks.com \
    --scopes "clusters:read jobs:write sql:read"

Configuration

Server Settings

The server can be configured using command-line arguments:

# Change port
uv run custom-mcp-server --port 8080

# Get help
uv run custom-mcp-server --help

The default configuration:

  • Host: 0.0.0.0 (listens on all network interfaces)
  • Port: 8000 (configurable via --port argument)

Deployment

Databricks Apps

This project is configured for Databricks Apps deployment:

  1. Deploy using Databricks CLI or UI
  2. The server will be accessible at your Databricks app URL

For more information, refer to the Databricks Apps documentation.

Try Your MCP Server in AI Playground

After deploying your MCP server to Databricks Apps, you can test it interactively in the Databricks AI Playground:

  1. Navigate to the AI Playground in your Databricks workspace
  2. Select a model with the Tools enabled label
  3. Click Tools > + Add tool and select your deployed MCP server
  4. Start chatting with the AI agent - it will automatically call your MCP server's tools as needed

The AI Playground provides a visual interface to prototype and test your MCP server with different models and configurations before integrating it into production applications.

For more information, see Prototype tool-calling agents in AI Playground.

Development

Code Formatting

# Format code with ruff
uv run ruff format .

# Check for lint errors
uv run ruff check .

Customization

Rename the Project

  1. Update name in pyproject.toml
  2. Update name parameter in server/app.py: FastMCP(name="your-name")
  3. Update the command script in pyproject.toml under [project.scripts]

Add Custom API Endpoints

Add routes to the app FastAPI instance in server/app.py:

@app.get("/custom-endpoint")
def custom_endpoint():
    return {"message": "Hello from custom endpoint"}

Troubleshooting

Port Already in Use

Change the port in server/main.py or set the PORT environment variable.

Import Errors

Ensure all dependencies are installed:

uv sync  # or pip install -r requirements.txt

Resources

AI Assistant Context

See Claude.md for detailed project context specifically designed for AI assistants working with this codebase.
