HOPX MCP Server
Give your AI assistant superpowers with secure, isolated code execution.
The official Model Context Protocol (MCP) server for HOPX. Enable Claude and other AI assistants to execute code in blazing-fast (0.1s startup), isolated cloud containers.
Installation
uvx hopx-mcp
Get your API key at hopx.ai and configure your IDE below.
What This Enables
With this MCP server, your AI assistant can:
- ✅ Execute Python, JavaScript, Bash, and Go in isolated containers
- ✅ Analyze data with pandas, numpy, matplotlib (pre-installed)
- ✅ Test code snippets before you use them in production
- ✅ Process data securely without touching your local system
- ✅ Run system commands safely in isolated environments
- ✅ Install packages and test integrations on-the-fly
All executions happen in secure, ephemeral cloud containers that auto-destroy after use. Your local system stays clean and protected.
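Under the hood, each of these capabilities maps to a single MCP tool call. As a minimal sketch of the kind of request your assistant issues (the execute_code_isolated tool is documented in the API Reference below; the code string here is purely illustrative):
# One-shot execution: send code, get output back, and the container
# is destroyed automatically afterwards.
result = execute_code_isolated(
    code='print(sum(range(10)))',   # illustrative snippet only
    language='python',
    timeout=30,
)
print(result['stdout'])   # -> "45\n"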
Configuration
Get Your API Key
Sign up at hopx.ai to get your free API key.
Configure Your IDE
After installing with uvx hopx-mcp, configure your IDE by adding the MCP server configuration:
Choose your IDE below for detailed configuration instructions:
<details> <summary><b>Cursor</b></summary>
Add to .cursor/mcp.json in your project or workspace:
{
"mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
Replace your-api-key-here with your actual API key from hopx.ai.
</details>
<details> <summary><b>VS Code</b></summary>
Add to .vscode/mcp.json in your workspace:
{
"mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
Replace your-api-key-here with your actual API key from hopx.ai.
</details>
<details> <summary><b>Visual Studio</b></summary>
Add to .mcp.json in your project root:
{
"mcpServers": {
"hopx-sandbox": {
"type": "stdio",
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
Replace your-api-key-here with your actual API key from hopx.ai.
</details>
<details> <summary><b>Claude Desktop</b></summary>
Add to ~/Library/Application Support/Claude/claude_desktop_config.json on macOS or %APPDATA%\Claude\claude_desktop_config.json on Windows:
{
"mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
Replace your-api-key-here with your actual API key from hopx.ai, then restart Claude Desktop.
</details>
<details> <summary><b>Cline (VS Code Extension)</b></summary>
Add to your VS Code settings or Cline configuration:
{
"cline.mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
</details>
<details> <summary><b>Continue.dev</b></summary>
Add to ~/.continue/config.json:
{
"mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
</details>
<details> <summary><b>Windsurf</b></summary>
Add to .windsurf/mcp.json in your project:
{
"mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
</details>
<details> <summary><b>Zed</b></summary>
Add to your Zed settings or MCP configuration:
{
"mcp": {
"servers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
}
</details>
<details> <summary><b>Codex</b></summary>
Add to your Codex MCP configuration file:
{
"mcpServers": {
"hopx-sandbox": {
"command": "uvx",
"args": ["hopx-mcp"],
"env": {
"HOPX_API_KEY": "your-api-key-here"
}
}
}
}
</details>
Usage Examples
Your AI assistant can now execute code directly. Here's what it looks like:
Python Data Analysis
You: "Analyze this sales data: [100, 150, 200, 180, 220]"
Claude: Uses execute_code_isolated() to run:
import pandas as pd
import numpy as np
sales = [100, 150, 200, 180, 220]
df = pd.DataFrame({'sales': sales})
print(f"Mean: ${df['sales'].mean():.2f}")
print(f"Median: ${df['sales'].median():.2f}")
print(f"Growth: {((sales[-1] - sales[0]) / sales[0] * 100):.1f}%")
Output:
Mean: $170.00
Median: $180.00
Growth: 120.0%
JavaScript Computation
You: "Calculate fibonacci numbers up to 100"
Claude: Executes:
function fibonacci(max) {
const fib = [0, 1];
while (true) {
const next = fib[fib.length - 1] + fib[fib.length - 2];
if (next > max) break;
fib.push(next);
}
return fib;
}
console.log(fibonacci(100));
Output:
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
Bash System Info
You: "What's the system architecture and available disk space?"
Claude: Runs:
echo "System: $(uname -s)"
echo "Architecture: $(uname -m)"
echo "Disk:"
df -h / | tail -1
Output:
System: Linux
Architecture: x86_64
Disk:
/dev/root 20G 1.9G 17G 10% /
Features
🚀 Blazing Fast
- Sandbox creation in ~0.1s
- Pre-warmed containers ready to execute
- Global edge network for low latency
🔒 Secure by Default
- Complete isolation per execution
- No shared state between runs
- JWT-based authentication
- Auto-cleanup after timeout
🌐 Multi-Language Support
- Python 3.11+ with pandas, numpy, matplotlib, scikit-learn
- JavaScript/Node.js 20 with standard libraries
- Bash with common Unix utilities
- Go with compilation support
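Go programs are compiled and run through the same one-shot tool as the interpreted languages. A sketch using execute_code_isolated (documented in the API Reference below; the Go snippet itself is only an example):
# language='go' selects the Go toolchain; the source is compiled and executed
# inside the isolated sandbox.
go_code = '''
package main

import "fmt"

func main() {
    fmt.Println("hello from Go")
}
'''
result = execute_code_isolated(code=go_code, language='go', timeout=60)
print(result['stdout'])   # -> "hello from Go\n"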
⚡ Pre-installed Packages
Python:
- Data Science: pandas, numpy, matplotlib, scipy, scikit-learn
- Web: requests, httpx, beautifulsoup4
- Jupyter: ipykernel, jupyter-client
JavaScript:
- Node.js 20.x runtime
- iJavaScript kernel for notebooks
System:
- git, curl, wget, vim, nano
- Build tools: gcc, g++, make
- Python package managers: pip, uv
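To confirm which package versions a fresh sandbox ships with, you can run a quick version check in a one-shot execution. A sketch using execute_code_isolated; the versions you see will depend on the current image:
# Print the versions of a few pre-installed data-science packages
# from inside the sandbox.
check = '''
import pandas, numpy, sklearn
print("pandas", pandas.__version__)
print("numpy", numpy.__version__)
print("scikit-learn", sklearn.__version__)
'''
result = execute_code_isolated(code=check, language='python', timeout=30)
print(result['stdout'])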
🎯 Smart Defaults
- Internet access enabled
- 600-second auto-destroy (configurable)
- Request-specific environment variables
- Automatic error handling
API Reference
Primary Tool: execute_code_isolated()
The recommended way to execute code. Creates a sandbox, runs your code, returns output, and auto-destroys.
result = execute_code_isolated(
code='print("Hello, World!")',
language='python', # 'python', 'javascript', 'bash', 'go'
timeout=30, # max 300 seconds
env={'API_KEY': 'secret'}, # optional env vars
template_name='code-interpreter', # template to use
region='us-east' # optional: 'us-east', 'eu-west'
)
Returns:
{
'stdout': 'Hello, World!\n',
'stderr': '',
'exit_code': 0,
'execution_time': 0.123,
'sandbox_id': '1762778786mxaco6r2',
'_note': 'Sandbox will auto-destroy after 10 minutes'
}
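The fields above are all you need for basic error handling. A minimal sketch (whether the tool additionally raises on transport errors is not documented here):
# Inspect the fields shown above; a non-zero exit_code signals a failed run.
result = execute_code_isolated(code='print("ok")', language='python', timeout=30)
if result['exit_code'] == 0:
    print(result['stdout'], end='')
else:
    print('execution failed:', result['stderr'])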
Advanced: Persistent Sandboxes
For multi-step workflows where you need to maintain state:
# 1. Create a long-lived sandbox
sandbox = create_sandbox(
template_id='20', # or get ID from get_template('code-interpreter')
timeout_seconds=3600,
internet_access=True
)
# 2. Extract connection details
vm_url = sandbox['direct_url']
auth_token = sandbox['auth_token']
# 3. Run multiple commands
execute_code(vm_url, 'import pandas as pd', auth_token=auth_token)
execute_code(vm_url, 'df = pd.read_csv("data.csv")', auth_token=auth_token)
result = execute_code(vm_url, 'print(df.head())', auth_token=auth_token)
# 4. File operations
file_write(vm_url, '/workspace/output.txt', 'results', auth_token=auth_token)
content = file_read(vm_url, '/workspace/output.txt', auth_token=auth_token)
# 5. Clean up when done
delete_sandbox(sandbox['id'])
All Available Tools
The MCP server exposes 30+ tools for complete control:
Sandbox Management:
- create_sandbox() - Create a new sandbox
- list_sandboxes() - List all your sandboxes
- get_sandbox() - Get sandbox details
- delete_sandbox() - Terminate a sandbox
- update_sandbox_timeout() - Extend runtime
Code Execution:
- execute_code_isolated() - ⭐ Primary method (one-shot)
- execute_code() - Execute in existing sandbox
- execute_code_rich() - Capture matplotlib plots, DataFrames
- execute_code_background() - Long-running tasks (5-30 min)
- execute_code_async() - Very long tasks with webhooks (30+ min)
File Operations:
- file_read(), file_write(), file_list()
- file_exists(), file_remove(), file_mkdir()
Process Management:
- list_processes() - All system processes
- execute_list_processes() - Background executions
- execute_kill_process() - Terminate process
Environment & System:
- env_set(), env_get(), env_clear() - Manage env vars
- get_system_metrics() - CPU, memory, disk usage
- run_command() - Execute shell commands
Templates:
- list_templates() - Browse available templates
- get_template() - Get template details
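The tool groups compose naturally; for example, a template lookup can feed sandbox creation. A sketch built from the calls shown earlier in this README (the exact shape of the object returned by get_template(), such as an 'id' field, is an assumption here):
# Pick a Python development template, then start a sandbox from it.
templates = list_templates(category='development', language='python')
template = get_template('code-interpreter')          # documented default template
sandbox = create_sandbox(
    template_id=template['id'],                      # assumed field name
    timeout_seconds=1800,
    internet_access=True,
)
execute_code(sandbox['direct_url'], 'print("ready")', auth_token=sandbox['auth_token'])
delete_sandbox(sandbox['id'])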
Architecture
┌─────────────┐
│ Claude │ Your AI Assistant
│ (MCP Host) │
└──────┬──────┘
│ MCP Protocol
│
┌──────▼──────┐
│ HOPX MCP │ This Server
│ Server │ (FastMCP)
└──────┬──────┘
│ HTTPS/REST
│
┌──────▼──────┐
│ HOPX Cloud │ Isolated Containers
│ Sandboxes │ • Python, JS, Bash, Go
└─────────────┘ • Auto-cleanup
• Global Edge Network
Environment Variables
Required
HOPX_API_KEY=your-api-key
Get your API key at hopx.ai
Optional
HOPX_BASE_URL=https://api.hopx.dev # default
HOPX_BEARER_TOKEN=alternative-auth # if using bearer token
Troubleshooting
"401 Unauthorized" Error
Cause: API key not set or invalid.
Solution:
# Verify your API key is set
echo $HOPX_API_KEY
# Or check your IDE config file
# See Configuration section for your IDE
"Template not found" Error
Cause: Invalid template name.
Solution: Use the default code-interpreter template or list available templates:
templates = list_templates(category='development', language='python')
Slow First Execution
Cause: Cold start - container is being created.
Why it happens: The first execution needs to:
- Create the container (~0.1s)
- Wait for VM auth init (~3 seconds)
- Execute your code
Solution: Subsequent executions in the same sandbox are instant. For frequently-used environments, consider creating a persistent sandbox.
Execution Timeout
Cause: Code took longer than the timeout limit.
Solution: Increase timeout or use background execution:
# Increase timeout
execute_code_isolated(code='...', timeout=300) # max 300s
# Or use background for long tasks
proc = execute_code_background(vm_url, code='...', timeout=1800)
Limitations
- VM Initialization: ~3 second wait after sandbox creation for auth
- Execution Timeout: Maximum 300 seconds per synchronous execution
- Sandbox Lifetime: Default 10 minutes (configurable up to hours)
- Template-Specific: Some templates optimized for specific languages
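If a workflow outgrows the default lifetime, the update_sandbox_timeout() tool listed above can extend it. The parameter names in this sketch are assumptions, since the tool's exact signature is not documented in this README; check the schema exposed by your MCP client:
# Hypothetical parameter names: the tool exists (see "All Available Tools"),
# but its exact signature is an assumption.
sandbox = create_sandbox(template_id='20', timeout_seconds=600, internet_access=True)
update_sandbox_timeout(sandbox['id'], timeout_seconds=3600)   # assumed keyword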
Security
What's Protected
- ✅ Your local system - All code runs in isolated cloud containers
- ✅ Container isolation - Each execution in a separate container
- ✅ Network isolation - Containers can't access each other
- ✅ Automatic cleanup - Resources destroyed after timeout
- ✅ JWT authentication - Secure token-based auth per sandbox
What You Should Know
- ⚠️ Internet access - Containers can access the internet by default
- ⚠️ Code visibility - Your code is sent to the HOPX cloud for execution
- ⚠️ Data handling - Follow your security policies for sensitive data
For sensitive workloads, contact us about private cloud deployments.
Support
- Documentation: docs.hopx.ai
- API Reference: api.hopx.dev
- Issues: GitHub Issues
- Email: support@hopx.ai
- Discord: Join our community
License
This MCP server is provided under the MIT License. See LICENSE for details.
See the HOPX Terms of Service for API usage terms.
Built With
- FastMCP - Python framework for MCP servers
- Model Context Protocol - Protocol for AI-tool integration
- HOPX Sandbox API - Cloud container platform
- uvx - Fast Python package installer and runner
Made with ❤️ by HOPX