Scout Monitoring MCP
This repository contains code to locally run an MCP server that can access Scout Monitoring data via Scout's API. We provide a Docker image that can be pulled and run by your AI Assistant to access Scout Monitoring data.
This puts Scout Monitoring's performance and error data directly in the hands of your AI Assistant. For Rails, Django, FastAPI, Laravel and more. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor and codebase. N+1 queries, slow endpoints, slow queries, memory bloat, throughput issues - all your favorite performance problems surfaced and explained right where you are working.
If this makes your life a tiny bit better, why not :star: it?!
Prerequisites
You will need to have or create a Scout Monitoring account and obtain an API key.
- Sign up
- Install the Scout Agent in your application and send Scout data!
- Visit settings to get or create an API key
- This is not your "Agent Key"; it's the "API Key" that can be created on the Settings page
- This is a read-only key that can only access data in your account
- Install Docker. The instructions below assume you can start a Docker container.
The MCP server will not currently start without an API key, set either in the environment or via a command-line argument on startup.
Installation
We recommend using the provided Docker image to run the MCP server. It is intended to be started by your AI Assistant and configured with your Scout API key. Many local clients allow specifying a command to run the MCP server in some location. A few examples are provided below.
The Docker image is available on Docker Hub.
Of course, you can always clone this repo and run the MCP server directly; uv or other
environment management tools are recommended.
Setup Wizard
The simplest way to configure and start using the Scout MCP is with our interactive setup wizard:
Run via npx:
npx @scout_apm/wizard
Build and run from source:
cd ./wizard
npm install
npm run build
node dist/wizard.js
The wizard will guide you through:
- Selecting your AI coding platform (Cursor, Claude Code, Claude Desktop)
- Entering your Scout API key
- Automatically configuring the MCP server settings
Supported Platforms
The wizard currently supports setup for:
- Cursor - Automatically configures MCP settings
- Claude Code (CLI) - Provides the correct command to run
- Claude Desktop - Updates the configuration file for Windows/Mac
Configure a local Client (e.g. Claude/Cursor/VS Code Copilot)
If you would like to configure the MCP server manually, this usually just means adding an entry to your AI Assistant's config: a command that runs the server, with your API key in its environment. Here is the shape of the JSON (the top-level key varies by client):
{
"mcpServers": {
"scout-apm": {
"command": "docker",
"args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
"env": { "SCOUT_API_KEY": "your_scout_api_key_here"}
}
}
}
<details> <summary> Claude Code</summary>
claude mcp add scoutmcp -e SCOUT_API_KEY=your_scout_api_key_here -- docker run --rm -i -e SCOUT_API_KEY scoutapp/scout-mcp-local
</details>
<details> <summary>Cursor</summary>
Make sure to update the SCOUT_API_KEY value to your actual API key under Arguments in Cursor Settings > MCP.
</details>
<details> <summary>VS Code Copilot</summary>
- VS Code Copilot docs
- We recommend the "Add an MCP server to your workspace" option </details>
<details> <summary>Claude Desktop</summary>
Add the following to your Claude config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%/Claude/claude_desktop_config.json
{
"mcpServers": {
"scout-apm": {
"command": "docker",
"args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
"env": { "SCOUT_API_KEY": "your_scout_api_key_here"}
}
}
}
</details>
Token Usage
We are currently more interested in expanding available information than strictly
controlling response size from our MCP tools. If your AI Assistant has a configurable
token limit (e.g. Claude Code: export MAX_MCP_OUTPUT_TOKENS=50000), we recommend
setting it generously high, e.g. 50,000 tokens.
Usage
Scout's MCP is intended to put error and performance data directly in the... hands? of your AI Assistant. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor.
Most assistants will show you both raw tool calls and perform analysis. Desktop assistants can readily create custom JS applications to explore whatever data you desire. Assistants integrated into code editors can use trace data and error backtraces to make fixes right in your codebase.
Combine Scout's MCP with your AI Assistant's other tools to:
- Create rich GitHub/GitLab issues based on errors and performance data
- Make JIRA fun - have your AI Assistant create tickets with all the details
- Generate PRs that fix specific errors and performance problems
Tools
The Scout MCP provides the following tools for accessing Scout APM data:
- list_apps - List available Scout APM applications, with optional filtering by last active date
- get_app_metrics - Get individual metric data (response_time, throughput, etc.) for a specific application
- get_app_endpoints - Get all endpoints for an application with aggregated performance metrics
- get_endpoint_metrics - Get timeseries metrics for a specific endpoint in an application
- get_app_endpoint_traces - Get recent traces for an app filtered to a specific endpoint
- get_app_trace - Get an individual trace with all spans and detailed execution information
- get_app_error_groups - Get recent error groups for an app, optionally filtered by endpoint
- get_app_insights - Get performance insights including N+1 queries, memory bloat, and slow queries
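A typical investigation chains these tools together: list apps, find the slowest endpoint, pull its traces, then drill into one trace. The sketch below shows that sequence with a stand-in `call_tool` function; `call_tool` and the parameter names are hypothetical (your assistant issues the real MCP calls, and the actual tool schemas may differ):

```python
def investigate_slow_endpoint(call_tool):
    """Walk from the app list down to a single trace, as an assistant might.

    `call_tool` is a stand-in for the MCP client's tool-invocation API;
    the keyword arguments below are illustrative, not the exact schema.
    """
    apps = call_tool("list_apps")
    app = apps[0]
    # Aggregated endpoint metrics identify the slowest endpoint...
    endpoints = call_tool("get_app_endpoints", app=app)
    slowest = max(endpoints, key=lambda e: e["mean_response_time_ms"])
    # ...then recent traces for that endpoint carry line-of-code detail.
    traces = call_tool("get_app_endpoint_traces", app=app, endpoint=slowest["name"])
    return call_tool("get_app_trace", app=app, trace_id=traces[0]["id"])
```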
Useful Prompts
- "Summarize the available tools in the Scout Monitoring MCP."
- "Find the slowest endpoints for app
my-app-namein the last 7 days. Generate a table with the results including the average response time, throughput, and P95 response time." - "Show me the highest-frequency errors for app
Fooin the last 24 hours. Get the latest error detail, examine the backtrace and suggest a fix." - "Get any recent n+1 insights for app
Bar. Pull the specific trace by id and help me optimize it based on the backtrace data."
Local Development
We use uv and taskipy to manage environments and run tasks for this project.
Run with Inspector
uv run task dev
Within the Inspector, select the STDIO transport, add your API key, and connect.
Build the Docker image
docker build -t scout-mcp-local .