<table style="border:none;"> <tr> <td width="300"> <img alt="Detrix" src="assets/logo.png" width="300"> </td> <td>

Detrix

Give your AI agent eyes inside any running program.

  • Watch any variable at any line — agent sets an observation point in seconds, zero code changes during debugging
  • Local or cloud — same workflow for Docker containers and remote hosts
  • Python, Go, Rust — observation points capture values without pausing, without restarting
  • Built for agents — observe, query, manage observations via natural language; Claude Code, Cursor, Windsurf

</td> </tr> </table>



See It in Action

<video src="https://github.com/user-attachments/assets/2f6cc317-e09b-48ae-a098-d553d59a26e4" controls width="100%"></video>

1-minute demo: A Go service in Docker starts reporting wrong numbers. Nothing in the client code changed. Claude watches it live and finds the cause.

Here's what the session looks like:

You:    "The order-service in Docker is reporting wildly inflated revenue.
         Daily total is in the thousands when transactions should be $20-200."

Agent:  → reads the source, spots dead unit-conversion code
        → wakes the service via Detrix
        → observes txn.Amount, txn.Unit, total
          (no line number — find_variable locates it in the running container)

        "Found it. The pricing API changed what it's returning.

        txn.Amount = 8957, txn.Unit = "cents" → should be $89.57, summed as $8957

        In calculateRevenue, the unit field is fetched and discarded:

        unit := txn.Unit
        _ = unit          // ← conversion was intended, never implemented
        total += amount   // ← adds cents as if they were dollars

        This didn't matter before. It matters now."

        → fixed the client code
        → reported the API contract change upstream

No code was modified for instrumentation. No restarts. The old workflow (add a log line, rebuild, redeploy, wait for the bug to reproduce) is replaced by watching the values live.

You don't need to know the line number either — describe the behavior and the agent finds where to look.


Why Detrix?

You hit a bug. The old workflow: add a print, restart, reproduce, remove the print, repeat. If it's in production, redeploy. If it's in a Docker container, get into the container. If it's intermittent, wait.

With Detrix, you just ask the agent. It finds the right line, plants an observation point, and tells you what it sees — live, nothing restarting.

That bug that cost you hours last week — redeploy after redeploy, still can't reproduce — your agent can investigate it in minutes, while your app keeps running.

| | print() / logging | Detrix |
|---|---|---|
| Iteration speed | Hours (edit → rebuild → deploy) | Minutes |
| Add new observation | Edit code → restart | Ask the agent — no code, no restart¹ |
| Production-safe | Output pollution, perf risk | Non-breaking observation points |
| Events | Ephemeral stream | Stored, queryable by metric and time |
| Capture control | Every hit, no filtering | Throttle, sample, first-hit, interval |
| Cleanup | Manual (easy to forget, ships to prod) | One command — or automatic expiry |
| Sensitive data | Secrets can leak via log output | Sensitive-named vars blocked by default; configurable blacklist + whitelist in detrix.toml |

¹ Embed detrix.init() once for zero restarts forever. Or restart once to attach the debugger (--debugpy, dlv, lldb-dap) — from that point on, the agent adds and removes observations without any further restarts.
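The sensitive-data filtering described above can be sketched as a simple name filter. This is an illustration only, assuming nothing about Detrix's internals: the blocked-name list and both function names here are hypothetical stand-ins, and the real defaults and override keys live in detrix.toml.

```python
import re

# Illustrative default-deny pattern; Detrix's actual default list and
# detrix.toml configuration keys may differ.
BLOCKED = re.compile(r"password|passwd|api_key|token|secret|private_key", re.I)

def may_observe(var_name, whitelist=frozenset()):
    """Blacklist with whitelist override: a whitelisted name is always
    allowed; otherwise any name matching the blocked pattern is refused
    before its value is ever captured."""
    if var_name in whitelist:
        return True
    return not BLOCKED.search(var_name)
```

The key property is that filtering happens on the variable *name* before capture, so a secret value never enters the event store in the first place.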


Quick Start

Try it in 2 minutes. Your agent handles everything after step 3.

1. Install Detrix

macOS (Homebrew):

brew install flashus/tap/detrix

macOS / Linux (shell script):

curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/flashus/detrix/releases/latest/download/detrix-installer.sh | sh

Windows (PowerShell):

irm https://github.com/flashus/detrix/releases/latest/download/detrix-installer.ps1 | iex

Docker (linux/amd64, linux/arm64):

docker pull ghcr.io/flashus/detrix:latest

Build from source:

cargo install --git https://github.com/flashus/detrix detrix

Then initialise (creates config and sets up local storage):

detrix init

2. Add to your app

One line — the debugger sleeps until your agent needs it, zero overhead when idle:

import detrix
detrix.init(name="my-app")

Go and Rust work the same way — see App Integration.

3. Connect your agent

Claude Code:

claude mcp add --scope user detrix -- detrix mcp

Cursor / Windsurf — add to .mcp.json in your project root:

{
  "mcpServers": {
    "detrix": {
      "command": "detrix",
      "args": ["mcp"]
    }
  }
}

For cloud setup and other editors, see the setup guide.

That's it. Ask your agent to observe any line in your running app — no restarts, nothing ships to prod.


Alternative: connect without embedding

Don't want to add a dependency? Start your app directly under a debugger instead:

# Python
python -m debugpy --listen 127.0.0.1:5678 app.py

# Go
dlv debug --headless --listen=127.0.0.1:5678 --api-version=2 main.go

# Rust
lldb-dap --port 5678

Listens on 127.0.0.1 — local only. See the language setup guide for remote and Docker.


How It Works

Detrix is a daemon that runs locally or in the cloud and connects your AI agent to any running process via 29 MCP tools. Under the hood, it talks to your app's debugger via the Debug Adapter Protocol (DAP). It sets logpoints — breakpoints that evaluate an expression and log the result instead of pausing. Your application runs at full speed; Detrix captures the values.

  AI Agent                 Detrix Daemon              Debugger (DAP)         Your App
  (Claude Code, Cursor,    (local or Docker/cloud)    debugpy / dlv /        (Python/Go/Rust,
    Windsurf, local)                                  lldb-dap               local/cloud)
      │                         │                          │                      │
      │── "observe line 127" ──▶│                          │                      │
      │                         │── set logpoint ─────────▶│                      │
      │                         │                          │── captures value ───▶│
      │                         │◀────────────── captured values ─────────────────│
      │◀── structured events ───│                          │                      │
      │                         │                          │                      │
      │         App never pauses. No code changes. No restarts.                   │

The daemon runs locally or alongside your service in Docker — same protocol either way. In cloud mode, source files are fetched automatically so the agent can find the right lines without them on your machine. See the Installation Guide for cloud setup.
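The logpoint mechanism itself is plain DAP. Per the Debug Adapter Protocol specification, a `setBreakpoints` request whose breakpoint carries a `logMessage` field becomes a logpoint: the adapter interpolates `{expressions}` in the message and continues execution instead of pausing. A minimal sketch of such a request (the helper function and file name are illustrative; the wire format is the protocol's own):

```python
import json

def logpoint_request(seq, path, line, message):
    """Build a DAP setBreakpoints request whose breakpoint carries a
    logMessage instead of pausing execution. (Hypothetical helper;
    the JSON shape follows the DAP specification.)"""
    return json.dumps({
        "seq": seq,
        "type": "request",
        "command": "setBreakpoints",
        "arguments": {
            "source": {"path": path},
            "breakpoints": [
                # logMessage is what turns a breakpoint into a logpoint
                {"line": line, "logMessage": message},
            ],
        },
    })

req = logpoint_request(1, "order_service.py", 127, "total={total}")
```

Each time line 127 executes, the adapter emits an output event with the interpolated message, and the program keeps running at full speed.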


App Integration

import detrix
detrix.init(name="my-app")   # That's it. Agent controls the rest.
| Language | Install | Docs |
|---|---|---|
| Python | pip install detrix-py | Python Client |
| Go | go get github.com/flashus/detrix/clients/go | Go Client |
| Rust | detrix-rs = "1.1.1" in Cargo.toml | Rust Client |

Production pattern: Build one service instance with debug symbols and a Detrix client. Route suspect traffic to it via Kafka, a sidecar, or your load balancer. The rest of your fleet runs unaffected — full-speed, no instrumentation overhead. You get deep observability on one instance without touching production.
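One way to wire that pattern up, assuming the Python client from the Quick Start: enable Detrix only on the designated debug instance. The DETRIX_DEBUG_INSTANCE environment variable is an illustrative convention for this sketch, not something Detrix defines.

```python
import os

def maybe_init_detrix(name):
    """Call detrix.init() only on the one instance built for debugging.
    The rest of the fleet skips the import entirely, so it carries no
    dependency and no overhead. (DETRIX_DEBUG_INSTANCE is a
    hypothetical flag chosen for this example.)"""
    if os.environ.get("DETRIX_DEBUG_INSTANCE") != "1":
        return False  # production fleet: nothing to attach to
    try:
        import detrix
    except ImportError:
        return False  # client not installed on this image
    detrix.init(name=name)
    return True
```

Your load balancer or message broker then routes the suspect traffic to the one instance where this returns True.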

See the Clients Manual for full documentation.


Features

No code changes. The agent instruments your running code via observation points — nothing gets committed, nothing ships to prod.

No pausing. Observation points evaluate expressions at full execution speed, with no breakpoint-style halting. For high-frequency code paths, use sample or throttle modes to control event volume.
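The idea behind the throttle mode can be sketched in a few lines. This is not Detrix's implementation, just an illustration of how throttling bounds event volume on a hot code path:

```python
import time

class Throttle:
    """Emit at most one event per `interval` seconds; extra hits are
    dropped. Illustrative only; Detrix's throttle mode is configured
    through the agent, not through this class."""

    def __init__(self, interval, clock=time.monotonic):
        self.interval = interval
        self.clock = clock          # injectable for testing
        self._last = float("-inf")  # so the first hit always passes

    def allow(self):
        now = self.clock()
        if now - self._last >= self.interval:
            self._last = now
            return True
        return False
```

A line hit a million times per second with a one-second throttle still produces only one event per second, which is why observation points stay safe on high-frequency paths.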

No forgotten cleanup. Metrics expire automatically via TTL, or remove everything with one command.

| Feature | Description |
|---|---|
| Agent tools | 29 MCP tools — observe any line, query events, enable/disable observation groups, and clean up; no line number needed |
| Zero-downtime instrumentation | Add metrics without restarting your app |
| Multi-variable capture | Capture multiple variables per observation point |
| Capture modes | Stream, sample, throttle, first-hit, periodic sampling (every N sec) |
| Runtime introspection | Stack traces, memory snapshots, variable inspection, expression evaluation |
| Multi-language | Python (debugpy), Go (delve), Rust (lldb-dap) |
| Cloud debugging | Observe Docker containers and remote hosts — no VPN, no port forwarding |
| Durable storage | Events stored in SQLite on the daemon host. Run Detrix on a remote server, connect your agent in the morning and ask what happened overnight. Daemon auto-reconnects to the debug adapter if it restarts. |
| Extensible | New frontends via open API; new language support by implementing a language adapter — Adding Languages |
| Safety validation | Sensitive variable names (password, api_key, token, secret, private_key, etc.) blocked before capture. Configurable blacklist + whitelist for variable names and functions in detrix.toml. Enable safe mode per connection to allow only variable watching — no expression execution, no stack traces, no memory snapshots. Blocked operations return a clear named error so the agent can explain the constraint. |
| Auth | Bearer token auth (static or JWT/JWKS) — designed to run behind your reverse proxy |
| Event streaming | Forward captured events to Graylog |
| 4 API protocols | MCP (stdio), gRPC, REST, WebSocket |
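The "stored, queryable by metric and time" idea is easy to picture with a minimal SQLite sketch. The schema here is invented for illustration; Detrix's actual on-disk layout is not documented in this README.

```python
import sqlite3
import time

# Hypothetical three-column event store: metric name, captured value,
# and a Unix timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (metric TEXT, value TEXT, ts REAL)")

now = time.time()
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("txn.Amount", "8957", now - 3600),  # an hour ago
        ("txn.Amount", "42", now - 10),      # just now
    ],
)

# "What did txn.Amount look like in the last 30 minutes?"
rows = conn.execute(
    "SELECT value FROM events WHERE metric = ? AND ts >= ? ORDER BY ts",
    ("txn.Amount", now - 1800),
).fetchall()
```

Because events persist on the daemon host, the agent can answer questions about windows of time it was not watching live, such as what happened overnight.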

Documentation

| Guide | Covers |
|---|---|
| Installation Guide | Install, language setup, agent config, cloud debugging |
| CLI Reference | Command-line interface |
| Clients Manual | Python, Go, Rust client libraries |
| Architecture | Clean Architecture with 13 Rust crates |
| Adding Languages | Extend Detrix to new languages |

Contributing

cargo fmt --all && cargo clippy --all -- -D warnings && cargo test --all
  1. Fork the repository
  2. Create a feature branch
  3. Run the checks above
  4. Submit a Pull Request

License

MIT License — see LICENSE.

Found a bug? Open an issue. Found in minutes what took you days? Tell us in Discussions.
