VibeWatch
Your AI pair programmer's eyes on your terminal
What is VibeWatch?
VibeWatch is a CLI tool that monitors your development commands and exposes terminal output to Claude (or any MCP-compatible AI assistant) in real-time. No more copy-pasting errors - just tell Claude "check my terminal" and it sees everything.
Quick Start
# Install globally
npm install -g vibewatch
# Or run directly with npx
npx vibewatch npm run dev
# Your dev server runs normally, but now Claude can see it!
Then ask Claude: "I'm getting an error, check my terminal" - Claude automatically sees your terminal output via MCP.
Installation
Option 1: Global Install
npm install -g vibewatch
vibewatch npm run dev
Option 2: npx (no install)
npx vibewatch npm run dev
Option 3: Local Development
git clone https://github.com/krjordan/vibewatch.git
cd vibewatch
npm install
npm run build
node dist/cli.js npm run dev
Usage
Basic Usage
Wrap any command with vibewatch:
# JavaScript/TypeScript
vibewatch npm run dev
vibewatch npx next dev
vibewatch yarn dev
vibewatch pnpm dev
# Python
vibewatch python manage.py runserver
vibewatch uvicorn main:app --reload
vibewatch pytest
# Any command
vibewatch cargo run
vibewatch go run main.go
CLI Options
vibewatch [options] <command>
Options:
-p, --port <number> API server port (default: 3333)
-b, --buffer-size <number> Log buffer size (default: 100)
-v, --verbose Include node_modules in stack traces
-r, --raw Disable noise filtering (keep all output)
-k, --keep-alive <seconds> Keep API alive after crash for MCP queries (default: 30)
-h, --help Display help
-V, --version Show version
Examples
# Next.js with custom port
vibewatch --port 4444 npx next dev
# Keep API alive for 60 seconds after crash
vibewatch --keep-alive 60 npm run dev
# Disable noise filtering for debugging
vibewatch --raw npm run build
# Larger buffer for long-running processes
vibewatch --buffer-size 500 npm test
MCP Integration
Claude Desktop / Cursor Configuration
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json) or Cursor MCP settings:
{
"mcpServers": {
"vibewatch": {
"command": "npx",
"args": ["-y", "-p", "vibewatch", "vibewatch-mcp"]
}
}
}
Or if installed globally (npm install -g vibewatch):
{
"mcpServers": {
"vibewatch": {
"command": "vibewatch-mcp"
}
}
}
Available MCP Tools
| Tool | Description |
|---|---|
| get_terminal_output | Get recent terminal output with optional filtering |
| get_crash_context | Get detailed crash information with stack traces |
| get_recent_errors | Quick view of recent errors only |
| ping | Test connectivity |
MCP Tool Parameters
get_terminal_output:
{
lines?: number, // Max lines to retrieve (default: 50, max: 100)
filter?: 'all' | 'errors' | 'warnings', // Filter output type
detail?: 'errors' | 'context' | 'full' // Progressive disclosure level
}
get_crash_context:
{
verbose?: boolean // Include node_modules/site-packages (default: false)
}
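Normally Claude Desktop or Cursor issues these tool calls for you, but for illustration, here is a minimal sketch of a standalone MCP client calling get_terminal_output over stdio. It assumes the @modelcontextprotocol/sdk TypeScript package; the client name and argument values are examples only:

```typescript
// Illustrative only: a standalone MCP client invoking get_terminal_output.
// Assumes the @modelcontextprotocol/sdk package; Claude Desktop/Cursor
// normally performs this call for you.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({ command: "vibewatch-mcp" });
const client = new Client({ name: "example-client", version: "0.1.0" });

await client.connect(transport);

// Ask for recent errors with surrounding context (parameters documented above).
const result = await client.callTool({
  name: "get_terminal_output",
  arguments: { lines: 50, filter: "errors", detail: "context" },
});

console.log(result.content);
await client.close();
```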
API Endpoints
VibeWatch exposes a local HTTP API for direct queries:
| Endpoint | Description |
|---|---|
| GET /health | Server status and buffer stats |
| GET /live?lines=50&detail=context | Recent output with progressive disclosure |
| GET /crash | Crash snapshot (if crashed) |
| GET /errors | Errors only |
| GET /context?window=5 | Errors with surrounding context |
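Because the API is plain HTTP on localhost, it can also be queried from scripts. A minimal sketch using fetch, assuming VibeWatch is running on the default port 3333 and that these endpoints return JSON (exact response fields are not assumed here):

```typescript
// Query the local VibeWatch API directly. Assumes the default port 3333
// and JSON responses; fields are printed as-is rather than assumed.
const BASE = "http://localhost:3333";

// Server status and buffer stats
console.log(await fetch(`${BASE}/health`).then((r) => r.json()));

// Recent errors with surrounding context (keeps the payload small)
console.log(await fetch(`${BASE}/live?lines=50&detail=context`).then((r) => r.json()));

// Crash snapshot, if the wrapped process has crashed
const crash = await fetch(`${BASE}/crash`);
if (crash.ok) {
  console.log(await crash.json());
}
```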
Progressive Disclosure
Save tokens by requesting only what you need:
# Errors only (~200 tokens)
curl 'http://localhost:3333/live?detail=errors'
# Errors with context (~500 tokens)
curl 'http://localhost:3333/live?detail=context'
# Full output (~1000 tokens)
curl 'http://localhost:3333/live?detail=full'
Features
Framework Detection
VibeWatch automatically detects and optimizes for:
- JavaScript: Next.js, Vite, Webpack
- Python: Django, FastAPI, pytest
- General: Node.js, Python, Rust, Go
Smart Filtering
- ANSI code stripping
- Noise reduction (progress bars, HMR updates)
- Repeated line collapsing
- Stack trace filtering (node_modules/site-packages hidden by default)
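To give a sense of what this filtering does, here is a simplified, hypothetical sketch of ANSI stripping and repeated-line collapsing in TypeScript; it is illustrative only, not VibeWatch's actual implementation:

```typescript
// Hypothetical sketch of the filtering described above; illustrative only,
// not VibeWatch's actual implementation.
const ANSI_PATTERN = /\x1b\[[0-9;]*m/g;

// Remove ANSI color codes so logs read cleanly in an AI context window.
function stripAnsi(line: string): string {
  return line.replace(ANSI_PATTERN, "");
}

// Collapse consecutive duplicate lines (e.g. repeated HMR updates).
function collapseRepeats(lines: string[]): string[] {
  const out: string[] = [];
  let previous: string | null = null;
  let repeats = 0;

  const flush = () => {
    if (repeats > 0) out.push(`  ... previous line repeated ${repeats} more times`);
    repeats = 0;
  };

  for (const raw of lines) {
    const line = stripAnsi(raw);
    if (line === previous) {
      repeats += 1;
      continue;
    }
    flush();
    out.push(line);
    previous = line;
  }
  flush();
  return out;
}
```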
Error Detection
- Language-aware error pattern matching
- Framework-specific error patterns
- Non-fatal error notifications
- Relevant file path extraction
Crash Handling
- Automatic crash detection
- Buffer snapshot preservation
- Keep-alive mode for post-crash MCP queries
- Exit code and signal reporting
Architecture
[Your Dev Command]
↓
[VibeWatch CLI] ─── wraps process, captures output
↓
[Circular Buffer] ─── last 100 lines, noise filtering
↓
[Fastify API] ─────── localhost:3333
↓
[MCP Server] ──────── stdio transport
↓
[Claude Desktop/Cursor/Any MCP Client]
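The circular buffer at the center of this pipeline is just a fixed-size ring of the most recent output lines. A minimal sketch of the idea in TypeScript (the class and method names are illustrative, not VibeWatch's actual code):

```typescript
// Minimal sketch of a fixed-size buffer of recent log lines; the names here
// are illustrative, not VibeWatch's actual implementation.
class LogBuffer {
  private lines: string[] = [];

  constructor(private readonly capacity: number = 100) {}

  push(line: string): void {
    this.lines.push(line);
    if (this.lines.length > this.capacity) {
      this.lines.shift(); // drop the oldest line once the buffer is full
    }
  }

  // Return up to `count` of the most recent lines, newest last.
  tail(count: number): string[] {
    return this.lines.slice(-count);
  }
}

const buffer = new LogBuffer(100);
buffer.push("ready - started server on http://localhost:3000");
console.log(buffer.tail(50));
```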
Development
# Install dependencies
npm install
# Build
npm run build
# Development mode (watch)
npm run dev
# Type check
npm run typecheck
# Lint
npm run lint
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick Contribution Guide
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes
- Run tests: npm test
- Commit: git commit -m 'Add amazing feature'
- Push: git push origin feature/amazing-feature
- Open a Pull Request
Support
- GitHub Issues - Bug reports & feature requests
- Discussions - Questions & ideas
License
MIT - see LICENSE for details.
Built with love for the vibe coding community.