brain-mcp
An MCP (Model Context Protocol) server that provides persistent semantic memory backed by PostgreSQL + pgvector. Store "thoughts" with vector embeddings and organize them via a flexible dimensional model.
Two Servers
This repo provides two MCP servers with a shared codebase:
- brain-mcp (`dist/index.js`) — General-purpose knowledge store with ADR support. Use this for non-code contexts.
- brain-code-mcp (`dist/code.js`) — Superset of brain-mcp with code-aware tools. Use this for software projects.
You only need to configure one — brain-code-mcp includes all brain-mcp tools.
Features
- Semantic search — find thoughts by meaning using cosine similarity over vector embeddings
- Dimensional organization — tag thoughts with typed dimensions (person, project, topic, tag, file, symbol, etc.)
- Thought temporality — thoughts have types (`fact`, `decision`, `observation`, `question`) and can be superseded while preserving history
- Multi-brain support — isolated knowledge spaces via the `BRAIN_NAME` environment variable
- Conflict detection — automatically surfaces similar existing thoughts when capturing new ones
- Architecture Decision Records — structured ADR capture with auto-numbering, alternatives, and consequences
- Code-linked knowledge — link thoughts to repositories, files, and symbols (brain-code-mcp)
- Knowledge freshness — detect stale knowledge when referenced code changes (brain-code-mcp)
Tools
Core tools (both servers)
| Tool | Description |
|---|---|
| `capture_thought` | Store a thought with type, dimensions, metadata, and embedding. Surfaces conflicts with similar active thoughts. |
| `search` | Semantic vector search with optional filters (dimension, thought type, etc.) |
| `list_recent` | Chronological listing with optional filters |
| `explore_dimension` | All thoughts linked to a given dimension |
| `list_dimensions` | All dimensions with thought counts |
| `list_brains` | List all brains with optional thought counts. Respects `BRAIN_ACCESSIBLE`. |
| `supersede_thought` | Replace an existing thought, preserving history. Auto-preserves ADR metadata. |
| `capture_adr` | Record an Architecture Decision Record with context, alternatives, and consequences |
| `list_adrs` | List and filter ADRs by status or dimension |
Code tools (brain-code-mcp only)
| Tool | Description |
|---|---|
| `capture_code_context` | Capture knowledge linked to specific files, symbols, or repositories |
| `search_code` | Semantic search filtered to code-linked knowledge |
| `check_freshness` | Check if code-linked knowledge is stale by comparing git state |
| `refresh_stale_knowledge` | Find stale thoughts with git diffs for review |
Core prompts (both servers)
| Prompt | Description |
|---|---|
| `brain_overview` | Comprehensive orientation: thought counts, dimensions, recent thoughts, ADR summary, open questions |
| `deep_dive` | Deep dive into a dimension with all linked thoughts, co-occurring dimensions, and ADRs |
| `decision_review` | Review active decisions and ADRs, flagging overdue revisit dates |
| `capture_session` | Set up a knowledge capture session with existing taxonomy and related knowledge |
Code prompts (brain-code-mcp only)
| Prompt | Description |
|---|---|
| `codebase_knowledge` | All knowledge about a repo grouped by file/symbol, with optional freshness checks |
| `file_context` | All knowledge about a specific file, with freshness and semantically related unlinked knowledge |
Setup
Prerequisites
- Node.js
- PostgreSQL with pgvector extension
- An OpenRouter API key (for generating embeddings)
Quick start (Claude Code)
Set `OPENROUTER_API_KEY` in your shell environment (e.g. in `~/.bashrc` or `~/.zshrc`):

```shell
export OPENROUTER_API_KEY="your-key-here"
```
Then add to your project's .mcp.json:
```json
{
  "mcpServers": {
    "brain": {
      "command": "npx",
      "args": ["-y", "github:markschaake/brain-mcp"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@host:5432/brain",
        "BRAIN_NAME": "personal"
      }
    }
  }
}
```
For brain-code-mcp (includes code-aware tools):
```json
{
  "mcpServers": {
    "brain": {
      "command": "npx",
      "args": ["-y", "-p", "github:markschaake/brain-mcp", "brain-code-mcp"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@host:5432/brain",
        "BRAIN_NAME": "my-project"
      }
    }
  }
}
```
Note: Do not put `OPENROUTER_API_KEY` in `.mcp.json` — it is often checked into version control. The server reads it from the environment automatically.
The database schema is automatically created on first run.
Database options
Option 1: Use the included docker-compose (easiest for local development)
```shell
git clone https://github.com/markschaake/brain-mcp.git
cd brain-mcp
docker compose up -d  # starts PostgreSQL + pgvector on port 5488
```

With docker-compose, the default `DATABASE_URL` (`postgresql://brain:brain@localhost:5488/brain`) works without any configuration.
Option 2: Bring your own PostgreSQL
Any PostgreSQL instance with the pgvector extension installed will work. Set DATABASE_URL in your MCP config. The schema is auto-applied on first server startup.
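If pgvector is installed but not yet enabled in your database, enable it once before first use (a sketch, typically run as a superuser; adjust connection details for your setup):

```sql
-- One-time setup in the target database (requires the pgvector package to be installed)
CREATE EXTENSION IF NOT EXISTS vector;
```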
Local development
```shell
pnpm install
pnpm run build
pnpm run dev   # watch mode (tsc --watch)
pnpm run lint  # run ESLint

# Run directly
OPENROUTER_API_KEY=your-key node dist/index.js  # brain-mcp
OPENROUTER_API_KEY=your-key node dist/code.js   # brain-code-mcp
```
Environment variables
| Variable | Description | Default |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://brain:brain@localhost:5488/brain` |
| `OPENROUTER_API_KEY` | Required for embedding generation | — |
| `EMBEDDING_MODEL` | Override the embedding model | `openai/text-embedding-3-small` |
| `BRAIN_NAME` | Which brain (knowledge space) to use | `personal` |
| `BRAIN_ACCESSIBLE` | Comma-separated whitelist of brain names this instance can access. Empty = all brains accessible. | (empty) |
Multi-brain usage
All tools and prompts accept an optional `brain` parameter to target a specific brain by name at runtime, without restarting the server. Omit it to use the default brain (`BRAIN_NAME`).
Read-only tools (`search`, `list_recent`, `explore_dimension`, `list_dimensions`, `list_adrs`, `search_code`, `check_freshness`, `refresh_stale_knowledge`) also accept `brain: "*"` to query across all accessible brains.
Write tools (`capture_thought`, `supersede_thought`, `capture_adr`, `capture_code_context`) reject `"*"` — you must specify a brain name for writes.
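For illustration, a cross-brain read issued over MCP's `tools/call` method might look like this (abridged JSON-RPC; the `query` value is made up):

```json
{
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "embedding model choices",
      "brain": "*"
    }
  }
}
```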
Use BRAIN_ACCESSIBLE to restrict which brains a server instance can access:
```json
{
  "mcpServers": {
    "brain": {
      "command": "npx",
      "args": ["-y", "github:markschaake/brain-mcp"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@host:5432/brain",
        "BRAIN_NAME": "personal",
        "BRAIN_ACCESSIBLE": "personal,work,shared"
      }
    }
  }
}
```
When `BRAIN_ACCESSIBLE` is empty (default), all brains are accessible.
Architecture
Source files
| File | Purpose |
|---|---|
| `src/index.ts` | brain-mcp entry point |
| `src/code.ts` | brain-code-mcp entry point (superset) |
| `src/tools.ts` | Shared tool registration (core + ADR tools) |
| `src/db.ts` | PostgreSQL connection pool and helpers |
| `src/migrate.ts` | Auto-migration runner (applies `migrations/*.sql` on startup) |
| `src/embeddings.ts` | Embedding generation via OpenRouter |
| `src/git.ts` | Git operations for freshness detection |
| `src/prompts.ts` | MCP prompt registration (core prompts for both servers) |
Database schema
Migrations are in `migrations/` and are auto-applied on server startup.
- brains — isolated knowledge spaces
- thoughts — content + vector(1536) embedding + metadata (jsonb) + thought type + status
- dimensions — typed categories with metadata, unique per (brain, name, type)
- thought_dimensions — many-to-many links with optional context
Embeddings are indexed with HNSW for fast cosine similarity search.
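As an illustration of what that index supports, a cosine-similarity search in pgvector looks roughly like this (`<=>` is pgvector's cosine-distance operator; the column names and `status` filter here are assumptions based on the schema description above, not the server's actual query):

```sql
-- Nearest active thoughts to a query embedding, by cosine similarity
SELECT id, content, 1 - (embedding <=> $1) AS similarity
FROM thoughts
WHERE status = 'active'
ORDER BY embedding <=> $1
LIMIT 10;
```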
Code-linked dimension types
brain-code-mcp uses these dimension types to link knowledge to code:
| Type | Name convention | Metadata |
|---|---|---|
| `repo` | Repository name | `{}` (extensible) |
| `file` | Repo-relative path | `{repo, line_start, line_end, git_sha}` |
| `symbol` | Symbol name | `{repo, file, kind}` |
ADR metadata
ADRs are stored as decision thoughts with structured metadata:
```jsonc
{
  "adr": true,
  "adr_number": 7,
  "adr_title": "Use pgvector for semantic search",
  "adr_status": "accepted", // proposed | accepted | deprecated | superseded
  "adr_context": "Why this decision was needed...",
  "adr_alternatives": [{ "name": "Pinecone", "pros": [...], "cons": [...] }],
  "adr_consequences": ["Must run PostgreSQL with pgvector"],
  "adr_decided_date": "2026-03-01"
}
```
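Typed out, that metadata corresponds to a shape like the following. This is an illustrative sketch based only on the field names above, not the server's actual type definitions, and the `pros`/`cons` values are placeholders:

```typescript
// Hypothetical TypeScript shape for ADR metadata stored in the thought's jsonb column.
interface AdrAlternative {
  name: string;
  pros: string[];
  cons: string[];
}

interface AdrMetadata {
  adr: true; // marks a decision thought as an ADR
  adr_number: number;
  adr_title: string;
  adr_status: "proposed" | "accepted" | "deprecated" | "superseded";
  adr_context: string;
  adr_alternatives: AdrAlternative[];
  adr_consequences: string[];
  adr_decided_date: string; // ISO date, e.g. "2026-03-01"
}

// Example value mirroring the JSON above (pros/cons invented for illustration).
const example: AdrMetadata = {
  adr: true,
  adr_number: 7,
  adr_title: "Use pgvector for semantic search",
  adr_status: "accepted",
  adr_context: "Why this decision was needed...",
  adr_alternatives: [
    { name: "Pinecone", pros: ["managed service"], cons: ["external dependency"] },
  ],
  adr_consequences: ["Must run PostgreSQL with pgvector"],
  adr_decided_date: "2026-03-01",
};
```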
License
ISC