Mind Keg MCP
A persistent memory MCP server for AI coding agents. Stores atomic learnings — debugging insights, architectural decisions, codebase conventions — so every agent session starts with relevant institutional knowledge.
Problem
AI coding agents (Claude Code, Cursor, Windsurf) lose context between sessions. Hard-won insights are forgotten the moment a conversation ends. Developers repeatedly re-explain the same things; agents repeatedly make the same mistakes.
Mind Keg solves this with a centralized, persistent brain that any MCP-compatible agent can query and contribute to.
How It Works
Mind Keg implements a RAG (Retrieval-Augmented Generation) pattern for AI coding agents:
- Retrieval — Agent searches the brain for relevant learnings using semantic or keyword search
- Augmentation — Retrieved learnings are injected into the agent's conversation context
- Generation — The agent responds with awareness of past discoveries and decisions
Unlike traditional RAG systems that chunk large documents, Mind Keg stores pre-curated atomic learnings (max 500 chars each). No chunking strategy needed — each learning IS the retrieval unit. The agent controls both retrieval and storage, creating a feedback loop where knowledge improves over time.
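From a client's point of view, the loop looks roughly like the following. This is a minimal sketch using the official MCP TypeScript SDK; the tool names match the MCP Tools table below, but the exact argument shapes are assumptions for illustration.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to Mind Keg over stdio, the same way an agent would.
const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({
    command: "mindkeg",
    args: ["serve", "--stdio"],
    env: { MINDKEG_API_KEY: "mk_your_key_here" },
  })
);

// 1. Retrieval: fetch learnings relevant to the task at hand.
const relevant = await client.callTool({
  name: "search_learnings",
  arguments: { query: "prisma constraint violation handling" },
});

// 2. Augmentation and 3. Generation happen inside the agent: the results
//    are injected into the conversation context before the model responds.

// 4. The feedback loop: store a new insight discovered this session.
//    (Argument names follow the Data Model section; assumed, not verified.)
await client.callTool({
  name: "store_learning",
  arguments: {
    content: "Prisma throws on constraint violations instead of returning null; wrap queries in try/catch.",
    category: "gotchas",
    tags: ["prisma", "error-handling"],
  },
});
```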
Features
- Store and retrieve atomic learnings (max 500 chars, one insight per entry)
- Semantic search with three provider options:
  - FastEmbed (free, local, ONNX-based — BAAI/bge-small-en-v1.5, 384 dims)
  - OpenAI (paid, best quality — text-embedding-3-small, 1536 dims)
  - None (FTS5 keyword fallback — zero external dependencies)
- Six categories: architecture, conventions, debugging, gotchas, dependencies, decisions
- Free-form tags and group linking
- Three scoping levels: repository-specific, workspace-wide, and global learnings
- Dual transport: stdio (local) + HTTP+SSE (remote)
- API key authentication with per-repository access control
- SQLite storage (zero dependencies, zero config)
- Import/export for backup and migration
Quick Start
Install
```bash
npm install -g mindkeg-mcp
```
Create an API key
```bash
mindkeg api-key create --name "My Laptop"
# Displays the key ONCE — save it securely
# mk_abc123...
```
Connect your AI agent
Mind Keg works with any MCP-compatible AI coding agent. Choose your setup:
Claude Code (stdio)
Add to `~/.claude.json` or your project's MCP settings:

```json
{
  "mcpServers": {
    "mindkeg": {
      "command": "mindkeg",
      "args": ["serve", "--stdio"],
      "env": {
        "MINDKEG_API_KEY": "mk_your_key_here"
      }
    }
  }
}
```
Cursor
Add to your Cursor MCP settings (`.cursor/mcp.json` or global settings):

```json
{
  "mcpServers": {
    "mindkeg": {
      "command": "mindkeg",
      "args": ["serve", "--stdio"],
      "env": {
        "MINDKEG_API_KEY": "mk_your_key_here"
      }
    }
  }
}
```
Windsurf
Add to your Windsurf MCP configuration (`~/.codeium/windsurf/mcp_config.json`):

```json
{
  "mcpServers": {
    "mindkeg": {
      "command": "mindkeg",
      "args": ["serve", "--stdio"],
      "env": {
        "MINDKEG_API_KEY": "mk_your_key_here"
      }
    }
  }
}
```
HTTP mode (any MCP client)
For agents that connect via HTTP instead of stdio:
```bash
MINDKEG_API_KEY=mk_your_key mindkeg serve --http
# Listening on http://127.0.0.1:52100/mcp
```

Then point your client's MCP config at the endpoint:

```json
{
  "mcpServers": {
    "mindkeg": {
      "type": "http",
      "url": "http://127.0.0.1:52100/mcp",
      "headers": {
        "Authorization": "Bearer mk_your_key_here"
      }
    }
  }
}
```
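For a quick smoke test of the endpoint and API key, you can hand-roll the first JSON-RPC request. This is a sketch, assuming the `/mcp` endpoint speaks the Streamable HTTP flavor of MCP; real clients should use an MCP SDK transport, which also handles the SSE stream.

```typescript
// Probe the HTTP endpoint with an initialize request (sketch only).
const res = await fetch("http://127.0.0.1:52100/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers expect clients to accept both content types.
    Accept: "application/json, text/event-stream",
    Authorization: "Bearer mk_your_key_here",
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26",
      capabilities: {},
      clientInfo: { name: "probe", version: "0.0.0" },
    },
  }),
});
console.log(res.status, await res.text()); // expect a 200 and a server hello
```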
Other MCP-compatible agents
Mind Keg works with any agent that supports the Model Context Protocol — including Codex CLI, Gemini CLI, GitHub Copilot, and more. Use the stdio config above adapted to your agent's MCP settings format.
Add Mind Keg instructions to your repository
Copy `templates/AGENTS.md` to the root of any repository where you want agents to use Mind Keg.

AGENTS.md is the industry standard supported by 20+ AI tools (Cursor, Windsurf, Codex, Gemini CLI, GitHub Copilot, etc.).

Claude Code only: Claude Code doesn't auto-load `AGENTS.md` natively. Add `@AGENTS.md` to your `CLAUDE.md` to bridge it.
MCP Tools
| Tool | Description |
|---|---|
| `store_learning` | Store a new atomic learning (repo, workspace, or global scope) |
| `search_learnings` | Semantic/keyword search for relevant learnings |
| `update_learning` | Update content, category, or tags |
| `deprecate_learning` | Mark a learning as deprecated |
| `flag_stale` | Flag a learning as potentially outdated |
| `delete_learning` | Permanently delete a learning |
| `list_repositories` | List all repositories with learning counts |
| `list_workspaces` | List all workspaces with learning counts |
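The maintenance tools are designed to compose into a review flow: flag first, then update or deprecate once verified. A sketch with assumed argument names, reusing a connected `Client` like the one in How It Works:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Review flow sketch: the `id` comes from a prior search_learnings result.
async function reviewLearning(client: Client, id: string, fixable: boolean) {
  // Mark the learning as suspect without losing it.
  await client.callTool({ name: "flag_stale", arguments: { id } });

  if (fixable) {
    // The insight can be corrected: refresh the content in place.
    await client.callTool({
      name: "update_learning",
      arguments: { id, content: "Updated wording for the same insight." },
    });
  } else {
    // It no longer holds: prefer deprecation over deletion so history survives.
    await client.callTool({ name: "deprecate_learning", arguments: { id } });
  }
}
```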
CLI Commands
```bash
# Start in stdio mode (for local agent connections)
mindkeg serve --stdio

# Start in HTTP mode (for remote connections)
mindkeg serve --http

# API key management
mindkeg api-key create --name "My Key"
mindkeg api-key create --name "Team Key" --repositories /repo/a /repo/b
mindkeg api-key list
mindkeg api-key revoke <prefix>

# Database
mindkeg migrate

# Backup and restore
mindkeg export --output backup.json
mindkeg import backup.json --regenerate-embeddings
```
Configuration
| Environment Variable | Default | Description |
|---|---|---|
| `MINDKEG_SQLITE_PATH` | `~/.mindkeg/brain.db` | SQLite database file |
| `MINDKEG_EMBEDDING_PROVIDER` | `fastembed` | `fastembed`, `openai`, or `none` |
| `OPENAI_API_KEY` | (none) | OpenAI API key (when provider=`openai`) |
| `MINDKEG_HOST` | `127.0.0.1` | HTTP server bind address |
| `MINDKEG_PORT` | `52100` | HTTP server port |
| `MINDKEG_LOG_LEVEL` | `info` | `debug`, `info`, `warn`, `error` |
| `MINDKEG_API_KEY` | (none) | API key for stdio transport |
Embedding providers
FastEmbed (default, free, local)
Semantic search works out of the box using FastEmbed — no API key needed, no network calls. Uses BAAI/bge-small-en-v1.5 (384 dimensions) via local ONNX Runtime. Model files are downloaded once on first use (~50MB).
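To get a feel for what the local provider does, here is a sketch using the `fastembed` npm package directly; it illustrates the model and its 384-dimension output, and is not necessarily the package or code path Mind Keg uses internally.

```typescript
import { EmbeddingModel, FlagEmbedding } from "fastembed";

// First call downloads the ONNX model files (~50MB); afterwards it runs
// fully offline with no API key.
const model = await FlagEmbedding.init({
  model: EmbeddingModel.BGESmallENV15,
});

// embed() yields batches of vectors; each vector has 384 dimensions.
for await (const batch of model.embed(["wrap Prisma queries in try/catch"])) {
  console.log(batch[0].length); // 384
}
```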
OpenAI (paid, best quality)
```bash
export MINDKEG_EMBEDDING_PROVIDER=openai
export OPENAI_API_KEY=sk-...
```
Uses text-embedding-3-small (1536 dimensions). Best semantic search quality but requires an API key and incurs per-request costs.
None (keyword search only)
```bash
export MINDKEG_EMBEDDING_PROVIDER=none
```
Disables semantic search and falls back to SQLite FTS5 full-text search — all other features work identically.
Data Model
Each learning contains:
| Field | Type | Notes |
|---|---|---|
| `id` | UUID | Auto-generated |
| `content` | string (max 500) | The atomic learning text |
| `category` | enum | One of 6 categories |
| `tags` | string[] | Free-form labels |
| `repository` | string or null | Repo path; null = workspace or global |
| `workspace` | string or null | Workspace path; null = repo-specific or global |
| `group_id` | UUID or null | Link related learnings |
| `source` | string | Who created this (e.g., "claude-code") |
| `status` | enum | `active` or `deprecated` |
| `stale_flag` | boolean | Agent-flagged as potentially outdated |
| `created_at` | ISO 8601 | Auto-set on creation |
| `updated_at` | ISO 8601 | Auto-updated on modification |
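Since the project validates with Zod (see Architecture), the table maps naturally onto a schema. The following is a hypothetical sketch mirroring the fields above, not the actual code in `src/models/`:

```typescript
import { z } from "zod";

// Hypothetical schema derived from the Data Model table above.
const Learning = z.object({
  id: z.string().uuid(),
  content: z.string().min(1).max(500),
  category: z.enum([
    "architecture", "conventions", "debugging",
    "gotchas", "dependencies", "decisions",
  ]),
  tags: z.array(z.string()),
  repository: z.string().nullable(),
  workspace: z.string().nullable(),
  group_id: z.string().uuid().nullable(),
  source: z.string(),
  status: z.enum(["active", "deprecated"]),
  stale_flag: z.boolean(),
  created_at: z.string().datetime(),
  updated_at: z.string().datetime(),
});

type Learning = z.infer<typeof Learning>;
```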
Scoping
Learnings have three scope levels:
| Scope | `repository` | `workspace` | Visible where |
|---|---|---|---|
| Repo-specific | set | null | Only that repo |
| Workspace-wide | null | set | All repos in the same parent folder |
| Global | null | null | Everywhere |
Workspaces are auto-detected from the parent folder of a repository path. For example, if your repos are organized as:
```
repositories/
  personal/        ← workspace
    app-a/
    app-b/
  work/            ← workspace
    project-x/
```

A workspace learning stored under `repositories/personal/` is shared across `app-a` and `app-b` but not `project-x`.
When searching, results include all three scopes: repo-specific + workspace + global. Each result has a scope field indicating its level.
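Concretely, the scope of a new learning is determined by which of the two fields you set when storing it. A sketch with assumed argument names, reusing a connected `Client`:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function storeScopedExamples(client: Client) {
  // Workspace-wide: set `workspace`, leave `repository` unset.
  await client.callTool({
    name: "store_learning",
    arguments: {
      content: "All repos in this workspace use pnpm, not npm.",
      category: "conventions",
      workspace: "/home/me/repositories/personal",
    },
  });

  // Global: leave both fields unset; visible everywhere.
  await client.callTool({
    name: "store_learning",
    arguments: {
      content: "Prefer ISO 8601 timestamps in all logs.",
      category: "conventions",
    },
  });
}
```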
What Makes a Good Learning?
- Atomic: One insight per entry. Max 500 characters.
- Actionable: What to DO or AVOID, not just what exists.
- Specific: Mentions the concrete context (library, pattern, file).
Good: "Always wrap Prisma queries in try/catch — it throws on constraint violations, not returns null."
Bad: "Be careful with the database." (too vague)
Development
```bash
# Clone and install
git clone ...
npm install

# Run tests
npm test

# Build
npm run build

# Development mode (rebuilds on change)
npm run dev

# Type check
npm run typecheck
```
Running without external APIs
Mind Keg works fully offline by default. FastEmbed provides free, local semantic search using ONNX Runtime — no API keys or network calls required. All CRUD operations and search work out of the box.
Architecture
```
CLI (Commander.js)
 └── serve / api-key / migrate / export / import

src/
  index.ts     Entry point, stdio + HTTP transports
  server.ts    MCP server + tool registration
  config.ts    Config loading (env vars → defaults)
  auth/        API key generation + validation middleware
  tools/       One file per MCP tool (8 tools)
  services/    LearningService + EmbeddingService
  storage/     StorageAdapter interface + SQLite impl
  models/      Zod schemas + TypeScript types
  utils/       Logger (pino → stderr) + error classes

templates/
  AGENTS.md    Template for instructing agents to use Mind Keg
```
See CLAUDE.md for detailed development conventions.
License
MIT