# Ogham MCP

Persistent shared memory for AI agents. Hybrid search (pgvector + tsvector), knowledge graph, cognitive scoring, and 16-language temporal extraction. 97.2% Recall@10 on LongMemEval with one PostgreSQL query. Works across Claude Code, Cursor, Codex, OpenClaw, and any MCP client.
Ogham (pronounced "OH-um") -- persistent, searchable shared memory for AI coding agents. Works across clients.
## Contents
- Retrieval quality -- 97.2% R@10 on LongMemEval
- The problem
- Quick start
- Installation methods -- Claude Code, OpenCode, Docker, source
- SSE transport -- multi-agent setup
- CLI -- command-line interface
- Configuration -- env vars, embedding providers, temporal search, lifecycle hooks
- MCP tools -- memory, search, graph, profiles, import/export
- Skills -- ogham-research, ogham-recall, ogham-maintain
- Scoring and condensing
- Database setup -- Supabase, Neon, vanilla Postgres
- Architecture
## Retrieval quality
97.2% Recall@10 on LongMemEval (500 questions, ICLR 2025). No LLM in the search pipeline -- one PostgreSQL query, no neural rerankers, no knowledge graph.
End-to-end QA accuracy on LongMemEval (retrieval + LLM reads and answers):
| System | Accuracy | Architecture |
|---|---|---|
| OMEGA | 95.4% | Classification + extraction pipeline |
| Observational Memory (Mastra) | 94.9% | Observation extraction + GPT-5-mini |
| Hindsight (Vectorize) | 91.4% | 4 memory types + Gemini-3 |
| Zep (Graphiti) | 71.2% | Temporal knowledge graph + GPT-4o |
| Mem0 | 49.0% | RAG-based |
Retrieval only (R@10 -- no LLM in the search loop):
| System | R@10 | Architecture |
|---|---|---|
| Ogham | 97.2% | 1 SQL query (pgvector + tsvector CCF hybrid search) |
| LongMemEval paper baseline | 78.4% | Session decomposition + fact-augmented keys |
Other retrieval systems that report similar R@10 numbers typically use cross-encoder reranking, NLI verification, knowledge graph enrichment, and LLM-as-a-judge pipelines. Ogham reaches 97.2% with one Postgres query.
These tables measure different things. QA accuracy tests whether the full system (retrieval + LLM) produces the correct answer. R@10 tests whether retrieval alone finds the right memories. Ogham is a retrieval engine -- it finds the memories, your LLM reads them.
| Category | R@10 | Questions |
|---|---|---|
| single-session-assistant | 100% | 56 |
| knowledge-update | 100% | 78 |
| single-session-user | 98.6% | 70 |
| multi-session | 97.3% | 133 |
| single-session-preference | 96.7% | 30 |
| temporal-reasoning | 93.5% | 133 |
Full breakdown: ogham-mcp.dev/features
## The problem
AI coding agents forget everything between sessions. Switch from Claude Code to Cursor to Kiro to OpenCode and context is lost. Decisions, gotchas, architectural patterns -- gone. You end up repeating yourself, re-explaining your codebase, re-debugging the same issues.
Ogham gives your agents a shared memory that persists across sessions and clients.
## Quick start

### 1. Install

```shell
uvx ogham-mcp init
```

This runs the setup wizard. It walks you through everything: database connection, embedding provider, schema migration, and writes MCP client configs for Claude Code, Cursor, VS Code, and others.

You need a database before running this. Either create a free Supabase project or a Neon database. The wizard handles the rest.

Using Neon or self-hosted Postgres? Install with the `postgres` extra so the driver is available:

```shell
uvx --from 'ogham-mcp[postgres]' ogham-mcp init
```
### 2. Add to your MCP client
The wizard configures everything and writes your client config -- including all environment variables the server needs. For Claude Code, it runs claude mcp add automatically. For other clients, copy the config snippet it prints.
### 3. Use it
Tell your agent to remember something, then ask about it later -- from the same client or a different one. It works because they all share the same database.
### Manual setup (if you prefer)

If you'd rather configure things yourself instead of using the wizard:

```shell
# Supabase
export SUPABASE_URL=https://your-project.supabase.co
export SUPABASE_KEY=your-service-role-key
export EMBEDDING_PROVIDER=openai   # or ollama, mistral, voyage
export OPENAI_API_KEY=sk-...       # for your chosen provider

# Or Postgres (Neon, self-hosted)
export DATABASE_BACKEND=postgres
export DATABASE_URL=postgresql://user:pass@host/db
export EMBEDDING_PROVIDER=openai
export OPENAI_API_KEY=sk-...
```

Run the schema migration (`sql/schema.sql` for Supabase, `sql/schema_postgres.sql` for Neon/self-hosted), then add the MCP server to your client.
## Installation methods

| Method | Command | When to use |
|---|---|---|
| uvx (recommended) | `uvx ogham-mcp` | Quick setup, auto-updates |
| Docker | `docker pull ghcr.io/ogham-mcp/ogham-mcp` | Isolation, self-hosted |
| Git clone | `git clone` + `uv sync` | Development, contributions |
### Claude Code

```shell
claude mcp add ogham -- uvx ogham-mcp
```
### OpenCode

Add to `~/.config/opencode/opencode.json`:

```json
{
  "mcp": {
    "ogham": {
      "type": "local",
      "command": ["uvx", "ogham-mcp"],
      "environment": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "{env:SUPABASE_KEY}",
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "{env:OPENAI_API_KEY}"
      }
    }
  }
}
```
### Docker

```shell
docker run --rm \
  -e SUPABASE_URL=https://your-project.supabase.co \
  -e SUPABASE_KEY=your-key \
  -e EMBEDDING_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  ghcr.io/ogham-mcp/ogham-mcp
```
### From source

```shell
git clone https://github.com/ogham-mcp/ogham-mcp.git
cd ogham-mcp
uv sync
uv run ogham --help
```
## SSE transport (multi-agent)

By default, Ogham runs in stdio mode -- each MCP client spawns its own server process. For multiple agents sharing one server, use SSE mode:

```shell
ogham serve --transport sse --port 8742
```

The server runs as a persistent background process. All clients connect to the same instance -- one database pool, one embedding cache, shared memory.

Client config for SSE (any MCP client):

```json
{
  "mcpServers": {
    "ogham": {
      "url": "http://127.0.0.1:8742/sse"
    }
  }
}
```

Health check at `http://127.0.0.1:8742/health` (cached, sub-10ms).

Configure via env vars (`OGHAM_TRANSPORT=sse`, `OGHAM_HOST`, `OGHAM_PORT`) or CLI flags. The init wizard (`ogham init`) walks through SSE setup if you choose it.
## Entry points

Ogham has two entry points:

- `ogham` -- the CLI. Use this for `ogham init`, `ogham health`, `ogham search`, and other commands you run yourself. Running `ogham` with no arguments starts the MCP server.
- `ogham-serve` -- starts the MCP server directly. This is what MCP clients should call. When you run `uvx ogham-mcp`, it invokes `ogham-serve`.
## CLI

```shell
ogham init                    # Interactive setup wizard
ogham health                  # Check database + embedding provider
ogham search "query"          # Search memories from the terminal
ogham store "some fact"       # Store a memory
ogham list                    # List recent memories
ogham profiles                # List profiles and counts
ogham stats                   # Profile statistics
ogham export -o backup.json   # Export memories
ogham import backup.json      # Import memories
ogham cleanup                 # Remove expired memories
ogham hooks install           # Auto-detect client + configure hooks
ogham hooks session-start     # Inject project context (piped from stdin)
ogham hooks post-tool         # Capture tool activity (piped from stdin)
ogham hooks inscribe          # Save session context before compaction
ogham hooks recall            # Restore context after compaction
ogham serve                   # Start MCP server (stdio, default)
ogham serve --transport sse   # Start SSE server on port 8742
ogham openapi                 # Generate OpenAPI spec
```
## Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| `DATABASE_BACKEND` | No | `supabase` | `supabase` or `postgres` |
| `SUPABASE_URL` | If supabase | -- | Your Supabase project URL |
| `SUPABASE_KEY` | If supabase | -- | Supabase secret key (service_role) |
| `DATABASE_URL` | If postgres | -- | PostgreSQL connection string |
| `EMBEDDING_PROVIDER` | No | `ollama` | `ollama`, `openai`, `mistral`, or `voyage` |
| `EMBEDDING_DIM` | No | `512` | Vector dimensions -- must match your schema (see below) |
| `OPENAI_API_KEY` | If openai | -- | OpenAI API key |
| `MISTRAL_API_KEY` | If mistral | -- | Mistral API key |
| `VOYAGE_API_KEY` | If voyage | -- | Voyage AI API key |
| `OLLAMA_URL` | No | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_EMBED_MODEL` | No | `embeddinggemma` | Ollama embedding model |
| `MISTRAL_EMBED_MODEL` | No | `mistral-embed` | Mistral embedding model |
| `VOYAGE_EMBED_MODEL` | No | `voyage-4-lite` | Voyage embedding model |
| `DEFAULT_MATCH_THRESHOLD` | No | `0.7` | Similarity threshold (see below) |
| `DEFAULT_MATCH_COUNT` | No | `10` | Max results per search |
| `DEFAULT_PROFILE` | No | `default` | Memory profile name |
## Embedding providers
| Provider | Default dimensions | Recommended threshold | Notes |
|---|---|---|---|
| OpenAI | 512 (schema default) | 0.35 | Set EMBEDDING_DIM=512 explicitly -- OpenAI defaults to 1024 |
| Ollama | 512 | 0.70 | Tight clustering, scores run 0.8-0.9 |
| Mistral | 1024 | 0.60 | Fixed 1024 dims, can't truncate. Schema must be vector(1024) |
| Voyage | 512 (schema default) | 0.45 | Moderate spread |
`EMBEDDING_DIM` must match the `vector(N)` column in your database schema. The default schema uses `vector(512)`. If you use Mistral, you need to alter the column to `vector(1024)` before storing anything.
Each provider clusters vectors differently, so the similarity threshold matters. Start with the recommended value and adjust based on your results.
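To see why the threshold matters, here is a minimal stdlib sketch (not Ogham's actual code) of similarity filtering: candidates whose cosine score falls below the provider-specific `DEFAULT_MATCH_THRESHOLD` are dropped before ranking.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def filter_matches(query_vec, candidates, threshold=0.7):
    """Keep (score, memory_id) pairs that clear the threshold, best first."""
    scored = [(cosine_similarity(query_vec, vec), mem_id) for mem_id, vec in candidates]
    return sorted((pair for pair in scored if pair[0] >= threshold), reverse=True)
```

With an Ollama-style tight cluster (scores 0.8-0.9), a 0.35 threshold would let nearly everything through; with OpenAI's wider spread, 0.70 would discard good matches. That is why each provider row above carries its own recommended value.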
## Temporal search

Search queries with time expressions like "last week" or "three months ago" are resolved automatically using parsedatetime -- no configuration needed. This handles roughly 80% of temporal queries at zero cost.

For expressions that parsedatetime cannot parse ("the quarter before last", "around Thanksgiving"), set `TEMPORAL_LLM_MODEL` to call an LLM as a fallback:

```shell
# Self-hosted with Ollama (free, local)
TEMPORAL_LLM_MODEL=ollama/llama3.2

# Cloud API
TEMPORAL_LLM_MODEL=gpt-4o-mini
```

Any litellm-compatible model string works -- `deepseek/deepseek-chat`, `moonshot/moonshot-v1-8k`, etc. The LLM is only called when parsedatetime fails and the query has temporal intent, so costs stay near zero.

If `TEMPORAL_LLM_MODEL` is empty (the default), parsedatetime handles everything on its own. The LLM fallback requires the litellm package (`pip install litellm`, or install Ogham with the appropriate extra).
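The two-tier flow can be sketched as follows. This is an illustration only: the toy regex parser stands in for parsedatetime, and `llm_fallback` is a hypothetical hook where a litellm completion call would go.

```python
import re
from datetime import datetime, timedelta

# Toy stand-in for parsedatetime -- handles e.g. "last week", "3 days ago".
RELATIVE = re.compile(r"(\d+|a|last)\s+(day|week|month)s?(\s+ago)?", re.IGNORECASE)

def cheap_parse(query, now=None):
    """Tier 1: resolve simple relative expressions with no LLM call."""
    now = now or datetime.now()
    m = RELATIVE.search(query)
    if not m:
        return None
    count = 1 if m.group(1).lower() in ("a", "last") else int(m.group(1))
    unit_days = {"day": 1, "week": 7, "month": 30}[m.group(2).lower()]
    return now - timedelta(days=count * unit_days)

def resolve_temporal(query, llm_fallback=None, now=None):
    """Tier 2: only invoke the (optional, costly) LLM when the cheap parser fails."""
    resolved = cheap_parse(query, now=now)
    if resolved is None and llm_fallback is not None:
        resolved = llm_fallback(query)  # e.g. a litellm completion call
    return resolved
```

Because tier 2 only fires when tier 1 returns nothing, the common relative expressions never cost an API call.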
## Lifecycle hooks

Ogham hooks inject memory context at session start and preserve it across compaction. Install for your client:

```shell
ogham hooks install
```
| Client | What gets installed |
|---|---|
| Claude Code | 4 hooks in ~/.claude/settings.json (SessionStart, PostToolUse, PreCompact, PostCompact) |
| Kiro | Instructions for Hook UI setup (session start + post tool) |
| Codex, Cursor, others | Project instruction file (CLAUDE.md, AGENTS.md, or .cursorrules) |
What the hooks do:
- `session-start` -- searches Ogham for memories relevant to your project directory and injects them as context
- `post-tool` -- captures meaningful tool executions as memories. Skips noise (`ls`, `cat`, `git status`) and only stores signal (commits, deploys, errors, config changes). Secrets are masked before storing.
- `inscribe` -- saves session context to Ogham before Claude compacts the conversation
- `recall` -- restores relevant memories after compaction so context isn't lost
Smart filtering: Hooks don't capture everything. Routine commands (ls, pwd, git add) are skipped. Only signal events (errors, deployments, commits, config changes) are stored -- typically 20-30 memories per session instead of hundreds.
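A minimal sketch of this kind of filter -- the patterns here are illustrative, not Ogham's actual server-side rules:

```python
import re

# Routine commands that never get stored (toy list).
NOISE = re.compile(r"^\s*(ls|pwd|cat|git\s+(status|add|diff))\b")
# Events worth remembering (toy list).
SIGNAL = re.compile(r"\b(error|fail|deploy|commit|migrat|config)\w*", re.IGNORECASE)

def worth_storing(command, output=""):
    """Skip routine commands; keep only events that carry signal."""
    if NOISE.match(command):
        return False
    return bool(SIGNAL.search(command) or SIGNAL.search(output))
```

The asymmetry is the point: a command must both avoid the noise list and match a signal pattern, which is how a session's hundreds of tool calls shrink to a few dozen memories.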
Secret masking: API keys, tokens, passwords, and JWTs are automatically replaced with ***MASKED*** before storing. The event is captured ("configured Stripe API key") but the actual secret never touches the database.
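A sketch of what such a masking pass might look like. The patterns below are illustrative assumptions, not Ogham's actual rule set:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{8,}"),                                # OpenAI-style keys
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),   # JWTs
    re.compile(r"(?i)(password|token|secret)\s*[=:]\s*\S+"),            # key=value pairs
]

def mask_secrets(text):
    """Replace anything that looks like a credential before it reaches the database."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("***MASKED***", text)
    return text
```

Masking happens before storage, so the searchable event ("configured Stripe API key") survives while the credential itself never does.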
## MCP tools

### Memory operations

| Tool | Description | Key parameters |
|---|---|---|
| `store_memory` | Store a new memory with embedding | `content` (required), `source`, `tags[]`, `auto_link` |
| `store_decision` | Store an architectural decision | `decision`, `reasoning`, `alternatives[]`, `tags[]` |
| `update_memory` | Update content of an existing memory | `memory_id`, `content`, `tags[]` |
| `delete_memory` | Delete a memory by ID | `memory_id` |
| `reinforce_memory` | Increase confidence score | `memory_id` |
| `contradict_memory` | Decrease confidence score | `memory_id` |

### Search

| Tool | Description | Key parameters |
|---|---|---|
| `hybrid_search` | Combined semantic + full-text search (CCF) | `query`, `limit`, `tags[]`, `graph_depth` |
| `list_recent` | List recent memories | `limit`, `profile` |
| `find_related` | Find memories related to a given one | `memory_id`, `limit` |

### Knowledge graph

| Tool | Description | Key parameters |
|---|---|---|
| `link_unlinked` | Auto-link memories by embedding similarity | `threshold`, `limit` |
| `explore_knowledge` | Traverse the knowledge graph | `memory_id`, `depth`, `direction` |

### Profiles

| Tool | Description | Key parameters |
|---|---|---|
| `switch_profile` | Switch the active memory profile | `profile` |
| `current_profile` | Show the active profile | -- |
| `list_profiles` | List all profiles with counts | -- |
| `set_profile_ttl` | Set auto-expiry for a profile | `profile`, `ttl_days` |

### Import / export

| Tool | Description | Key parameters |
|---|---|---|
| `export_profile` | Export all memories in the active profile | `format` (`json` or `markdown`) |
| `import_memories_tool` | Import memories with deduplication | `data`, `dedup_threshold` |

### Maintenance

| Tool | Description | Key parameters |
|---|---|---|
| `re_embed_all` | Re-embed all memories (after switching providers) | -- |
| `compress_old_memories` | Condense old inactive memories (full text to summary to tags) | -- |
| `cleanup_expired` | Remove expired memories (TTL) | -- |
| `health_check` | Check database and embedding connectivity | -- |
| `get_stats` | Memory counts, profiles, activity | -- |
| `get_cache_stats` | Embedding cache hit rates | -- |
## Skills

Ogham ships with three workflow skills in `skills/` that wire up common MCP tool chains. Install them in Claude Code, Cursor, or any client that supports skills.

| Skill | Triggers on | What it does |
|---|---|---|
| `ogham-research` | "remember this", "store this finding", "save what we learned" | Checks for duplicates via `hybrid_search` before storing. Auto-tags with a consistent scheme (`type:decision`, `type:gotcha`, etc.). Uses `store_decision` for architectural choices. |
| `ogham-recall` | "what do I know about X", "find related", "context for this project" | Chains `hybrid_search`, `find_related`, and `explore_knowledge` to surface connections. Bootstraps session context at project start. |
| `ogham-maintain` | "memory stats", "clean up my memory", "export my brain" | Runs `health_check`, `get_stats`, `cleanup_expired`, `re_embed_all`, `link_unlinked`. Warns before irreversible operations. |

Skills call existing MCP tools -- they don't replace them. The MCP server must be connected for skills to work.

Install all three with npx:

```shell
npx skills add ogham-mcp/ogham-mcp
```

Or install a specific skill:

```shell
npx skills add ogham-mcp/ogham-mcp --skill ogham-recall
```

Manual install (copy from a local clone):

```shell
cp -r skills/ogham-research skills/ogham-recall skills/ogham-maintain ~/.claude/skills/
```
## Scoring and condensing
Ogham goes beyond storing and retrieving. Three server-side features run automatically, no configuration needed.
Novelty detection. When you store a memory, Ogham checks how similar it is to what you already have. Redundant content gets a lower novelty score and ranks lower in search results. You can still find it, but it won't push out more useful memories.
Content signal scoring. Memories that mention decisions, errors, architecture, or contain code blocks get a higher signal score. A debug session where you fixed a real bug ranks above a casual note about a meeting. The scoring is pure regex, no LLM involved.
Automatic condensing. Old memories that nobody accesses gradually shrink. Full text becomes a summary of key sentences, then a one-line description with tags. The original is always preserved and can be restored if the memory becomes relevant again. Run compress_old_memories manually or on a schedule. High-importance and frequently-accessed memories resist condensing.
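A toy version of the regex-based signal scoring might look like this. The patterns and weights are invented for illustration; they are not Ogham's actual values.

```python
import re

# Each pattern that matches adds its weight to a base score of 1.0.
SIGNALS = {
    r"\b(decided|decision|chose)\b": 2.0,         # architectural decisions
    r"\b(error|exception|traceback|bug)\b": 2.0,  # real debugging work
    r"\b(architecture|schema|migration)\b": 1.5,  # structural changes
}
CODE_FENCE = chr(96) * 3  # a literal "```", spelled out so it can't close this fence

def signal_score(content):
    """Higher score = more likely to be useful later; used as a ranking boost."""
    score = 1.0
    if CODE_FENCE in content:
        score += 1.0  # memories containing code blocks rank higher
    for pattern, weight in SIGNALS.items():
        if re.search(pattern, content, re.IGNORECASE):
            score += weight
    return score
```

Because the whole pass is pattern matching, scoring stays deterministic and costs microseconds per memory, which is what makes it viable to run on every store.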
## Database setup

Ogham works with Supabase or vanilla PostgreSQL. Run the schema file that matches your setup:

| File | Use case |
|---|---|
| `sql/schema.sql` | Supabase Cloud |
| `sql/schema_selfhost_supabase.sql` | Self-hosted Supabase with RLS |
| `sql/schema_postgres.sql` | Vanilla PostgreSQL / Neon (no RLS) |

Supabase and Neon both include pgvector out of the box -- no extra setup needed. If you're self-hosting Postgres, you need PostgreSQL 15+ with the pgvector extension installed. We develop and test against PostgreSQL 17.

For Postgres, set `DATABASE_BACKEND=postgres` and `DATABASE_URL=postgresql://...` in your environment.
### Upgrading from v0.4.x

If you already have an Ogham database, run the upgrade script to add temporal columns, halfvec compression, and lz4:

```shell
# Postgres / Neon (psql required)
./sql/upgrade.sh $DATABASE_URL

# Or run migrations individually
psql $DATABASE_URL -f sql/migrations/012_temporal_columns.sql
psql $DATABASE_URL -f sql/migrations/013_halfvec_compression.sql
psql $DATABASE_URL -f sql/migrations/014_lz4_toast_compression.sql
psql $DATABASE_URL -f sql/migrations/015_temporal_auto_extract.sql

# Supabase: paste each migration file into the SQL Editor
```

All migrations are idempotent -- safe to re-run. The upgrade script checks your pgvector version and skips halfvec if pgvector is below 0.7.0.

New installs don't need migrations -- the schema files already include everything.
## Architecture

Ogham runs as an MCP server over stdio or SSE. Your AI client connects to it like any other MCP tool.

```
AI Client (Claude Code, Cursor, Kiro, OpenCode, ...)
        |
        | stdio or SSE (MCP protocol)
        |
Ogham MCP Server
        |
        | HTTPS (Supabase REST API) or direct connection (Postgres)
        |
PostgreSQL + pgvector
```

Memories are stored as rows with vector embeddings. Search combines pgvector cosine similarity with PostgreSQL full-text search using Convex Combination Fusion (CCF). The Supabase backend uses postgrest-py directly (not the full Supabase SDK) for a lightweight dependency footprint.

The knowledge graph uses a `memory_relationships` table with recursive CTEs for traversal -- no separate graph database.
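The fusion step can be sketched in Python. This is an illustrative convex combination over two score lists; the `alpha` weight and the min-max normalization are assumptions for the sketch, and Ogham's actual fusion runs inside the single SQL query.

```python
def ccf_fuse(vector_scores, text_scores, alpha=0.6):
    """Convex combination: alpha * semantic + (1 - alpha) * keyword,
    after min-max normalizing each ranker's raw scores.
    Both inputs map memory id -> raw score."""
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid divide-by-zero when all scores are equal
        return {k: (v - lo) / span for k, v in scores.items()}

    sem, kw = normalize(vector_scores), normalize(text_scores)
    fused = {
        mem_id: alpha * sem.get(mem_id, 0.0) + (1 - alpha) * kw.get(mem_id, 0.0)
        for mem_id in set(sem) | set(kw)
    }
    return sorted(((score, mem_id) for mem_id, score in fused.items()), reverse=True)
```

Unlike reciprocal rank fusion, which only looks at rank positions, a convex combination preserves how far apart the scores are, so a strong semantic match can't be overtaken by a marginal keyword hit.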
## Documentation
Full docs and integration guides at ogham-mcp.dev.
## Credits
Inspired by Nate B Jones and his work on persistent AI memory.
Named after Ogham, the ancient Irish alphabet carved into stone -- the original persistent memory.
## License
MIT