Memory MCP Server
SQLite-backed memory storage for MCP agents with optional semantic search via OpenAI embeddings, enabling agents to remember, recall, and manage contextual information across sessions.
The package `@ideadesignmedia/memory-mcp` ships a CLI and a programmatic API.
Highlights
- Uses `sqlite3` (async) for broad prebuilt support; no brittle native build steps.
- Optional FTS5 indexing for better search; falls back to `LIKE` when unavailable.
- Input validation and sane limits guard against oversized payloads.
- Auto-generates semantic embeddings via OpenAI when a key is provided; otherwise falls back to text-only scoring.
Install / Run
Quick run (no install):
```shell
npx -y @ideadesignmedia/memory-mcp --db=/abs/path/memory.db --topk=6
```
Install locally (dev dependency) and run:
```shell
npm i -D @ideadesignmedia/memory-mcp
npx memory-mcp --db=/abs/path/memory.db --topk=6
```
Other ecosystem equivalents:
- pnpm: `pnpm dlx @ideadesignmedia/memory-mcp --db=... --topk=6`
- yarn (classic): `yarn dlx @ideadesignmedia/memory-mcp --db=... --topk=6`
CLI usage
You can invoke it directly (if globally installed) or via npx as shown above.
Optional flags:
- `--embed-key=sk-...` supplies the embedding API key (same as `MEMORY_EMBEDDING_KEY`).
- `--embed-model=text-embedding-3-small` overrides the embedding model (same as `MEMORY_EMBED_MODEL`).
Codex config example
Using npx means no global install is required. Add the following to `~/.codex/config.toml`:
```toml
[mcp_servers.memory]
command = "npx"
args = ["-y", "@ideadesignmedia/memory-mcp", "--db=/abs/path/memory.db", "--topk=6"]
```
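For MCP clients that take a JSON config instead of TOML (for example, a Claude Desktop-style `mcpServers` entry), the same command line can be registered like this; the surrounding key names are the client's convention, not part of this package:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@ideadesignmedia/memory-mcp", "--db=/abs/path/memory.db", "--topk=6"]
    }
  }
}
```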
Programmatic API
```typescript
import { MemoryStore, runStdioServer } from "@ideadesignmedia/memory-mcp";

const store = new MemoryStore("./memory.db");

// All store methods are async
const id = await store.insert({
  ownerId: "user-123",
  type: "preference",
  subject: "favorite color",
  content: "blue",
});

// Run as an MCP server over stdio
await runStdioServer({
  dbPath: "./memory.db",
  defaultTopK: 6,
  embeddingApiKey: process.env.MEMORY_EMBEDDING_KEY, // optional
});
```
Tools
All tools are safe for STDIO. The server writes logs to stderr only.
- memory-remember
  - Create a concise memory for an owner. Provide `ownerId`, `type` (slot), a short `subject`, and `content`. Optionally set `importance` (0–1), `ttlDays`, `pinned`, `consent`, `sensitivity` (tags), and `embedding`.
  - The response is minimal for LLMs (no embeddings or extra metadata):

    ```json
    {
      "id": "mem_...",
      "item": { "id": "mem_...", "type": "preference", "subject": "favorite color", "content": "blue" },
      "content": [
        { "type": "text", "text": "{\"id\":\"mem_...\",\"type\":\"preference\",\"subject\":\"favorite color\",\"content\":\"blue\"}" }
      ]
    }
    ```

- memory-recall
  - Retrieve up to `k` relevant memories for an owner via text/semantic search. Accepts an optional natural-language `query`, an optional `embedding`, and an optional `slot` (type).
  - The response is minimal per item: `{ id, type, subject, content }`.
  - Tip: if you need to delete, use recall to find the id, then call `memory-forget`.
- memory-list
  - List recent memories for an owner, optionally filtered by `slot` (type).
  - The response is minimal per item: `{ id, type, subject, content }`.
- memory-forget
  - Delete a memory by `id`. Consider recalling/listing first if you need to verify the item.
  - Tip: do not create a new memory to indicate "forgotten"; delete the original instead.
- memory-export
  - Export all memories for an owner as a JSON array. Useful for backup/migration.
  - Response items are minimal: `{ id, type, subject, content }`.
- memory-import
  - Bulk import memories for an owner. Each item mirrors the memory schema (`type`, `subject`, `content`, metadata, optional `embedding`). Max 1000 items per call.
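Over the wire, each tool is invoked with a standard MCP `tools/call` JSON-RPC request. A sketch of a `memory-remember` call, reusing the fields from the example above (the envelope shape comes from the MCP specification, not from this package's docs):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory-remember",
    "arguments": {
      "ownerId": "user-123",
      "type": "preference",
      "subject": "favorite color",
      "content": "blue"
    }
  }
}
```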
Embeddings
Embeddings are optional—without a key the server relies on text search and recency heuristics.
Set `MEMORY_EMBEDDING_KEY` (or pass `--embed-key=...` to the CLI) to automatically create embeddings when remembering/importing memories and to embed recall queries. The default model is `text-embedding-3-small`; override it with `MEMORY_EMBED_MODEL` or `--embed-model`. To disable the built-in generator when using the programmatic API, pass `embeddingProvider: null` to `createMemoryMcpServer`. To specify a key programmatically, pass `embeddingApiKey: "sk-..."`.
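The two programmatic modes can be wired up like this; a minimal sketch, assuming `createMemoryMcpServer` accepts a `dbPath` option like `runStdioServer` does (only `embeddingProvider` and `embeddingApiKey` are documented above):

```typescript
import { createMemoryMcpServer } from "@ideadesignmedia/memory-mcp";

// Text-only mode: disable the built-in embedding generator entirely;
// recall falls back to text search and recency heuristics.
const textOnlyServer = createMemoryMcpServer({
  dbPath: "./memory.db", // assumed option name
  embeddingProvider: null,
});

// Semantic mode: supply the key directly instead of via the environment.
const semanticServer = createMemoryMcpServer({
  dbPath: "./memory.db", // assumed option name
  embeddingApiKey: process.env.MEMORY_EMBEDDING_KEY,
});
```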
Limits and validation
- memory-remember: `subject` max 160 chars, `content` max 1000, `sensitivity` up to 32 tags.
- memory-recall: optional `query` max 1000 chars; if omitted, listing is capped internally.
- memory-import: up to 1000 items per call; each item has the same field limits as remember.
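Since the server rejects oversized payloads, a client can clamp inputs to the documented limits before calling `memory-remember`. A sketch of such a guard; the field names mirror the memory schema above, but the helper itself is illustrative and not part of the package:

```typescript
// Mirrors the documented limits: subject <= 160 chars,
// content <= 1000 chars, sensitivity <= 32 tags.
interface MemoryInput {
  ownerId: string;
  type: string;
  subject: string;
  content: string;
  sensitivity?: string[];
}

function clampMemoryInput(input: MemoryInput): MemoryInput {
  return {
    ...input,
    subject: input.subject.slice(0, 160),
    content: input.content.slice(0, 1000),
    sensitivity: input.sensitivity?.slice(0, 32),
  };
}
```

Truncating client-side avoids a round trip to the server for inputs that would fail validation anyway; for content you cannot afford to lose, split it into multiple memories instead of clamping.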