Memory MCP Server

SQLite-backed memory storage for MCP agents with optional semantic search via OpenAI embeddings, enabling agents to remember, recall, and manage contextual information across sessions.

@ideadesignmedia/memory-mcp

SQLite-backed memory for MCP agents. Ships a CLI and programmatic API.

Highlights

  • Uses sqlite3 (async) for broad prebuilt support; no brittle native build steps.
  • Optional FTS5 indexing for better search; falls back to LIKE when unavailable.
  • Input validation and sane limits to guard against oversized payloads.
  • Auto-generates semantic embeddings via OpenAI when a key is provided; otherwise falls back to text-only scoring.

Install / Run

Quick run (no install):

npx -y @ideadesignmedia/memory-mcp --db=/abs/path/memory.db --topk=6

Install locally (dev dependency) and run:

npm i -D @ideadesignmedia/memory-mcp
npx memory-mcp --db=/abs/path/memory.db --topk=6

Other ecosystem equivalents:

  • pnpm: pnpm dlx @ideadesignmedia/memory-mcp --db=... --topk=6
  • yarn (v2+, Berry): yarn dlx @ideadesignmedia/memory-mcp --db=... --topk=6 (classic Yarn 1.x has no dlx command)

CLI usage

You can invoke it directly (if globally installed) or via npx as shown above.

Optional flags:

  • --embed-key=sk-... supply the embedding API key (same as MEMORY_EMBEDDING_KEY).
  • --embed-model=text-embedding-3-small override the embedding model (same as MEMORY_EMBED_MODEL).
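Each flag mirrors its environment variable, so either form works; for example (key shown as a placeholder):

```shell
# Equivalent to passing --embed-key=... and --embed-model=... on the command line
export MEMORY_EMBEDDING_KEY=sk-...
export MEMORY_EMBED_MODEL=text-embedding-3-small
npx -y @ideadesignmedia/memory-mcp --db=/abs/path/memory.db --topk=6
```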

Codex config example

Using npx so no global install is required. Add to ~/.codex/config.toml:

[mcp_servers.memory]
command = "npx"
args = ["-y", "@ideadesignmedia/memory-mcp", "--db=/abs/path/memory.db", "--topk=6"]

Programmatic API

import { MemoryStore, runStdioServer } from "@ideadesignmedia/memory-mcp";

const store = new MemoryStore("./memory.db");
// All store methods are async
const id = await store.insert({
  ownerId: "user-123",
  type: "preference",
  subject: "favorite color",
  content: "blue",
});

// Run as an MCP server over stdio
await runStdioServer({
  dbPath: "./memory.db",
  defaultTopK: 6,
  embeddingApiKey: process.env.MEMORY_EMBEDDING_KEY, // optional
});

Tools

All tools are safe for STDIO. The server writes logs to stderr only.

  • memory-remember

    • Create a concise memory for an owner. Provide ownerId, type (slot), short subject, and content. Optionally set importance (0–1), ttlDays, pinned, consent, sensitivity (tags), and embedding.
    • Response is minimal for LLMs (no embeddings or extra metadata):
      {
        "id": "mem_...",
        "item": { "id": "mem_...", "type": "preference", "subject": "favorite color", "content": "blue" },
        "content": [ { "type": "text", "text": "{\"id\":\"mem_...\",\"type\":\"preference\",\"subject\":\"favorite color\",\"content\":\"blue\"}" } ]
      }
      
  • memory-recall

    • Retrieve up to k relevant memories for an owner via text/semantic search. Accepts optional natural-language query, optional embedding, and optional slot (type).
    • Response is minimal per item: { id, type, subject, content }.
    • Tip: If you need to delete, use recall to find the id, then call memory-forget.
  • memory-list

    • List recent memories for an owner, optionally filtered by slot (type).
    • Response is minimal per item: { id, type, subject, content }.
  • memory-forget

    • Delete a memory by id. Consider recalling/listing first if you need to verify the item.
    • Tip: Do not create a new memory to indicate "forgotten"—delete the original instead.
  • memory-export

    • Export all memories for an owner as a JSON array. Useful for backup/migration.
    • Response items are minimal: { id, type, subject, content }.
  • memory-import

    • Bulk import memories for an owner. Each item mirrors the memory schema (type, subject, content, metadata, optional embedding). Max 1000 items per call.
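Concretely, a memory-remember invocation from an MCP client might carry arguments like the following sketch (field names are from the descriptions above; values are illustrative):

```json
{
  "name": "memory-remember",
  "arguments": {
    "ownerId": "user-123",
    "type": "preference",
    "subject": "favorite color",
    "content": "blue",
    "importance": 0.8,
    "ttlDays": 365,
    "pinned": false,
    "sensitivity": ["personal"]
  }
}
```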

Embeddings

Embeddings are optional—without a key the server relies on text search and recency heuristics.

Set MEMORY_EMBEDDING_KEY (or pass --embed-key=... to the CLI) to automatically create embeddings when remembering/importing memories and to embed recall queries. The default model is text-embedding-3-small; override it with MEMORY_EMBED_MODEL or --embed-model. To disable the built-in generator when using the programmatic API, pass embeddingProvider: null to createMemoryMcpServer. To specify a key programmatically, pass embeddingApiKey: "sk-...".
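The two programmatic options above can be sketched as follows (option names are taken from this section; the import of createMemoryMcpServer from the package root is an assumption):

```javascript
import { createMemoryMcpServer } from "@ideadesignmedia/memory-mcp";

// Text-only scoring: disable the built-in embedding generator entirely.
const withoutEmbeddings = createMemoryMcpServer({
  dbPath: "./memory.db",
  embeddingProvider: null,
});

// Or supply the key directly instead of via MEMORY_EMBEDDING_KEY.
const withEmbeddings = createMemoryMcpServer({
  dbPath: "./memory.db",
  embeddingApiKey: "sk-...",
});
```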

Limits and validation

  • memory-remember: subject max 160 chars, content max 1000, sensitivity up to 32 tags.
  • memory-recall: optional query max 1000 chars; if omitted, listing is capped internally.
  • memory-import: up to 1000 items per call; each item has the same field limits as remember.
