mcp-memori

Persistent AI memory for any MCP-compatible agent — no SDK required.

mcp-memori is the official Memori MCP server. Connect it to your AI agent to give it long-term memory: recall relevant facts before answering, store durable preferences after responding, and maintain context across sessions.


Why Memori

Without persistent memory, every session starts from zero. With Memori, your agent:

  • Remembers preferences — "I prefer Python and use uv for dependency management" is recalled in future sessions automatically
  • Personalizes responses — past context shapes every answer without manual re-prompting
  • Isolates memory by user and workflow — scoped per entity_id and process_id so preferences never bleed across users or projects
  • Works with any MCP client — no SDK, no code changes, just config

LoCoMo Benchmark

Memori was evaluated on the LoCoMo benchmark for long-conversation memory and achieved 81.95% overall accuracy while using an average of 1,294 tokens per query. That is just 4.97% of the full-context footprint, showing that structured memory can preserve reasoning quality without forcing large prompts into every request.

Compared with other retrieval-based memory systems, Memori outperformed Zep, LangMem, and Mem0 while reducing prompt size by roughly 67% vs. Zep and lowering context cost by more than 20x vs. full-context prompting.

Read the benchmark overview or download the paper.


How It Works

The server exposes two tools:

| Tool | When to call | What it does |
| --- | --- | --- |
| recall | Start of each user turn | Fetches relevant memories for the current query |
| advanced_augmentation | After composing a response | Stores durable facts and preferences for future sessions |

Example Agent Flow

Given the message: "I prefer Python and use uv for dependency management."

  1. Agent calls recall with the user message as query
  2. Agent uses any returned facts to compose a response
  3. Agent calls advanced_augmentation with the user message and response

On a later turn — "Write a hello world script" — the agent recalls the Python + uv preference and personalizes its response automatically.
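
For concreteness, here is a minimal sketch of that loop from the agent's side. It assumes an already-initialized ClientSession from the official mcp Python SDK; the argument names (query, user_message, assistant_response) and the compose_response helper are illustrative placeholders, so inspect the tool schemas via list_tools() for the exact fields.

# Minimal sketch of the recall -> respond -> augment loop.
# Assumes `session` is an initialized mcp.ClientSession connected to the Memori server.
async def handle_turn(session, user_message: str) -> str:
    # 1. Fetch relevant memories before answering
    memories = await session.call_tool("recall", {"query": user_message})

    # 2. Compose a response using any returned facts (LLM call omitted; placeholder helper)
    response = compose_response(user_message, memories)

    # 3. Store durable facts and preferences for future sessions
    await session.call_tool(
        "advanced_augmentation",
        {"user_message": user_message, "assistant_response": response},
    )
    return response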


Prerequisites

  • A Memori API key from app.memorilabs.ai
  • An entity_id to identify the end user (e.g. user_123)
  • An optional process_id to identify the agent or workflow (e.g. my_agent)

Export these in your shell or replace the placeholders directly in your config:

export MEMORI_API_KEY="your-memori-api-key"
export MEMORI_ENTITY_ID="user_123"
export MEMORI_PROCESS_ID="my_agent"   # optional

Client Setup

<details> <summary><strong>Claude Code</strong></summary>

Via CLI:

claude mcp add --transport http memori https://api.memorilabs.ai/mcp/ \
  --header "X-Memori-API-Key: ${MEMORI_API_KEY}" \
  --header "X-Memori-Entity-Id: ${MEMORI_ENTITY_ID}" \
  --header "X-Memori-Process-Id: ${MEMORI_PROCESS_ID}"

Via .mcp.json (project root):

{
  "mcpServers": {
    "memori": {
      "type": "http",
      "url": "https://api.memorilabs.ai/mcp/",
      "headers": {
        "X-Memori-API-Key": "${MEMORI_API_KEY}",
        "X-Memori-Entity-Id": "${MEMORI_ENTITY_ID}",
        "X-Memori-Process-Id": "${MEMORI_PROCESS_ID}"
      }
    }
  }
}

Run /mcp inside Claude Code to verify the server status.

</details>

<details> <summary><strong>Cursor</strong></summary>

Create ~/.cursor/mcp.json (global) or .cursor/mcp.json (project-level):

{
  "mcpServers": {
    "memori": {
      "url": "https://api.memorilabs.ai/mcp/",
      "headers": {
        "X-Memori-API-Key": "${MEMORI_API_KEY}",
        "X-Memori-Entity-Id": "${MEMORI_ENTITY_ID}",
        "X-Memori-Process-Id": "${MEMORI_PROCESS_ID}"
      }
    }
  }
}

Restart Cursor after saving.

</details>

<details> <summary><strong>OpenAI Codex</strong></summary>

Add to ~/.codex/config.toml:

[mcp_servers.memori]
enabled = true
url = "https://api.memorilabs.ai/mcp/"

[mcp_servers.memori.http_headers]
X-Memori-API-Key = "${MEMORI_API_KEY}"
X-Memori-Entity-Id = "${MEMORI_ENTITY_ID}"
X-Memori-Process-Id = "${MEMORI_PROCESS_ID}"

You can also add the server from the Codex UI: Settings > MCP Servers > + Add Server.

</details>

<details> <summary><strong>Warp</strong></summary>

Add to your Warp MCP configuration:

{
  "memori": {
    "serverUrl": "https://api.memorilabs.ai/mcp/",
    "headers": {
      "X-Memori-API-Key": "your-memori-api-key",
      "X-Memori-Entity-Id": "user_123",
      "X-Memori-Process-Id": "my_agent"
    }
  }
}

</details>

<details> <summary><strong>Antigravity</strong></summary>

Open Manage MCP Servers and edit mcp_config.json:

{
  "mcpServers": {
    "memori": {
      "serverUrl": "https://api.memorilabs.ai/mcp/",
      "headers": {
        "X-Memori-API-Key": "your-memori-api-key",
        "X-Memori-Entity-Id": "user_123",
        "X-Memori-Process-Id": "my_agent"
      }
    }
  }
}

Save and restart Antigravity to refresh the tools list.

</details>

<details> <summary><strong>LangChain</strong></summary>

from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "memori": {
        "transport": "streamable_http",
        "url": "https://api.memorilabs.ai/mcp/",
        "headers": {
            "X-Memori-API-Key": "your-memori-api-key",
            "X-Memori-Entity-Id": "user_123",
            "X-Memori-Process-Id": "langchain_agent"
        }
    }
})

tools = await client.get_tools()
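
The returned tools can then be handed to any LangChain agent. A minimal sketch using LangGraph's prebuilt ReAct agent (assuming langgraph and langchain-openai are installed; the model is a placeholder):

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Give the Memori tools to a prebuilt ReAct agent; swap in whichever chat model you use.
model = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(model, tools)

result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "I prefer Python and use uv for dependency management."}]}
)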

</details>

<details> <summary><strong>Slack</strong></summary>

Set headers dynamically per request using the Slack user ID from the event payload:

const memoriHeaders = {
  "X-Memori-API-Key": process.env.MEMORI_API_KEY,
  "X-Memori-Entity-Id": slackEvent.user,   // e.g. "U04ABCDEF"
  "X-Memori-Process-Id": "supportbot",
};

Pass these headers in every MCP tool call. Use process_id to isolate memories by workspace so preferences from personal workspaces don't bleed into team ones.

</details>

<details> <summary><strong>Notion</strong></summary>

Set entity and process IDs from the Notion API user object:

const memoriHeaders = {
  "X-Memori-API-Key": process.env.MEMORI_API_KEY,
  "X-Memori-Entity-Id": notionUser.id,
  "X-Memori-Process-Id": "notion_writing_assistant",
};

</details>


Server Details

| Property | Value |
| --- | --- |
| Endpoint | https://api.memorilabs.ai/mcp/ |
| Transport | Stateless HTTP |
| Auth | API key via request headers |

Headers

| Header | Required | Description |
| --- | --- | --- |
| X-Memori-API-Key | Yes | Your Memori API key |
| X-Memori-Entity-Id | Yes | Stable end-user identifier (e.g. user_123) |
| X-Memori-Process-Id | No | Process, app, or workflow identifier for memory isolation |

session_id is derived automatically as <entity_id>-<UTC year-month-day:hour> — you do not need to provide it.
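
As an illustration only (the exact timestamp formatting below is an assumption based on the description above), the derived value looks roughly like this:

from datetime import datetime, timezone

# Sketch of the documented format: <entity_id>-<UTC year-month-day:hour>
entity_id = "user_123"
session_id = f"{entity_id}-{datetime.now(timezone.utc):%Y-%m-%d:%H}"
# e.g. "user_123-2025-06-01:14" (all turns within the same UTC hour share a session)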


Verifying the Connection

After configuring any client:

  1. Confirm the MCP server shows as connected in your client's UI
  2. Check that recall and advanced_augmentation appear in the tools list
  3. Send a test message — recall should return a response (even if empty for new entities)
  4. Verify that advanced_augmentation confirms a memory was created

If you receive 401 errors, double-check your X-Memori-API-Key value. See the Troubleshooting guide for more help.
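
If you prefer to verify from a script instead of a client UI, here is a minimal sketch using the official mcp Python SDK (pip install mcp); it assumes the environment variables from the Prerequisites section are set:

import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

HEADERS = {
    "X-Memori-API-Key": os.environ["MEMORI_API_KEY"],
    "X-Memori-Entity-Id": os.environ["MEMORI_ENTITY_ID"],
    "X-Memori-Process-Id": os.environ.get("MEMORI_PROCESS_ID", "my_agent"),
}

async def main() -> None:
    # Open a streamable-HTTP connection to the Memori endpoint with the auth headers
    async with streamablehttp_client(
        "https://api.memorilabs.ai/mcp/", headers=HEADERS
    ) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Expect "recall" and "advanced_augmentation" in this list
            print([tool.name for tool in tools.tools])

asyncio.run(main())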


Links
