memora

Persistent memory with knowledge graph visualization, semantic/hybrid search, importance scoring, and cloud sync (S3/R2) for cross-session context management.

<img src="media/memora.gif" width="60" align="left" alt="Memora Demo">

Memora

<br clear="left">

A lightweight Model Context Protocol (MCP) server that persists shared memories in SQLite. Compatible with Claude Code, Codex CLI, and other MCP-aware clients.

<img src="media/viz_graph_exp.png" alt="Knowledge Graph Visualization" width="600">

Features

  • Persistent Storage - SQLite-backed database with optional cloud sync (S3, GCS, Azure)
  • Semantic Search - Vector embeddings (TF-IDF, sentence-transformers, or OpenAI)
  • Event Notifications - Poll-based system for inter-agent communication
  • Advanced Queries - Full-text search, date ranges, tag filters (AND/OR/NOT)
  • Cross-references - Auto-linked related memories based on similarity
  • Hierarchical Organization - Explore memories by section/subsection
  • Export/Import - Backup and restore with merge strategies
  • Knowledge Graph - Interactive HTML visualization with filtering
  • Live Graph Server - Auto-starts HTTP server for remote access via SSH
  • Statistics & Analytics - Tag usage, trends, and connection insights
  • Zero Dependencies - Works out of the box with the Python stdlib (optional backends available)
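To make the persistent-storage idea concrete, here is a toy sketch of a SQLite-backed memory table. The schema and column names are hypothetical illustrations, not Memora's actual schema:

```python
import sqlite3

# Hypothetical memory table; Memora's real schema may differ.
conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id INTEGER PRIMARY KEY,
        content TEXT NOT NULL,
        section TEXT,
        tags TEXT,                 -- e.g. comma-separated tag list
        importance REAL DEFAULT 0.5
    )
""")
conn.execute(
    "INSERT INTO memories (content, section, tags) VALUES (?, ?, ?)",
    ("Prefer ruff over flake8", "tooling", "python,linting"),
)
conn.commit()
row = conn.execute("SELECT content, section FROM memories").fetchone()
print(row)  # ('Prefer ruff over flake8', 'tooling')
```

Because SQLite ships with Python, this style of store needs no external dependencies, which is what makes the zero-dependency default possible.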

Install

# From GitHub
pip install git+https://github.com/agentic-mcp-tools/memora.git

# With extras
pip install -e ".[cloud]"       # S3/R2/GCS cloud storage (boto3)
pip install -e ".[embeddings]"  # semantic search (sentence-transformers)
pip install -e ".[all]"         # cloud + embeddings + dev tools

Usage

The server runs automatically when configured in Claude Code. Manual invocation:

# Default (stdio mode for MCP)
memora-server

# With graph visualization server
memora-server --graph-port 8765

# HTTP transport (alternative to stdio)
memora-server --transport streamable-http --host 127.0.0.1 --port 8080

Claude Code Config

Add to .mcp.json in your project root:

Local DB

{
  "mcpServers": {
    "memory": {
      "command": "memora-server",
      "args": [],
      "env": {
        "MEMORA_DB_PATH": "~/.local/share/memora/memories.db",
        "MEMORA_ALLOW_ANY_TAG": "1",
        "MEMORA_GRAPH_PORT": "8765"
      }
    }
  }
}
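If you prefer to generate the config programmatically (for example from a setup script), a small sketch that writes the same JSON structure shown above; nothing here is a Memora API, and the temp-dir path is just for the demo:

```python
import json
import tempfile
from pathlib import Path

config = {
    "mcpServers": {
        "memory": {
            "command": "memora-server",
            "args": [],
            "env": {
                "MEMORA_DB_PATH": "~/.local/share/memora/memories.db",
                "MEMORA_ALLOW_ANY_TAG": "1",
                "MEMORA_GRAPH_PORT": "8765",
            },
        }
    }
}

# write to a temp dir for the demo; use your project root in practice
path = Path(tempfile.mkdtemp()) / ".mcp.json"
path.write_text(json.dumps(config, indent=2) + "\n")
loaded = json.loads(path.read_text())
print(loaded["mcpServers"]["memory"]["command"])  # memora-server
```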

Cloud DB (S3/R2)

{
  "mcpServers": {
    "memory": {
      "command": "memora-server",
      "args": [],
      "env": {
        "AWS_PROFILE": "memora",
        "AWS_ENDPOINT_URL": "https://<account-id>.r2.cloudflarestorage.com",
        "MEMORA_STORAGE_URI": "s3://memories/memories.db",
        "MEMORA_CLOUD_ENCRYPT": "true",
        "MEMORA_ALLOW_ANY_TAG": "1",
        "MEMORA_GRAPH_PORT": "8765"
      }
    }
  }
}

Codex CLI Config

Add to ~/.codex/config.toml:

[mcp_servers.memory]
command = "memora-server"  # or full path: /path/to/bin/memora-server
args = ["--no-graph"]

[mcp_servers.memory.env]
AWS_PROFILE = "memora"
AWS_ENDPOINT_URL = "https://<account-id>.r2.cloudflarestorage.com"
MEMORA_STORAGE_URI = "s3://memories/memories.db"
MEMORA_CLOUD_ENCRYPT = "true"
MEMORA_ALLOW_ANY_TAG = "1"

Environment Variables

| Variable | Description |
|----------|-------------|
| MEMORA_DB_PATH | Local SQLite database path (default: ~/.local/share/memora/memories.db) |
| MEMORA_STORAGE_URI | Cloud storage URI for S3/R2 (e.g., s3://bucket/memories.db) |
| MEMORA_CLOUD_ENCRYPT | Encrypt database before uploading to cloud (true/false) |
| MEMORA_CLOUD_COMPRESS | Compress database before uploading to cloud (true/false) |
| MEMORA_CACHE_DIR | Local cache directory for the cloud-synced database |
| MEMORA_ALLOW_ANY_TAG | Allow any tag without validation against the allowlist (1 to enable) |
| MEMORA_TAG_FILE | Path to a file containing allowed tags (one per line) |
| MEMORA_TAGS | Comma-separated list of allowed tags |
| MEMORA_GRAPH_PORT | Port for the knowledge graph visualization server (default: 8765) |
| MEMORA_EMBEDDING_MODEL | Embedding backend: tfidf (default), sentence-transformers, or openai |
| SENTENCE_TRANSFORMERS_MODEL | Model for sentence-transformers (default: all-MiniLM-L6-v2) |
| OPENAI_API_KEY | API key for OpenAI embeddings (required when using the openai backend) |
| OPENAI_EMBEDDING_MODEL | OpenAI embedding model (default: text-embedding-3-small) |
| AWS_PROFILE | AWS credentials profile from ~/.aws/credentials (useful for R2) |
| AWS_ENDPOINT_URL | S3-compatible endpoint for R2/MinIO |
| R2_PUBLIC_DOMAIN | Public domain for R2 image URLs |
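A sketch of how the defaults in the table above could be resolved from the environment; the `resolve_settings` helper is illustrative only and does not mirror Memora's internal code:

```python
import os

def resolve_settings(env=os.environ):
    # Apply the documented defaults when a variable is unset.
    return {
        "db_path": os.path.expanduser(
            env.get("MEMORA_DB_PATH", "~/.local/share/memora/memories.db")),
        "graph_port": int(env.get("MEMORA_GRAPH_PORT", "8765")),
        "allow_any_tag": env.get("MEMORA_ALLOW_ANY_TAG") == "1",
        "encrypt": env.get("MEMORA_CLOUD_ENCRYPT", "false").lower() == "true",
    }

settings = resolve_settings({"MEMORA_GRAPH_PORT": "9000"})
print(settings["graph_port"])  # 9000
```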

Semantic Search & Embeddings

Memora supports three embedding backends for semantic search:

| Backend | Install | Quality | Speed |
|---------|---------|---------|-------|
| tfidf (default) | none | Basic keyword matching | Fast |
| sentence-transformers | pip install sentence-transformers | True semantic understanding | Medium |
| openai | pip install openai | High quality | API latency |
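To show what the default tfidf backend does conceptually, here is a minimal TF-IDF plus cosine-similarity sketch. Memora's actual tokenization and weighting may differ; this only illustrates why related memories score higher than unrelated ones:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(t for doc in tokenized for t in set(doc))
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        # smoothed idf so terms shared by all docs are not zeroed out
        vecs.append({t: c * (math.log((1 + n) / (1 + df[t])) + 1)
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "postgres connection pooling tips",
    "tips for postgres connection pooling",
    "sourdough bread baking schedule",
]
vecs = tfidf_vectors(docs)
# the two postgres memories score far higher than the unrelated one
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

The sentence-transformers and openai backends replace these sparse vectors with dense embeddings, which is what upgrades keyword overlap into genuine semantic matching.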

Automatic: Embeddings and cross-references are computed automatically when you call memory_create, memory_update, or memory_create_batch.

Manual rebuild required when:

  • Changing MEMORA_EMBEDDING_MODEL after memories exist
  • Switching to a different sentence-transformers model

# After changing embedding model, rebuild all embeddings
memory_rebuild_embeddings

# Then rebuild cross-references to update the knowledge graph
memory_rebuild_crossrefs

Neovim Integration

Browse memories directly in Neovim with Telescope. Copy the plugin to your config:

# For kickstart.nvim / lazy.nvim
cp nvim/memora.lua ~/.config/nvim/lua/kickstart/plugins/

Usage: Press <leader>sm to open the memory browser with fuzzy search and preview.

Requires: telescope.nvim, plenary.nvim, and memora installed in your Python environment.

Knowledge Graph Export

Export memories as an interactive HTML knowledge graph visualization:

# Via MCP tool
memory_export_graph(output_path="~/memories_graph.html", min_score=0.25)

The export is an interactive vis.js graph with tag/section filtering, memory tooltips, Mermaid diagram rendering, and auto-resized image thumbnails. Click nodes to view content and drag to explore.
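As a toy illustration of how an export like memory_export_graph(min_score=0.25) could assemble vis.js node/edge data, the sketch below keeps only edges whose similarity clears the threshold. The memories, scores, and field names are made up; the actual output format is Memora's own:

```python
import json

memories = [
    {"id": 1, "title": "Use uv for venvs", "tags": ["python"]},
    {"id": 2, "title": "uv replaces pip-tools", "tags": ["python"]},
    {"id": 3, "title": "R2 needs AWS_ENDPOINT_URL", "tags": ["cloud"]},
]
# pairwise similarity scores, precomputed elsewhere (e.g. by the tfidf backend)
scores = {(1, 2): 0.8, (1, 3): 0.1, (2, 3): 0.05}
min_score = 0.25

nodes = [{"id": m["id"], "label": m["title"]} for m in memories]
edges = [{"from": a, "to": b, "value": s}
         for (a, b), s in scores.items() if s >= min_score]
graph_json = json.dumps({"nodes": nodes, "edges": edges})
print(len(edges))  # 1
```

Raising min_score prunes weak cross-references, so the rendered graph shows fewer but more meaningful connections.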

Live Graph Server

A built-in HTTP server starts automatically with the MCP server, serving the graph visualization on-demand.

Access locally:

http://localhost:8765/graph

Remote access via SSH:

ssh -L 8765:localhost:8765 user@remote
# Then open http://localhost:8765/graph in your browser

Configuration:

{
  "env": {
    "MEMORA_GRAPH_PORT": "8765"
  }
}

Use different ports on different machines to avoid conflicts when forwarding multiple servers.

To disable: add "--no-graph" to args in your MCP config.
