Fremem (formerly MCP Memory Server)
A persistent vector memory server for Windsurf, VS Code, and other MCP-compliant editors.
🌟 Philosophy
- Privacy-first, local-first AI memory: Your data stays on your machine.
- No vendor lock-in: Uses open standards and local files.
- Built for MCP: Designed specifically to enhance Windsurf, Cursor, and other MCP-compatible IDEs.
ℹ️ Status (v0.2.0)
Stable:
- ✅ Local MCP memory with Windsurf/Cursor
- ✅ Multi-project isolation
- ✅ Ingestion of Markdown docs
Not stable yet:
- 🚧 Auto-ingest (file watching)
- 🚧 Memory pruning
- 🚧 Remote sync
Note: There are two ways to run this server:
- Local IDE (stdio): Used by Windsurf/Cursor (default).
- Docker/Server (HTTP): Used for remote deployments or Docker (exposes port 8000).
🏥 Health Check
To verify the server binary runs correctly:
# From within the virtual environment
python -m fremem.server --help
✅ Quickstart (5-Minute Setup)
There are two ways to set this up: Global Install (recommended for ease of use) or Local Dev.
Option A: Global Install (Like npm -g)
This method allows you to run fremem from anywhere without managing virtual environments manually.
1. Install pipx (if not already installed):
macOS (via Homebrew):
brew install pipx
pipx ensurepath
# Restart your terminal after this!
Linux/Windows: See pipx installation instructions.
2. Install fremem:
# Install from PyPI
pipx install fremem
# Verify installation
fremem --help
3. Configure Windsurf / VS Code:
Since pipx puts the executable in your PATH, the config is simpler:
{
  "mcpServers": {
    "memory": {
      "command": "fremem",
      "args": [],
      "env": {
        "MCP_MEMORY_PATH": "/Users/YOUR_USERNAME/mcp-memory-data"
      }
    }
  }
}
Note on MCP_MEMORY_PATH: This is where fremem will store its persistent database. You can point it to any directory you like (it will be created if it doesn't exist). We recommend something like ~/mcp-memory-data or ~/.fremem-data. It must be an absolute path.
Option B: Local Dev Setup
1. Clone and Setup
git clone https://github.com/iamjpsharma/fremem.git
cd fremem
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install dependencies AND the package in editable mode
pip install -e .
2. Configure Windsurf / VS Code (Local Dev)
Add this to your mcpServers configuration (e.g., ~/.codeium/windsurf/mcp_config.json):
Note: Replace /ABSOLUTE/PATH/TO/fremem with the actual full path to the cloned directory.
{
  "mcpServers": {
    "memory": {
      "command": "/ABSOLUTE/PATH/TO/fremem/.venv/bin/python",
      "args": ["-m", "fremem.server"],
      "env": {
        "MCP_MEMORY_PATH": "/ABSOLUTE/PATH/TO/fremem/mcp_memory_data"
      }
    }
  }
}
In local dev mode, it's common to store the data inside the repo (ignored by git), but you can use any absolute path.
🚀 Usage
0. HTTP Server (New)
You can run the server via HTTP (SSE) if you prefer:
# Run on port 8000
python -m fremem.server_http
Access the SSE endpoint at http://localhost:8000/sse and send messages to http://localhost:8000/messages.
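If you want to poke at the HTTP transport without an editor, the official MCP Python SDK (pip install mcp) can talk to the SSE endpoint directly. This is a minimal sketch, not part of fremem itself; it assumes the server above is running on port 8000 and that a project named project-thaama has already been ingested (adjust names to your setup).

```python
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to the SSE endpoint exposed by `python -m fremem.server_http`
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Semantic search against a previously ingested project
            result = await session.call_tool(
                "memory_search",
                {"project_id": "project-thaama", "q": "architecture overview"},
            )
            print(result.content)

asyncio.run(main())
```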
🐳 Run with Docker
To run the server in a container:
# Build the image
docker build -t fremem .
# Run the container
# Mount your local data directory to /data inside the container
docker run -p 8000:8000 -v $(pwd)/mcp_memory_data:/data fremem
The server will be available at http://localhost:8000/sse.
1. Ingestion (Adding Context)
Use the included helper script ingest.sh to add files to a specific project.
# ingest.sh <project_name> <file1> <file2> ...
# Example: Project "Thaama"
./ingest.sh project-thaama \
docs/architecture.md \
src/main.py
# Example: Project "OpenClaw"
./ingest.sh project-openclaw \
README.md \
CONTRIBUTING.md
💡 Project ID Naming Convention
It is recommended to use a consistent prefix for your project IDs to avoid collisions:
- project-thaama
- project-openclaw
- project-myapp
2. Connect in Editor
Once configured, the following tools will be available to the AI Assistant:
- memory_search(project_id, q, filter=None): Semantic search. Supports metadata filtering (e.g., filter={"type": "code"}) and returns distance scores.
- memory_add(project_id, id, text): Manually add a memory.
- memory_list_sources(project_id): List the specific files ingested.
- memory_delete_source(project_id, source): Remove a specific file.
- memory_stats(project_id): Get chunk count.
- memory_reset(project_id): Clear all memories for a project.
The AI will effectively have "long-term memory" of the files you ingested.
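Outside an editor, the same tools can be exercised by spawning the server over stdio with the MCP Python SDK, exactly as the IDE config does. A minimal sketch, assuming a global pipx install (so the fremem command is on PATH); the project ID, memory text, and data path below are purely illustrative:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Spawn the fremem server the same way the IDE config would
    params = StdioServerParameters(
        command="fremem",
        args=[],
        env={"MCP_MEMORY_PATH": "/Users/YOUR_USERNAME/mcp-memory-data"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Manually add a memory, then search it back
            await session.call_tool(
                "memory_add",
                {"project_id": "project-demo", "id": "note-1",
                 "text": "We chose LanceDB for local vector storage."},
            )
            result = await session.call_tool(
                "memory_search",
                {"project_id": "project-demo", "q": "why LanceDB?"},
            )
            print(result.content)

asyncio.run(main())
```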
🛠 Troubleshooting
- "fremem: command not found" after installing:
  - This means pipx installed the binary to a location not in your system's PATH (e.g., ~/.local/bin).
  - Fix: Run pipx ensurepath and restart your terminal.
  - Manual Fix: Add export PATH="$PATH:$HOME/.local/bin" to your shell config (e.g., ~/.zshrc).
- "No MCP server found" or Connection errors:
  - Check the output of pwd to ensure your absolute paths in mcp_config.json are 100% correct.
  - Ensure the virtual environment (.venv) is created and dependencies are installed.
- "Wrong project_id used":
  - The AI sometimes guesses the project ID. You can explicitly tell it: "Use project_id 'project-thaama'".
- Embedding Model Downloads:
  - On the first run, the server downloads the all-MiniLM-L6-v2 model (approx. 100 MB), which may cause a slight delay on the first request. You can pre-download the model ahead of time (see the snippet below).
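To warm the model cache before wiring up your editor, you can trigger the download manually. A minimal sketch, assuming the embeddings are loaded via the sentence-transformers library (the usual way to run all-MiniLM-L6-v2 locally; fremem's exact loader may differ):

```python
# Pre-download all-MiniLM-L6-v2 into the local Hugging Face cache so the
# first memory_add / memory_search call does not block on a ~100 MB download.
from sentence_transformers import SentenceTransformer

SentenceTransformer("all-MiniLM-L6-v2")
```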
🗑️ Uninstalling
To remove fremem from your system:
If installed via pipx (Global):
pipx uninstall fremem
If installed locally (Dev): Just delete the directory.
📁 Repo Structure
/
├── src/fremem/
│ ├── server.py # Main MCP server entry point
│ ├── ingest.py # Ingestion logic
│ └── db.py # LanceDB wrapper
├── ingest.sh # Helper script
├── requirements.txt # Top-level dependencies
├── pyproject.toml # Package config
├── mcp_memory_data/ # Persistent vector storage (gitignored)
└── README.md
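For a sense of what the LanceDB wrapper does under the hood, here is an illustrative sketch of local embedding plus vector search with LanceDB and sentence-transformers. It is not fremem's actual db.py; the table layout, field names, and per-project-table isolation shown here are assumptions based on the behavior described above.

```python
# Illustrative only: local embeddings + LanceDB search, not fremem's real db.py.
import lancedb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
db = lancedb.connect("./mcp_memory_data")  # same directory as MCP_MEMORY_PATH

def add(project_id: str, doc_id: str, text: str) -> None:
    """Embed a chunk of text and store it in the project's table."""
    row = {"id": doc_id, "text": text, "vector": model.encode(text).tolist()}
    if project_id in db.table_names():
        db.open_table(project_id).add([row])   # one table per project = isolation
    else:
        db.create_table(project_id, data=[row])

def search(project_id: str, query: str, top_k: int = 5) -> list[dict]:
    """Return the top_k nearest chunks; rows include a _distance score."""
    table = db.open_table(project_id)
    query_vec = model.encode(query).tolist()
    return table.search(query_vec).limit(top_k).to_list()
```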
🗺️ Roadmap
✅ Completed (v0.1.x)
- [x] Local vector storage (LanceDB)
- [x] Multi-project isolation
- [x] Markdown ingestion
- [x] PDF ingestion
- [x] Semantic chunking strategies
- [x] Windows support + editable install fixes
- [x] HTTP transport wrapper (SSE)
- [x] Fix resource listing errors (clean MCP UX)
- [x] Robust docs + 5-minute setup
- [x] Multi-IDE support (Windsurf, Cursor-compatible MCP)
🚀 Near-Term (v0.2.x – Production Readiness)
🧠 Memory Governance
- [x] List memory sources per project
- [x] Delete memory by source (file-level deletion)
- [x] Reset memory per project
- [x] Replace / reindex mode (prevent stale chunks)
- [x] Memory stats (chunk count, last updated, size)
🎯 Retrieval Quality
- [x] Metadata filtering (e.g., type=decision | rules | context)
- [x] Similarity scoring in results
- [ ] Hybrid search (semantic + keyword)
- [ ] Return evidence + similarity scores with search results
- [ ] Configurable top_k defaults per project
⚙️ Dev Workflow
- [ ] Auto-ingest on git commit / file change
- [ ] mcp-memory init <project-id> bootstrap command
- [ ] Project templates (PROJECT_CONTEXT.md, DECISIONS.md, AI_RULES.md)
🧠 Advanced RAG (v0.3.x – Differentiators)
- [ ] Hierarchical retrieval (summary-first, detail fallback)
- [ ] Memory compression (old chunks → summaries)
- [ ] Temporal ranking (prefer newer decisions)
- [ ] Scoped retrieval (planner vs coder vs reviewer agents)
- [ ] Query rewrite / expansion for better recall
🏢 Team / SaaS Mode (Optional)
Philosophy: Local-first remains the default. SaaS is an optional deployment mode.
🔐 Auth & Multi-Tenancy
- [ ] Project-level auth (API keys or JWT)
- [ ] Org / team separation
- [ ] Audit logs for memory changes
☁️ Remote Storage Backends (Pluggable)
- [ ] S3-compatible vector store backend
- [ ] Postgres / pgvector backend
- [ ] Sync & Federation (Local ↔ Remote)
🚫 Non-Goals
- ❌ No mandatory cloud dependency
- ❌ No vendor lock-in
- ❌ No chat history as “memory” by default (signal > noise)
- ❌ No model fine-tuning