
MCP AI Memory
A production-ready Model Context Protocol (MCP) server for semantic memory management that enables AI agents to store, retrieve, and manage contextual knowledge across sessions.
Features
- TypeScript - Full type safety with strict mode
- PostgreSQL + pgvector - Vector similarity search with HNSW indexing
- Kysely ORM - Type-safe SQL queries
- Local Embeddings - Uses Transformers.js (no API calls)
- Intelligent Caching - Redis with in-memory fallback for low-latency repeated lookups
- Multi-Agent Support - User context isolation
- Memory Relationships - Graph structure for connected knowledge
- Soft Deletes - Data recovery with deleted_at timestamps
- Clustering - Automatic memory consolidation
- Token Efficient - Embeddings removed from responses
Prerequisites
- Node.js 18+ or Bun
- PostgreSQL with pgvector extension
- Redis (optional - falls back to in-memory cache if not available)
Installation
NPM Package (Recommended for Claude Desktop)
npm install -g mcp-ai-memory
From Source
- Install dependencies:
bun install
- Set up PostgreSQL with pgvector:
CREATE DATABASE mcp_ai_memory;
\c mcp_ai_memory
CREATE EXTENSION IF NOT EXISTS vector;
- Create environment file:
# Create .env with your database credentials
touch .env
- Run migrations:
bun run migrate
Usage
Development
bun run dev
Production
bun run build
bun run start
Troubleshooting
Embedding Dimension Mismatch Error
If you see an error like:
Failed to generate embedding: Error: Embedding dimension mismatch: Model produces 384-dimensional embeddings, but database expects 768
This occurs when the embedding model changes between sessions. To fix:
- Option 1: Reset and Re-embed (Recommended for new installations)
# Clear existing memories and start fresh
psql -d your_database -c "TRUNCATE TABLE memories CASCADE;"
- Option 2: Specify a Consistent Model
Add EMBEDDING_MODEL to your Claude Desktop config:
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "MEMORY_DB_URL": "postgresql://...",
        "EMBEDDING_MODEL": "Xenova/all-mpnet-base-v2"
      }
    }
  }
}
Common models:
- Xenova/all-mpnet-base-v2 (768 dimensions - default, best quality)
- Xenova/all-MiniLM-L6-v2 (384 dimensions - smaller/faster)
- Option 3: Run Migration for Flexible Dimensions
If you're using the source version:
bun run migrate
This allows mixing different embedding dimensions in the same database.
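If you are unsure which dimension a model produces, you can check it locally with Transformers.js before pointing the server at it. A minimal sketch, assuming the @xenova/transformers package (the model name is only an example):

// Sketch: print the embedding dimension a Transformers.js model produces.
// The model name is an example; substitute the one you plan to configure.
import { pipeline } from '@xenova/transformers';

const model = 'Xenova/all-MiniLM-L6-v2';
const extractor = await pipeline('feature-extraction', model);
const output = await extractor('dimension check', { pooling: 'mean', normalize: true });
console.log(`${model} produces ${output.data.length}-dimensional embeddings`);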
Database Connection Issues
Ensure your PostgreSQL has the pgvector extension:
CREATE EXTENSION IF NOT EXISTS vector;
Claude Desktop Integration
Quick Setup (NPM)
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "mcp-ai-memory"],
"env": {
"DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db"
}
}
}
}
With Optional Redis Cache
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "mcp-ai-memory"],
"env": {
"DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db",
"REDIS_URL": "redis://localhost:6379",
"EMBEDDING_MODEL": "Xenova/all-MiniLM-L6-v2",
"LOG_LEVEL": "info"
}
}
}
}
Environment Variables
| Variable | Description | Default |
|---|---|---|
| DATABASE_URL | PostgreSQL connection string | Required |
| REDIS_URL | Redis connection string (optional) | None - uses in-memory cache |
| EMBEDDING_MODEL | Transformers.js model | Xenova/all-MiniLM-L6-v2 |
| LOG_LEVEL | Logging level | info |
| CACHE_TTL | Cache TTL in seconds | 3600 |
| MAX_MEMORIES_PER_QUERY | Max results per search | 10 |
| MIN_SIMILARITY_SCORE | Min similarity threshold | 0.5 |
Available Tools
Core Operations
- memory_store - Store memories with embeddings
- memory_search - Semantic similarity search
- memory_list - List memories with filtering
- memory_update - Update memory metadata
- memory_delete - Delete memories
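These tools are invoked like any other MCP tool. Below is a minimal sketch using the official TypeScript MCP SDK; the argument names (content, type, query, limit) are assumptions, so check the schemas the server actually exposes (for example via tools/list).

// Sketch: calling memory_store and memory_search from an MCP client.
// Argument names are assumptions; inspect the server's tool schemas for the real ones.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'npx',
  args: ['-y', 'mcp-ai-memory'],
  env: { DATABASE_URL: 'postgresql://username:password@localhost:5432/memory_db' },
});
const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

// Store a memory, then search for it semantically.
await client.callTool({
  name: 'memory_store',
  arguments: { content: 'User prefers TypeScript strict mode', type: 'preference' },
});
const results = await client.callTool({
  name: 'memory_search',
  arguments: { query: 'coding preferences', limit: 5 },
});
console.log(results);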
Advanced Operations
- memory_batch - Bulk store memories
- memory_batch_delete - Bulk delete memories by IDs
- memory_graph_search - Traverse relationships
- memory_consolidate - Cluster similar memories
- memory_stats - Database statistics
Resources
- memory://stats - Database statistics
- memory://types - Available memory types
- memory://tags - All unique tags
- memory://relationships - Memory relationships
- memory://clusters - Memory clusters
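Resources are fetched with a resource read rather than a tool call. A small sketch, reusing a client connected as in the tools example above:

// Sketch: reading the memory://stats resource with an already-connected MCP client.
import type { Client } from '@modelcontextprotocol/sdk/client/index.js';

async function printMemoryStats(client: Client): Promise<void> {
  const result = await client.readResource({ uri: 'memory://stats' });
  for (const item of result.contents) {
    if ('text' in item) console.log(item.text);
  }
}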
Prompts
- load-context - Load relevant context for a task
- memory-summary - Generate topic summaries
- conversation-context - Load conversation history
Architecture
src/
├── server.ts # MCP server implementation
├── types/ # TypeScript definitions
├── schemas/ # Zod validation schemas
├── services/ # Business logic
├── database/ # Kysely migrations and client
└── config/ # Configuration management
Environment Variables
# Required
MEMORY_DB_URL=postgresql://user:password@localhost:5432/mcp_ai_memory
# Optional - Caching (falls back to in-memory if Redis unavailable)
REDIS_URL=redis://localhost:6379
CACHE_TTL=3600 # 1 hour default cache
EMBEDDING_CACHE_TTL=86400 # 24 hours for embeddings
SEARCH_CACHE_TTL=3600 # 1 hour for search results
MEMORY_CACHE_TTL=7200 # 2 hours for individual memories
# Optional - Model & Performance
EMBEDDING_MODEL=Xenova/all-mpnet-base-v2
LOG_LEVEL=info
MAX_CONTENT_SIZE=1048576
DEFAULT_SEARCH_LIMIT=20
DEFAULT_SIMILARITY_THRESHOLD=0.7
# Optional - Async Processing (requires Redis)
ENABLE_ASYNC_PROCESSING=true # Enable background job processing
BULL_CONCURRENCY=3 # Worker concurrency
ENABLE_REDIS_CACHE=true # Enable Redis caching
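If you're adapting the source, these variables are typically validated once at startup (roughly what the config/ directory is for). A Zod-based sketch; the defaults shown are illustrative, not necessarily the server's:

// Sketch: validating environment variables with Zod at startup.
// Defaults are illustrative; see config/ in the source for the real values.
import { z } from 'zod';

const envSchema = z.object({
  MEMORY_DB_URL: z.string().min(1),
  REDIS_URL: z.string().optional(),
  EMBEDDING_MODEL: z.string().default('Xenova/all-mpnet-base-v2'),
  CACHE_TTL: z.coerce.number().default(3600),
  DEFAULT_SEARCH_LIMIT: z.coerce.number().default(20),
  DEFAULT_SIMILARITY_THRESHOLD: z.coerce.number().default(0.7),
  ENABLE_ASYNC_PROCESSING: z
    .string()
    .default('false')
    .transform((value) => value === 'true'),
});

export const config = envSchema.parse(process.env);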
Caching Architecture
The server implements a two-tier caching strategy:
- Redis Cache (if available) - Distributed, persistent caching
- In-Memory Cache (fallback) - Local NodeCache for when Redis is unavailable
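A rough sketch of that lookup path (class and method names are illustrative, not the server's actual implementation):

// Sketch: two-tier cache that prefers Redis and falls back to NodeCache.
import Redis from 'ioredis';
import NodeCache from 'node-cache';

class TwoTierCache {
  private redis?: Redis;
  private local = new NodeCache({ stdTTL: 3600 });

  constructor(redisUrl?: string) {
    if (redisUrl) {
      this.redis = new Redis(redisUrl);
      this.redis.on('error', () => {}); // ignore Redis errors; reads fall back to the local tier
    }
  }

  async get<T>(key: string): Promise<T | undefined> {
    if (this.redis) {
      const hit = await this.redis.get(key).catch(() => null);
      if (hit !== null) return JSON.parse(hit) as T;
    }
    return this.local.get<T>(key);
  }

  async set(key: string, value: unknown, ttlSeconds = 3600): Promise<void> {
    if (this.redis) {
      await this.redis.set(key, JSON.stringify(value), 'EX', ttlSeconds).catch(() => {});
    }
    this.local.set(key, value, ttlSeconds); // keep the local tier warm as a fallback
  }
}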
Async Job Processing
When Redis is available and ENABLE_ASYNC_PROCESSING=true is set, the server uses BullMQ for background job processing:
Features
- Async Embedding Generation: Offloads CPU-intensive embedding generation to background workers
- Batch Import: Processes large memory imports without blocking the main server
- Memory Consolidation: Runs clustering and merging operations in the background
- Automatic Retries: Failed jobs are retried with exponential backoff
- Dead Letter Queue: Permanently failed jobs are tracked for manual intervention
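Under the hood this is standard BullMQ usage. A minimal sketch of the producer and worker sides; the queue name, payload shape, and retry settings are illustrative, not the server's exact setup:

// Sketch: a BullMQ queue and worker for background embedding jobs.
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null, // required by BullMQ workers
});

// Producer: enqueue embedding work instead of blocking the MCP request.
const embeddingQueue = new Queue('embeddings', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 1000 }, // automatic retries with backoff
  },
});
await embeddingQueue.add('embed-memory', { memoryId: '123', content: 'text to embed' });

// Worker: process jobs with bounded concurrency (see BULL_CONCURRENCY).
const worker = new Worker(
  'embeddings',
  async (job) => {
    // ...generate the embedding here and write it back to the memories table
    return { memoryId: job.data.memoryId };
  },
  { connection, concurrency: Number(process.env.BULL_CONCURRENCY ?? 3) },
);
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed: ${err.message}`));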
Running Workers
# Start all workers
bun run workers
# Or start individual workers
bun run worker:embedding # Embedding generation worker
bun run worker:batch # Batch import and consolidation worker
# Test async processing
bun run test:async
Queue Monitoring
The memory_stats tool includes queue statistics when async processing is enabled:
- Active, waiting, completed, and failed job counts
- Processing rates and performance metrics
- Worker health status
Cache Invalidation
- Memory updates/deletes automatically invalidate relevant caches
- Search results are cached with query+filter combinations
- Embeddings are cached for 24 hours (configurable)
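In Redis terms this is just keyspace cleanup. A sketch of the idea; the key prefixes are illustrative and may not match the server's actual key scheme:

// Sketch: invalidating cached entries after a memory is updated or deleted.
// Key prefixes are illustrative.
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

async function invalidateMemory(memoryId: string): Promise<void> {
  // Drop the cached memory itself.
  await redis.del(`memory:${memoryId}`);

  // Drop cached search results, since any of them may include this memory.
  const stream = redis.scanStream({ match: 'search:*', count: 100 });
  for await (const keys of stream as AsyncIterable<string[]>) {
    if (keys.length > 0) await redis.del(...keys);
  }
}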
Development
Type Checking
bun run typecheck
Linting
bun run lint
Implementation Status
✅ Fully Integrated Features
- DBSCAN Clustering: Advanced clustering algorithm for memory consolidation
- Smart Compression: Automatic compression for large memories (>100KB)
- Context Window Management: Token counting and intelligent truncation
- Input Sanitization: Comprehensive validation and sanitization
- All Workers Active: Embedding, batch, and clustering workers all operational
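For the compression piece, the behavior described above amounts to a simple size check before storage. A hedged sketch with Node's built-in zlib; the 100KB threshold mirrors the note above, everything else is illustrative:

// Sketch: compress memory content above a size threshold.
// The server's actual compression scheme may differ.
import { gzipSync, gunzipSync } from 'node:zlib';

const COMPRESSION_THRESHOLD = 100 * 1024; // 100KB

function maybeCompress(content: string): { data: Buffer; compressed: boolean } {
  const raw = Buffer.from(content, 'utf8');
  if (raw.byteLength <= COMPRESSION_THRESHOLD) return { data: raw, compressed: false };
  return { data: gzipSync(raw), compressed: true };
}

function decompress(data: Buffer, compressed: boolean): string {
  return (compressed ? gunzipSync(data) : data).toString('utf8');
}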
Testing
The project includes a comprehensive test suite covering:
- Memory service operations (store, search, update, delete)
- Input validation and sanitization
- Clustering and consolidation
- Compression for large content
Run tests with bun test.
License
MIT