Local Memory MCP Server
Provides persistent local memory functionality for AI assistants, enabling them to store, retrieve, and search contextual information across conversations with SQLite-based full-text search. All data stays private on your machine while dramatically improving context retention and personalized assistance.
Local Memory MCP Server v2.3.0
Privacy-First AI Memory for True Intelligence
A production-ready Model Context Protocol (MCP) server that provides private, local memory for AI assistants. Your conversations, insights, and accumulated knowledge belong to YOU, secured on your own machine rather than in commercial corporate clouds.
✅ Production Status
Fully tested and production-ready with comprehensive test suite, robust error handling, performance optimization, and clean codebase.
🧠 Why Local Memory Matters
Your AI's memory IS your competitive advantage. Every interaction should compound into something uniquely yours. This transforms generic AI responses into personalized intelligence that grows with your specific needs, projects, and expertise.
🔐 Privacy & Ownership First
- Your Data, Your Control: Every memory stays on YOUR machine
- Zero Cloud Dependencies: No corporate surveillance or data mining
- Compliance Ready: Meet GDPR, HIPAA, and enterprise security requirements
🎯 Intelligence That Grows
- Cumulative Learning: AI remembers context across weeks, months, and years
- Specialized Knowledge: Build domain-specific intelligence in your field
- Pattern Recognition: Discover connections from accumulated knowledge
- Contextual Understanding: AI that truly "knows" your projects and preferences
🛠️ Available Tools
Core Memory Management
💾 store_memory
Store new memories with contextual information and automatic AI embedding generation.
- `content` (string): The content to store
- `importance` (number, optional): Importance score (1-10, default: 5)
- `tags` (string[], optional): Tags for categorization
- `session_id` (string, optional): Session identifier
- `source` (string, optional): Source of the information
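Under the hood, every tool in this list is invoked over MCP's stdio transport as a JSON-RPC 2.0 `tools/call` request. A sketch of a `store_memory` invocation (the argument values are illustrative):

```python
import json

# A store_memory call is an ordinary MCP "tools/call" JSON-RPC 2.0 message,
# sent as one JSON object per line on the server's stdin.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "store_memory",
        "arguments": {
            "content": "Our API endpoint is https://api.example.com/v1",
            "importance": 7,
            "tags": ["api", "infrastructure"],
        },
    },
}
line = json.dumps(request)
```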
🔍 search_memories
Search memories using full-text search or AI-powered semantic search.
- `query` (string): Search query
- `use_ai` (boolean, optional): Enable AI semantic search (default: false)
- `limit` (number, optional): Maximum results (default: 10)
- `min_importance` (number, optional): Minimum importance filter
- `session_id` (string, optional): Filter by session
✏️ update_memory
Update an existing memory by ID.
- `id` (string): Memory ID to update
- `content` (string, optional): New content
- `importance` (number, optional): New importance score
- `tags` (string[], optional): New tags
🗑️ delete_memory
Delete a memory by ID.
- `id` (string): Memory ID to delete
AI-Powered Intelligence
❓ ask_question
Ask natural language questions about your stored memories with AI-powered answers.
- `question` (string): Your question about stored memories
- `session_id` (string, optional): Limit context to a specific session
- `context_limit` (number, optional): Maximum memories for context (default: 5)
Returns: Detailed answer with confidence score and source memories
📊 summarize_memories
Generate AI-powered summaries and extract themes from memories.
- `session_id` (string, optional): Summarize a specific session
- `timeframe` (string, optional): 'today', 'week', 'month', 'all' (default: 'all')
- `limit` (number, optional): Maximum memories to analyze (default: 10)
Returns: Comprehensive summary with key themes and patterns
🔍 analyze_memories
Discover patterns, insights, and connections in your memory collection.
- `query` (string): Analysis focus or question
- `analysis_type` (string, optional): 'patterns', 'insights', 'trends', 'connections' (default: 'insights')
- `session_id` (string, optional): Analyze a specific session
Returns: Detailed analysis with discovered patterns and actionable insights
Relationship & Graph Features
🕸️ discover_relationships
AI-powered discovery of connections between memories.
- `memory_id` (string, optional): Specific memory to analyze relationships for
- `session_id` (string, optional): Filter by session
- `relationship_types` (array, optional): Types to discover ('references', 'contradicts', 'expands', 'similar', 'sequential', 'causes', 'enables')
- `min_strength` (number, optional): Minimum relationship strength (default: 0.5)
- `limit` (number, optional): Maximum relationships to discover (default: 20)
🔗 create_relationship
Manually create relationships between two memories.
- `source_memory_id` (string): ID of the source memory
- `target_memory_id` (string): ID of the target memory
- `relationship_type` (string): Type of relationship
- `strength` (number, optional): Relationship strength (default: 0.8)
- `context` (string, optional): Context or explanation
🗺️ map_memory_graph
Generate graph visualization of memory relationships.
- `memory_id` (string): Central memory for the graph
- `depth` (number, optional): Maximum depth to traverse (default: 2)
- `include_types` (array, optional): Relationship types to include
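The depth-limited traversal behind `map_memory_graph` can be sketched as a breadth-first walk over relationship edges. The in-memory edge map below is a hypothetical stand-in; the server keeps relationships in SQLite:

```python
from collections import deque

def map_graph(edges: dict, start: str, depth: int = 2) -> set:
    """Return memory IDs reachable from `start` within `depth` relationship hops.

    edges: {memory_id: [related_memory_ids]} — illustrative adjacency map.
    """
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if d == depth:
            continue  # do not expand past the requested depth
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, d + 1))
    return seen
```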
Smart Categorization
🏷️ categorize_memory
Automatically categorize memories using AI analysis.
- `memory_id` (string): Memory ID to categorize
- `suggested_categories` (array, optional): Suggested category names
- `create_new_categories` (boolean, optional): Create new categories if needed (default: true)
📁 create_category
Create hierarchical categories for organizing memories.
- `name` (string): Category name
- `description` (string): Category description
- `parent_category_id` (string, optional): Parent category for hierarchy
- `confidence_threshold` (number, optional): Auto-assignment threshold (default: 0.7)
Enhanced Temporal Analysis
📈 analyze_temporal_patterns
Analyze learning patterns and knowledge evolution over time.
- `session_id` (string, optional): Filter by session
- `concept` (string, optional): Specific concept to analyze
- `timeframe` (string): 'week', 'month', 'quarter', 'year'
- `analysis_type` (string): 'learning_progression', 'knowledge_gaps', 'concept_evolution'
📚 track_learning_progression
Track progression stages for specific concepts or skills.
- `concept` (string): Concept or skill to track
- `session_id` (string, optional): Filter by session
- `include_suggestions` (boolean, optional): Include next-step suggestions (default: true)
🔍 detect_knowledge_gaps
Identify knowledge gaps and suggest learning paths.
- `session_id` (string, optional): Filter by session
- `focus_areas` (array, optional): Specific areas to focus on
📅 generate_timeline_visualization
Create timeline visualization of learning journey.
- `memory_ids` (array, optional): Specific memory IDs to include
- `session_id` (string, optional): Filter by session
- `concept` (string, optional): Focus on a specific concept
- `start_date` (string, optional): Timeline start date
- `end_date` (string, optional): Timeline end date
Session Management
📋 list_sessions
List all available sessions with memory counts.
📊 get_session_stats
Get detailed statistics about stored memories.
- `session_id` (string, optional): Specific session to analyze
Returns: Memory counts, average importance, common tags, and usage patterns
📦 Quick Setup
Install
# From source
git clone https://github.com/danieleugenewilliams/local-memory-mcp.git
cd local-memory-mcp
npm install && npm run build
# NPM (coming soon)
npm install -g local-memory-mcp
Claude Desktop
Add to claude_desktop_config.json:
{
"mcpServers": {
"local-memory": {
"command": "npx",
"args": ["local-memory-mcp", "--db-path", "~/.local-memory.db"]
}
}
}
OpenCode
npx local-memory-mcp --db-path ~/.opencode-memory.db
Any MCP Tool
local-memory-mcp --db-path /path/to/memory.db --session-id your-session
🤖 AI Features Setup
Install Ollama
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Required models
ollama pull nomic-embed-text # For semantic search
ollama pull qwen2.5:7b # For Q&A and analysis
Model Options
| Model | Size | Use Case | Performance |
|---|---|---|---|
| `qwen2.5:7b` | ~4.3GB | Recommended | ⭐⭐⭐⭐⭐ |
| `qwen2.5:14b` | ~8GB | Best quality | ⭐⭐⭐⭐⭐ |
| `qwen2.5:3b` | ~2GB | Balanced | ⭐⭐⭐⭐ |
| `phi3.5:3.8b` | ~2.2GB | Efficient | ⭐⭐⭐ |
The server automatically detects Ollama and enables AI features. Without Ollama, it gracefully falls back to traditional text search.
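A sketch of that detection-and-fallback logic, assuming the availability probe hits Ollama's standard `/api/tags` endpoint (the function names here are illustrative, not the server's actual code):

```python
import urllib.error
import urllib.request

def ollama_available(base_url: str = "http://localhost:11434") -> bool:
    """Probe Ollama's /api/tags endpoint; any failure means it is not running."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=1) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def choose_search_mode(use_ai: bool, ai_available: bool) -> str:
    """Graceful degradation: semantic search only when requested AND available."""
    return "semantic" if use_ai and ai_available else "full-text"
```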
💡 Usage Examples
Basic Operations
🗣️ "Remember that our API endpoint is https://api.example.com/v1"
🗣️ "Search for anything related to authentication"
🗣️ "What do you remember about our database schema?"
AI-Powered Features
🗣️ "Summarize what I've learned about TypeScript this week"
🗣️ "Analyze my coding patterns and suggest improvements"
🗣️ "Find relationships between my React and performance memories"
Advanced Analysis
🗣️ "Track my learning progression in machine learning"
🗣️ "What knowledge gaps do I have in backend development?"
🗣️ "Show me a timeline of my project decisions"
⚙️ Configuration
Command Line Options
- `--db-path`: Database file path (default: `~/.local-memory.db`)
- `--session-id`: Session identifier for organizing memories
- `--ollama-url`: Ollama server URL (default: `http://localhost:11434`)
- `--config`: Configuration file path
- `--log-level`: Logging level (debug, info, warn, error)
Configuration File (~/.local-memory/config.json)
{
"database": {
"path": "~/.local-memory/memories.db",
"backupInterval": 86400000
},
"ollama": {
"enabled": true,
"baseUrl": "http://localhost:11434",
"embeddingModel": "nomic-embed-text",
"chatModel": "qwen2.5:7b"
},
"ai": {
"maxContextMemories": 10,
"minSimilarityThreshold": 0.3
}
}
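The `minSimilarityThreshold` setting governs how close a memory's embedding must be to the query embedding to count as a semantic match. A sketch assuming cosine similarity as the score (the server's actual scoring function may differ):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def filter_by_similarity(query_vec: list, memories: list, threshold: float = 0.3) -> list:
    """Keep only memories whose embedding scores at or above the threshold.

    memories: list of (memory_id, embedding) pairs — illustrative shape.
    """
    return [mid for mid, vec in memories if cosine(query_vec, vec) >= threshold]
```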
Environment Variables
export MEMORY_DB_PATH="/custom/path/memories.db"
export OLLAMA_BASE_URL="http://localhost:11434"
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
export OLLAMA_CHAT_MODEL="qwen2.5:7b"
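Assuming environment variables take precedence over the built-in defaults, resolution looks roughly like this (the function is a hypothetical illustration, not the server's code):

```python
import os

def resolve_db_path(default: str = "~/.local-memory.db") -> str:
    # An env var, when set, overrides the built-in default path.
    return os.path.expanduser(os.environ.get("MEMORY_DB_PATH", default))
```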
🏗️ Development
npm run dev # Start development server
npm run build # Build for production
npm test # Run tests
npm run lint # Lint code
🧪 Testing
Comprehensive test suite covering:
- ✅ Memory storage and retrieval
- ✅ Full-text and semantic search
- ✅ Session management
- ✅ AI integration features
- ✅ Relationship discovery
- ✅ Temporal analysis
npm test # Run all tests
npm run test:watch # Watch mode
npm test -- --coverage # Coverage report
🏛️ Architecture
src/
├── index.ts # MCP server and CLI entry point
├── memory-store.ts # SQLite storage with caching
├── ollama-service.ts # AI service integration
├── types.ts # Schemas and TypeScript types
├── logger.ts # Structured logging
├── config.ts # Configuration management
├── performance.ts # Performance monitoring
└── __tests__/ # Comprehensive test suite
Key Features:
- SQLite + FTS5: Fast full-text search with vector embeddings
- AI Integration: Ollama for semantic search and analysis
- Performance: Caching, batch processing, monitoring
- Type Safety: Full TypeScript with runtime validation
- Production Ready: Error handling, logging, configuration
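The FTS5 layer can be illustrated in a few lines with Python's built-in `sqlite3` module; the schema below is a simplified stand-in, not the server's actual one:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# FTS5 virtual table: every listed column is full-text indexed.
db.execute("CREATE VIRTUAL TABLE memories USING fts5(content, tags)")
db.executemany(
    "INSERT INTO memories (content, tags) VALUES (?, ?)",
    [
        ("JWT authentication for the API gateway", "auth api"),
        ("Database schema uses UUID primary keys", "db schema"),
    ],
)
# MATCH runs a ranked full-text query across the indexed columns.
rows = db.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("authentication",),
).fetchall()
```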
🔌 MCP Protocol Compatibility
Full Model Context Protocol (MCP) 0.5.0 compliance:
- ✅ Stdio transport standard
- ✅ All 18 memory management tools
- ✅ Structured responses and error handling
- ✅ Resource discovery and tool registration
Works with Claude Desktop, OpenCode, and any MCP-compatible tool.
🚀 Transform Your AI
Real Impact:
- Development: AI remembers your architecture, patterns, and decisions
- Research: Builds on previous insights and tracks learning progression
- Analysis: Contextual responses based on your domain expertise
- Strategy: Remembers successful approaches and methodologies
The Result: AI that evolves from generic responses to personalized intelligence built on YOUR accumulated knowledge.
🤝 Contributing
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Add tests for new functionality
- Ensure tests pass (`npm test`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push and open a Pull Request
📄 License
MIT License - see LICENSE file for details.
🔄 Changelog
v2.2.0 (Current)
- ✨ Complete Ollama AI integration with semantic search
- 🕸️ Relationship discovery and graph visualization
- 🏷️ Smart categorization with AI analysis
- 📈 Enhanced temporal analysis and learning progression tracking
- 🧪 Comprehensive AI integration test suite
v2.1.0
- 🚀 Production-ready release with performance optimizations
- ✅ Comprehensive test suite and error handling
- ⚙️ Configuration management system
v1.0.0
- ✨ Initial MCP server implementation
- 🔍 SQLite FTS5 full-text search
- 📝 Session management system
🆘 Support
- Documentation: Check setup guides and examples above
- Issues: GitHub Issues
- Discussions: GitHub Discussions
🌟 Why Choose Local Memory MCP?
Because your AI's intelligence should be as unique as you are.
- 🔒 True Privacy: All data stays on your machine
- ⚡ Lightning Fast: Local SQLite + vector search
- 🧠 Semantic Understanding: AI-powered memory retrieval
- 📈 Compound Intelligence: Every interaction builds knowledge
- 🔌 Universal Compatibility: Works with any MCP tool
- 🛠️ Production Ready: Tested, optimized, and reliable
Own your AI's memory. Control your competitive advantage.