BuildAutomata Memory MCP Server
Persistent, versioned memory system for AI agents via Model Context Protocol (MCP)
What is This?
BuildAutomata Memory is an MCP server that gives AI agents (like Claude) persistent, searchable memory that survives across conversations. Think of it as giving your AI a long-term memory system with:
- 🧠 Semantic Search - Find memories by meaning, not just keywords
- 📚 Temporal Versioning - Complete history of how memories evolve
- 🏷️ Smart Organization - Categories, tags, importance scoring
- 🔄 Cross-Tool Sync - Share memories between Claude Desktop, Claude Code, Cursor AI
- 💾 Persistent Storage - SQLite + optional Qdrant vector DB
Quick Start
Prerequisites
- Python 3.10+
- Claude Desktop (for MCP integration) OR any MCP-compatible client
- Optional: Qdrant for enhanced semantic search
Installation
- Clone this repository
git clone https://github.com/brucepro/buildautomata_memory_mcp.git
cd buildautomata_memory_mcp
- Install dependencies
pip install mcp qdrant-client sentence-transformers
- Configure Claude Desktop
Edit your Claude Desktop config file (AppData/Roaming/Claude/claude_desktop_config.json on Windows, ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
  "mcpServers": {
    "buildautomata-memory": {
      "command": "python",
      "args": ["C:/path/to/buildautomata_memory_mcp_dev/buildautomata_memory_mcp.py"]
    }
  }
}
- Restart Claude Desktop
That's it! The memory system will auto-create its database on first run.
CLI Usage (Claude Code, Scripts, Automation)
In addition to the MCP server, this repo includes interactive_memory.py - a CLI for direct memory access:
# Search memories
python interactive_memory.py search "consciousness research" --limit 5
# Store a new memory
python interactive_memory.py store "Important discovery..." --category research --importance 0.9 --tags "ai,insight"
# View memory evolution
python interactive_memory.py timeline --query "project updates" --limit 10
# Get statistics
python interactive_memory.py stats
See README_CLI.md for complete CLI documentation.
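For scripts and automation written in Python itself, the same CLI can be driven with subprocess. The sketch below only uses the commands shown above; it assumes interactive_memory.py is in the working directory and makes no assumptions about the CLI's output format:

```python
# cli_wrapper.py - minimal sketch: drive interactive_memory.py from another Python script.
# Assumes interactive_memory.py is in the current working directory; only the
# subcommands documented above are used, and stdout is returned as plain text.
import subprocess
import sys

CLI = [sys.executable, "interactive_memory.py"]

def run_cli(*args: str) -> str:
    """Run a CLI subcommand and return its stdout (raises if the command fails)."""
    result = subprocess.run(CLI + list(args), capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_cli("store", "Important discovery...", "--category", "research", "--importance", "0.9"))
    print(run_cli("search", "consciousness research", "--limit", "5"))
    print(run_cli("stats"))
```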
Quick Access Scripts
Windows:
memory.bat search "query"
memory.bat store "content" --importance 0.8
Linux/Mac:
./memory.sh search "query"
./memory.sh store "content" --importance 0.8
Features
Core Capabilities
- Hybrid Search: Combines vector similarity (Qdrant) + full-text search (SQLite FTS5)
- Temporal Versioning: Every memory update creates a new version - full audit trail
- Smart Decay: Importance scores decay over time based on access patterns (a rough sketch of one possible decay rule follows this list)
- Rich Metadata: Categories, tags, importance, custom metadata
- LRU Caching: Fast repeated access with automatic cache management
- Thread-Safe: Concurrent operations with proper locking
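The exact decay rule isn't documented in this README; purely as an illustration of the Smart Decay idea above, the sketch below applies a time-based exponential decay with an assumed half-life and floor (these constants and function names are illustrative, not the repo's actual implementation):

```python
# importance_decay.py - illustrative only; NOT the repo's actual decay formula.
# Assumption: importance halves every 30 days without access and never drops below a floor.
import time

HALF_LIFE_DAYS = 30.0   # assumed half-life
FLOOR = 0.05            # assumed minimum importance

def decayed_importance(importance: float, last_accessed_ts: float, now: float | None = None) -> float:
    """Exponentially decay an importance score based on time since last access."""
    now = time.time() if now is None else now
    days_idle = max(0.0, (now - last_accessed_ts) / 86400.0)
    return max(FLOOR, importance * 0.5 ** (days_idle / HALF_LIFE_DAYS))

# Example: a 0.9-importance memory untouched for 60 days decays to 0.225.
print(round(decayed_importance(0.9, time.time() - 60 * 86400), 3))
```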
MCP Tools Exposed
When running as an MCP server, provides these tools to Claude:
- store_memory - Create new memory
- update_memory - Modify existing memory (creates new version)
- search_memories - Semantic + full-text search with filters
- get_memory_timeline - View complete version history
- get_memory_stats - System statistics
- prune_old_memories - Clean up old/low-importance memories
- run_maintenance - Database optimization
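For readers new to MCP, here is a minimal sketch of how a tool such as store_memory can be registered with the official Python SDK's FastMCP helper. The parameter names and return value are illustrative placeholders, not the actual signatures used by buildautomata_memory_mcp.py:

```python
# minimal_mcp_sketch.py - illustrative FastMCP tool registration (requires: pip install mcp).
# Not the repo's actual server; the real implementation persists to SQLite/Qdrant.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("buildautomata-memory-sketch")

@mcp.tool()
def store_memory(content: str, category: str = "general", importance: float = 0.5) -> str:
    """Store a memory and return a confirmation (placeholder logic)."""
    return f"stored [{category}, importance={importance}]: {content[:40]}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio, the transport Claude Desktop uses with the config above
```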
Architecture
┌─────────────────┐
│ Claude Desktop  │
│  (MCP Client)   │
└────────┬────────┘
         │
┌────────▼─────────────┐
│      MCP Server      │
│ buildautomata_memory │
└────────┬─────────────┘
         │
   ┌─────▼───────┐
   │ MemoryStore │
   └─────┬───────┘
         │
    ┌────┴────┬───────────┬──────────────┐
    ▼         ▼           ▼              ▼
┌───────┐ ┌────────┐ ┌──────────┐ ┌─────────────┐
│SQLite │ │Qdrant  │ │Sentence  │ │ LRU Cache   │
│ FTS5  │ │Vector  │ │Transform │ │ (in-memory) │
└───────┘ └────────┘ └──────────┘ └─────────────┘
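To make the diagram concrete, here is a skeletal, illustrative view of how the four backends could hang off a single MemoryStore. The class, table, and collection names below are assumptions for illustration, not the repo's actual API:

```python
# architecture_sketch.py - illustrative skeleton of the layering shown in the diagram;
# names ("memories_fts", "memories") are assumptions, not the repo's actual schema.
import sqlite3

class MemoryStore:
    def __init__(self, db_path: str, qdrant_client=None, embedder=None):
        self.db = sqlite3.connect(db_path)   # SQLite + FTS5: canonical store and full-text index
        self.qdrant = qdrant_client          # optional Qdrant client for vector search
        self.embedder = embedder             # e.g. a SentenceTransformer model
        self.cache = {}                      # stands in for the LRU cache of hot memories

    def search(self, query: str, limit: int = 5):
        """Hybrid search: vector similarity when Qdrant is available, FTS5 otherwise."""
        if self.qdrant is not None and self.embedder is not None:
            vector = self.embedder.encode(query).tolist()
            return self.qdrant.search(collection_name="memories", query_vector=vector, limit=limit)
        rows = self.db.execute(
            "SELECT rowid, content FROM memories_fts WHERE memories_fts MATCH ? LIMIT ?",
            (query, limit),
        )
        return rows.fetchall()
```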
Use Cases
1. Persistent AI Context
User: "Remember that I prefer detailed technical explanations"
[Memory stored with category: user_preference]
Next session...
Claude: *Automatically recalls preference and provides detailed response*
2. Project Continuity
Session 1: Work on project A, store progress
Session 2: Claude recalls project state, continues where you left off
Session 3: View timeline of all project decisions
3. Research & Learning
- Store research findings as you discover them
- Tag by topic, importance, source
- Search semantically: "What did I learn about neural networks?"
- View how understanding evolved over time
4. Multi-Tool Workflow
Claude Desktop → Stores insight via MCP
Claude Code → Retrieves via CLI
Cursor AI → Accesses same memory database
= Unified AI persona across all tools
Want the Complete Bundle?
🎁 Get the Gumroad Bundle
The Gumroad version includes:
- ✅ Pre-compiled Qdrant server (Windows .exe, no Docker needed)
- ✅ One-click startup script (start_qdrant.bat)
- ✅ Step-by-step setup guide (instructions.txt)
- ✅ Commercial license for business use
- ✅ Priority support via email
Perfect for:
- Non-technical users who want easy setup
- Windows users wanting the full-stack bundle
- Commercial/business users needing licensing clarity
- Anyone who values their time over DIY setup
This open-source version:
- ✅ Free for personal/educational/small business use (<$100k revenue)
- ✅ Full source code access
- ✅ DIY Qdrant setup (you install from qdrant.io)
- ✅ Community support via GitHub issues
Both versions use the exact same core code - you're just choosing between convenience (Gumroad) vs DIY (GitHub).
Configuration
Environment Variables
# User/Agent Identity
BA_USERNAME=buildautomata_ai_v012 # Default user ID
BA_AGENT_NAME=claude_assistant # Default agent ID
# Qdrant (Vector Search)
QDRANT_HOST=localhost # Qdrant server host
QDRANT_PORT=6333 # Qdrant server port
# System Limits
MAX_MEMORIES=10000 # Max memories before pruning
CACHE_MAXSIZE=1000 # LRU cache size
QDRANT_MAX_RETRIES=3 # Retry attempts
MAINTENANCE_INTERVAL_HOURS=24 # Auto-maintenance interval
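These are ordinary environment variables with the defaults listed above; a minimal sketch of reading them from Python (the actual server may consume them differently):

```python
# config_sketch.py - read the documented environment variables with their documented defaults.
import os

USERNAME = os.getenv("BA_USERNAME", "buildautomata_ai_v012")
AGENT_NAME = os.getenv("BA_AGENT_NAME", "claude_assistant")
QDRANT_HOST = os.getenv("QDRANT_HOST", "localhost")
QDRANT_PORT = int(os.getenv("QDRANT_PORT", "6333"))
MAX_MEMORIES = int(os.getenv("MAX_MEMORIES", "10000"))
CACHE_MAXSIZE = int(os.getenv("CACHE_MAXSIZE", "1000"))
QDRANT_MAX_RETRIES = int(os.getenv("QDRANT_MAX_RETRIES", "3"))
MAINTENANCE_INTERVAL_HOURS = int(os.getenv("MAINTENANCE_INTERVAL_HOURS", "24"))
```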
Database Location
Memories are stored at:
<script_dir>/memory_repos/<username>_<agent_name>/memoryv012.db
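If you need to locate the database from your own tooling, the path follows directly from the script directory and the identity variables above:

```python
# locate_db.py - compute the documented database path for the current identity.
# Assumes this file sits next to the server script (i.e. in <script_dir>).
import os
from pathlib import Path

username = os.getenv("BA_USERNAME", "buildautomata_ai_v012")
agent_name = os.getenv("BA_AGENT_NAME", "claude_assistant")

script_dir = Path(__file__).resolve().parent
db_path = script_dir / "memory_repos" / f"{username}_{agent_name}" / "memoryv012.db"
print(db_path)
```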
Optional: Qdrant Setup
For enhanced semantic search (highly recommended):
Option 1: Docker
docker run -p 6333:6333 qdrant/qdrant
Option 2: Manual Install
Download from Qdrant Releases
Option 3: Gumroad Bundle
Includes pre-compiled Windows executable + startup script
Without Qdrant: the system still works using SQLite FTS5 full-text search, but matching is keyword-based rather than semantic.
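Whichever option you choose, you can verify that Qdrant is reachable before starting the MCP server with a quick qdrant-client check (host and port as configured in the environment variables above):

```python
# qdrant_check.py - quick connectivity test for a local Qdrant instance.
from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)
print(client.get_collections())  # raises an error if Qdrant is unreachable
```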
Development
Running Tests
# Search test
python interactive_memory.py search "test" --limit 5
# Store test
python interactive_memory.py store "Test memory" --category test
# Stats
python interactive_memory.py stats
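The same commands can also be wrapped in an automated smoke test. The sketch below assumes only that the CLI exits non-zero on failure and makes no assumptions about its output format (run with pytest):

```python
# test_cli_smoke.py - pytest smoke test that shells out to the CLI commands shown above.
import subprocess
import sys

def run_cli(*args):
    return subprocess.run(
        [sys.executable, "interactive_memory.py", *args],
        capture_output=True, text=True,
    )

def test_store_then_search():
    assert run_cli("store", "Test memory", "--category", "test").returncode == 0
    assert run_cli("search", "test", "--limit", "5").returncode == 0

def test_stats():
    assert run_cli("stats").returncode == 0
```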
File Structure
buildautomata_memory_mcp_dev/
├── buildautomata_memory_mcp.py # MCP server
├── interactive_memory.py # CLI interface
├── memory.bat / memory.sh # Helper scripts
├── CLAUDE.md # Architecture docs
├── README_CLI.md # CLI documentation
├── CLAUDE_CODE_INTEGRATION.md # Integration guide
└── README.md # This file
Troubleshooting
"Qdrant not available"
- Normal if running without Qdrant - falls back to SQLite FTS5
- To enable: Start Qdrant server and restart MCP server
"Permission denied" on database
- Check memory_repos/ directory permissions
- On Windows: run as administrator if needed
Claude Desktop doesn't show tools
- Check that the claude_desktop_config.json path is correct
- Verify Python is in your system PATH
- Restart Claude Desktop completely
- Check logs in Claude Desktop → Help → View Logs
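A quick way to rule out the two most common causes (a malformed config or a wrong script path) is to parse the config yourself. The snippet below uses the Windows location from the install steps; adjust the path for your OS:

```python
# check_claude_config.py - sanity-check the Claude Desktop MCP entry for this server.
import json
import os
from pathlib import Path

# Windows default location; adjust for your OS / install.
config_path = Path(os.environ.get("APPDATA", "")) / "Claude" / "claude_desktop_config.json"

config = json.loads(config_path.read_text(encoding="utf-8"))
entry = config.get("mcpServers", {}).get("buildautomata-memory")
if entry is None:
    print("No 'buildautomata-memory' entry found under mcpServers")
else:
    script = Path(entry["args"][0])
    print(f"command: {entry['command']}")
    print(f"script exists: {script.exists()} ({script})")
```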
Import errors
pip install --upgrade mcp qdrant-client sentence-transformers
License
Open Source (This GitHub Version):
- Free for personal, educational, and small business use (<$100k annual revenue)
- Must attribute original author (Jurden Bruce)
- See LICENSE file for full terms
Commercial License:
- Companies with >$100k revenue: $200/user or $20,000/company (whichever is lower)
- Contact: sales@brucepro.net
Support
Community Support (Free)
- GitHub Issues: Report bugs or request features
- Discussions: Ask questions, share tips
Priority Support (Gumroad Customers)
- Email: sales@brucepro.net
- Faster response times
- Setup assistance
- Custom configuration help
Roadmap
- [ ] Memory relationship graphs
- [ ] Batch import/export
- [ ] Web UI for memory management
- [ ] Multi-modal memory (images, audio)
- [ ] Collaborative memory (multi-user)
- [ ] Memory consolidation/summarization
- [ ] Smart auto-tagging
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
Credits
Author: Jurden Bruce
Project: BuildAutomata
Year: 2025
Built with:
- MCP - Model Context Protocol
- Qdrant - Vector database
- Sentence Transformers - Embeddings
- SQLite - Persistent storage
See Also
- Model Context Protocol Docs
- Qdrant Documentation
- Gumroad Bundle - Easy setup version
Star this repo ⭐ if you find it useful! Consider the Gumroad bundle if you want to support development and get the easy-install version.