# 🐕 PomPom-AI: Intelligent Memory System for Qodo AI
PomPom-AI (PomPom Artificial Intelligence) - A smart MCP (Model Context Protocol) server that provides persistent memory capabilities for Qodo AI. Just like Pompompurin's friendly and reliable nature, PomPom-AI remembers everything important and helps your AI assistant provide personalized, intelligent responses across all conversations.
## 🎯 Personal Setup for Qodo AI Integration
This repository is configured for personal use with Qodo AI, providing long-term memory storage and retrieval capabilities.
### Qodo AI MCP Configuration

```json
{
  "pompom-ai": {
    "url": "http://localhost:8051/sse"
  }
}
```
## 🚀 Quick Start Guide
### Prerequisites

- Python 3.12+
- OpenRouter API key (for Claude 3.7 Sonnet)
- Supabase PostgreSQL database (configured)
### Installation

1. Clone and set up:

   ```bash
   git clone <your-repo-url>
   cd pompom-ai
   pip install -e .
   ```

2. Configure the environment: copy `.env.example` to `.env` and update it with your credentials:

   ```env
   TRANSPORT=sse
   HOST=0.0.0.0
   PORT=8051
   LLM_PROVIDER=openrouter
   LLM_BASE_URL=https://openrouter.ai/api/v1
   LLM_API_KEY=your-openrouter-api-key
   LLM_CHOICE=anthropic/claude-3.7-sonnet
   DATABASE_URL=your-supabase-postgresql-url
   ```

3. Start the server:

   ```bash
   python src/main.py
   ```

4. Test connectivity:

   ```powershell
   .\test_server.ps1
   ```
## 🧠 How It Works - Detailed Explanation
### Architecture Overview

```
Qodo AI ←→ MCP Protocol ←→ PomPom-AI Server ←→ Mem0 ←→ ChromaDB + PostgreSQL
```
### Component Breakdown
#### 1. MCP Server (`src/main.py`)
- FastMCP Framework: Handles MCP protocol communication
- SSE Transport: Server-Sent Events for real-time communication on port 8051
- Lifespan Management: Initializes and manages Mem0 client connection
- Three Core Tools: Exposes the three memory operations to Qodo AI (a skeleton of the server is sketched below)
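As a rough illustration of how these pieces fit together, here is a minimal FastMCP skeleton (a sketch, not the repo's actual `src/main.py`; it assumes the official `mcp` Python SDK, and `get_mem0_client()` is a hypothetical helper like the one sketched under Memory Configuration below):

```python
from contextlib import asynccontextmanager

from mcp.server.fastmcp import Context, FastMCP

from utils import get_mem0_client  # hypothetical helper, sketched below


@asynccontextmanager
async def mem0_lifespan(server: FastMCP):
    """Build the Mem0 client once at startup and share it with every tool call."""
    yield {"mem0": get_mem0_client()}


mcp = FastMCP("pompom-ai", lifespan=mem0_lifespan)

# ... the three @mcp.tool() definitions go here (sketched below) ...

if __name__ == "__main__":
    mcp.run(transport="sse")  # Server-Sent Events transport on the configured host/port
```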
#### 2. Memory Tools Available to Qodo AI
##### `save_memory(text: str)`
- Purpose: Store any information in long-term memory
- Usage: When you tell Qodo AI something important to remember
- Process (see the sketch below):
  - Receives text from Qodo AI
  - Processes it through Claude 3.7 Sonnet for fact extraction
  - Generates embeddings using ChromaDB's built-in model
  - Stores the result in both ChromaDB (vectors) and PostgreSQL (metadata)
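Continuing the skeleton above, a minimal sketch of this tool (the `lifespan_context` wiring and the `user_id` value are assumptions; `Memory.add` is Mem0's real entry point for storing text):

```python
@mcp.tool()
async def save_memory(text: str, ctx: Context) -> str:
    """Store a piece of information in long-term memory."""
    mem0 = ctx.request_context.lifespan_context["mem0"]
    mem0.add(text, user_id="user")  # Mem0 extracts facts, embeds, and persists them
    return f"Successfully saved memory: {text}"
```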
##### `get_all_memories()`
- Purpose: Retrieve all stored memories for context
- Usage: When Qodo AI needs complete memory context
- Process (see the sketch below):
  - Queries Mem0 for all memories associated with the default user
  - Returns paginated results (50 items by default)
  - Provides full context for conversation continuity
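Again continuing the skeleton, roughly (the result shape varies across Mem0 versions, so this sketch just serializes whatever comes back):

```python
import json

@mcp.tool()
async def get_all_memories(ctx: Context) -> str:
    """Return every memory stored for the default user as JSON."""
    mem0 = ctx.request_context.lifespan_context["mem0"]
    results = mem0.get_all(user_id="user")
    return json.dumps(results, indent=2, default=str)
```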
##### `search_memories(query: str, limit: int = 3)`
- Purpose: Find relevant memories using semantic search
- Usage: When Qodo AI needs specific information
- Process (see the sketch below):
  - Converts the query to embeddings
  - Performs a vector similarity search in ChromaDB
  - Returns the most relevant memories, ranked by relevance
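And the search tool in the same hedged style (`Memory.search` is real Mem0 API; the ranking comes back from the vector store):

```python
@mcp.tool()
async def search_memories(query: str, ctx: Context, limit: int = 3) -> str:
    """Return the memories most relevant to the query, ranked by similarity."""
    mem0 = ctx.request_context.lifespan_context["mem0"]
    results = mem0.search(query, user_id="user", limit=limit)
    return json.dumps(results, indent=2, default=str)
```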
#### 3. Memory Configuration (`src/utils.py`)
##### LLM Configuration (OpenRouter + Claude)

```python
llm_config = {
    "provider": "openai",  # OpenRouter exposes an OpenAI-compatible API
    "config": {
        "model": "anthropic/claude-3.7-sonnet",
        "temperature": 0.2,  # low temperature for consistent memory processing
        "max_tokens": 1500,
    },
}
```
##### Embedding Configuration (ChromaDB Built-in)
- No external API calls: Uses ChromaDB's default embedding function
- Local processing: Embeddings generated locally for privacy
- No additional costs: No embedding API fees
##### Vector Store Configuration (ChromaDB)

```python
vector_store_config = {
    "provider": "chroma",
    "config": {
        "collection_name": "mem0_memories",
        "path": "./chroma_db",  # local SQLite database
    },
}
```
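Putting the two fragments together, `get_mem0_client()` might look roughly like this (a sketch, assuming Mem0's `Memory.from_config`; the embedder section is deliberately left out because the exact local-embedding settings in `src/utils.py` are not shown here):

```python
from mem0 import Memory


def get_mem0_client() -> Memory:
    """Assemble the Mem0 client from the LLM and vector store configs above."""
    config = {
        "llm": {
            "provider": "openai",  # OpenRouter exposes an OpenAI-compatible API
            "config": {
                "model": "anthropic/claude-3.7-sonnet",
                "temperature": 0.2,
                "max_tokens": 1500,
            },
        },
        "vector_store": {
            "provider": "chroma",
            "config": {
                "collection_name": "mem0_memories",
                "path": "./chroma_db",  # local SQLite database
            },
        },
        # "embedder": {...}  # embeddings are generated locally; exact settings not shown here
    }
    return Memory.from_config(config)
```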
#### 4. Data Flow When You Use Qodo AI
**Saving a Memory:**

```
You: "Remember that I prefer PowerShell for automation tasks"
  ↓
Qodo AI → MCP Protocol → PomPom-AI → save_memory("I prefer PowerShell for automation tasks")
  ↓
Claude 3.7 Sonnet processes the text and extracts key facts
  ↓
ChromaDB generates embeddings locally
  ↓
Stored in: ChromaDB (vectors) + PostgreSQL (metadata)
  ↓
PomPom-AI response: "Successfully saved memory: I prefer PowerShell for automation tasks"
```
**Retrieving Memories:**

```
You: "What do you know about my preferences?"
  ↓
Qodo AI → MCP Protocol → PomPom-AI → search_memories("preferences", limit=5)
  ↓
ChromaDB performs a vector similarity search
  ↓
PomPom-AI returns relevant memories about your preferences
  ↓
Qodo AI uses this context to provide personalized responses
```
#### 5. Storage Architecture
##### ChromaDB (Local - `./chroma_db/`)
- Vector embeddings: Semantic representations of memories
- Fast similarity search: Sub-second query responses
- Local SQLite: No external dependencies
- Collection: `mem0_memories`
##### PostgreSQL (Supabase)
- Metadata storage: User associations, timestamps
- Structured data: Relationships and memory organization
- Cloud backup: Persistent storage across devices
- Scalability: Handles large memory datasets
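For a quick peek at what lives in the local store, a small illustrative script using the `chromadb` client directly (the collection name and path come from the config above; Mem0 owns this collection, so treat it as read-only):

```python
import chromadb

# Open the same on-disk store that Mem0 writes to
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_collection("mem0_memories")

print(f"Stored memories: {collection.count()}")

# Sample a few raw entries (document text plus metadata)
sample = collection.peek(limit=3)
for doc, meta in zip(sample["documents"], sample["metadatas"]):
    print(f"- {doc} | {meta}")
```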
## 🔧 Memory Management Tools
### View Current Memories

```bash
# Python script
python show_current_memories.py

# PowerShell script
.\show_memories.ps1
```
### Visual Dashboard

```bash
# Streamlit dashboard
streamlit run chroma_viewer.py

# HTML dashboard with live data
python dashboard_server.py
```
### Server Testing

```powershell
# Test server connectivity
.\test_server.ps1
```
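If PowerShell isn't handy, a rough Python equivalent (hypothetical, not a script in this repo) that checks whether the SSE endpoint answers on port 8051:

```python
import requests  # third-party: pip install requests

# An SSE endpoint holds the connection open, so stream the response
# and only inspect the status code and content type.
resp = requests.get("http://localhost:8051/sse", stream=True, timeout=5)
print(resp.status_code, resp.headers.get("content-type"))  # expect 200, text/event-stream
resp.close()
```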
## 📊 Memory Analytics
The system tracks:
- Total memories stored
- Memory categories/collections
- Average memory length
- Search frequency patterns
- Memory creation timestamps
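A hedged sketch of how some of these stats could be computed from Mem0 itself (`get_all` is real Mem0 API; the `user_id` value, the helper import, and the exact result shape are assumptions):

```python
from statistics import mean

from utils import get_mem0_client  # hypothetical helper sketched earlier

mem0 = get_mem0_client()
response = mem0.get_all(user_id="user")
# Newer Mem0 versions wrap results in {"results": [...]}; older ones return a list.
memories = response["results"] if isinstance(response, dict) else response

print(f"Total memories: {len(memories)}")
if memories:
    lengths = [len(item["memory"]) for item in memories]
    print(f"Average memory length: {mean(lengths):.0f} characters")
```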
## 🔒 Privacy & Security
- Local embeddings: No data sent to external embedding APIs
- Encrypted storage: PostgreSQL with SSL
- Local processing: ChromaDB runs entirely on your machine
- API key security: Environment variables only
## 🎛️ Configuration Options
### Memory Processing
- Temperature: 0.2 (consistent fact extraction)
- Max tokens: 1500 (detailed memory processing)
- Model: Claude 3.7 Sonnet (high-quality reasoning)
### Search Parameters
- Default limit: 3 memories per search
- Similarity threshold: Automatic (ChromaDB optimized)
- Collection scope: Single user (isolated memories)
## 🚀 Usage Patterns with Qodo AI
### Personal Information
"Remember that I work as a software engineer and prefer Python and PowerShell"
"I live in timezone UTC+3"
"My favorite IDE is VS Code"
### Project Context
"I'm working on an MCP server project using FastMCP and Mem0"
"The project uses OpenRouter for LLM and ChromaDB for vectors"
"Port 8051 is used for the SSE transport"
### Preferences & Settings
"I prefer detailed explanations with code examples"
"Always use PowerShell for Windows automation tasks"
"Format code blocks with syntax highlighting"
## 🔄 Maintenance
### Regular Tasks
- Monitor ChromaDB size (`./chroma_db/`)
- Check PostgreSQL connection health
- Review memory quality and relevance
- Update API keys as needed
### Troubleshooting
- Server won't start: check the `.env` configuration
- Memory not saving: verify the PostgreSQL connection
- Search not working: restart the server to refresh ChromaDB
- Qodo AI can't connect: confirm port 8051 is open (see the check below)
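To confirm the port, a minimal Python check (host and port taken from the `.env` above):

```python
import socket

# Plain TCP connect to the server port; create_connection raises OSError on failure.
try:
    with socket.create_connection(("localhost", 8051), timeout=2):
        print("Port 8051 is open - PomPom-AI is reachable.")
except OSError as err:
    print(f"Cannot reach port 8051: {err}")
```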
## 📈 Performance Optimization
- ChromaDB: Optimized for <1000 memories per collection
- PostgreSQL: Indexed for fast metadata queries
- Memory size: Optimal range 50-500 characters per memory
- Search speed: Sub-100ms for typical queries
## 🎯 Best Practices
- Memory Quality: Store specific, actionable information
- Regular Cleanup: Remove outdated or irrelevant memories
- Categorization: Use consistent language for similar topics
- Testing: Regularly test memory retrieval accuracy
- Backup: PostgreSQL provides automatic cloud backup
This system transforms Qodo AI into a truly personalized assistant that remembers your preferences, project context, and important information across all conversations.
## 🐕 Why "PomPom-AI"?
Just like Pompompurin is known for being:
- 🤗 Friendly & Reliable - PomPom-AI is always there to help remember what's important
- 🧠 Smart & Attentive - Intelligently processes and organizes your memories
- 💛 Loyal Companion - Grows smarter about your preferences over time
- 🎯 Focused & Efficient - Quickly finds exactly what you need when you need it
PomPom-AI = PomPom (friendly like Pompompurin) + AI (Artificial Intelligence)