
rag-memory-mcp
An advanced MCP server for RAG-enabled memory through a knowledge graph with vector search capabilities. This server extends the basic memory concepts with semantic search, document processing, and hybrid retrieval for more intelligent memory management.
Inspired by: Knowledge Graph Memory Server from the Model Context Protocol project.
Note: This server is designed to run locally alongside MCP clients (e.g., Claude Desktop, VS Code) and requires local file system access for database storage.
✨ Key Features
- 🧠 Knowledge Graph Memory: Persistent entities, relationships, and observations
- 🔍 Vector Search: Semantic similarity search using sentence transformers
- 📄 Document Processing: RAG-enabled document chunking and embedding
- 🔗 Hybrid Search: Combines vector similarity with graph traversal
- ⚡ SQLite Backend: Fast local storage with sqlite-vec for vector operations
- 🎯 Entity Extraction: Automatic term extraction from documents
Tools
This server provides comprehensive memory management through the Model Context Protocol (MCP):
📚 Document Management
- storeDocument: Store documents with metadata for processing
- chunkDocument: Create text chunks with configurable parameters
- embedChunks: Generate vector embeddings for semantic search
- extractTerms: Extract potential entity terms from documents
- linkEntitiesToDocument: Create explicit entity-document associations
- deleteDocuments: Remove documents and associated data
- listDocuments: View all stored documents with metadata
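The Usage Example later in this README walks through the core store → chunk → embed flow; the housekeeping tools can be called in the same illustrative style. A minimal sketch (the ids argument name is an assumption, not taken from the tool schema):
// List stored documents, then remove one that is no longer needed
// (the "ids" argument name is assumed for illustration)
const docs = await listDocuments({});
await deleteDocuments({ ids: ["ml_intro"] });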
🧠 Knowledge Graph
- createEntities: Create new entities with observations and types
- createRelations: Establish relationships between entities
- addObservations: Add contextual information to existing entities
- deleteEntities: Remove entities and their relationships
- deleteRelations: Remove specific relationships
- deleteObservations: Remove specific observations from entities
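Creating relations and adding observations are sketched under Core Concepts below; the deletion tools operate on the same shapes. A hedged sketch in the style of the Usage Example (the relations and entityNames argument names are assumptions):
// Remove a specific relationship, then an entity along with its remaining relations
// (argument names are assumed for illustration)
await deleteRelations({
  relations: [{ from: "React", to: "JavaScript", relationType: "BUILT_WITH" }]
});
await deleteEntities({ entityNames: ["React"] });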
🔍 Search & Retrieval
- hybridSearch: Advanced search combining vector similarity and graph traversal
- searchNodes: Find entities by name, type, or observation content
- openNodes: Retrieve specific entities and their relationships
- readGraph: Get complete knowledge graph structure
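hybridSearch is demonstrated in the Usage Example; the graph can also be queried directly. A minimal sketch, assuming searchNodes accepts a free-text query and openNodes a list of entity names (both argument names are assumptions):
// Keyword search over entity names, types, and observations,
// then fetch specific entities together with their relationships
const matches = await searchNodes({ query: "machine learning" });
const nodes = await openNodes({ names: ["Machine Learning"] });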
📊 Analytics
- getKnowledgeGraphStats: Comprehensive statistics about the knowledge base
Usage Scenarios
This server is ideal for scenarios requiring intelligent memory and document understanding:
- Research and Documentation: Store, process, and intelligently retrieve research papers
- Knowledge Base Construction: Build interconnected knowledge from documents
- Conversational Memory: Remember context across chat sessions with semantic understanding
- Content Analysis: Extract and relate concepts from large document collections
- Intelligent Assistance: Provide contextually aware responses based on stored knowledge
Client Configuration
This section explains how to configure MCP clients to use the rag-memory-mcp server.
Usage with Claude Desktop / Cursor
Add the following configuration to your claude_desktop_config.json (Claude Desktop) or mcp.json (Cursor):
{
  "mcpServers": {
    "rag-memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"]
    }
  }
}
With specific version:
{
  "mcpServers": {
    "rag-memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp@1.0.0"]
    }
  }
}
With custom database path:
{
  "mcpServers": {
    "rag-memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"],
      "env": {
        "MEMORY_DB_PATH": "/path/to/custom/memory.db"
      }
    }
  }
}
Usage with VS Code
Add the following configuration to your User Settings (JSON) file or .vscode/mcp.json:
{
  "mcp": {
    "servers": {
      "rag-memory-mcp": {
        "command": "npx",
        "args": ["-y", "rag-memory-mcp"]
      }
    }
  }
}
Core Concepts
Entities
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "PERSON", "CONCEPT", "TECHNOLOGY")
- A list of observations (contextual information)
Example:
{
  "name": "Machine Learning",
  "entityType": "CONCEPT",
  "observations": [
    "Subset of artificial intelligence",
    "Focuses on learning from data",
    "Used in recommendation systems"
  ]
}
Relations
Relations define directed connections between entities, describing how they interact:
Example:
{
  "from": "React",
  "to": "JavaScript",
  "relationType": "BUILT_WITH"
}
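A relation object like this is typically passed to the createRelations tool. A minimal sketch in the style of the Usage Example (the relations wrapper key mirrors the entities key used there and is an assumption):
// Create a directed relationship between two existing entities
await createRelations({
  relations: [
    { from: "React", to: "JavaScript", relationType: "BUILT_WITH" }
  ]
});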
Observations
Observations are discrete pieces of information about entities:
- Stored as strings
- Attached to specific entities
- Can be added or removed independently
- Should be atomic (one fact per observation)
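Observations are added after the fact with the addObservations tool. A hedged sketch (the entityName and contents field names are assumptions about the per-entity update shape):
// Attach an additional atomic fact to an existing entity
await addObservations({
  observations: [
    { entityName: "Machine Learning", contents: ["Used in recommendation systems"] }
  ]
});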
Documents & Vector Search
Documents are processed through:
- Storage: Raw text with metadata
- Chunking: Split into manageable pieces
- Embedding: Convert to vector representations
- Linking: Associate with relevant entities
This enables hybrid search that combines:
- Vector similarity (semantic matching)
- Graph traversal (conceptual relationships)
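The linking step can be made explicit with linkEntitiesToDocument, so graph traversal can reach a document's chunks from its entities. A minimal sketch, assuming the tool takes a document id plus a list of entity names (argument names are assumptions):
// Associate an already-stored document with the entities it discusses
await linkEntitiesToDocument({
  documentId: "ml_intro",
  entityNames: ["Machine Learning"]
});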
Environment Variables
- MEMORY_DB_PATH: Path to the SQLite database file (default: memory.db in the server directory)
Development Setup
This section is for developers looking to modify or contribute to the server.
Prerequisites
- Node.js: Check package.json for version compatibility
- npm: Used for package management
Installation (Developers)
1. Clone the repository:
   git clone https://github.com/ttommyth/rag-memory-mcp.git
   cd rag-memory-mcp
2. Install dependencies:
   npm install
Building
npm run build
Running (Development)
npm run watch # For development with auto-rebuild
Development Commands
- Build: npm run build
- Watch: npm run watch
- Prepare: npm run prepare
Usage Example
Here's a typical workflow for building and querying a knowledge base:
// 1. Store a document
await storeDocument({
  id: "ml_intro",
  content: "Machine learning is a subset of AI...",
  metadata: { type: "educational", topic: "ML" }
});

// 2. Process the document
await chunkDocument({ documentId: "ml_intro" });
await embedChunks({ documentId: "ml_intro" });

// 3. Extract and create entities
const terms = await extractTerms({ documentId: "ml_intro" });
await createEntities({
  entities: [
    {
      name: "Machine Learning",
      entityType: "CONCEPT",
      observations: ["Subset of artificial intelligence", "Learns from data"]
    }
  ]
});

// 4. Search with hybrid approach
const results = await hybridSearch({
  query: "artificial intelligence applications",
  limit: 10,
  useGraph: true
});
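The workflow can be rounded off by checking how the knowledge base has grown. A sketch continuing the example above (the structure of the returned statistics is not documented here, so the raw result is simply logged):
// 5. Inspect knowledge base statistics
const stats = await getKnowledgeGraphStats({});
console.log(stats);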
System Prompt Suggestions
For optimal memory utilization, consider using this system prompt:
You have access to a RAG-enabled memory system with knowledge graph capabilities. Follow these guidelines:
1. **Information Storage**:
- Store important documents using the document management tools
- Create entities for people, concepts, organizations, and technologies
- Build relationships between related concepts
2. **Information Retrieval**:
- Use hybrid search for comprehensive information retrieval
- Leverage both semantic similarity and graph relationships
- Search entities before creating duplicates
3. **Memory Maintenance**:
- Add observations to enrich entity context
- Link documents to relevant entities for better discoverability
- Use statistics to monitor knowledge base growth
4. **Processing Workflow**:
- Store → Chunk → Embed → Extract → Link
- Always process documents completely for best search results
Contributing
Contributions are welcome! Please follow standard development practices and ensure all tests pass before submitting pull requests.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Built with: TypeScript, SQLite, sqlite-vec, Hugging Face Transformers, Model Context Protocol SDK