
Graphiti MCP Pro
An enhanced memory repository MCP service that builds and queries temporally-aware knowledge graphs from user interactions and data. Features asynchronous parallel processing, task management, broader AI model compatibility, and a comprehensive visual management interface.
About Graphiti
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
This project is an enhanced memory repository MCP service and management platform based on Graphiti. Compared to the original project's MCP service, it offers three core advantages: enhanced core capabilities, broader AI model compatibility, and a comprehensive visual management interface.
Features
Enhanced Core Capabilities
Asynchronous Parallel Processing
Adding memories is the core functionality of the MCP service. We have introduced an asynchronous parallel processing mechanism on top of the original implementation: up to 5 add_memory tasks for the same group ID (e.g., different development projects) can execute in parallel, significantly improving processing efficiency.
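For illustration, here is a minimal sketch of how such per-group throttling can be built with asyncio. The names and queueing details are illustrative assumptions, not the project's actual implementation:

```python
import asyncio
from collections import defaultdict

GROUP_CONCURRENCY = 5  # up to 5 parallel add_memory tasks per group

# One semaphore per group ID, created lazily on first use.
_semaphores: defaultdict[str, asyncio.Semaphore] = defaultdict(
    lambda: asyncio.Semaphore(GROUP_CONCURRENCY)
)

async def process_episode(group_id: str, episode: str) -> None:
    """Stand-in for the real graph-building work."""
    await asyncio.sleep(0.1)

async def run_add_memory(group_id: str, episode: str) -> None:
    # Tasks beyond the limit for the same group wait here; other groups are unaffected.
    async with _semaphores[group_id]:
        await process_episode(group_id, episode)

async def main() -> None:
    # Ten episodes for one project: at most five run concurrently.
    await asyncio.gather(*(run_add_memory("project-a", f"episode {i}") for i in range(10)))

asyncio.run(main())
```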
Task Management Tools
Four new MCP tools have been added for managing `add_memory` tasks (a hedged client-side sketch follows the list):
- `list_add_memory_tasks` - List all `add_memory` tasks
- `get_add_memory_task_status` - Get the status of an `add_memory` task
- `wait_for_add_memory_task` - Wait for an `add_memory` task to complete
- `cancel_add_memory_task` - Cancel an `add_memory` task
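As a sketch, these tools can be invoked from the official MCP Python SDK over streamable-http. The service URL (port and path) and the `task_id` argument name are assumptions, not documented values:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "http://localhost:8000/mcp"  # assumption: adjust to your MCP service address

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List all add_memory tasks.
            tasks = await session.call_tool("list_add_memory_tasks", {})
            print(tasks)

            # Query a single task ("task_id" is an assumed parameter name).
            status = await session.call_tool(
                "get_add_memory_task_status", {"task_id": "<task-id>"}
            )
            print(status)

asyncio.run(main())
```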
Unified Configuration Management
Optimized configuration management to resolve inconsistencies between command-line parameters, environment variables, and management backend database configurations.
[!NOTE] When the management backend is enabled, MCP service parameters in the .env environment configuration file only take effect during the initial startup. Subsequent configurations will be based on parameters in the management backend database.
Broader AI Model Compatibility and Flexibility
Enhanced Model Compatibility
Through integration with the instructor library, model compatibility has been significantly improved. The service now supports models such as DeepSeek and Qwen, and even locally hosted models served through Ollama or vLLM, as long as they expose an OpenAI-compatible API.
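As a sketch of the pattern (not necessarily this project's exact wiring), instructor can wrap any OpenAI-compatible client so that structured extraction works even on models without native tool calling. The base_url, api_key, and model values below are illustrative:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

class Entity(BaseModel):
    name: str
    summary: str

# Any OpenAI-compatible endpoint works, e.g. a local Ollama or vLLM server.
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,  # JSON mode works without native tool-calling support
)

entity = client.chat.completions.create(
    model="qwen2.5:14b",
    response_model=Entity,  # instructor validates the reply into this Pydantic model
    messages=[{"role": "user", "content": "Extract the entity: Graphiti builds knowledge graphs."}],
)
print(entity)
```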
Separated Model Configuration
The original unified LLM configuration has been split into three independent configurations, allowing flexible combinations based on actual needs:
- Large Model (LLM): Responsible for entity and relationship extraction
- Small Model (Small LLM): Handles entity attribute summarization, relationship deduplication, reranking, and other lightweight tasks
- Embedding Model (Embedder): Dedicated to text vectorization
[!NOTE] When configuring the embedding model, note that its API path differs from the two LLMs above. LLMs use the chat completion path `{base_url}/chat/completions`, while text embedding uses `{base_url}/embeddings`. If you select "Same as Large Model" in the management backend, ensure your configured large model supports text embedding.
Additionally, if you run the service via docker compose while the LLM or embedding model runs locally, base_url must be set to `http://host.docker.internal:{port}`, with the port adjusted to match your locally running service.
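To make the path difference concrete, here is a small sketch with the OpenAI SDK, which appends the route to whatever base_url you configure. All endpoints, model names, and keys below are placeholders:

```python
from openai import OpenAI

# A remote OpenAI-compatible LLM endpoint...
llm = OpenAI(base_url="https://api.deepseek.com/v1", api_key="sk-...")
# ...and a local embedder reached from inside docker compose.
embedder = OpenAI(base_url="http://host.docker.internal:11434/v1", api_key="ollama")

reply = llm.chat.completions.create(       # calls {base_url}/chat/completions
    model="deepseek-chat",
    messages=[{"role": "user", "content": "hello"}],
)
vector = embedder.embeddings.create(       # calls {base_url}/embeddings
    model="nomic-embed-text",
    input="hello",
)
```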
Comprehensive Management Platform
To provide a better user experience and observability, we have developed a complete management backend and Web UI. Through the management interface, you can:
- Service Control: Start, stop, restart MCP service
- Configuration Management: Real-time configuration updates and adjustments
- Usage Monitoring: View detailed token usage statistics
- Log Viewing: Real-time and historical log queries
Getting Started
Running with Docker Compose (Recommended)

1. Clone the project

```
git clone https://github.com/itcook/graphiti-mcp-pro
# or: git clone git@github.com:itcook/graphiti-mcp-pro.git
cd graphiti-mcp-pro
```

2. Configure environment variables (optional)

```
# Copy example configuration file
mv .env.example.en .env
# Edit .env file according to the instructions
```

3. Start the services

```
docker compose up -d
```

[!TIP] If the project has updates and you need to rebuild the image, use `docker compose up -d --build`. Rest assured: data is persisted in the external database and will not be lost.

4. Access the management interface

Default address: http://localhost:6062
Manual Installation
[!NOTE] Prerequisites:
- Python 3.10+ and uv project manager
- Node.js 20+
- Accessible Neo4j 5.26+ database service
- AI model service
1. Clone the project

```
git clone https://github.com/itcook/graphiti-mcp-pro
# or: git clone git@github.com:itcook/graphiti-mcp-pro.git
cd graphiti-mcp-pro
```

2. Install dependencies

```
uv sync
```

3. Configure environment variables

```
# Copy example configuration file
mv .env.example.en .env
# Edit .env file according to the instructions
```

4. Run the MCP service

```
# Run service with management backend
uv run main.py -m
# Or run MCP service only:
# uv run main.py
```

5. Build and run the management frontend

Enter the frontend directory and install dependencies:

```
cd manager/frontend
pnpm install  # or npm install / yarn
```

Build and run the frontend:

```
pnpm run build    # or npm run build / yarn build
pnpm run preview  # or npm run preview / yarn preview
```

Access the management interface: http://localhost:6062
Important Notes
Known Limitations
- 🔒 Security Notice: The management backend implements no authentication or authorization mechanism. DO NOT expose the service on public servers.
- 🧪 Test Coverage: Due to resource constraints, the project has not been thoroughly tested. Recommended for personal use only.
- 📡 Transport Protocol: Only the streamable-http transport is supported; the original project's stdio and sse transports have been removed.
- ⚙️ Code Optimization: Some architectural designs (dependency injection, exception handling, client decoupling, etc.) still have room for optimization.
Usage Recommendations
- Configuration Instructions: Please carefully read the setup instructions and comments in `.env.example.en`
- Model Selection: If you use natively supported models such as GPT/Gemini/Claude and don't need detailed runtime information, consider using the original Graphiti MCP
- Issue Feedback: Issues and Pull Requests are welcome for any usage problems
Developed with assistance from 🤖 Augment Code