DevFlow MCP
Provides AI agents with persistent, searchable memory using a knowledge graph stored in SQLite. Features semantic search, temporal awareness, and workflow-aware prompts for development projects.
DevFlow MCP: Smart Memory for AI Agents
Ever wished your AI could remember things between conversations? DevFlow MCP gives any AI that supports the Model Context Protocol (like Claude Desktop) a persistent, searchable memory that actually makes sense.
Think of it as giving your AI a brain that doesn't reset every time you start a new chat.
What Makes This Different?
Most AI memory systems are either too complex to set up or too simple to be useful. DevFlow MCP hits the sweet spot:
Actually Works Out of the Box: No Docker containers, no external databases to configure. Just install and run.
Built for Real Development: Created by developers who got tired of explaining the same context over and over to AI assistants. This system understands how software projects actually work.
Honest About What It Does: Every feature documented here actually exists and works. No promises about features "coming soon" or half-implemented APIs.
Type-Safe Throughout: Zero `any` types in the entire codebase. If TypeScript is happy, the code works.
The Story Behind This Project
This started as a simple problem: AI assistants kept forgetting important project context between sessions. Existing solutions were either enterprise-grade overkill or toy projects that couldn't handle real workloads.
So we built something that actually solves the problem. DevFlow MCP has been battle-tested on real projects, handling everything from quick prototypes to complex enterprise applications.
Core Concepts
Entities
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "person", "organization", "event")
- A list of observations
- Vector embeddings (for semantic search)
- Complete version history
Example:
```json
{
  "name": "John_Smith",
  "entityType": "person",
  "observations": ["Speaks fluent Spanish"]
}
```
Relations
Relations define directed connections between entities with enhanced properties:
- Strength indicators (0.0-1.0)
- Confidence levels (0.0-1.0)
- Rich metadata (source, timestamps, tags)
- Temporal awareness with version history
- Time-based confidence decay
Example:
```json
{
  "from": "John_Smith",
  "to": "Anthropic",
  "relationType": "works_at",
  "strength": 0.9,
  "confidence": 0.95,
  "metadata": {
    "source": "linkedin_profile",
    "last_verified": "2025-03-21"
  }
}
```
Prompts (Workflow Guidance)
DevFlow MCP includes workflow-aware prompts that teach AI agents how to use the knowledge graph effectively in a cascading development workflow (planner → task creator → coder → reviewer).
What are prompts? Prompts are instructional messages that guide AI agents on which tools to call and when. They appear as slash commands in Claude Desktop (e.g., /init-project) and provide context-aware documentation.
Important: Prompts don't save data themselves—they return guidance text that tells the AI which tools to call. The AI then calls those tools (like create_entities, semantic_search) which actually interact with the database.
Available Prompts
1. /init-project - Start New Projects
Guides planners on creating initial feature entities and structuring planning information.
Arguments:
- `projectName` (required): Name of the project or feature
- `description` (required): High-level description
- `goals` (optional): Specific goals or requirements
What it teaches:
- How to create "feature" entities for high-level projects
- How to document decisions early
- How to plan tasks and link them to features
- Best practices for structuring project information
Example usage in Claude Desktop:
```
/init-project projectName="UserAuthentication" description="Implement secure user login system" goals="Support OAuth, 2FA, and password reset"
```
2. /get-context - Retrieve Relevant Information
Helps any agent search the knowledge graph for relevant history, dependencies, and context before starting work.
Arguments:
- `query` (required): What are you working on? (used for semantic search)
- `entityTypes` (optional): Filter by types (feature, task, decision, component, test)
- `includeHistory` (optional): Include version history (default: false)
What it teaches:
- How to use semantic search to find related work
- How to check dependencies via relations
- How to review design decisions
- How to understand entity version history
Example usage:
```
/get-context query="authentication implementation" entityTypes=["component","decision"] includeHistory=true
```
3. /remember-work - Store Completed Work
Guides agents on saving their work with appropriate entity types and relations.
Arguments:
- `workType` (required): Type of work (feature, task, decision, component, test)
- `name` (required): Name/title of the work
- `description` (required): What did you do? (stored as observations)
- `implementsTask` (optional): Task this work implements (creates "implements" relation)
- `partOfFeature` (optional): Feature this is part of (creates "part_of" relation)
- `dependsOn` (optional): Components this depends on (creates "depends_on" relations)
- `keyDecisions` (optional): Important decisions made
What it teaches:
- How to create entities with correct types
- How to set up relations between entities
- How to document decisions separately
- How to maintain the knowledge graph structure
Example usage:
```
/remember-work workType="component" name="AuthService" description="Implemented OAuth login flow with JWT tokens" implementsTask="UserAuth" partOfFeature="Authentication" dependsOn=["TokenManager","UserDB"]
```
4. /review-context - Get Full Review Context
Helps reviewers gather all relevant information about a piece of work before providing feedback.
Arguments:
- `entityName` (required): Name of the entity to review
- `includeRelated` (optional): Include related entities (default: true)
- `includeDecisions` (optional): Include decision history (default: true)
What it teaches:
- How to get the entity being reviewed
- How to find related work (dependencies, implementations)
- How to review design decisions
- How to check test coverage
- How to add review feedback as observations
Example usage:
```
/review-context entityName="AuthService" includeRelated=true includeDecisions=true
```
Cascading Workflow Example
Here's how prompts guide a complete development workflow:
1. Planner Agent:

```
/init-project projectName="UserDashboard" description="Create user analytics dashboard"
# AI learns to create feature entity, plan tasks
```

2. Task Creator Agent:

```
/get-context query="dashboard features"
# AI learns to search for related work, then creates task entities
```

3. Developer Agent:

```
/get-context query="dashboard UI components"
# AI learns to find relevant components and decisions

/remember-work workType="component" name="DashboardWidget" description="Created widget framework"
# AI learns to store work with proper relations
```

4. Reviewer Agent:

```
/review-context entityName="DashboardWidget"
# AI learns to get full context, check tests, add feedback
```
Why Prompts Matter
- Consistency: All agents use the same structured approach
- Context preservation: Work is stored with proper metadata and relations
- Discoverability: Future agents can find relevant history via semantic search
- Workflow awareness: Each prompt knows its place in the development cycle
- Self-documenting: Prompts teach agents best practices
How It Works Under the Hood
DevFlow MCP stores everything in a single SQLite database file. Yes, really - just one file on your computer.
Why SQLite Instead of Something "Fancier"?
We tried the complex stuff first. External databases, Docker containers, cloud services - they all work, but they're overkill for what most developers actually need.
SQLite gives you:
- One file to rule them all: Your entire knowledge graph lives in a single .db file you can copy, back up, or version control
- No setup headaches: No servers to configure, no containers to manage, no cloud accounts to create
- Surprisingly fast: SQLite handles millions of records without breaking a sweat
- Vector search built-in: The sqlite-vec extension handles semantic search natively
- Works everywhere: From your laptop to production servers, SQLite just works
Getting Started (It's Ridiculously Simple)
```shell
# Install globally
npm install -g devflow-mcp

# Run it (creates database automatically)
dfm mcp

# Want to use a specific file? Set the location
DFM_SQLITE_LOCATION=./my-project-memory.db dfm mcp
```
No configuration files. No environment setup. No "getting started" tutorials that take 3 hours. It just works.
Requirements: Node.js 23+ (for the latest SQLite features)
Advanced Features
Semantic Search
Find semantically related entities based on meaning rather than just keywords:
- Vector Embeddings: Entities are automatically encoded into high-dimensional vector space using OpenAI's embedding models
- Cosine Similarity: Find related concepts even when they use different terminology
- Configurable Thresholds: Set minimum similarity scores to control result relevance
- Cross-Modal Search: Query with text to find relevant entities regardless of how they were described
- Multi-Model Support: Compatible with multiple embedding models (OpenAI text-embedding-3-small/large)
- Contextual Retrieval: Retrieve information based on semantic meaning rather than exact keyword matches
- Optimized Defaults: Tuned parameters for balance between precision and recall (0.6 similarity threshold, hybrid search enabled)
- Hybrid Search: Combines semantic and keyword search for more comprehensive results
- Adaptive Search: System intelligently chooses between vector-only, keyword-only, or hybrid search based on query characteristics and available data
- Performance Optimization: Prioritizes vector search for semantic understanding while maintaining fallback mechanisms for resilience
- Query-Aware Processing: Adjusts search strategy based on query complexity and available entity embeddings
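To make the similarity threshold concrete, here is a minimal sketch of cosine-similarity filtering with the documented 0.6 default. This is illustrative only, not DevFlow MCP's actual search implementation:

```typescript
// Minimal sketch of threshold-based similarity filtering (illustrative,
// not DevFlow MCP's internals).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep only candidates at or above the minimum similarity (default 0.6),
// sorted best-first.
function filterBySimilarity(
  query: number[],
  candidates: { name: string; embedding: number[] }[],
  minSimilarity = 0.6
): string[] {
  return candidates
    .map((c) => ({ name: c.name, score: cosineSimilarity(query, c.embedding) }))
    .filter((c) => c.score >= minSimilarity)
    .sort((x, y) => y.score - x.score)
    .map((c) => c.name);
}
```

Because scores compare direction rather than magnitude, two entities phrased very differently can still land above the threshold if their embeddings point the same way.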
Temporal Awareness
Track complete history of entities and relations with point-in-time graph retrieval:
- Full Version History: Every change to an entity or relation is preserved with timestamps
- Point-in-Time Queries: Retrieve the exact state of the knowledge graph at any moment in the past
- Change Tracking: Automatically records createdAt, updatedAt, validFrom, and validTo timestamps
- Temporal Consistency: Maintain a historically accurate view of how knowledge evolved
- Non-Destructive Updates: Updates create new versions rather than overwriting existing data
- Time-Based Filtering: Filter graph elements based on temporal criteria
- History Exploration: Investigate how specific information changed over time
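A point-in-time query can be sketched as filtering each version by its validity window. The field names below mirror the timestamps listed above, but the code is an assumption for illustration, not the server's real schema:

```typescript
// Illustrative sketch of a point-in-time query (field names are
// assumptions modeled on the timestamps above, not the real schema).
interface VersionedRecord {
  name: string;
  validFrom: number; // Unix ms when this version became current
  validTo: number | null; // Unix ms when superseded; null = still current
}

// A version is visible at `timestamp` if its validity window contains it.
function graphAtTime(
  versions: VersionedRecord[],
  timestamp: number
): VersionedRecord[] {
  return versions.filter(
    (v) => v.validFrom <= timestamp && (v.validTo === null || timestamp < v.validTo)
  );
}
```

Because updates close one window and open another instead of overwriting, every historical state stays reconstructable.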
Confidence Decay
Relations automatically decay in confidence over time based on configurable half-life:
- Time-Based Decay: Confidence in relations naturally decreases over time if not reinforced
- Configurable Half-Life: Define how quickly information becomes less certain (default: 30 days)
- Minimum Confidence Floors: Set thresholds to prevent over-decay of important information
- Decay Metadata: Each relation includes detailed decay calculation information
- Non-Destructive: Original confidence values are preserved alongside decayed values
- Reinforcement Learning: Relations regain confidence when reinforced by new observations
- Reference Time Flexibility: Calculate decay based on arbitrary reference times for historical analysis
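The decay model above can be sketched as standard exponential half-life decay. The function below is illustrative: the 30-day half-life matches the documented default, but the 0.1 floor is an assumed example value, not necessarily what the server uses:

```typescript
// Illustrative half-life decay sketch. The 30-day default matches the
// documentation; the 0.1 floor is an assumed example value.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function decayedConfidence(
  originalConfidence: number,
  lastReinforcedMs: number,
  referenceMs: number,
  halfLifeDays = 30,
  minConfidence = 0.1
): number {
  const elapsedDays = (referenceMs - lastReinforcedMs) / MS_PER_DAY;
  // Standard exponential decay: confidence halves every `halfLifeDays`.
  const decayed = originalConfidence * Math.pow(0.5, elapsedDays / halfLifeDays);
  // The floor prevents important relations from decaying to nothing.
  return Math.max(minConfidence, decayed);
}
```

With these numbers, a relation stored at confidence 0.8 drops to 0.4 after 30 days unless it is reinforced by new observations.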
Advanced Metadata
Rich metadata support for both entities and relations with custom fields:
- Source Tracking: Record where information originated (user input, analysis, external sources)
- Confidence Levels: Assign confidence scores (0.0-1.0) to relations based on certainty
- Relation Strength: Indicate importance or strength of relationships (0.0-1.0)
- Temporal Metadata: Track when information was added, modified, or verified
- Custom Tags: Add arbitrary tags for classification and filtering
- Structured Data: Store complex structured data within metadata fields
- Query Support: Search and filter based on metadata properties
- Extensible Schema: Add custom fields as needed without modifying the core data model
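For instance, a relation carrying rich metadata might look like the following. The entity names and the nested `review` object are hypothetical; only `from`, `to`, `relationType`, and the optional `strength`, `confidence`, and `metadata` fields come from the documented schema:

```json
{
  "from": "AuthService",
  "to": "TokenManager",
  "relationType": "depends_on",
  "strength": 0.8,
  "confidence": 0.9,
  "metadata": {
    "source": "code_review",
    "last_verified": "2025-03-21",
    "tags": ["security", "authentication"],
    "review": {
      "reviewer": "alice",
      "notes": "Verified during the v2 auth refactor"
    }
  }
}
```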
MCP API Tools
The following tools are available to LLM client hosts through the Model Context Protocol:
Entity Management
- `create_entities`
  - Create multiple new entities in the knowledge graph
  - Input: `entities` (array of objects); each object contains:
    - `name` (string): Entity identifier
    - `entityType` (string): Type classification
    - `observations` (string[]): Associated observations
- `add_observations`
  - Add new observations to existing entities
  - Input: `observations` (array of objects); each object contains:
    - `entityName` (string): Target entity
    - `contents` (string[]): New observations to add
  - Note: Unlike relations, observations do not support strength, confidence, or metadata fields. Observations are atomic facts about entities.
- `delete_entities`
  - Remove entities and their relations
  - Input: `entityNames` (string[])
- `delete_observations`
  - Remove specific observations from entities
  - Input: `deletions` (array of objects); each object contains:
    - `entityName` (string): Target entity
    - `observations` (string[]): Observations to remove
Relation Management
- `create_relations`
  - Create multiple new relations between entities with enhanced properties
  - Input: `relations` (array of objects); each object contains:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type
    - `strength` (number, optional): Relation strength (0.0-1.0)
    - `confidence` (number, optional): Confidence level (0.0-1.0)
    - `metadata` (object, optional): Custom metadata fields
- `get_relation`
  - Get a specific relation with its enhanced properties
  - Input:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type
- `update_relation`
  - Update an existing relation with enhanced properties
  - Input: `relation` (object) containing:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type
    - `strength` (number, optional): Relation strength (0.0-1.0)
    - `confidence` (number, optional): Confidence level (0.0-1.0)
    - `metadata` (object, optional): Custom metadata fields
- `delete_relations`
  - Remove specific relations from the graph
  - Input: `relations` (array of objects); each object contains:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type
Graph Operations
- `read_graph`
  - Read the entire knowledge graph
  - No input required
- `search_nodes`
  - Search for nodes based on a query
  - Input: `query` (string)
- `open_nodes`
  - Retrieve specific nodes by name
  - Input: `names` (string[])
Semantic Search
- `semantic_search`
  - Search for entities semantically using vector embeddings and similarity
  - Input:
    - `query` (string): The text query to search for semantically
    - `limit` (number, optional): Maximum results to return (default: 10)
    - `min_similarity` (number, optional): Minimum similarity threshold (0.0-1.0, default: 0.6)
    - `entity_types` (string[], optional): Filter results by entity types
    - `hybrid_search` (boolean, optional): Combine keyword and semantic search (default: true)
    - `semantic_weight` (number, optional): Weight of semantic results in hybrid search (0.0-1.0, default: 0.6)
  - Features:
    - Intelligently selects the optimal search method (vector, keyword, or hybrid) based on query context
    - Gracefully handles queries with no semantic matches through fallback mechanisms
    - Maintains high performance with automatic optimization decisions
- `get_entity_embedding`
  - Get the vector embedding for a specific entity
  - Input: `entity_name` (string): The name of the entity to get the embedding for
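One common way to blend the two result sets in hybrid search is a weighted sum controlled by `semantic_weight`. The formula below is an assumption for illustration (the server's exact scoring isn't documented here), but it shows how the 0.6 default tilts results toward semantic matches:

```typescript
// Assumed blend formula for hybrid search scoring - a weighted sum is the
// common approach, though the server's exact formula may differ.
function hybridScore(
  semanticScore: number, // 0.0-1.0 from vector similarity
  keywordScore: number, // 0.0-1.0 from keyword matching
  semanticWeight = 0.6 // the documented semantic_weight default
): number {
  return semanticWeight * semanticScore + (1 - semanticWeight) * keywordScore;
}
```

Raising `semantic_weight` toward 1.0 approaches pure vector search; lowering it toward 0.0 approaches pure keyword search.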
Temporal Features
- `get_entity_history`
  - Get complete version history of an entity
  - Input: `entityName` (string)
- `get_relation_history`
  - Get complete version history of a relation
  - Input:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type
- `get_graph_at_time`
  - Get the state of the graph at a specific timestamp
  - Input: `timestamp` (number): Unix timestamp (milliseconds since epoch)
- `get_decayed_graph`
  - Get graph with time-decayed confidence values
  - Input: `options` (object, optional):
    - `reference_time` (number): Reference timestamp for decay calculation (milliseconds since epoch)
    - `decay_factor` (number): Optional decay factor override
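Note that both `timestamp` and `reference_time` are in milliseconds, which is exactly what JavaScript's Date APIs produce natively. A quick sketch:

```typescript
// These tools expect milliseconds since epoch - what Date APIs return.
// (Passing seconds, a common mistake, would query the year 1970.)
const oneHourAgo: number = Date.now() - 60 * 60 * 1000;
const specificMoment: number = new Date("2025-03-21T00:00:00Z").getTime();
```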
Configuration
Environment Variables
Configure DevFlow MCP with these environment variables:
```shell
# SQLite Configuration
DFM_SQLITE_LOCATION=./knowledge.db

# Embedding Service Configuration
OPENAI_API_KEY=your-openai-api-key
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# Debug Settings
DEBUG=true
```
Embedding Models
Available OpenAI embedding models:
- `text-embedding-3-small`: Efficient, cost-effective (1536 dimensions)
- `text-embedding-3-large`: Higher accuracy, more expensive (3072 dimensions)
- `text-embedding-ada-002`: Legacy model (1536 dimensions)
OpenAI API Configuration
To use semantic search, you'll need to configure OpenAI API credentials:
1. Obtain an API key from OpenAI
2. Configure your environment with:

```shell
# OpenAI API Key for embeddings
OPENAI_API_KEY=your-openai-api-key

# Default embedding model
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
```
Note: For testing environments, the system will mock embedding generation if no API key is provided. However, using real embeddings is recommended for integration testing.
Integration with Claude Desktop
Configuration
For local development, add this to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "devflow": {
      "command": "dfm",
      "args": ["mcp"],
      "env": {
        "DFM_SQLITE_LOCATION": "./knowledge.db",
        "OPENAI_API_KEY": "your-openai-api-key",
        "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small",
        "DEBUG": "true"
      }
    }
  }
}
```
Important: Always explicitly specify the embedding model in your Claude Desktop configuration to ensure consistent behavior.
Recommended System Prompts
For optimal integration with Claude, add these statements to your system prompt:
```
You have access to the DevFlow MCP knowledge graph memory system, which provides you with persistent memory capabilities.
Your memory tools are provided by DevFlow MCP, a sophisticated knowledge graph implementation.
When asked about past conversations or user information, always check the DevFlow MCP knowledge graph first.
You should use semantic_search to find relevant information in your memory when answering questions.
```
Testing Semantic Search
Once configured, Claude can access the semantic search capabilities through natural language:
- To create entities with semantic embeddings:
  User: "Remember that Python is a high-level programming language known for its readability and JavaScript is primarily used for web development."
- To search semantically:
  User: "What programming languages do you know about that are good for web development?"
- To retrieve specific information:
  User: "Tell me everything you know about Python."
The power of this approach is that users can interact naturally, while the LLM handles the complexity of selecting and using the appropriate memory tools.
Real-World Applications
DevFlow MCP's adaptive search capabilities provide practical benefits:
- Query Versatility: Users don't need to worry about how to phrase questions - the system adapts to different query types automatically
- Failure Resilience: Even when semantic matches aren't available, the system can fall back to alternative methods without user intervention
- Performance Efficiency: By intelligently selecting the optimal search method, the system balances performance and relevance for each query
- Improved Context Retrieval: LLM conversations benefit from better context retrieval as the system can find relevant information across complex knowledge graphs
For example, when a user asks "What do you know about machine learning?", the system can retrieve conceptually related entities even if they don't explicitly mention "machine learning" - perhaps entities about neural networks, data science, or specific algorithms. But if semantic search yields insufficient results, the system automatically adjusts its approach to ensure useful information is still returned.
Troubleshooting
Vector Search Diagnostics
DevFlow MCP includes built-in diagnostic capabilities to help troubleshoot vector search issues:
- Embedding Verification: The system checks if entities have valid embeddings and automatically generates them if missing
- Vector Index Status: Verifies that the vector index exists and is in the ONLINE state
- Fallback Search: If vector search fails, the system falls back to text-based search
- Detailed Logging: Comprehensive logging of vector search operations for troubleshooting
Debug Tools (when DEBUG=true)
Additional diagnostic tools become available when debug mode is enabled:
- diagnose_vector_search: Information about the SQLite vector index, embedding counts, and search functionality
- force_generate_embedding: Forces the generation of an embedding for a specific entity
- debug_embedding_config: Information about the current embedding service configuration
Developer Reset
To completely reset your SQLite database during development:
```shell
# Remove the database file
rm -f ./knowledge.db

# Or if using a custom location
rm -f $DFM_SQLITE_LOCATION

# Restart your application - schema will be recreated automatically
dfm mcp
```
Building and Development
```shell
# Clone the repository
git clone https://github.com/takinprofit/dev-flow-mcp.git
cd dev-flow-mcp

# Install dependencies (uses pnpm, not npm)
pnpm install

# Build the project
pnpm run build

# Run tests
pnpm test

# Check test coverage
pnpm run test:coverage

# Type checking
npx tsc --noEmit

# Linting
npx ultracite check src/
npx ultracite fix src/
```
Installation
Local Development
For development or contributing to the project:
```shell
# Clone the repository
git clone https://github.com/takinprofit/dev-flow-mcp.git
cd dev-flow-mcp

# Install dependencies
pnpm install

# Build the CLI
pnpm run build

# The CLI will be available as the 'dfm' command
```
License
MIT