SQLite Project Memory MCP
A graph-friendly relational server that stores project memory, tasks, and metadata in a centralized SQLite database as the authoritative source of truth. It enables AI agents to manage complex project states through entity-relationship modeling and can generate human-readable markdown views on demand.
SQLite-backed MCP server for storing project memory as a graph-friendly relational core.
The server is designed around four rules:
- Everything is an entity.
- Everything can relate to everything.
- State is authoritative.
- Narrative is separate from structure.
Instead of generating and maintaining many parallel documents, the MCP server stores project state in SQLite and exposes tools for safe access. Files such as todo.md or roadmap.md can be generated later on explicit request as views, not treated as the source of truth.
What It Stores
The schema supports project memory such as:
- tasks
- file metadata
- dependencies
- decisions
- roadmap items
- architecture elements
- plans
- notes
- todos
- reasoning records
- snapshots and audit history
Everything is modeled through generic tables:
- entities
- attributes
- relationships
- content
- events
- snapshots
- snapshot_entities
- tags
The server also creates an FTS5 index for content.body when available.
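To make the generic-table idea concrete, here is a minimal sketch of what an entity/attribute/relationship core with an FTS5 content index can look like. The table shapes and column names below are illustrative assumptions, not the server's actual DDL.

```python
import sqlite3

# Illustrative sketch only: these table shapes are assumptions, not the server's DDL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (
    id   TEXT PRIMARY KEY,   -- stable id, e.g. 'task.auth-flow'
    type TEXT NOT NULL
);
CREATE TABLE attributes (
    entity_id TEXT NOT NULL REFERENCES entities(id),
    key       TEXT NOT NULL,
    value     TEXT,
    PRIMARY KEY (entity_id, key)           -- one value per key per entity
);
CREATE TABLE relationships (
    from_entity TEXT NOT NULL REFERENCES entities(id),
    to_entity   TEXT NOT NULL REFERENCES entities(id),
    type        TEXT NOT NULL,
    UNIQUE (from_entity, to_entity, type)  -- duplicate edges are rejected
);
CREATE TABLE content (
    id           INTEGER PRIMARY KEY,
    entity_id    TEXT REFERENCES entities(id),
    content_type TEXT,
    body         TEXT
);
-- FTS5 index over content bodies (available in most sqlite3 builds)
CREATE VIRTUAL TABLE content_fts USING fts5(body);
""")

conn.execute("INSERT INTO entities VALUES ('task.auth-flow', 'task')")
conn.execute(
    "INSERT INTO content (entity_id, content_type, body) "
    "VALUES ('task.auth-flow', 'note', 'Refactor the login handshake')"
)
conn.execute("INSERT INTO content_fts (rowid, body) SELECT id, body FROM content")

# Full-text search, then map the hit back to its entity.
hits = conn.execute(
    "SELECT entity_id FROM content WHERE id IN "
    "(SELECT rowid FROM content_fts WHERE content_fts MATCH 'login')"
).fetchall()
```

The unique constraint on (from_entity, to_entity, type) is what lets an idempotent connect-style tool re-run safely: a second identical edge insert fails instead of duplicating the edge.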
Key MCP Tools
- create_entity, upsert_entity, update_entity, get_entity, list_entities
- find_similar_entities, resolve_entity_by_name, get_or_create_entity
- upsert_attributes, set_tags
- add_relationship, connect_entities, list_relationships
- add_content, append_content, search_content
- create_snapshot, get_snapshot
- get_project_overview, get_project_state, get_open_tasks, get_decision_log, get_architecture_summary, get_recent_reasoning, get_dependency_view, get_recent_activity
- get_database_health, prune_content_retention
- get_entity_graph, bootstrap_project_memory, run_read_query
- render_markdown_views (with user_requested=true and a request reason)
- export_markdown_views (with user_requested=true and a request reason)
- server_info
High-frequency summary tools default to compact=true at the MCP boundary and return an explicit schema envelope for stable machine consumption unless a caller opts out.
get_recent_activity now supports limit, offset, and compact, and get_entity_graph uses explicit node and edge limits so large graph reads stay bounded.
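The exact envelope fields are server-defined; purely as an illustration (every field name below is a hypothetical stand-in, not the server's actual response shape), a compact, paginated summary envelope can be pictured like this:

```python
def wrap_summary(items, *, limit=20, offset=0, compact=True):
    """Paginated, schema-tagged envelope. All field names here are
    hypothetical, not the server's actual response shape."""
    page = items[offset:offset + limit]
    if compact:
        # Compact mode keeps only the fields an agent needs for follow-up reads.
        page = [{"id": item["id"], "type": item["type"]} for item in page]
    return {
        "schema": "summary.v1",  # explicit schema tag for machine consumers
        "compact": compact,
        "limit": limit,
        "offset": offset,
        "total": len(items),
        "items": page,
    }

activity = [{"id": f"event.{i}", "type": "event", "detail": "..."} for i in range(100)]
envelope = wrap_summary(activity, limit=10, offset=20)
```

The point of the explicit schema tag and bounded page size is that a machine consumer can parse responses stably and never receives an unbounded dump by default.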
Resources And Prompt
- memory://schema
- memory://overview
- memory://recent-activity
- entity://{entity_id}
- prompt: project_memory_policy
Run
Option 1: pip
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install -e .
python -m sqlite_mcp_server
Option 2: uv
uv venv
.\.venv\Scripts\Activate.ps1
uv pip install -e .
python -m sqlite_mcp_server
The default database path is data/project_memory.db under the repository root.
Admin CLI
For local bootstrap and inspection workflows, the package also exposes an admin CLI:
sqlite-project-memory-admin bootstrap-self --repo-root .
sqlite-project-memory-admin project-state
sqlite-project-memory-admin health
sqlite-project-memory-admin export-views --user-requested --request-reason "User asked for a roadmap export" --require-existing-dir exports todo roadmap architecture
sqlite-project-memory-admin export-views --user-requested --request-reason "User asked for refreshed generated docs" --force todo roadmap architecture
sqlite-project-memory-admin sync-document architecture --input-path architecture.md
sqlite-project-memory-admin sync-document decisions --input-path decisions.md
sqlite-project-memory-admin export-json --output-path exports/project_memory.snapshot.json
sqlite-project-memory-admin import-json --input-path exports/project_memory.snapshot.json
This is mainly useful when you want the project to use its own SQLite memory store without writing one-off scripts.
Generated markdown export is locked by default: it refuses to render or write views unless the caller explicitly marks the request as user-requested and supplies a reason. It also refuses to overwrite existing view files unless --force is provided, and --require-existing-dir can be used when automation should fail instead of creating a new output directory.
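The export contract above can be pictured as a guard like the following sketch. The function name, error type, and messages are my assumptions for illustration; the real checks live inside the server and CLI.

```python
from pathlib import Path

class ExportLocked(RuntimeError):
    """Raised when a generated-view export violates the request contract."""

def check_export_allowed(*, user_requested: bool, request_reason: str,
                         output_path: str, force: bool = False,
                         require_existing_dir: bool = False) -> bool:
    # Hypothetical guard mirroring the rules described above.
    path = Path(output_path)
    if not user_requested or not request_reason.strip():
        raise ExportLocked("exports require user_requested=true and a request reason")
    if require_existing_dir and not path.parent.is_dir():
        raise ExportLocked(f"output directory {path.parent} does not exist")
    if path.exists() and not force:
        raise ExportLocked(f"{path} already exists; pass force to overwrite")
    return True
```

The ordering matters: the user-request check comes first, so an automation caller cannot bypass the lock simply by pointing at a fresh output path.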
Sample MCP Config
A repo-local sample MCP client configuration is available at .vscode/mcp.sample.json. Adjust the Python path if needed for another machine.
Configuration
Environment variables:
- SQLITE_MCP_DB_PATH: override the SQLite database file path.
- SQLITE_MCP_TRANSPORT: stdio or streamable-http.
- SQLITE_MCP_EXPORT_DIR: default output directory for generated markdown views.
- SQLITE_MCP_LOG_LEVEL: log level for server lifecycle and tool request logs. Defaults to INFO.
- SQLITE_MCP_LOG_FORMAT: json or text for stderr logs. Defaults to json.
Example:
$env:SQLITE_MCP_DB_PATH = "D:\memory\project.db"
$env:SQLITE_MCP_TRANSPORT = "stdio"
python -m sqlite_mcp_server
Design Notes
- Entity ids, relationship ids, tags, types, and attribute keys are validated.
- Duplicate entities are prevented by primary key.
- Duplicate edges are prevented by a unique constraint on (from_entity, to_entity, type).
- Narrative content is stored separately from authoritative state.
- Mutating operations record audit events.
- Raw arbitrary SQL write access is intentionally not exposed through MCP tools.
- A constrained read-only SQL tool is available for diagnostics and ad hoc retrieval.
- Markdown files are treated as generated views, not storage.
AI-First Tooling Guidance
If this server is going to be called frequently by an AI, the useful surface is not a single raw SQL tool. The practical surface is:
- bootstrap_project_memory to initialize a project root and standard memory areas.
- upsert_entity so the AI can write idempotently instead of guessing whether to create or update.
- connect_entities so repeated graph writes do not produce duplicate edges.
- append_content so narrative memory can be added without the AI having to mint content ids every time.
- get_recent_activity so an AI can resume context quickly after a new session.
- run_read_query for controlled read-only analytics when the built-in tools are not enough.
- render_markdown_views and export_markdown_views only after the user explicitly asks for a human-readable todo, roadmap, plan, architecture, decisions, or notes document.
render_markdown_views and export_markdown_views are intentionally locked behind an explicit user-request contract so an AI does not casually generate markdown and then start using those files as a substitute for SQLite.
export_markdown_views also supports explicit overwrite control so generated documents do not silently replace existing files.
For the remaining human-facing documents, sync-document provides a structured migration path into the anchor memory areas for architecture, decisions, plan, and notes. The generated views then combine that synced document content with the structured SQLite state instead of rendering a flat dump.
Roadmap state is different: it is maintained directly through SQLite entities, attributes, relationships, and content. There is no supported roadmap.md import workflow anymore. If an AI needs to change roadmap state, it should use normal MCP write tools such as upsert_entity, append_content, set_tags, and connect_entities, then generate roadmap.md only when a user explicitly asks for that artifact.
The intended pattern is:
- Use explicit domain tools for writes.
- Use summary-first read tools such as get_project_state, get_open_tasks, get_decision_log, get_architecture_summary, get_recent_reasoning, and get_dependency_view before falling back to lower-level queries. get_recent_activity is also safe for session resumption because it paginates instead of returning an unbounded activity dump.
- Use run_read_query only for read-only inspection when the built-in summaries are not enough.
- Generate markdown views only when a user explicitly asks for a document, and pass that request through the MCP call.
- Keep SQLite authoritative.
For long-running AI usage, the hygiene tools matter as much as the write tools:
- find_similar_entities helps avoid creating duplicate memory objects.
- resolve_entity_by_name lets the AI reuse existing entities when a human-style name is all it has.
- get_or_create_entity gives the AI a safer name-first workflow with stable id generation.
- get_database_health reports duplicate candidates, invalid statuses, low-signal attributes, and retention pressure.
- prune_content_retention provides a controlled cleanup path for high-volume reasoning and log content.
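As a rough mental model for the duplicate check, a similarity pass can compare a human-style name against the slug part of existing ids. The scoring below is my own naive stand-in (difflib ratio against the id slug), not the server's actual algorithm.

```python
from difflib import SequenceMatcher

def find_similar(name: str, existing_ids: list, threshold: float = 0.7):
    """Naive duplicate-candidate scoring; a stand-in illustration for
    find_similar_entities, not its actual implementation."""
    target = name.lower().replace(" ", "-")
    scored = []
    for entity_id in existing_ids:
        # Compare against the slug part of ids like 'task.auth-flow'.
        slug = entity_id.split(".", 1)[-1]
        score = SequenceMatcher(None, target, slug).ratio()
        if score >= threshold:
            scored.append((entity_id, round(score, 2)))
    return sorted(scored, key=lambda pair: -pair[1])

candidates = find_similar("Auth Flow",
                          ["task.auth-flow", "task.billing", "note.auth-flow-review"])
```

An AI that runs a check like this before create_entity, or simply uses get_or_create_entity, avoids minting near-duplicate memory objects.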
Policy Decisions
The remaining phase 7 modeling decisions are now explicit:
- Canonical entity ids: generated ids use
<entity_type>.<slug>[.<n>]. Project-scoped memory-area anchors may use project-prefixed ids such asproject.sqlite-mcp.roadmap. - Relationship vocabulary: use the built-in relationship set when possible, and use the
custom.namespace for project-specific edges. There is no registry table. - Attribute keys: common unnamespaced keys are reserved for shared fields such as
priority,owner,phase_number,path, andsource. New custom keys should use lowercase dotted namespaces such asmeta.*,source.*,client.*,trace.*, orui.*. - Status vocabulary: common entity types use a shared status vocabulary exposed by the schema, and other entity types may use stable identifier-style statuses when a specialized lifecycle is required.
- Retention:
reasoningandlogcontent are the only default retention-managed content types, with a recommended keep-latest count of20and dry-run-first pruning. - Markdown generation: markdown views are on-demand only and SQLite remains authoritative.
- MCP read defaults: high-frequency read tools default to
compact=trueand callers opt out withcompact=falsewhen they need fuller payloads. - Semantic retrieval: the baseline stays on SQLite FTS5 plus structured read models. Embeddings are intentionally out of scope unless a concrete retrieval gap appears that those mechanisms cannot cover.
These policy decisions are also exposed programmatically through schema_overview() / memory://schema and checked in get_database_health() where appropriate.
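The keep-latest retention policy with dry-run-first pruning can be sketched as follows. The row fields (id, entity_id, content_type, created_at) and the return shape are assumptions for illustration, not the tool's actual interface.

```python
from collections import defaultdict

def plan_retention_prune(rows, *, keep_latest=20,
                         content_types=("reasoning", "log"), dry_run=True):
    """Sketch of keep-latest pruning with a dry-run default.
    Row fields and return shape are assumptions, not the tool's interface."""
    groups = defaultdict(list)
    for row in rows:
        if row["content_type"] in content_types:
            groups[(row["entity_id"], row["content_type"])].append(row)
    doomed = []
    for group in groups.values():
        group.sort(key=lambda r: r["created_at"], reverse=True)  # newest first
        doomed.extend(r["id"] for r in group[keep_latest:])      # prune the rest
    key = "would_delete" if dry_run else "deleted"
    return {key: sorted(doomed)}

rows = [{"id": i, "entity_id": "task.auth-flow", "content_type": "reasoning",
         "created_at": i} for i in range(25)]
plan = plan_retention_prune(rows)  # dry run: reports candidates, deletes nothing
```

Running the dry run first matters because reasoning and log entries are append-heavy: the agent can inspect what would be removed before committing to the destructive pass.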
Suggested Modeling Conventions
- Use stable ids such as task.auth-flow, file.src.server, decision.schema-graph-core.
- Keep type broad and durable: task, file, module, decision, feature, plan, note.
- Put volatile metadata in attributes, not in new tables.
- Use content_type to distinguish note, spec, analysis, reasoning, log.
- Use relationships deliberately: depends_on, implements, blocks, calls, owns.
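The <entity_type>.<slug>[.<n>] id convention above can be generated mechanically. In this sketch, starting the collision counter at 2 is my assumption, not the server's documented rule.

```python
import re

def make_entity_id(entity_type: str, name: str, existing_ids: set) -> str:
    """Build <entity_type>.<slug>[.<n>] ids. Starting the collision
    counter at 2 is an assumption, not the server's documented rule."""
    # Slugify: lowercase, collapse non-alphanumeric runs to hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    candidate = f"{entity_type}.{slug}"
    n = 2
    while candidate in existing_ids:      # append .<n> only on collision
        candidate = f"{entity_type}.{slug}.{n}"
        n += 1
    return candidate
```

For example, "Auth Flow" under type task becomes task.auth-flow, and a second entity with the same name gets a numeric suffix instead of colliding.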
Quick start scripts
For a one-command local setup from an empty repo root on Windows, run:
PowerShell:
.\install.ps1
CMD/Bash:
install.bat
These scripts perform:
- git init (if needed)
- python -m venv .venv
- Activate .venv
- pip install -e .
- sqlite-project-memory-admin bootstrap-self --repo-root .
- sqlite-project-memory-admin project-state
- sqlite-project-memory-admin health
Then start the server with:
python -m sqlite_mcp_server