<div align="center">
  <img src="logo.svg" width="96" height="96" alt="Grimoire logo" />
  <h1>Grimoire</h1>
  <p><strong>Your LLM re-reads the same reference docs every conversation. Grimoire indexes them once.</strong></p>
  <p>
    <a href="https://tannner.com">tannner.com</a> ·
    <a href="https://github.com/tannernicol/grimoire">GitHub</a>
  </p>
</div>

<p align="center">
  <img src="docs/demo.png" alt="Grimoire search demo" width="700" />
</p>
## The Problem
Your LLM agent needs to reference CWE-89 during a code review. Without Grimoire, it either hallucinates the details, or you paste 50 pages of NIST docs into the context window and hope it finds the right paragraph. Every conversation. Every time.
## The Solution
Grimoire indexes security reference material once — CVEs, CWEs, OWASP, audit findings, your internal standards — into a single SQLite file with both FTS5 keyword search and semantic embeddings. Your LLM agent searches it mid-conversation via MCP. Exact matches when you need "CWE-89". Conceptual recall when you need "authentication bypass techniques". Both in one query.
One SQLite file. Zero cloud. Instant retrieval via MCP.
```text
+------------------+
|   Data Sources   |
|  CVE  MD  CSV .. |
+--------+---------+
         |
      ingest()
         |
+--------v---------+
|    SQLite DB     |
|  +------------+  |
|  | documents  |  |
|  +------------+  |
|  | docs_fts5  |  |  <-- FTS5 keyword index
|  +------------+  |
|  | embeddings |  |  <-- semantic vectors
|  +------------+  |
+--------+---------+
         |
+--------v---------+
|  Search Engine   |
|                  |
|  keyword (BM25)  |
|  semantic (cos)  |
|  hybrid (both)   |
+--------+---------+
         |
  +------+------------------+
  |                         |
+-v-----------+   +---------v-------+
| Python API  |   |   MCP Server    |
|             |   |                 |
| Grimoire()  |   | grimoire_search |
| .search()   |   | grimoire_status |
| .add_doc()  |   | grimoire_quality|
+-------------+   +-----------------+
```
## Quick Start

```bash
git clone https://github.com/tannernicol/grimoire.git
cd grimoire
pip install -e .

# Fetch and index real security data (NVD CVEs + CWE catalog + OWASP Top 10)
python scripts/fetch_sources.py all

# Search
python examples/search_demo.py "SQL injection"
python examples/search_demo.py "access control" --severity critical
python examples/search_demo.py --status
```
## Auto-Fetch Security Data

Grimoire fetches from reputable public sources — no manual downloads:

```bash
# Everything: NVD + CWE + OWASP
python scripts/fetch_sources.py all

# Recent CVEs from NIST NVD (last 90 days, critical only)
python scripts/fetch_sources.py nvd --days 90 --severity CRITICAL

# Full CWE catalog from MITRE
python scripts/fetch_sources.py cwe

# With embeddings for semantic search (requires Ollama)
python scripts/fetch_sources.py all --embeddings
```
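For reference, the NVD fetch can be approximated with NIST's public CVE API 2.0. This sketch only builds the request URL; the script's actual internals may differ, and `nvd_url` is a name invented here for illustration:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Public NVD CVE API 2.0 endpoint (real); parameter names follow NVD's docs.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_url(days=90, severity="CRITICAL"):
    """Build a query URL for CVEs published in the last `days` days."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = {
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
        "cvssV3Severity": severity,
    }
    return f"{NVD_API}?{urlencode(params)}"

print(nvd_url(90, "CRITICAL"))
```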
## Enable Semantic Search

Requires Ollama with `nomic-embed-text`:

```bash
ollama pull nomic-embed-text
python scripts/fetch_sources.py all --embeddings
python examples/search_demo.py "authentication bypass" --mode hybrid
```
## Why Not Just Use RAG?
Most RAG setups do one thing: chunk documents, embed them, vector search. That works until you need an exact CVE number, a specific NIST control ID, or a CWE by name. Vector search alone misses exact matches.
Grimoire runs both:
- FTS5 (BM25) for keyword precision — finds "CWE-89" when you search "CWE-89"
- Semantic embeddings (cosine similarity) for conceptual recall — finds SQL injection variants when you search "database manipulation"
- Hybrid mode combines both with configurable weighting (default 40/60 keyword/semantic)
Everything lives in a single SQLite file. No Postgres, no Pinecone, no cloud anything.
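The exact-match half needs nothing beyond SQLite itself. A self-contained sketch (illustrative table and column names, not Grimoire's real schema) of FTS5 returning a specific identifier that embedding-only retrieval can rank poorly:

```python
import sqlite3

# In-memory FTS5 index with two sample CWE entries.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE docs USING fts5(title, content)")
db.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("CWE-89", "Improper Neutralization of Special Elements used in an SQL Command"),
        ("CWE-79", "Improper Neutralization of Input During Web Page Generation"),
    ],
)

# A quoted phrase query matches the exact identifier; bm25() ranks results
# (lower score = better match in FTS5).
rows = db.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
    ('"CWE-89"',),
).fetchall()
print(rows)  # only the CWE-89 document matches
```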
## Security Knowledge Retrieval

Grimoire is built to make security retrieval fast and verifiable:

- Auto-ingest CVEs, CWEs, OWASP, and your own Markdown findings with a single command.
- Store structured metadata (severity, categories, tags) so you can filter for "critical auth bypass" or "RCE" instantly.
- Serve the exact same SQLite file over MCP so any LLM agent can cite sources in the middle of a conversation.

```bash
python scripts/fetch_sources.py nvd --days 30 --severity CRITICAL
python scripts/fetch_sources.py cwe --embeddings
python examples/search_demo.py "JWT kid bypass" --mode hybrid --limit 5
```

The search demo shows hybrid BM25 + cosine hits with severity labels, while the MCP server (`pip install -e '.[mcp]'`) exposes identical results to your agents without copying a single document into the prompt.
## Python API

```python
from grimoire.core import Grimoire

g = Grimoire("security_kb.db")

# Add documents
g.add_document(
    source="advisory",
    title="CVE-2024-1234",
    content="Buffer overflow in example library allows RCE via crafted input...",
    severity="critical",
    categories=["buffer-overflow", "RCE"],
)

# Search
results = g.search("buffer overflow", mode="hybrid", limit=10)
for r in results:
    print(f"[{r.score:.3f}] {r.title} ({r.severity})")

# Check index health
status = g.index_status()
health = g.health_check()
```
## Ingest Anything

Built-in ingestors for common security data formats:

```python
# CVE/NVD feeds (API 2.0, 1.1, or JSON array)
from grimoire.ingest.cve import CVEIngestor
CVEIngestor().ingest_to_grimoire(g, "cve_data.json")

# Markdown files (recursively scan directories)
from grimoire.ingest.markdown import MarkdownIngestor
MarkdownIngestor(source_label="audit-findings").ingest_to_grimoire(g, "findings/")

# CSV with column mapping
from grimoire.ingest.csv import CSVIngestor
CSVIngestor(
    source_label="vuln-db",
    column_map={"vuln_name": "title", "details": "content"},
).ingest_to_grimoire(g, "vulns.csv")
```
Add your own by subclassing `BaseIngestor`:

```python
from grimoire.ingest.base import BaseIngestor

class MyIngestor(BaseIngestor):
    source_name = "my-source"

    def ingest(self, path):
        # read_my_data is your own parser for the source format;
        # each yielded dict becomes one indexed document.
        for item in read_my_data(path):
            yield {
                "source": self.source_name,
                "title": item["name"],
                "content": item["description"],
                "severity": item.get("severity"),
                "categories": item.get("tags"),
            }
```
## MCP Integration

Grimoire ships an MCP server so LLM agents can search your knowledge base mid-conversation.

Note: The MCP server is an optional dependency. Install it with:

```bash
pip install -e ".[mcp]"

# Start the server
grimoire-mcp --db security_kb.db
```

Add to Claude Code or Claude Desktop:

```json
{
  "mcpServers": {
    "grimoire": {
      "command": "grimoire-mcp",
      "args": ["--db", "/path/to/security_kb.db"]
    }
  }
}
```
| Tool | What it does |
|---|---|
| `grimoire_search` | Keyword, semantic, or hybrid search with severity/source filters |
| `grimoire_index_status` | Document count, embedding coverage, sources, last update |
| `grimoire_quality` | Health check; optionally test a query for result quality |
## Configuration

```yaml
database:
  path: grimoire.db

ollama:
  url: http://localhost:11434
  model: nomic-embed-text

search:
  default_mode: hybrid
  semantic_weight: 0.6   # 60% semantic, 40% keyword
  default_limit: 20
  min_similarity: 0.3

quality:
  min_cases: 5
  gate_on_missing_eval: false
```
## Search Algorithm

- FTS5 keyword — BM25 on title, content, and categories
- Semantic — cosine similarity between query and document embeddings (via Ollama `nomic-embed-text`)
- Score fusion — `hybrid = (0.4 * normalized_bm25) + (0.6 * cosine_sim)`
- Dedup — merge by document ID, sum scores
- Filter — min score, severity, source, max results
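The fusion and dedup steps can be sketched in a few lines. Min-max normalization of the raw BM25 scores is an assumption here, and `fuse` is an illustrative helper, not Grimoire's actual function:

```python
def fuse(bm25_scores, cosine_scores, semantic_weight=0.6):
    """Blend keyword and semantic scores per document ID.

    bm25_scores / cosine_scores: dicts mapping doc_id -> raw score.
    BM25 is min-max normalized to [0, 1] (assumed; the real implementation
    may normalize differently). Cosine similarity is used as-is.
    """
    if bm25_scores:
        lo, hi = min(bm25_scores.values()), max(bm25_scores.values())
        span = (hi - lo) or 1.0
        bm25_norm = {d: (s - lo) / span for d, s in bm25_scores.items()}
    else:
        bm25_norm = {}

    keyword_weight = 1.0 - semantic_weight
    fused = {}
    # Dedup: a document found by both searches gets the weighted sum.
    for doc_id in set(bm25_norm) | set(cosine_scores):
        fused[doc_id] = (
            keyword_weight * bm25_norm.get(doc_id, 0.0)
            + semantic_weight * cosine_scores.get(doc_id, 0.0)
        )
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# doc2 appears in both result sets, so it outranks the pure keyword hit.
ranked = fuse({"doc1": 8.0, "doc2": 2.0}, {"doc2": 0.9, "doc3": 0.5})
```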
## Requirements

- Python 3.10+
- SQLite with FTS5 (included in Python's `sqlite3`)
- Ollama + `nomic-embed-text` (only needed for semantic/hybrid search — keyword works without it)
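You can confirm the FTS5 requirement is met before indexing anything. This is a hypothetical helper, not part of Grimoire:

```python
import sqlite3

def has_fts5():
    # Creating a throwaway virtual table is the most direct capability check:
    # it fails with OperationalError if SQLite was built without FTS5.
    try:
        with sqlite3.connect(":memory:") as db:
            db.execute("CREATE VIRTUAL TABLE fts5_probe USING fts5(x)")
        return True
    except sqlite3.OperationalError:
        return False

print(has_fts5())
```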
## Development

```bash
pip install -e ".[dev]"
pytest
```
## Threat Model
In scope — what Grimoire defends against:
- Knowledge staleness — auto-fetch from NVD, MITRE CWE, and OWASP keeps the index current; agents reference live CVE data, not training-cutoff snapshots
- Retrieval hallucination — hybrid search (BM25 + semantic) returns sourced, scored documents with provenance metadata; the agent cites indexed material, not confabulated details
- Exact-match failure — FTS5 keyword search guarantees that queries for specific identifiers (CVE-2024-1234, CWE-89) return exact matches, unlike pure vector search which can miss or rank them poorly
- Data exfiltration via cloud RAG — the entire pipeline runs offline against a local SQLite file; no document content is sent to external embedding APIs or vector databases
Out of scope — what Grimoire intentionally does not defend against:
- Poisoned source data — if upstream NVD or CWE feeds contain inaccurate information, Grimoire indexes it as-is; there is no cross-validation of ingested content
- Embedding model compromise — semantic search trusts the local Ollama embedding model; adversarial inputs crafted to manipulate `nomic-embed-text` could influence ranking
- Access control on the knowledge base — the SQLite file and MCP server have no authentication; any process with filesystem or MCP access can query the full index
- Agent misuse of results — Grimoire returns relevant documents, but the consuming LLM may still misinterpret, selectively quote, or ignore them
## Architecture

```mermaid
flowchart TB
    subgraph Data Sources
        NVD[NVD / CVE Feeds]
        CWE[MITRE CWE Catalog]
        OWASP[OWASP Top 10]
        Custom[Markdown / CSV\nAudit Findings]
    end

    NVD --> Ingest
    CWE --> Ingest
    OWASP --> Ingest
    Custom --> Ingest

    subgraph Grimoire Core
        Ingest[Ingestor Pipeline]
        Ingest -->|documents| DB[(SQLite DB)]
        DB -->|FTS5 index| FTS[BM25 Keyword Search]
        DB -->|embedding vectors| Sem[Cosine Semantic Search]
        Ollama[Local Ollama\nnomic-embed-text] -->|embeddings| DB
        FTS --> Fusion[Score Fusion\n40% keyword + 60% semantic]
        Sem --> Fusion
        Fusion --> Results[Ranked Results\nwith provenance]
    end

    Results --> API[Python API\nGrimoire.search]
    Results --> MCP[MCP Server\ngrimoire_search\ngrimoire_status\ngrimoire_quality]
    MCP --> Agent([LLM Agent\ne.g. Claude Code])
    API --> Scripts([Scripts / Pipelines])
```
## Author
Tanner Nicol — tannner.com · GitHub · LinkedIn
## License
MIT