# query-sanitizer-mcp
A lightweight MCP middleware that sits between your prompts and external LLMs, automatically redacting sensitive data using a local model (Ollama / LM Studio) before anything leaves your machine.
```
[Your Prompt] → sanitize_query() → [Safe Prompt] → External LLM → [Response] → restore_response() → [You]
```
## Why
Every time you paste internal context into Claude, ChatGPT, or any cloud LLM, you risk leaking:
- Employee names & emails
- Internal project codenames
- Infrastructure details (IPs, hostnames, DB names)
- API keys & credentials
- Company names, deal sizes, legal references
This MCP server intercepts that text, runs it through a local DLP model, replaces sensitive tokens with typed placeholders (`[ORG_NAME_1]`, `[PII_NAME_1]`, etc.), and restores them in the response — so you read natural text while the cloud LLM never sees the real values.
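A quick illustration of the idea (all values here are made up):

```
You write:     "Email john.smith@acme.com about the Phoenix rollout on 10.0.4.17"
The LLM sees:  "Email [PII_NAME_1] about the [PROJECT_NAME_1] rollout on [INFRA_1]"
You get back:  the LLM's reply with the real name, codename, and IP swapped back in
```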
## Tools
| Tool | Description |
|---|---|
| `sanitize_query(text)` | Redact sensitive data. Returns safe text + `san_id` for later restore. |
| `restore_response(text, san_id)` | Swap placeholders back to originals using the ledger. |
| `view_ledger(last_n)` | Show recent sanitization history. |
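As a rough sketch of how the tools fit together (the argument and result fields below are assumed from the descriptions above, not a documented schema):

```json
[
  {
    "call": "sanitize_query",
    "arguments": { "text": "Summarize the Phoenix incident on host 10.0.4.17" },
    "result": {
      "sanitized_text": "Summarize the [PROJECT_NAME_1] incident on host [INFRA_1]",
      "san_id": "san_0042"
    }
  },
  {
    "call": "restore_response",
    "arguments": { "text": "<the external LLM's reply, still containing the placeholders>", "san_id": "san_0042" },
    "result": "<the same reply with Phoenix and 10.0.4.17 restored>"
  }
]
```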
## Setup
Requirements: Python 3.10+, Ollama or LM Studio running locally.
```bash
git clone https://github.com/vidoluco/query-sanitizer-mcp
cd query-sanitizer-mcp
python3.12 -m venv .venv
.venv/bin/pip install fastmcp
```
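Once a local model is running (next step), you can optionally sanity-check the install by launching the server directly with the venv's Python — assuming `server.py` is a standard FastMCP stdio server, it should start and wait quietly for an MCP client (stop it with Ctrl-C):

```bash
.venv/bin/python server.py
```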
### Start your local model
```bash
# Ollama
ollama pull llama3.2
ollama serve

# LM Studio — just load a model and start the local server on port 1234
```
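Before wiring this into Claude, it can help to confirm that the endpoint the sanitizer will call is reachable. A quick check against Ollama's OpenAI-compatible API (swap the port and model name for LM Studio) should return a chat completion:

```bash
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "ping"}]}'
```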
### Add to Claude Code
Merge into `~/.claude/settings.json`:
```json
{
  "mcpServers": {
    "query-sanitizer": {
      "command": "/path/to/query-sanitizer-mcp/.venv/bin/python",
      "args": ["/path/to/query-sanitizer-mcp/server.py"],
      "env": {
        "SANITIZER_MODEL_URL": "http://localhost:11434/v1/chat/completions",
        "SANITIZER_MODEL_NAME": "llama3.2"
      }
    }
  }
}
```
For LM Studio, change the env vars:

```json
"SANITIZER_MODEL_URL": "http://localhost:1234/v1/chat/completions",
"SANITIZER_MODEL_NAME": "your-loaded-model-name"
```
## Configuration
Create `.sanitizer-ledger/config.json` to boost detection accuracy for your org:
```json
{
  "org_names": ["Acme Corp", "Acme"],
  "org_domains": ["acme.com", "acme.internal"],
  "project_codenames": ["Phoenix", "Titan"],
  "known_employees": ["John Smith"],
  "internal_ip_ranges": ["10.0.0.0/8", "172.16.0.0/12"],
  "always_allow": ["Google Cloud", "Kubernetes", "BigQuery"]
}
```
Or run the included CLI:
```bash
python scripts/ledger.py init-config
```
## How it works
The local model receives a strict DLP system prompt and returns JSON with:
- `sanitized_text` — the safe version of your prompt
- `mappings` — a list of what was replaced and why
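For illustration only (the exact shape of a mapping entry isn't specified in this README, so the field names below are assumptions), the model's response might look like:

```json
{
  "sanitized_text": "Email [PII_NAME_1] about the [PROJECT_NAME_1] rollout",
  "mappings": [
    { "placeholder": "[PII_NAME_1]", "original": "john.smith@acme.com", "category": "PII_NAME" },
    { "placeholder": "[PROJECT_NAME_1]", "original": "Phoenix", "category": "PROJECT_NAME" }
  ]
}
```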
A ledger entry (`.sanitizer-ledger/ledger.jsonl`) is written per operation, enabling the restore step. Credentials are blocked entirely — never stored, never passed through.
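The ledger schema isn't spelled out here either; conceptually, each `ledger.jsonl` line just has to tie a `san_id` to its mappings so `restore_response` can reverse them later, along the lines of (illustrative fields):

```json
{"san_id": "san_0042", "timestamp": "2025-01-01T12:00:00Z", "mappings": [{"placeholder": "[PROJECT_NAME_1]", "original": "Phoenix"}]}
```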
## Redaction categories
| Category | Examples | Severity |
|---|---|---|
| `CREDENTIAL` | API keys, tokens, passwords | CRITICAL — blocked |
| `INTERNAL_URL` | Intranet URLs, staging endpoints | CRITICAL |
| `PII_NAME` | Names, emails, phone numbers | HIGH |
| `ORG_NAME` | Company / subsidiary names | HIGH |
| `PROJECT_NAME` | Internal codenames | MEDIUM |
| `INFRA` | IPs, hostnames, DB names | MEDIUM |
| `FINANCIAL` | Revenue, deal sizes, budgets | MEDIUM |
| `LEGAL` | Contract terms, case numbers | HIGH |
## Contributing
This is an early proof of concept — feedback and contributions very welcome.
Ideas for where this could go:
- [ ] Auto-suggest ledger config entries from detected patterns
- [ ] Claude Code hook integration (pre-prompt hook that auto-sanitizes)
- [ ] Confidence threshold config
- [ ] Batch / bulk sanitization mode
- [ ] Support for code block scanning (inline secrets, import paths)
- [ ] Web UI for ledger review
Open an issue or send a PR.
## License
MIT