
# Prompt Cleaner (MCP Server)

TypeScript MCP server exposing a prompt-cleaning tool and health checks. All prompts route through `cleaner`, with secret redaction, structured schemas, and client-friendly output normalization.
## Features

- Tools
  - `health-ping`: liveness probe returning `{ ok: true }`.
  - `cleaner`: cleans a raw prompt; returns structured JSON with a `retouched` string plus `notes`, `openQuestions`, `risks`, and `redactions`.
- Secret redaction: sensitive patterns are scrubbed from logs and outputs in `src/redact.ts`.
- Output normalization: `src/server.ts` converts content with `type: "json"` to plain text for clients that reject JSON content types.
- Configurable: LLM base URL, API key, model, timeout, log level; optional local-only enforcement.
- Deterministic model policy: a single model via `LLM_MODEL`; no dynamic model selection/listing by default.
## Requirements

- Node.js >= 20
## Install & Build

```bash
npm install
npm run build
```
## Run

Dev (stdio server):

```bash
npm run dev
```
## Inspector (Debugging)

Use the MCP Inspector to exercise tools over stdio:

```bash
npm run inspect
```
## Environment

Configure via `.env` or environment variables:

- `LLM_API_BASE` (string, default `http://localhost:1234/v1`): OpenAI-compatible base URL.
- `LLM_API_KEY` (string, optional): Bearer token for the API.
- `LLM_MODEL` (string, default `open/ai-gpt-oss-20b`): model identifier sent to the API.
- `LLM_TIMEOUT_MS` (number, default `60000`): request timeout.
- `LOG_LEVEL` (`error|warn|info|debug`, default `info`): log verbosity (logs JSON to stderr).
- `ENFORCE_LOCAL_API` (`true|false`, default `false`): if `true`, only allow localhost APIs.
- `LLM_MAX_RETRIES` (number, default `1`): retry count for retryable HTTP/network errors.
- `RETOUCH_CONTENT_MAX_RETRIES` (number, default `1`): retries when the cleaner returns non-JSON content.
- `LLM_BACKOFF_MS` (number, default `250`): initial backoff delay in milliseconds.
- `LLM_BACKOFF_JITTER` (`0..1`, default `0.2`): jitter factor applied to backoff.
Example `.env`:

```env
LLM_API_BASE=http://localhost:1234/v1
LLM_MODEL=open/ai-gpt-oss-20b
LLM_API_KEY=sk-xxxxx
LLM_TIMEOUT_MS=60000
LOG_LEVEL=info
ENFORCE_LOCAL_API=false
LLM_MAX_RETRIES=1
RETOUCH_CONTENT_MAX_RETRIES=1
LLM_BACKOFF_MS=250
LLM_BACKOFF_JITTER=0.2
```
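As a sketch of how `src/config.ts` might validate these variables with Zod (the actual schema may differ; the defaults mirror the table above):

```ts
import { z } from "zod";

// Hypothetical schema mirroring the documented variables; src/config.ts may differ.
const EnvSchema = z.object({
  LLM_API_BASE: z.string().url().default("http://localhost:1234/v1"),
  LLM_API_KEY: z.string().optional(),
  LLM_MODEL: z.string().default("open/ai-gpt-oss-20b"),
  LLM_TIMEOUT_MS: z.coerce.number().int().positive().default(60000),
  LOG_LEVEL: z.enum(["error", "warn", "info", "debug"]).default("info"),
  ENFORCE_LOCAL_API: z
    .enum(["true", "false"])
    .default("false")
    .transform((v) => v === "true"),
  LLM_MAX_RETRIES: z.coerce.number().int().min(0).default(1),
  RETOUCH_CONTENT_MAX_RETRIES: z.coerce.number().int().min(0).default(1),
  LLM_BACKOFF_MS: z.coerce.number().int().min(0).default(250),
  LLM_BACKOFF_JITTER: z.coerce.number().min(0).max(1).default(0.2),
});

// Unknown keys in process.env are stripped; invalid values fail fast at startup.
export const config = EnvSchema.parse(process.env);
```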
## Tools (API Contracts)

All tools follow MCP Tool semantics. Content is returned as `[{ type: "json", json: <payload> }]` and normalized to `type: "text"` by the server for clients that require it.

- `health-ping`
  - Input: `{}`
  - Output: `{ ok: true }`
- `cleaner`
  - Input: `{ prompt: string, mode?: "code"|"general", temperature?: number }`
  - Output: `{ retouched: string, notes?: string[], openQuestions?: string[], risks?: string[], redactions?: "[REDACTED]"[] }`
  - Behavior: applies the system prompt from `prompts/cleaner.md`, calls the configured LLM, extracts the first JSON object, validates it with Zod, and redacts secrets.
- `sanitize-text` (alias of `cleaner`)
  - Same input/output schema and behavior as `cleaner`. Exposed for agents that keyword-match on "sanitize", "PII", or "redact".
- `normalize-prompt` (alias of `cleaner`)
  - Same input/output schema and behavior as `cleaner`. Exposed for agents that keyword-match on "normalize", "format", or "preprocess".
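For example, calling `cleaner` from a standalone script with the official TypeScript SDK might look like this (client name and prompt are illustrative; SDK entry points can vary slightly by version):

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built stdio server and call the cleaner tool once.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/absolute/path/to/prompt-cleaner/dist/server.js"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "cleaner",
  arguments: { prompt: "pls clean this prmpt, my key is sk-12345", mode: "general" },
});

// The payload arrives as text (normalized from JSON); parse it to inspect fields.
console.log(result.content);
await client.close();
```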
### Per-call API key override

`src/llm.ts` accepts `apiKey` in its options for per-call overrides, falling back to `LLM_API_KEY` when it is not provided.
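The fallback amounts to something like the following (option and function names here are hypothetical; see `src/llm.ts` for the actual shape):

```ts
// Hypothetical option shape; the real export in src/llm.ts may differ.
interface LlmCallOptions {
  apiKey?: string; // per-call override
}

function resolveApiKey(opts: LlmCallOptions): string | undefined {
  // A per-call key wins; otherwise fall back to the environment.
  return opts.apiKey ?? process.env.LLM_API_KEY;
}
```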
## Project Structure

- `src/server.ts`: MCP server wiring, tool listing/calls, output normalization, logging.
- `src/tools.ts`: tool registry and dispatch.
- `src/cleaner.ts`: cleaner pipeline and JSON extraction/validation.
- `src/llm.ts`: LLM client with timeout, retry, and error normalization.
- `src/redact.ts`: secret redaction utilities.
- `src/config.ts`: environment configuration and validation.
- `test/*.test.ts`: Vitest suite covering tools, shapes, cleaner, and health.
## Testing

```bash
npm test
```
## Design decisions

- Single-model policy: uses `LLM_MODEL` from the environment; there is no model listing/selection tool, which keeps behavior deterministic and reduces surface area.
- Output normalization: `src/server.ts` converts `json` content to `text` for clients that reject JSON.
- Secret redaction: `src/redact.ts` scrubs sensitive tokens from logs and outputs.
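The normalization step is conceptually simple; a minimal sketch (the real logic lives in `src/server.ts`):

```ts
type ContentItem =
  | { type: "json"; json: unknown }
  | { type: "text"; text: string };

// Convert json items to text so strict clients accept the response.
function normalizeContent(items: ContentItem[]): ContentItem[] {
  return items.map((item) =>
    item.type === "json"
      ? { type: "text", text: JSON.stringify(item.json) }
      : item,
  );
}
```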
## Troubleshooting

- LLM timeout: increase `LLM_TIMEOUT_MS`; check network reachability to `LLM_API_BASE`.
- Non-JSON from cleaner: the server retries up to `RETOUCH_CONTENT_MAX_RETRIES` times. If the problem persists, reduce `temperature` or ensure the configured model adheres to the output contract.
- HTTP 5xx from LLM: automatic retries up to `LLM_MAX_RETRIES` with exponential backoff (`LLM_BACKOFF_MS`, `LLM_BACKOFF_JITTER`); see the sketch after this list.
- Local API enforcement error: if `ENFORCE_LOCAL_API=true`, `LLM_API_BASE` must point to localhost.
- Secrets in logs/outputs: redaction runs automatically; if you see leaked tokens, update the patterns in `src/redact.ts`.
## Windsurf (example)

Add an MCP server in Windsurf settings, pointing to the built stdio server:

```json
{
  "mcpServers": {
    "prompt-cleaner": {
      "command": "node",
      "args": ["/absolute/path/to/prompt-cleaner/dist/server.js"],
      "env": {
        "LLM_API_BASE": "http://localhost:1234/v1",
        "LLM_API_KEY": "sk-xxxxx",
        "LLM_MODEL": "open/ai-gpt-oss-20b",
        "LLM_TIMEOUT_MS": "60000",
        "LOG_LEVEL": "info",
        "ENFORCE_LOCAL_API": "false",
        "LLM_MAX_RETRIES": "1",
        "RETOUCH_CONTENT_MAX_RETRIES": "1",
        "LLM_BACKOFF_MS": "250",
        "LLM_BACKOFF_JITTER": "0.2"
      }
    }
  }
}
```
Usage:

- In a chat, ask the agent to use `cleaner` with your raw prompt.
- Or invoke tools from the agent UI if your setup exposes them.
## LLM API compatibility

- Works with OpenAI-compatible Chat Completions APIs (e.g., the LM Studio local server) that expose `/v1/chat/completions`.
- Configure via `LLM_API_BASE` and optional `LLM_API_KEY`. Use `ENFORCE_LOCAL_API=true` to restrict requests to localhost during development.
- Set `LLM_MODEL` to the provider-specific model identifier. This server follows a single-model policy for determinism and reproducibility.
- Providers must return valid JSON; the cleaner retries a limited number of times when content is not strictly JSON.
## Links

- Model Context Protocol (spec): https://modelcontextprotocol.io
- Cleaner system prompt: `prompts/cleaner.md`
## Notes

- Logs are emitted to stderr as JSON lines to avoid interfering with MCP stdio.
- Some clients reject `json` content types; this server normalizes them to `text` automatically.
## Security

- Secrets are scrubbed from logs and cleaner outputs by `src/redact.ts`.
- `ENFORCE_LOCAL_API=true` restricts usage to local API endpoints.
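Redaction is pattern-based; a simplified sketch of the idea (illustrative patterns only; the authoritative list lives in `src/redact.ts`):

```ts
// Illustrative patterns; src/redact.ts defines the real set.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{8,}/g,       // OpenAI-style API keys
  /Bearer\s+[A-Za-z0-9._-]+/g, // bearer tokens in headers or logs
];

function redact(text: string): string {
  return SECRET_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text,
  );
}
```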