ARSR MCP Server
Adaptive Retrieval-Augmented Self-Refinement — a closed-loop MCP server that lets LLMs iteratively verify and correct their own claims using uncertainty-guided retrieval.
What it does
Unlike one-shot RAG (retrieve → generate), ARSR runs a refinement loop:
Generate draft → Decompose claims → Score uncertainty
      ↑                                       ↓
Decide stop ← Revise with evidence ← Retrieve for low-confidence claims
The key insight: retrieval is guided by uncertainty. Only claims the model is unsure about trigger evidence fetching, and the queries are adversarial — designed to disprove the claim, not just confirm it.
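For a concrete (hypothetical) illustration of the adversarial strategy: given a claim such as "Tesla was founded in 2003", the inner LLM generates search queries that try to break the claim rather than restate it. The queries below are illustrative only; the real ones are produced by the inner model at runtime.

```ts
// Hypothetical output of adversarial query generation (illustrative, not actual server output).
const claim = "Tesla was founded in 2003";

const adversarialQueries = [
  "Was Tesla Motors incorporated before 2003?", // hunts for a contradicting date
  "Tesla founding date dispute",                // looks for conflicting accounts
];

const confirmatoryQueries = [
  "Tesla founded 2003", // would mostly surface sources that already agree
];
```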
Architecture
The server exposes 6 MCP tools. The outer LLM (Claude, GPT, etc.) orchestrates the loop by calling them in sequence:
| # | Tool | Purpose |
|---|---|---|
| 1 | arsr_draft_response | Generate initial candidate answer (returns is_refusal flag) |
| 2 | arsr_decompose_claims | Split into atomic verifiable claims |
| 3 | arsr_score_uncertainty | Estimate confidence via semantic entropy |
| 4 | arsr_retrieve_evidence | Web search for low-confidence claims |
| 5 | arsr_revise_response | Rewrite draft with evidence |
| 6 | arsr_should_continue | Decide: iterate or finalize |
Inner LLM: Tools 1-5 use Claude Haiku internally for intelligence (query generation, claim extraction, evidence evaluation). This keeps costs low while the outer model handles orchestration.
Refusal detection: arsr_draft_response returns a structured is_refusal flag (classified by the inner LLM) indicating whether the draft is a non-answer. When is_refusal is true, downstream tools (decompose, revise) pivot to extracting claims from the original query and building an answer from retrieved evidence instead of trying to refine a refusal.
Web Search: arsr_retrieve_evidence uses the Anthropic API's built-in web search tool — no external search API keys needed.
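As a rough sketch of what the orchestrator sees, a draft result can be thought of as the shape below. Only the draft and is_refusal fields are documented in this README's walkthrough; anything else is an assumption.

```ts
// Sketch of the draft-tool result the walkthrough below relies on.
// Only `draft` and `is_refusal` are documented here; other fields are not guaranteed.
interface DraftResult {
  draft: string;       // candidate answer text
  is_refusal: boolean; // true when the inner LLM declined to answer
}

// With is_refusal = true, arsr_decompose_claims extracts claims from the original
// query and arsr_revise_response builds an answer from retrieved evidence,
// instead of trying to refine the refusal text.
const refusalExample: DraftResult = {
  draft: "I can't answer that without more context.",
  is_refusal: true,
};
```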
Setup
Prerequisites
- Node.js 18+
- An Anthropic API key
Install & Build
cd arsr-mcp-server
npm install
npm run build
Environment
export ANTHROPIC_API_KEY="sk-ant-..."
Run
stdio mode (for Claude Desktop, Cursor, etc.):
npm start
HTTP mode (for remote access):
TRANSPORT=http PORT=3001 npm start
Claude Desktop Configuration
Add to your claude_desktop_config.json:
Published package (via npx):
{
  "mcpServers": {
    "arsr": {
      "command": "npx",
      "args": ["@jayarrowz/mcp-arsr"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "ARSR_MAX_ITERATIONS": "3",
        "ARSR_ENTROPY_SAMPLES": "3",
        "ARSR_RETRIEVAL_STRATEGY": "adversarial",
        "ARSR_INNER_MODEL": "claude-haiku-4-5-20251001"
      }
    }
  }
}
Local build:
{
  "mcpServers": {
    "arsr": {
      "command": "node",
      "args": ["/path/to/arsr-mcp-server/dist/src/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "ARSR_MAX_ITERATIONS": "3",
        "ARSR_ENTROPY_SAMPLES": "3",
        "ARSR_RETRIEVAL_STRATEGY": "adversarial",
        "ARSR_INNER_MODEL": "claude-haiku-4-5-20251001"
      }
    }
  }
}
How the outer LLM uses it
The orchestrating LLM calls the tools in sequence:
1. draft = arsr_draft_response({ query: "When was Tesla founded?" })
// draft.is_refusal indicates if the inner LLM refused to answer
2. claims = arsr_decompose_claims({ draft: draft.draft, original_query: "When was Tesla founded?", is_refusal: draft.is_refusal })
3. scored = arsr_score_uncertainty({ claims: claims.claims })
4. low = scored.scored.filter(c => c.confidence < 0.85)
5. evidence = arsr_retrieve_evidence({ claims_to_check: low })
6. revised = arsr_revise_response({ draft: draft.draft, evidence: evidence.evidence, scored: scored.scored, original_query: "When was Tesla founded?", is_refusal: draft.is_refusal })
7. decision = arsr_should_continue({ iteration: 1, scored: revised_scores }) // revised_scores = scores from re-running steps 2-3 on the revised text
→ if "continue": go back to step 2 with the revised text
→ if "stop": return revised.revised to the user
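A minimal end-to-end sketch of that loop from a standalone MCP client, assuming the @modelcontextprotocol/sdk TypeScript client and that each tool returns its JSON payload as a single text content item. Field names such as decision.decision follow the walkthrough above and are assumptions, not a documented schema.

```ts
// Sketch only: result parsing and field names are assumptions based on the walkthrough above.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Helper: call a tool and parse its first text content item as JSON.
async function call(client: Client, name: string, args: Record<string, unknown>) {
  const res = await client.callTool({ name, arguments: args });
  const item = (res.content as Array<{ type: string; text?: string }>).find((c) => c.type === "text");
  return JSON.parse(item?.text ?? "{}");
}

export async function refine(query: string): Promise<string> {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/path/to/arsr-mcp-server/dist/src/index.js"], // adjust to your install
    env: { ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY ?? "" },
  });
  const client = new Client({ name: "arsr-demo", version: "0.1.0" });
  await client.connect(transport);

  const draft = await call(client, "arsr_draft_response", { query });
  let text: string = draft.draft;

  for (let iteration = 1; ; iteration++) {
    const claims = await call(client, "arsr_decompose_claims", {
      draft: text, original_query: query, is_refusal: draft.is_refusal,
    });
    const scored = await call(client, "arsr_score_uncertainty", { claims: claims.claims });
    const low = scored.scored.filter((c: { confidence: number }) => c.confidence < 0.85);
    const evidence = await call(client, "arsr_retrieve_evidence", { claims_to_check: low });
    const revised = await call(client, "arsr_revise_response", {
      draft: text, evidence: evidence.evidence, scored: scored.scored,
      original_query: query, is_refusal: draft.is_refusal,
    });
    text = revised.revised;

    // The walkthrough re-scores the revised text before deciding; this sketch
    // reuses the pre-revision scores to stay short.
    const decision = await call(client, "arsr_should_continue", { iteration, scored: scored.scored });
    if (decision.decision !== "continue") break; // "decision" field name is an assumption
  }

  await client.close();
  return text;
}
```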
Configuration
All settings can be overridden via environment variables, falling back to defaults if unset:
| Setting | Env var | Default | Description |
|---|---|---|---|
| max_iterations | ARSR_MAX_ITERATIONS | 3 | Budget limit for refinement loops |
| confidence_threshold | ARSR_CONFIDENCE_THRESHOLD | 0.85 | Claims above this skip retrieval |
| entropy_samples | ARSR_ENTROPY_SAMPLES | 3 | Rephrasings for semantic entropy |
| retrieval_strategy | ARSR_RETRIEVAL_STRATEGY | adversarial | adversarial, confirmatory, or balanced |
| inner_model | ARSR_INNER_MODEL | claude-haiku-4-5-20251001 | Model for internal intelligence |
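A sketch of how these could fall back to defaults, assuming a plain process.env lookup (the actual config module may be structured differently):

```ts
// Illustrative only; the real config loader may differ.
const env = process.env;

export const config = {
  maxIterations: Number(env.ARSR_MAX_ITERATIONS ?? "3"),
  confidenceThreshold: Number(env.ARSR_CONFIDENCE_THRESHOLD ?? "0.85"),
  entropySamples: Number(env.ARSR_ENTROPY_SAMPLES ?? "3"),
  retrievalStrategy: (env.ARSR_RETRIEVAL_STRATEGY ?? "adversarial") as
    "adversarial" | "confirmatory" | "balanced",
  innerModel: env.ARSR_INNER_MODEL ?? "claude-haiku-4-5-20251001",
};
```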
Cost estimate
Per refinement loop iteration (assuming ~5 claims, 3 low-confidence):
- Inner LLM calls: ~6-10 Haiku calls ≈ $0.002-0.005
- Web searches: 6-9 queries (included in the API; no external search keys)
- Typical total for 2 iterations: < $0.02
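As a rough check of those numbers: at the upper end of the range, two iterations cost about 2 × $0.005 = $0.01 in Haiku calls, leaving headroom under the $0.02 figure since the web searches are counted as part of the same API usage.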
Images
Before:
<img width="956" height="977" alt="claude_596yXSQSOh" src="https://github.com/user-attachments/assets/95771a10-8a29-4900-b128-67af3cbc05bd" />
After:
<img width="856" height="866" alt="claude_UagHKfgqDz" src="https://github.com/user-attachments/assets/340e8011-4c2d-4e95-9c4d-43a55e87b7be" />
<img width="800" height="342" alt="claude_WZGa6xqUip" src="https://github.com/user-attachments/assets/dbc364c2-1925-427a-a979-cd1fade38f1d" />
<img width="777" height="578" alt="claude_KedQnUoSue" src="https://github.com/user-attachments/assets/0e57f578-a9c2-4325-9b6e-61d7a42f3ee8" />
License
MIT