Cloudflare AutoRAG MCP Server

A Model Context Protocol (MCP) server that provides search capabilities for Cloudflare AutoRAG instances. This server enables AI assistants like Claude to directly search and query your AutoRAG knowledge base using three distinct search methods.

Features

  • 🔍 Basic Search - Vector similarity search without query rewriting or answer generation
  • ✏️ Rewrite Search - Vector search with AI query rewriting but no answer generation (returns document chunks only)
  • 🤖 AI Search - Full AI-powered search with optional AI response and configurable query rewriting
  • ⚙️ Configurable Parameters - Support for score_threshold (default: 0.5) and max_num_results (1-50, default: 10)
  • 📄 Pagination Support - AI search supports cursor-based pagination for large result sets (v1.2.0+)
  • 🏢 Multi-AutoRAG Support - Manage and search across multiple AutoRAG instances (v2.0.0+)
  • 🌐 Remote Deployment - Runs on Cloudflare Workers for scalability
  • 🔗 MCP Compatible - Works with Claude Desktop and other MCP clients

Tools

autorag_basic_search

Performs a basic vector similarity search in your Cloudflare AutoRAG index without AI query rewriting or answer generation. Returns raw document chunks only.

Parameters:

  • query (string, required) - The search query text (max 10,000 characters)
  • score_threshold (number, optional) - Minimum similarity score threshold (0.0-1.0, default: 0.5)
  • max_num_results (number, optional) - Maximum number of results to return (1-50, default: 10)
  • autorag_name (string, optional) - Name of the AutoRAG instance to use (defaults to configured default)
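As a concrete illustration, a raw JSON-RPC 2.0 `tools/call` request for this tool looks roughly like the following (argument values are examples, not defaults):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "autorag_basic_search",
    "arguments": {
      "query": "machine learning",
      "score_threshold": 0.7,
      "max_num_results": 5
    }
  }
}
```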

autorag_rewrite_search

Performs a vector search with AI query rewriting but no answer generation. Uses Cloudflare's search() method with configurable rewrite_query for better semantic matching and returns only document chunks.

Parameters:

  • query (string, required) - The search query text (max 10,000 characters)
  • score_threshold (number, optional) - Minimum similarity score threshold (0.0-1.0, default: 0.5)
  • max_num_results (number, optional) - Maximum number of results to return (1-50, default: 10)
  • rewrite_query (boolean, optional) - Whether to rewrite query for better matching (default: true)
  • autorag_name (string, optional) - Name of the AutoRAG instance to use (defaults to configured default)

autorag_ai_search

Performs an AI-powered search using Cloudflare's aiSearch() method. Always returns document chunks; an AI-generated answer is included only when include_ai_response is true. Supports pagination for large result sets.

Parameters:

  • query (string, required) - The search query text (max 10,000 characters)
  • score_threshold (number, optional) - Minimum similarity score threshold (0.0-1.0, default: 0.5)
  • max_num_results (number, optional) - Maximum number of results to return (1-50, default: 10)
  • rewrite_query (boolean, optional) - Whether to rewrite the query for better semantic matching (default: true)
  • include_ai_response (boolean, optional) - Whether to include the AI-generated response in the output (default: false)
  • cursor (string, optional) - Pagination cursor from previous response to fetch next page of results (v1.2.0+)
  • autorag_name (string, optional) - Name of the AutoRAG instance to use (defaults to configured default)

Response includes:

  • data - Array of source document chunks with scores and metadata (always included)
  • response - AI-generated answer based on retrieved documents (only when include_ai_response: true)
  • has_more - Boolean indicating if more results are available
  • next_page - Cursor token for fetching the next page (when has_more is true)
  • nextCursor - MCP-compliant cursor field (mirrors next_page value)

list_autorags (v2.0.0+)

Lists all available AutoRAG instances configured in the server.

Parameters: None

Response includes:

  • autorags - Array of AutoRAG instances with name, description, and is_default flag
  • total - Total number of configured AutoRAG instances
  • default - Name of the default AutoRAG instance
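A minimal sketch of how the comma-separated multi-instance variables could be parsed into this response shape (field names follow the schema above; the actual server implementation may differ, e.g. in how the default instance is chosen):

```typescript
// Sketch: build the list_autorags response from the comma-separated
// AUTORAG_INSTANCES / AUTORAG_DESCRIPTIONS variables. The first
// instance is treated as the default here; the real server may differ.
interface AutoRagInfo {
  name: string;
  description: string;
  is_default: boolean;
}

function listAutorags(instances: string, descriptions: string) {
  const names = instances.split(",").map((s) => s.trim());
  const descs = descriptions.split(",").map((s) => s.trim());
  const autorags: AutoRagInfo[] = names.map((name, i) => ({
    name,
    description: descs[i] ?? "",
    is_default: i === 0,
  }));
  return { autorags, total: autorags.length, default: names[0] };
}
```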

get_current_autorag (v2.0.0+)

Gets information about the currently configured default AutoRAG instance.

Parameters: None

Response includes:

  • current_autorag - Name of the current default AutoRAG instance
  • description - Description of the instance
  • is_default - Always true for this endpoint

Prerequisites

  1. Cloudflare Account with AutoRAG access
  2. AutoRAG Instance - Created and indexed in your Cloudflare account
  3. Wrangler CLI - For deployment (npm install --save-dev wrangler)

Deployment

  1. Clone the repository:

    git clone <repository-url>
    cd cf-autorag-mcp
    
  2. Install dependencies:

    npm install
    
  3. Configure your AutoRAG instance: Edit wrangler.toml and update the configuration:

    For a single AutoRAG instance:

    [vars]
    AUTORAG_NAME = "your-autorag-instance-name"
    

    For multiple AutoRAG instances:

    [vars]
    AUTORAG_INSTANCES = "instance1,instance2,instance3"
    AUTORAG_DESCRIPTIONS = "Description 1,Description 2,Description 3"
    
  4. Deploy to Cloudflare Workers:

    npx wrangler deploy
    

    This will output your Worker URL, which you'll need for the MCP client configuration.

Claude Desktop Configuration

To use this MCP server with Claude Desktop, add the following configuration to your Claude Desktop config file:

macOS

Edit ~/Library/Application Support/Claude/claude_desktop_config.json:

Windows

Edit %APPDATA%\Claude\claude_desktop_config.json:

Configuration

{
  "mcpServers": {
    "cf-autorag-mcp": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://your-worker-url.workers.dev/"
      ]
    }
  }
}

Replace https://your-worker-url.workers.dev/ with your actual deployed Worker URL.

After updating the configuration:

  1. Restart Claude Desktop
  2. You should see the AutoRAG search tools available in your conversation

Configuration

Environment Variables

The server uses the following Cloudflare Worker bindings:

  • AI - Cloudflare AI binding for AutoRAG access (handles all AutoRAG operations)
  • AUTORAG_NAME - Your AutoRAG instance name (for single instance configuration)
  • AUTORAG_INSTANCES - Comma-separated list of AutoRAG instances (for multi-instance configuration)
  • AUTORAG_DESCRIPTIONS - Comma-separated list of descriptions for each instance

Wrangler Configuration

The wrangler.toml file includes:

name = "cf-autorag-mcp"
main = "src/server.ts"
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]

[vars]
# For single AutoRAG instance:
AUTORAG_NAME = "your-autorag-instance-name"

# For multiple AutoRAG instances (v2.0.0+):
# AUTORAG_INSTANCES = "default-autorag,secondary-autorag,specialized-autorag"
# AUTORAG_DESCRIPTIONS = "Main knowledge base,Secondary knowledge base,Specialized documents"

[ai]
binding = "AI"

Note: The VECTORIZE binding is not required. AutoRAG manages its own vector index access internally through the AI binding.
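For orientation, the tool parameters map onto the options object that the AI binding's AutoRAG methods accept. The option names below follow Cloudflare's documented AutoRAG Workers binding, but toBindingOptions is a hypothetical helper for illustration, not necessarily the server's actual code:

```typescript
// Hypothetical mapping of MCP tool arguments onto the options object
// passed to env.AI.autorag(name).search() / .aiSearch(). Defaults
// mirror the ones documented above (threshold 0.5, 10 results,
// query rewriting on).
interface ToolArgs {
  query: string;
  score_threshold?: number;
  max_num_results?: number;
  rewrite_query?: boolean;
}

function toBindingOptions(args: ToolArgs) {
  return {
    query: args.query,
    rewrite_query: args.rewrite_query ?? true,
    max_num_results: args.max_num_results ?? 10,
    ranking_options: { score_threshold: args.score_threshold ?? 0.5 },
  };
}
```

Inside the Worker this would be used along the lines of `await env.AI.autorag("your-autorag-instance-name").search(toBindingOptions(args))`.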

Usage Examples

Once configured with Claude Desktop, you can use the tools like this:

Basic Search (no query rewriting, no AI response):

Search for documents about "machine learning" in my AutoRAG with a minimum score threshold of 0.7

Rewrite Search (AI query rewriting, no AI response):

Use rewrite search to find information about "deployment strategies" with query rewriting enabled

AI Search with Document Chunks Only (default behavior):

Use AI search to find information about "deployment strategies" with max 5 results

AI Search with AI-Generated Response:

Use AI search to find information about "deployment strategies" and include the AI-generated response

Multi-AutoRAG Usage (v2.0.0+):

List all available AutoRAG instances

Search for "security policies" in the secondary-autorag instance

Use AI search in specialized-autorag to find "compliance requirements" with AI response

Important Notes:

  • autorag_basic_search performs pure vector search without any AI enhancements
  • autorag_rewrite_search uses AI query rewriting but returns document chunks only
  • autorag_ai_search by default returns document chunks only (letting the client LLM generate responses), but can optionally include Cloudflare's AI-generated response
  • All tools use a default score threshold of 0.5 if not specified
  • All tools support the same parameter structure for consistent usage
  • Metadata filtering is not supported in Workers bindings - use the REST API if you need filtered queries
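For large result sets, the cursor fields described above can be drained with a small client-side loop. In this sketch, callTool is a hypothetical stand-in for whatever issues the autorag_ai_search call over MCP:

```typescript
// Sketch: fetch every page of an autorag_ai_search result set by
// following next_page cursors until has_more is false. `callTool`
// is a placeholder for the real MCP tools/call round trip.
type Page = { data: unknown[]; has_more: boolean; next_page?: string };

async function collectAllChunks(
  callTool: (cursor?: string) => Promise<Page>,
): Promise<unknown[]> {
  const chunks: unknown[] = [];
  let cursor: string | undefined;
  do {
    const page = await callTool(cursor);
    chunks.push(...page.data);
    cursor = page.has_more ? page.next_page : undefined;
  } while (cursor !== undefined);
  return chunks;
}
```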

Development

Local Development

# Start local development server
npm run dev

# Build for production
npm run build

Project Structure

cf-autorag-mcp/
├── src/
│   └── server.ts          # Main MCP server implementation
├── wrangler.toml          # Cloudflare Workers configuration
├── package.json           # Dependencies and scripts
└── README.md              # This file

Technical Details

  • Protocol: JSON-RPC 2.0 over HTTP
  • Runtime: Cloudflare Workers with Node.js compatibility
  • MCP Version: 2024-11-05
  • Transport: HTTP-based (no streaming)
  • Default Score Threshold: 0.5 for all search tools
  • Parameter Validation: Comprehensive validation with clear error messages
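The constraints documented for the search tools (query length, threshold range, result count) amount to checks along these lines. This is an illustrative sketch of the rules, not the server's actual validation code:

```typescript
// Sketch of the documented parameter constraints: query <= 10,000
// characters, score_threshold in [0.0, 1.0], max_num_results an
// integer in [1, 50]. Returns an error message, or null when valid.
function validateSearchArgs(args: {
  query: string;
  score_threshold?: number;
  max_num_results?: number;
}): string | null {
  if (args.query.length === 0) return "query must not be empty";
  if (args.query.length > 10_000) return "query exceeds 10,000 characters";
  const t = args.score_threshold ?? 0.5;
  if (t < 0 || t > 1) return "score_threshold must be between 0.0 and 1.0";
  const n = args.max_num_results ?? 10;
  if (!Number.isInteger(n) || n < 1 || n > 50)
    return "max_num_results must be an integer between 1 and 50";
  return null;
}
```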

Troubleshooting

Common Issues

  1. "AutoRAG instance not found"

    • Verify your AUTORAG_NAME in wrangler.toml
    • Ensure your AutoRAG instance is properly created and indexed
  2. "MCP server disconnected"

    • Check that your Worker URL is correct in the Claude Desktop config
    • Verify the Worker is deployed and accessible
  3. "Tool not found" errors

    • Restart Claude Desktop after configuration changes
    • Check the Worker logs: npx wrangler tail
  4. Empty search results

    • Try lowering the score_threshold parameter (default is 0.5)
    • Verify your AutoRAG index has been populated with documents
    • Check that your query terms exist in the indexed content

Logs

View real-time logs from your deployed Worker:

npx wrangler tail

Version History

  • v2.0.0 - Multi-AutoRAG support, enhanced schema documentation, removed VECTORIZE binding
  • v1.2.0 - Added cursor-based pagination support for AI search tool
  • v1.1.3 - Removed filters parameter (not supported in Workers bindings), added helpful error messages for filter attempts
  • v1.1.2 - Attempted to fix filter format (discovered Workers bindings don't support filters)
  • v1.1.1 - Added include_ai_response parameter to AI search tool, default score threshold of 0.5, comprehensive parameter validation
  • v1.1.0 - Added three distinct search tools with boolean parameter support
  • v1.0.0 - Initial release

License

This project is licensed under the MIT License.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

Support

For issues related to:
