MCP Jina Supabase RAG
A lean, focused MCP server for crawling documentation websites and indexing them to Supabase for RAG (Retrieval-Augmented Generation).
Features
- Smart URL Discovery: Tries sitemap.xml first, falls back to Crawl4AI recursive discovery
- Hybrid Content Extraction: Uses Jina AI for fast content extraction, Crawl4AI as fallback
- Multi-Project Support: Index multiple documentation sites to separate Supabase projects
- Efficient Chunking: Intelligent text chunking with configurable size and overlap (see the sketch after this list)
- Vector Embeddings: OpenAI embeddings stored in Supabase pgvector
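As a point of reference, size-and-overlap chunking works roughly like the sketch below. It is illustrative only: the function name and the character-based splitting are assumptions, and the server's real chunker may split on headings or sentence boundaries instead.

# Illustrative only: a character-based chunker with a configurable size and
# overlap. The server's real chunker may instead split on headings or sentences.
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks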
Architecture
┌─────────────────────────────────────────────────────────────┐
│                      MCP Server Tools                        │
├─────────────────────────────────────────────────────────────┤
│  1. crawl_and_index(url_pattern, project_name)               │
│  2. list_projects()                                          │
│  3. search_documents(query, project_name, limit)             │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                       Discovery Layer                        │
├─────────────────────────────────────────────────────────────┤
│  • Try sitemap.xml (fast)                                    │
│  • Try common doc patterns                                   │
│  • Crawl4AI recursive discovery (fallback)                   │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                      Extraction Layer                        │
├─────────────────────────────────────────────────────────────┤
│  • Jina AI Reader API (primary, fast)                        │
│  • Crawl4AI (fallback for complex pages)                     │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                 Chunking & Embedding Layer                   │
├─────────────────────────────────────────────────────────────┤
│  • Smart text chunking                                       │
│  • OpenAI embeddings (text-embedding-3-small)                │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                      Supabase Storage                        │
├─────────────────────────────────────────────────────────────┤
│  • pgvector for similarity search                            │
│  • Project isolation via source column                       │
└─────────────────────────────────────────────────────────────┘
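The layers above compose into a single indexing pass. The sketch below shows only that composition; the function and parameter names are hypothetical, and src/main.py is the source of truth.

# Illustrative only: how the four layers compose into one indexing pass.
# The callables stand in for the concrete layers; src/main.py may differ.
from typing import Callable, Iterable

def index_site(
    url_pattern: str,
    project_name: str,
    discover: Callable[[str], Iterable[str]],   # sitemap.xml first, Crawl4AI fallback
    extract: Callable[[str], str],              # Jina Reader first, Crawl4AI fallback
    chunk: Callable[[str], Iterable[str]],      # size/overlap chunking
    embed: Callable[[str], list[float]],        # OpenAI text-embedding-3-small
    store: Callable[[str, str, str, list[float]], None],  # Supabase insert, source = project
) -> int:
    stored = 0
    for url in discover(url_pattern):
        text = extract(url)
        for piece in chunk(text):
            store(project_name, url, piece, embed(piece))
            stored += 1
    return stored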
Installation
Prerequisites
- Python 3.12+
- Supabase account
- OpenAI API key
- Jina AI API key (optional but recommended)
Setup
- Clone the repository:
git clone https://github.com/yourusername/mcp-jina-supabase-rag.git
cd mcp-jina-supabase-rag
- Install dependencies:
# Using uv (recommended)
uv venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
uv pip install -e .
# Or using pip
pip install -e .
- Set up Supabase database:
# Run the SQL in supabase_schema.sql in your Supabase SQL Editor
- Configure environment:
cp .env.example .env
# Edit .env with your credentials
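As a rough illustration of what the server reads from .env (the variable names below are assumptions; .env.example is the authoritative list):

# Illustration only: these variable names are assumptions; .env.example is
# the authoritative list of settings.
import os
from dotenv import load_dotenv  # provided by python-dotenv

load_dotenv()

SUPABASE_URL = os.environ["SUPABASE_URL"]                  # Supabase project URL
SUPABASE_SERVICE_KEY = os.environ["SUPABASE_SERVICE_KEY"]  # service role key
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]              # embeddings
JINA_API_KEY = os.getenv("JINA_API_KEY")                   # optional, enables Jina extraction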
Usage
Running the MCP Server
# SSE transport (recommended for remote connections)
python src/main.py
# The server will start on http://localhost:8052/sse
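For orientation, an SSE-transport MCP server exposing these tools on port 8052 is typically wired up with FastMCP roughly as sketched below. This is not the actual contents of src/main.py; the tool body is a placeholder.

# Sketch only: a minimal SSE FastMCP server on port 8052.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jina-supabase", host="0.0.0.0", port=8052)

@mcp.tool()
async def crawl_and_index(url_pattern: str, project_name: str,
                          discovery_method: str = "auto",
                          extraction_method: str = "auto") -> str:
    # discover -> extract -> chunk -> embed -> store (omitted in this sketch)
    return f"Indexed {url_pattern} into project {project_name}"

if __name__ == "__main__":
    mcp.run(transport="sse")  # serves http://localhost:8052/sse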
Configure MCP Client
Claude Code
claude mcp add --transport sse jina-supabase http://localhost:8052/sse
Cursor / Claude Desktop
{
"mcpServers": {
"jina-supabase": {
"transport": "sse",
"url": "http://localhost:8052/sse"
}
}
}
Slash Command
Create ~/.claude/commands/jina.md:
---
allowed-tools: mcp__jina-supabase
argument-hint: <url_pattern> <project_name>
description: Crawl documentation and index to Supabase RAG
---
# Index Documentation to Supabase
Use the jina-supabase MCP server to crawl and index documentation.
Arguments:
- $1: URL pattern (e.g., https://docs.example.com/*)
- $2: Project name for isolation
Example:
/jina https://docs.anthropic.com/claude/* anthropic-docs
Tools
crawl_and_index
Crawl a documentation site and index to Supabase.
Parameters:
- url_pattern (string): URL or pattern to crawl
- project_name (string): Project identifier for isolation
- discovery_method (string, optional): auto, sitemap, or crawl
- extraction_method (string, optional): auto, jina, or crawl4ai
Example:
await crawl_and_index(
url_pattern="https://docs.supabase.com/docs/*",
project_name="supabase-docs",
discovery_method="auto",
extraction_method="jina"
)
list_projects
List all indexed projects.
Returns: List of project names with document counts
search_documents
Search indexed documents using vector similarity.
Parameters:
- query (string): Search query
- project_name (string, optional): Filter by project
- limit (int, optional): Max results (default: 5)
Example:
results = await search_documents(
query="How do I set up authentication?",
project_name="supabase-docs",
limit=10
)
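Under the hood, a search like this typically embeds the query and calls a pgvector matching function via RPC. The sketch below assumes a function named match_documents with a filter_source argument, plus the SUPABASE_URL and SUPABASE_SERVICE_KEY variables; the real function and arguments live in supabase_schema.sql.

# Sketch of what a vector search usually does under the hood: embed the
# query with OpenAI, then call a pgvector matching function via RPC.
# "match_documents" and "filter_source" are assumptions; see supabase_schema.sql.
import os
from openai import OpenAI
from supabase import create_client

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

def search(query: str, project_name: str | None = None, limit: int = 5):
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    params = {"query_embedding": embedding, "match_count": limit}
    if project_name:
        params["filter_source"] = project_name  # hypothetical project filter
    return supabase.rpc("match_documents", params).execute().data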
Configuration
See .env.example for all configuration options.
Discovery Methods
- auto: Try sitemap first, fall back to crawl
- sitemap: Only use sitemap.xml (fast, fails if no sitemap)
- crawl: Only use Crawl4AI recursive discovery (slow, comprehensive)
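The sitemap path can be sketched as follows. The use of httpx and the function name are assumptions; error handling and the Crawl4AI fallback are omitted.

# Sketch of sitemap-first discovery: fetch sitemap.xml and keep the URLs
# matching the requested pattern.
import fnmatch
import xml.etree.ElementTree as ET
from urllib.parse import urljoin
import httpx

def discover_from_sitemap(url_pattern: str) -> list[str]:
    base = url_pattern.split("*")[0]                 # e.g. https://docs.example.com/
    sitemap_url = urljoin(base, "/sitemap.xml")
    resp = httpx.get(sitemap_url, timeout=30, follow_redirects=True)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    locs = [el.text.strip() for el in root.iter() if el.tag.endswith("loc") and el.text]
    return [u for u in locs if fnmatch.fnmatch(u, url_pattern)]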
Extraction Methods
- auto: Use Jina for bulk extraction (>10 URLs), Crawl4AI otherwise
- jina: Use Jina AI Reader API (fast, requires API key)
- crawl4ai: Use Crawl4AI browser automation (slow, no API key needed)
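The jina mode calls the Jina AI Reader API: prefixing a page URL with https://r.jina.ai/ returns the page as LLM-friendly markdown, with the API key sent as a Bearer token. A minimal sketch follows; httpx and the JINA_API_KEY variable name are assumptions.

# Minimal sketch of a Jina Reader extraction; error handling and the
# Crawl4AI fallback are omitted. JINA_API_KEY is an assumed variable name.
import os
import httpx

def jina_extract(url: str) -> str:
    api_key = os.environ["JINA_API_KEY"]
    resp = httpx.get(
        f"https://r.jina.ai/{url}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text  # LLM-friendly markdown of the page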
Development
# Install dev dependencies
uv pip install -e ".[dev]"
# Run tests
pytest
# Format code
black src/
# Lint
ruff check src/
Differences from mcp-crawl4ai-rag
| Feature | mcp-crawl4ai-rag | mcp-jina-supabase-rag |
|---|---|---|
| Focus | Full-featured RAG with knowledge graphs | Lean documentation indexer |
| Discovery | Recursive only | Sitemap first, crawl fallback |
| Extraction | Crawl4AI only | Jina primary, Crawl4AI fallback |
| Dependencies | Heavy (Neo4j, etc.) | Light (core only) |
| Use Case | Advanced RAG with hallucination detection | Fast doc indexing |
License
MIT
Contributing
Contributions welcome! Please open an issue first to discuss changes.