OmniDocs MCP

Python 3.10+ · Local RAG · FastMCP

OmniDocs MCP is an intelligent Model Context Protocol (MCP) server that empowers AI agents to instantly search, index, summarize, and inject live framework documentation directly into their context window.

Stop hallucinating code for outdated framework versions. Let OmniDocs fetch the exact documentation your AI needs, the moment it needs it.

    [AI Agent] Calling get_library_docs("react", "useActionState usage")
    [OmniDocs] 🔍 Library 'react' not indexed. Crawling react.dev...
    [OmniDocs] 🧠 Chunking 150 pages and computing embeddings (ONNX)...
    [OmniDocs] ⚡ Returning top 5 semantic chunks (Dense + BM25)
    [AI Agent] Receives 1,500 highly-relevant tokens. Writes perfect code.

🤔 Why OmniDocs? (The Comparison)

| Approach | The Problem | The OmniDocs Solution |
| --- | --- | --- |
| Context Stuffing (Full URLs) | Destroys token limits (50k+ tokens/page); high latency; high API costs. | Semantically retrieves only the relevant 512-token chunks to save context. |
| Web Search Tools (Tavily/Exa) | Returns SEO fluff, outdated blog posts, and Stack Overflow threads. | Exclusively targets official, canonical framework documentation. |
| Cloud RAG / Vector APIs | Requires expensive API subscriptions and sends queries to third parties. | 100% local embedding (ONNX + ChromaDB). Zero API keys, completely free. |
| LLM Internal Knowledge | Hallucinates deprecated APIs (e.g., React 17 vs. 19, or the Next.js App Router). | Guarantees up-to-date syntax straight from the live documentation. |

✨ Core Features

  • Deep HTML Crawling: Employs an Indexer & Sniper architecture to map deep documentation sites via XML sitemaps or pure HTML crawling, returning dense tables of contents for agents to navigate.
  • Local RAG & Semantic Search: Embeds documentation locally using ONNX (via fastembed) and chunks it semantically. Exposes a natural-language query interface so agents receive precise, high-density excerpts instead of massive full pages.
  • Local Manifest Auto-Discovery: Point OmniDocs at any package.json or requirements.txt. It will query the npm/PyPI registries to auto-discover library documentation URLs and register them in its tracking file (see the sketch after this list).
  • Persistent Disk Caching: Stores fetched Markdown via diskcache with configurable, granular TTLs (time-to-live), preventing redundant scraping and wasted LLM tokens.
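
To ground the auto-discovery feature, the sketch below shows one way a library name could be resolved to a docs/homepage URL via the public npm and PyPI registry endpoints. resolve_docs_url is a hypothetical helper for illustration, not the project's actual discovery.py:

    import json
    import urllib.request

    def resolve_docs_url(package: str, ecosystem: str = "npm") -> str | None:
        """Hypothetical helper: resolve a package to its homepage/docs URL."""
        if ecosystem == "npm":
            url = f"https://registry.npmjs.org/{package}"
        else:  # assume PyPI
            url = f"https://pypi.org/pypi/{package}/json"
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        if ecosystem == "npm":
            return data.get("homepage")
        info = data.get("info", {})
        return info.get("home_page") or (info.get("project_urls") or {}).get("Homepage")

    print(resolve_docs_url("react"))            # e.g. https://react.dev/
    print(resolve_docs_url("fastapi", "pypi"))  # FastAPI's registered homepage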

🏗 Architecture & How it Works

OmniDocs operates as a middleware server between an AI Agent and official documentation websites. Instead of the AI browsing the web blindly, it uses OmniDocs to precisely retrieve, parse, chunk, embed, and cache documentation locally.

Core Modules

  1. Server CLI (server.py): The main entry point. Exposes get_library_docs which agents use to ask natural language questions.
  2. Fetcher (fetcher.py): Handles outbound HTTP requests, crawling sitemaps and plain HTML. Uses BeautifulSoup to strip away navbars and footers, and markdownify to convert the result to clean Markdown.
  3. Chunker (chunker.py): Splits massive Markdown pages into smaller, semantically coherent 512-token chunks, keeping Markdown headers intact so context isn't lost (see the sketch after this list).
  4. Vector Store (vector_store.py): Embeds chunks locally using the fastembed ONNX model and stores them persistently in ChromaDB. Uses a hybrid retrieval method (Dense Vector Search + BM25 keyword re-ranking) for maximum accuracy on exact API names.
  5. Cache Layer (cache.py): Uses diskcache to store the raw downloaded Markdown on the local hard drive to prevent redundant network requests.
  6. Auto-Discovery (discovery.py): Parses local package.json or requirements.txt files to auto-register libraries.
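
For intuition on the chunking step, here is a minimal sketch of header-aware chunking in the spirit of chunker.py: each chunk carries its nearest Markdown heading so the embedded text stays self-describing. Whitespace token counts stand in for a real tokenizer, and the function is illustrative, not the actual implementation:

    import re

    def chunk_markdown(md: str, max_tokens: int = 512) -> list[str]:
        """Illustrative header-aware chunker; whitespace tokens approximate
        a real tokenizer, and chunker.py may differ in detail."""
        chunks: list[str] = []
        current: list[str] = []
        header = ""
        budget = max_tokens

        def flush() -> None:
            if current:
                body = "\n".join(current)
                chunks.append(f"{header}\n{body}" if header else body)
                current.clear()

        for line in md.splitlines():
            if re.match(r"^#{1,6} ", line):   # a new section starts a new chunk
                flush()
                header = line
                budget = max_tokens - len(header.split())
                continue
            cost = len(line.split())
            if cost > budget:                 # chunk is full: carry the header over
                flush()
                budget = max_tokens - len(header.split())
            current.append(line)
            budget -= cost
        flush()
        return chunks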

🔄 Retrieval Workflow

When an AI encounters a library it doesn't know, it just issues a natural language query, and the following flow occurs:

sequenceDiagram
    participant AI as AI Agent
    participant MCP as OmniDocs
    participant VectorDB as ChromaDB
    participant Web as fetcher.py
    
    AI->>MCP: Call `get_library_docs("react", "useActionState usage")`
    MCP->>VectorDB: Check if 'react' is indexed
    
    alt Not Indexed
        MCP->>Web: Crawl entire doc site & convert to Markdown
        Web-->>MCP: Return Markdown pages
        MCP->>MCP: Chunk pages & compute local embeddings
        MCP->>VectorDB: Store chunks & vectors
    end
    
    MCP->>VectorDB: Perform hybrid search (Dense + BM25) for query
    VectorDB-->>MCP: Top 5 semantic chunks
    MCP-->>AI: Return pure, precise Markdown context
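
To make the diagram's final steps concrete, here is a minimal sketch of Dense + BM25 hybrid retrieval, assuming the chromadb and rank_bm25 packages back vector_store.py. The collection name, candidate pool size, and re-ranking scheme are illustrative choices, not the project's exact logic, and the sketch assumes chunks are already indexed:

    import chromadb
    from rank_bm25 import BM25Okapi

    client = chromadb.PersistentClient(path="./chroma_db")
    collection = client.get_or_create_collection("react")

    def hybrid_search(query: str, k: int = 5) -> list[str]:
        # Stage 1: dense retrieval gathers a generous candidate pool.
        dense = collection.query(query_texts=[query], n_results=20)
        candidates = dense["documents"][0]
        # Stage 2: BM25 re-ranks the pool so exact API names beat paraphrases.
        bm25 = BM25Okapi([doc.lower().split() for doc in candidates])
        scores = bm25.get_scores(query.lower().split())
        ranked = sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True)
        return [doc for _, doc in ranked[:k]]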

🚀 Quick Start

Prerequisites

  • Python 3.10+ (Required for FastMCP and ChromaDB)
  • OS: Windows, macOS, or Linux
  • Hardware: Runs entirely on CPU. No GPU required (FastEmbed uses lightweight ONNX models).

  1. Clone & Install

    git clone https://github.com/your-username/omnidocs-mcp.git
    cd omnidocs-mcp
    pip install -r requirements.txt
    
  2. Seed your Libraries: OmniDocs stores your tracked libraries in libraries.yaml. Auto-fill this file by running the server tool auto_import_from_manifest against your project, or manually add overrides to customize tracking:

    libraries:
      react:
        docs_url: https://react.dev/learn
        ttl_hours: 24
      fastapi:
        docs_url: https://fastapi.tiangolo.com
        ttl_hours: 48
      tailwindcss:
        docs_url: https://tailwindcss.com/docs
        ttl_hours: 72
    
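For a sense of how a ttl_hours value could drive the cache layer, the snippet below uses diskcache's real expire parameter; get_page and the cache directory name are hypothetical stand-ins, not the project's actual cache.py:

    import urllib.request

    from diskcache import Cache

    cache = Cache("./omnidocs_cache")  # persistent on-disk store

    def get_page(url: str, ttl_hours: int = 24) -> str:
        """Hypothetical wrapper: serve cached content until its TTL lapses."""
        page = cache.get(url)
        if page is None:
            with urllib.request.urlopen(url, timeout=15) as resp:
                page = resp.read().decode("utf-8", errors="replace")
            cache.set(url, page, expire=ttl_hours * 3600)  # expire takes seconds
        return page

    html = get_page("https://fastapi.tiangolo.com", ttl_hours=48)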

🔌 Connecting to AI Agents (Antigravity, Roo/Cline, Claude Desktop)

To connect OmniDocs to your MCP-compatible client, add this configuration block to your client's MCP settings file (e.g., %APPDATA%\Code\User\globalStorage\rooveterinaryinc.roo-cline\settings\cline_mcp_settings.json):

{
  "mcpServers": {
    "omnidocs-mcp": {
      "command": "C:/absolute/path/to/omnidocs-mcp/venv/Scripts/python.exe",
      "args": [
        "C:/absolute/path/to/omnidocs-mcp/server.py"
      ],
      "env": {}
    }
  }
}

Note: Ensure the command field points to the Python executable inside the virtual environment where you installed the requirements.txt dependencies.

🛠 Available Tools

Once connected, your AI gains the following native tools (a minimal registration sketch follows the list):

  • get_library_docs(library, query): The primary tool. Performs a semantic vector search and BM25 hybrid ranking over the library's documentation to answer specific questions, automatically crawling if not yet indexed.
  • get_changelog(library): Fetches recent release notes so the AI knows about breaking changes.
  • auto_import_from_manifest(manifest_path): Analyzes your package.json or requirements.txt to self-populate the OmniDocs library registry.
  • list_tracked_libraries(): Shows what the server is currently tracking.
  • refresh_all_docs(): Hard-busts the cache and pulls live web data.
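
For readers curious how such tools surface to an agent, here is a hedged sketch of registering get_library_docs with FastMCP. The stub body and return string are placeholders demonstrating the registration shape, not the project's actual server.py:

    from fastmcp import FastMCP

    mcp = FastMCP("omnidocs-mcp")

    @mcp.tool()
    def get_library_docs(library: str, query: str) -> str:
        """Semantic search over a library's indexed documentation."""
        # The real server checks the index, crawls on a miss, then hybrid-searches;
        # this stub only demonstrates the registration shape.
        return f"(top chunks for {query!r} in {library!r} would go here)"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default, as MCP clients expect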
