# LightRAG MCP

Query an internal LightRAG knowledge base to retrieve technical documentation and structured information using hybrid search modes. Raw text or HTML results from the LightRAG server can be integrated directly into LLM workflows.
## Summary
- Tool name: `query_knowledge_base` (registered as `query_knowledge_base` in `lightrag_mcp.py`).
  - If you prefer the tool be named `lightrag_query`, change the decorator to `@mcp.tool("lightrag_query")` in `lightrag_mcp.py`.
- Description: Query the internal LightRAG knowledge base for technical documentation. Returns raw text or HTML from the LightRAG server.
- Input schema: `QueryInput { query: str, mode: str = 'hybrid' }`
- Output schema: `QueryOutput { result: str }`
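
For orientation, here is a minimal sketch of how these schemas and the tool registration might look in `lightrag_mcp.py`; the `FastMCP` import path, server name, and stub body are assumptions, not the actual file contents:

```python
# Sketch only: the QueryInput/QueryOutput schemas from the Summary above,
# wired to an assumed FastMCP setup (the real lightrag_mcp.py may differ).
from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel

mcp = FastMCP("lightrag")

class QueryInput(BaseModel):
    query: str
    mode: str = "hybrid"  # LightRAG search mode

class QueryOutput(BaseModel):
    result: str  # raw text or HTML from the LightRAG server

@mcp.tool("query_knowledge_base")
async def query_knowledge_base(input: QueryInput) -> QueryOutput:
    ...  # forward the query to the LightRAG server at LIGHTRAG_URL
```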
## Quick start (local)
- Create and activate a virtualenv, then install dependencies:

  ```bash
  python -m pip install -r requirements.txt
  ```

  (Or install packages individually: `pip install modelcontextprotocol httpx pydantic python-dotenv`)
- Configure environment variables:
  - Copy the example env file and edit as needed:

    ```bash
    cp .env.example .env
    ```

  - Set a system prompt file (optional):

    ```bash
    export LIGHTRAG_SYSTEM_PROMPT_FILE="$(pwd)/prompts/light_rag_system.txt"
    ```
- Run the MCP server:

  ```bash
  python lightrag_mcp.py
  ```
Notes:

- Default FastMCP port is 8000. To change the port, set it in code before `mcp.run()`:

  ```python
  mcp.settings.port = 9680
  mcp.run()
  ```
- FastMCP exposes streamable HTTP at `/mcp` (default) and may expose SSE endpoints depending on transport.
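
If your installed `mcp` Python SDK is recent enough that `FastMCP.run()` accepts a `transport` argument (an assumption about your version), the transport can be selected explicitly:

```python
# Serve over streamable HTTP (exposes /mcp); use transport="sse" for SSE instead.
mcp.settings.port = 9680
mcp.run(transport="streamable-http")
```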
## Example: call the tool from Python (local test)
```python
from importlib import util
import asyncio

# Load lightrag_mcp.py as a module without installing it
spec = util.spec_from_file_location('mod', './lightrag_mcp.py')
mod = util.module_from_spec(spec)
spec.loader.exec_module(mod)

async def demo():
    # Wrap the payload under 'input' when calling via mcp.call_tool
    resp = await mod.mcp.call_tool('query_knowledge_base', {'input': {'query': 'How does the TMS module work?', 'mode': 'hybrid'}})
    print(resp)

asyncio.run(demo())
```
The call returns a `QueryOutput`-like structure; depending on the transport, the response may be a JSON string or an object containing response/result fields.
## Test connectivity to LightRAG server (direct)
If you need to verify the upstream LightRAG service the MCP calls, you can test it directly (example):
```bash
curl -X POST "http://localhost:9621/query" \
  -H "Content-Type: application/json" \
  -d '{"query":"Who owns the TMS module?","mode":"hybrid"}'
```

This helps verify the configured `LIGHTRAG_URL` is reachable and responding.
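
If you prefer Python over curl, the same check can be done with `httpx` (already a dependency); the endpoint and payload below simply mirror the curl example:

```python
# Minimal connectivity check against the LightRAG server itself (not the MCP server).
import asyncio
import httpx

async def check_lightrag(base_url: str = "http://localhost:9621") -> None:
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"{base_url}/query",
            json={"query": "Who owns the TMS module?", "mode": "hybrid"},
            timeout=30.0,
        )
        resp.raise_for_status()
        print(resp.text)

asyncio.run(check_lightrag())
```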
## Configuration and environment variables
The server looks for the system prompt in this order:

1. `LIGHTRAG_SYSTEM_PROMPT` (env var): inline prompt
2. `LIGHTRAG_SYSTEM_PROMPT_FILE` (env var): path to a file containing the prompt
3. `prompts/light_rag_system.txt`: project default
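
A sketch of that resolution order (the helper name `load_system_prompt` is illustrative; the actual implementation in `lightrag_mcp.py` may differ):

```python
# Resolve the system prompt using the precedence described above.
import os
from pathlib import Path

DEFAULT_PROMPT_PATH = Path("prompts/light_rag_system.txt")

def load_system_prompt() -> str:
    inline = os.getenv("LIGHTRAG_SYSTEM_PROMPT")
    if inline:
        return inline  # 1. inline prompt from the environment
    prompt_file = os.getenv("LIGHTRAG_SYSTEM_PROMPT_FILE")
    if prompt_file and Path(prompt_file).is_file():
        return Path(prompt_file).read_text(encoding="utf-8")  # 2. prompt file
    return DEFAULT_PROMPT_PATH.read_text(encoding="utf-8")  # 3. project default
```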
Recommended `.env` entries (see `.env.example`):

```
LIGHTRAG_SYSTEM_PROMPT_FILE=./prompts/light_rag_system.txt
LIGHTRAG_URL=http://localhost:9621
```
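
Since `python-dotenv` is among the dependencies, the server presumably loads these values at startup; the typical pattern (a sketch, not necessarily what `lightrag_mcp.py` does) is:

```python
from dotenv import load_dotenv

load_dotenv()  # read .env from the working directory into os.environ
```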
Security note: Avoid committing sensitive prompts to the repository. Use a secrets manager for production.
## Transport security and allowed hosts example
If the client (e.g., remote Claude) will connect to your MCP server, restrict allowed hosts in FastMCP settings. Example (add in `lightrag_mcp.py` before `mcp.run()`):

```python
# Restrict which remote hosts may call the MCP server (example)
mcp.settings.transport_security.allowed_hosts = ["agents.example.com", "claude.ai"]
```
Adjust based on your deployment and transport configuration.
## Registering the tool with clients
### Claude Code (VSCode extension / local development)
- Ensure the MCP server is running locally (e.g., `http://localhost:8000`).
- In the Claude Code extension (or your local client), add or point to the server origin and streamable path (usually `/mcp`).
- The extension should detect registered tools; verify `query_knowledge_base` appears in `mcp.list_tools()` (see the quick check below).
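
A quick local check (this assumes `lightrag_mcp.py` guards `mcp.run()` with `if __name__ == "__main__":`, so importing it does not start the server):

```python
# List the tools registered on the FastMCP instance.
import asyncio
from lightrag_mcp import mcp

async def main():
    tools = await mcp.list_tools()
    print([t.name for t in tools])  # expect 'query_knowledge_base' here

asyncio.run(main())
```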
### Remote Claude clients (claude.ai)
- Expose your local server using a secure tunnel (for testing) or deploy it to a publicly reachable HTTPS endpoint.
  - Example (temporary): `ngrok http 8000`, then use the provided HTTPS URL.
- Configure transport settings and allowed hosts on the MCP server (see the transport security example above).
- Provide the server URL to claude.ai or the remote client in its external tools configuration and confirm the transport type (streamable HTTP vs. SSE).
## Troubleshooting
- Tool not visible in client:
  - Confirm the MCP server is running and reachable.
  - Verify `query_knowledge_base` appears in `mcp.list_tools()`.
  - Ensure client and server use the same transport (streamable HTTP or SSE).
- Validation errors:
  - The client must send a payload matching the Pydantic input schema. When calling via the SDK or `mcp.call_tool`, wrap the payload under `input`.
- Port already in use:
  - Set `mcp.settings.port` to a free port before calling `mcp.run()`.
## Development notes
- `lightrag_mcp.py` registers the tool as `query_knowledge_base`. To rename the tool to `lightrag_query`, either:
  - change the decorator to `@mcp.tool("lightrag_query")`, or
  - update this README to reference `query_knowledge_base` (the current state).
- A `requirements.txt` file is included with the minimal runtime dependencies. Pin versions if you need reproducible installs.
## Contributing
PRs are welcome. Please avoid committing secrets or sensitive prompts. For changes that alter tool names or schemas, update README and tests accordingly.
## License
MIT (or replace with your preferred license)