📚 Personal Research Assistant MCP
A production-ready MCP (Model Context Protocol) server that enables semantic search across your personal research library. Built for AI Engineers who need fast, accurate document retrieval integrated with Claude Desktop and other AI tools.
🎯 Problem Statement
Researchers and professionals accumulate dozens of papers and documents but struggle to:
- Find relevant information across multiple documents
- Remember which paper contained a specific insight
- Connect related concepts across different sources
The result is often 2+ hours a day spent searching for information. Traditional keyword search misses semantic connections, and reading everything is impractical.
💡 Solution
An MCP server that:
- Indexes documents into a vector database using semantic embeddings
- Enables Claude (or any MCP client) to query your research library conversationally
- Provides sub-500ms response times with 85%+ retrieval accuracy
- Includes a Streamlit dashboard for management and metrics
🏗️ Architecture
Documents (PDF/DOCX/HTML/MD)
↓
Document Processor → Text Chunker → Embeddings
↓
ChromaDB Vector Store
↓
├─→ MCP Server (FastMCP) → Claude Desktop
└─→ Streamlit UI → Monitoring/Testing
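For orientation, the indexing and query flow maps closely onto the ChromaDB client API. The sketch below is illustrative only, not the repository's actual rag_pipeline classes: the chromadb calls are real, but the helper names, collection name, and chunking defaults (mirroring CHUNK_SIZE/CHUNK_OVERLAP from Customization) are assumptions.

```python
# Illustrative end-to-end sketch; the real implementation lives in rag_pipeline/.
import chromadb

# Persistent store on disk, matching the data/chroma_db folder in the project layout
client = chromadb.PersistentClient(path="data/chroma_db")
collection = client.get_or_create_collection(name="research_library")

def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Fixed-size character chunks with overlap (see CHUNK_SIZE / CHUNK_OVERLAP)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def index_document(doc_id: str, text: str, metadata: dict) -> None:
    """Add one document's chunks to the collection; ChromaDB embeds them with its
    default local model unless an embedding function is configured."""
    chunks = chunk(text)
    collection.add(
        ids=[f"{doc_id}-{i}" for i in range(len(chunks))],
        documents=chunks,
        metadatas=[metadata] * len(chunks),
    )

def search(query: str, top_k: int = 5) -> dict:
    """Semantic query over all indexed chunks."""
    return collection.query(query_texts=[query], n_results=top_k)
```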
✨ Features
- Semantic Search: Natural language queries across your entire library
- Multi-Format Support: PDF, DOCX, HTML, Markdown, TXT
- Fast Retrieval: <500ms query latency on 1000+ chunks
- MCP Integration: Works with Claude Desktop, VS Code, and any MCP client
- Metadata Extraction: Automatically extracts titles, authors, keywords
- Query Logging: Track usage and performance metrics
- Streamlit Dashboard: Upload, search, and visualize metrics
📊 Performance Metrics
| Metric | Target | Actual |
|---|---|---|
| Retrieval Accuracy | 85% | See METRICS.md |
| Query Latency | <500ms | See METRICS.md |
| Scale | 10k+ chunks | 1782+ chunks |
🚀 Installation
Prerequisites
- Python 3.11+
- 2GB RAM minimum
- Git
Setup
# Clone repository
git clone https://github.com/yourusername/research-assistant-mcp.git
cd research-assistant-mcp
# Create virtual environment
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Install local embeddings
pip install sentence-transformers
# Configure environment
cp .env.example .env
# Edit .env - add OPENAI_API_KEY if using OpenAI embeddings
Download Sample Data
# Download 25 AI/ML papers from arXiv
python scripts/download_sample_papers.py --count 25
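If you prefer to see what such a script boils down to, here is a hypothetical, stripped-down version that fetches a few well-known papers directly by arXiv ID; the actual scripts/download_sample_papers.py may select papers and handle errors differently.

```python
# Hypothetical sketch only; the real script may choose papers differently.
import urllib.request
from pathlib import Path

# Illustrative selection of well-known AI/ML papers by arXiv identifier
PAPER_IDS = ["1706.03762", "2005.11401", "2203.02155"]

def download_papers(dest: str = "./sample_papers") -> None:
    Path(dest).mkdir(parents=True, exist_ok=True)
    for paper_id in PAPER_IDS:
        url = f"https://arxiv.org/pdf/{paper_id}"
        target = Path(dest) / f"{paper_id}.pdf"
        print(f"Downloading {url} -> {target}")
        urllib.request.urlretrieve(url, target)

if __name__ == "__main__":
    download_papers()
```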
Index Documents
# Index sample papers
python scripts/index_docs.py --folder ./sample_papers
# Or index your own documents
python scripts/index_docs.py --folder /path/to/your/papers --recursive
📖 Usage
Start MCP Server
python mcp_server/server.py
Configure Claude Desktop
Add to claude_desktop_config.json:
Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "research-assistant": {
      "command": "python",
      "args": ["/full/path/to/research-assistant-mcp/mcp_server/server.py"],
      "env": {}
    }
  }
}
Restart Claude Desktop.
Launch Streamlit UI
streamlit run ui/app.py
Opens at http://localhost:8501
🛠️ MCP Tools
search_documents
Semantic search across your library.
Query: "What are the challenges in RAG systems?"
Returns: Top-k results with sources, scores, and metadata
get_document_summary
Get a quick overview of a document.
Input: Document path or title
Returns: Title, author, keywords, preview
find_related_papers
Find documents similar to a topic.
Query: "prompt engineering techniques"
Returns: Related papers with relevance scores
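To show how tools like these are typically exposed, here is a minimal sketch using FastMCP from the official MCP Python SDK together with a ChromaDB collection. It is not the repository's mcp_server/server.py; the tool bodies, collection name, and return shapes are assumptions.

```python
# Minimal FastMCP sketch; not the actual mcp_server/server.py.
import chromadb
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("research-assistant")
collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research_library")

def _query(text: str, top_k: int) -> list[dict]:
    """Run a semantic query and flatten ChromaDB's response into simple dicts."""
    hits = collection.query(query_texts=[text], n_results=top_k)
    return [
        {"text": doc, "metadata": meta, "distance": dist}
        for doc, meta, dist in zip(hits["documents"][0], hits["metadatas"][0], hits["distances"][0])
    ]

@mcp.tool()
def search_documents(query: str, top_k: int = 5) -> list[dict]:
    """Semantic search across the indexed library."""
    return _query(query, top_k)

@mcp.tool()
def find_related_papers(topic: str, top_k: int = 5) -> list[dict]:
    """Find documents similar to a topic."""
    return _query(topic, top_k)

if __name__ == "__main__":
    mcp.run()  # stdio transport, which is what Claude Desktop expects
```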
📁 Project Structure
research-assistant-mcp/
├── mcp_server/ # MCP server implementation
│ └── server.py
├── rag_pipeline/ # RAG components
│ ├── config.py
│ ├── document_processor.py
│ ├── chunker.py
│ ├── vector_store.py
│ ├── retriever.py
│ └── metadata_extractor.py
├── ui/ # Streamlit dashboard
│ ├── app.py
│ └── pages/
├── scripts/ # CLI utilities
│ ├── index_docs.py
│ └── download_sample_papers.py
├── tests/ # Testing & benchmarks
│ ├── sample_queries.json
│ └── benchmark_performance.py
├── data/ # Data storage
│ ├── chroma_db/
│ └── query_logs/
└── docs/ # Documentation
└── METRICS.md
🧪 Testing
# Run performance benchmarks
python tests/benchmark_performance.py
# Output: Accuracy, latency, scale metrics
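For a rough idea of what the latency side of the benchmark measures, a minimal version could look like the following (the query-file format shown is an assumption; the real benchmark also scores retrieval accuracy against tests/sample_queries.json).

```python
# Latency-only sketch; the repository's benchmark also measures accuracy.
import json
import statistics
import time

import chromadb

collection = chromadb.PersistentClient(path="data/chroma_db").get_or_create_collection("research_library")

with open("tests/sample_queries.json") as f:
    queries = json.load(f)  # assumed format: a list of {"query": "..."} objects

latencies = []
for item in queries:
    start = time.perf_counter()
    collection.query(query_texts=[item["query"]], n_results=5)
    latencies.append((time.perf_counter() - start) * 1000)

print(f"queries: {len(latencies)}")
print(f"mean latency: {statistics.mean(latencies):.1f} ms")
print(f"max latency:  {max(latencies):.1f} ms")
```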
🐳 Docker Deployment
# Build and run
docker-compose up -d
# Access UI at http://localhost:8501
# MCP server runs on localhost:8000
📈 Example Queries
- Cross-document synthesis: "Compare different fine-tuning approaches for LLMs"
- Concept exploration: "How does RLHF improve model alignment?"
- Technical details: "Explain transformer attention mechanisms"
- Literature review: "What are recent developments in RAG systems?"
🔧 Customization
Change Embedding Model
Edit .env:
# OpenAI (paid, best quality)
EMBEDDING_MODEL=text-embedding-3-small
# Local sentence-transformers embeddings (free) are the default; no change is needed to keep using them
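How that setting is consumed lives in rag_pipeline/config.py; the snippet below is only a sketch of the usual pattern (the default local model name and the function itself are assumptions, not the repository's code).

```python
# Illustrative pattern for switching embedding backends via EMBEDDING_MODEL.
import os

def embed(texts: list[str]) -> list[list[float]]:
    model_name = os.getenv("EMBEDDING_MODEL", "all-MiniLM-L6-v2")  # assumed default
    if model_name.startswith("text-embedding-"):
        from openai import OpenAI  # requires OPENAI_API_KEY in .env
        response = OpenAI().embeddings.create(model=model_name, input=texts)
        return [item.embedding for item in response.data]
    from sentence_transformers import SentenceTransformer  # local, free
    return SentenceTransformer(model_name).encode(texts).tolist()
```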
Adjust Chunk Size
Edit .env:
CHUNK_SIZE=1000 # Characters per chunk
CHUNK_OVERLAP=200 # Overlap between chunks
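To make the arithmetic concrete: with CHUNK_SIZE=1000 and CHUNK_OVERLAP=200, consecutive chunks start 800 characters apart, so a 10,000-character document yields roughly 13 chunks. Smaller chunks generally sharpen retrieval precision at the cost of more vectors to store and search.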
Add Document Types
Edit rag_pipeline/document_processor.py to add new file type handlers.
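The processor's internals aren't shown here, but adding a format usually comes down to one function that turns a file path into plain text, plus registering it for the extension. A hypothetical example (names invented for illustration):

```python
# Hypothetical handler; the real document_processor.py may organize this differently.
import csv
from pathlib import Path

def extract_csv(path: Path) -> str:
    """Flatten a CSV file into newline-separated, comma-joined rows of text."""
    with open(path, newline="", encoding="utf-8") as f:
        return "\n".join(", ".join(row) for row in csv.reader(f))

# Register alongside the existing PDF/DOCX/HTML/MD/TXT handlers
HANDLERS = {".csv": extract_csv}
```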
🐛 Troubleshooting
ChromaDB errors: Delete data/chroma_db and re-index
Import errors: Verify pip install -r requirements.txt completed
UI blank: Check browser console, try Chrome/Firefox
Slow queries: Reduce TOP_K_RESULTS in .env
🚧 Future Enhancements
- [ ] Auto-watch folder for new documents
- [ ] Cross-encoder reranking for better accuracy
- [ ] Multi-modal support (images, diagrams)
- [ ] Citation network graph
- [ ] Export to Notion/Obsidian
- [ ] Web interface (FastAPI + React)
🎥 Demo Video
[Link to 2-minute demo video - Coming soon]
🤝 Contributing
Contributions welcome! Please open issues or PRs.
📄 License
MIT License - see LICENSE
🙏 Acknowledgments