Reddit MCP Server
Enables comprehensive Reddit research through a three-layer architecture that discovers relevant communities, provides operation guidance, and executes multi-subreddit content fetching with proper citations. Supports searching, post fetching, comment analysis, and batch operations across multiple subreddits for thorough content analysis.
README
Reddit MCP Server
A Model Context Protocol (MCP) server that provides LLMs with comprehensive access to Reddit content through a three-layer architecture designed for thorough research and analysis. Built with FastMCP and PRAW for efficient deployment.
✨ Three-Layer Architecture
This server features a unique three-layer architecture that guides LLMs through comprehensive Reddit research:
Layer 1: Discovery (discover_reddit_resources)
- Finds 8-15 relevant communities using multiple search strategies
- Supports both "quick" and "comprehensive" discovery modes
- Returns available operations and recommended workflows
Layer 2: Requirements (get_operation_requirements)
- Provides detailed parameter schemas and validation rules
- Context-aware suggestions based on your research needs
- Clear guidance on when to use each operation
Layer 3: Execution (execute_reddit_operation)
- Validates parameters and executes Reddit operations
- Comprehensive error handling with actionable hints
- Returns structured results with detailed metadata
Key Features
- Multi-Community Coverage: Discover and fetch from 8-15 subreddits in one workflow
- Intelligent Discovery: Uses multiple search strategies for comprehensive coverage
- Citation Support: Includes Reddit URLs in all results for proper attribution
- Efficiency Optimized: Batch operations reduce API calls by 70%+
- Research-Focused: Designed for thorough analysis with comment depth
- MCP Resources: Access popular subreddits, subreddit info, and server capabilities
Quick Start
Prerequisites
- Python 3.11+
- Reddit API credentials (Get them here)
- Go to https://www.reddit.com/prefs/apps
- Click "Create App" or "Create Another App"
- Choose "script" as the app type
- Note your
client_id(under "personal use script") andclient_secret
Installation
- Clone the repository:
git clone <repository-url>
cd reddit-mcp-poc
- Install dependencies using uv:
pip install uv
uv sync
Configuration
Create a .env file in the project root:
REDDIT_CLIENT_ID=your_client_id_here
REDDIT_CLIENT_SECRET=your_client_secret_here
REDDIT_USER_AGENT=RedditMCP/1.0 by u/your_username
Running the Server
Production Mode
uv run src/server.py
Development Mode (with MCP Inspector)
fastmcp dev src/server.py
The server will start and be ready to accept MCP connections.
Claude Code Integration
To use this Reddit MCP server with Claude Code, follow these steps to add it to your MCP configuration:
Prerequisites
- Ensure you have
uvinstalled and the server is working locally - Test that the server starts correctly by running
uv run src/server.pyin your project directory
Installation Steps
Important: Replace <PATH_TO_YOUR_PROJECT> with the absolute path to your project directory.
-
Add the MCP server to Claude Code:
claude mcp add -s user -t stdio reddit-mcp-poc uv run fastmcp run <PATH_TO_YOUR_PROJECT>/reddit-mcp-poc/src/server.pyExample paths by platform:
- macOS/Linux:
/home/username/projects/reddit-mcp-poc/src/server.py - Windows:
C:\Users\username\projects\reddit-mcp-poc\src\server.py
- macOS/Linux:
-
Verify the installation:
claude mcp listYou should see
reddit-mcp-poclisted with a ✓ Connected status.
Troubleshooting
If you see a "Failed to connect" status:
- Check that the path to your
server.pyfile is correct and complete - Ensure there are no line breaks or truncation in the command path
- Remove and re-add the server if the path was truncated:
claude mcp remove -s user reddit-mcp-poc claude mcp add -s user -t stdio reddit-mcp-poc uv run fastmcp run <FULL_PATH_TO_SERVER.PY>
Common Issues:
- Path truncation: Make sure to copy the full path without any line breaks
- Command not found: Verify that
uvis installed and accessible in your PATH - Server not starting: Test the command
uv run src/server.pydirectly in terminal first before adding to Claude Code
Configuration Details:
- Scope: User-level configuration (
-s user) - Transport: STDIO (
-t stdio) - Server Name:
reddit-mcp-poc
🚀 Recommended Workflow for Comprehensive Research
For the best results, follow this workflow that leverages all three layers:
# 1. DISCOVERY - Find relevant communities
discover_reddit_resources(
topic="machine learning ethics",
discovery_depth="comprehensive"
)
# 2. REQUIREMENTS - Get parameter guidance (if needed)
get_operation_requirements("fetch_multiple", context="ML ethics discussion")
# 3. EXECUTION - Fetch from multiple communities
execute_reddit_operation("fetch_multiple", {
"subreddit_names": ["MachineLearning", "artificial", "singularity", "ethics"],
"limit_per_subreddit": 8
})
# 4. DEEP DIVE - Get comments for promising posts
execute_reddit_operation("fetch_comments", {
"submission_id": "abc123",
"comment_limit": 100
})
Why This Works:
- 📊 60% better coverage than single-subreddit approaches
- 🔗 Proper citations with Reddit URLs included automatically
- ⚡ 70% fewer API calls through intelligent batching
- 📝 Research-ready with comprehensive comment analysis
Available Operations
The server provides access to Reddit through these operations via execute_reddit_operation:
Core Operations
| Operation | Description | Best For |
|---|---|---|
search_all |
Search across ALL of Reddit | Broad topic exploration |
search_subreddit |
Search within specific subreddit | Targeted community search |
fetch_posts |
Get latest posts from subreddit | Current trends/activity |
fetch_multiple |
⚡ Batch fetch from multiple subreddits | Multi-community research |
fetch_comments |
Get post with full discussion | Deep analysis of conversations |
Three-Layer Architecture Tools
| Tool | Purpose | When to Use |
|---|---|---|
discover_reddit_resources |
Find relevant communities & operations | ALWAYS START HERE |
get_operation_requirements |
Get detailed parameter schemas | Before complex operations |
execute_reddit_operation |
Execute any Reddit operation | After getting requirements |
MCP Resources
The server provides three MCP resources for accessing commonly used data:
1. reddit://popular-subreddits
Returns a list of the 25 most popular subreddits with subscriber counts and descriptions.
2. reddit://subreddit/{name}/about
Get detailed information about a specific subreddit including:
- Title and description
- Subscriber count and active users
- Subreddit rules
- Creation date and other metadata
3. reddit://server-info
Returns comprehensive information about the MCP server including:
- Available tools and resources
- Version information
- Usage examples
- Current rate limit status
Usage Examples
🎯 Three-Layer Architecture Workflow
# RECOMMENDED: Full research workflow
# Step 1: Discover communities
result = discover_reddit_resources(
topic="sustainable technology",
discovery_depth="comprehensive"
)
# Returns: 8-15 relevant subreddits + recommended operations
# Step 2: Get operation requirements (optional)
schema = get_operation_requirements("fetch_multiple")
# Returns: Parameter schemas, suggestions, common mistakes
# Step 3: Execute with discovered communities
posts = execute_reddit_operation("fetch_multiple", {
"subreddit_names": result["relevant_communities"]["subreddits"][:8],
"listing_type": "hot",
"limit_per_subreddit": 6
})
# Step 4: Deep dive into promising discussions
comments = execute_reddit_operation("fetch_comments", {
"submission_id": "interesting_post_id",
"comment_limit": 100
})
⚡ Quick Operations
# Search across all Reddit
execute_reddit_operation("search_all", {
"query": "artificial intelligence ethics",
"sort": "top",
"time_filter": "week",
"limit": 15
})
# Search within specific subreddit
execute_reddit_operation("search_subreddit", {
"subreddit_name": "MachineLearning",
"query": "transformer architecture",
"limit": 20
})
# Batch fetch from known subreddits (70% more efficient)
execute_reddit_operation("fetch_multiple", {
"subreddit_names": ["artificial", "singularity", "Futurology"],
"listing_type": "hot",
"limit_per_subreddit": 8
})
Testing
Run the test suite:
uv run pytest tests/
Project Structure
reddit-mcp-poc/
├── src/
│ ├── server.py # Main MCP server with three-layer architecture
│ ├── config.py # Reddit client configuration
│ ├── models.py # Pydantic data models
│ ├── resources.py # MCP resource implementations
│ └── tools/ # Tool implementations
│ ├── search.py # Search functionality (with permalink support)
│ ├── posts.py # Subreddit posts fetching
│ ├── comments.py # Comments fetching
│ └── discover.py # Subreddit discovery
├── tests/
│ └── test_tools.py # Unit tests
├── pyproject.toml # Project dependencies
├── .env # Your API credentials
└── README.md # This file
Error Handling
The server handles common Reddit API errors gracefully:
- Rate Limiting: Automatically handled by PRAW with 5-minute cooldown
- Not Found: Returns error message for non-existent subreddits/posts
- Forbidden: Returns error message for private/restricted content
- Invalid Input: Validates and sanitizes all input parameters
Limitations
This MVP implementation has some intentional limitations:
- Read-only access (no posting, commenting, or voting)
- No user authentication (uses application-only auth)
- Limited comment expansion (doesn't fetch "more comments")
- No caching (each request hits Reddit API directly)
Next Steps
Building on the three-layer architecture foundation:
- Enhanced LLM Guidance: Improve
get_operation_requirementswith richer context-aware suggestions - Advanced Analytics: Add sentiment analysis and trend detection to discovered communities
- Caching Layer: Implement intelligent caching for discovered communities and frequent queries
- User Authentication: Add write operations (posting, commenting) with proper auth
- Extended Discovery: Add time-based and activity-based community discovery modes
- Research Templates: Pre-configured workflows for common research patterns
- Citation Tools: Automated bibliography generation from Reddit URLs
Troubleshooting
| Issue | Solution |
|---|---|
| "Reddit API credentials not found" | Ensure .env file exists with valid credentials |
| Rate limit errors | Wait a few minutes; PRAW handles this automatically |
| "Subreddit not found" | Verify subreddit name (without r/ prefix) |
| No search results | Try broader search terms or different time filter |
| Import errors | Run uv sync to install all dependencies |
License
MIT
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
推荐服务器
Baidu Map
百度地图核心API现已全面兼容MCP协议,是国内首家兼容MCP协议的地图服务商。
Playwright MCP Server
一个模型上下文协议服务器,它使大型语言模型能够通过结构化的可访问性快照与网页进行交互,而无需视觉模型或屏幕截图。
Magic Component Platform (MCP)
一个由人工智能驱动的工具,可以从自然语言描述生成现代化的用户界面组件,并与流行的集成开发环境(IDE)集成,从而简化用户界面开发流程。
Audiense Insights MCP Server
通过模型上下文协议启用与 Audiense Insights 账户的交互,从而促进营销洞察和受众数据的提取和分析,包括人口统计信息、行为和影响者互动。
VeyraX
一个单一的 MCP 工具,连接你所有喜爱的工具:Gmail、日历以及其他 40 多个工具。
graphlit-mcp-server
模型上下文协议 (MCP) 服务器实现了 MCP 客户端与 Graphlit 服务之间的集成。 除了网络爬取之外,还可以将任何内容(从 Slack 到 Gmail 再到播客订阅源)导入到 Graphlit 项目中,然后从 MCP 客户端检索相关内容。
Kagi MCP Server
一个 MCP 服务器,集成了 Kagi 搜索功能和 Claude AI,使 Claude 能够在回答需要最新信息的问题时执行实时网络搜索。
e2b-mcp-server
使用 MCP 通过 e2b 运行代码。
Neon MCP Server
用于与 Neon 管理 API 和数据库交互的 MCP 服务器
Exa MCP Server
模型上下文协议(MCP)服务器允许像 Claude 这样的 AI 助手使用 Exa AI 搜索 API 进行网络搜索。这种设置允许 AI 模型以安全和受控的方式获取实时的网络信息。