
# RAGmonsters Custom PostgreSQL MCP Server
A domain-specific MCP server that provides optimized API access to the RAGmonsters fictional monster dataset, enabling more efficient and secure interactions compared to generic SQL queries.
## Overview
This repository demonstrates a more advanced approach to integrating Large Language Models (LLMs) with databases using the Model Context Protocol (MCP). While generic MCP PostgreSQL servers allow LLMs to explore databases through raw SQL queries, this project takes a different approach by creating a custom MCP server that provides a domain-specific API tailored to the application's needs.
This implementation uses FastMCP, a high-performance implementation of the Model Context Protocol, which provides improved efficiency and reliability for tool-based interactions with LLMs.
This project uses the RAGmonsters dataset as its foundation. RAGmonsters is an open-source project that provides a rich, fictional dataset of monsters with various attributes, abilities, and relationships - specifically designed for demonstrating and testing Retrieval-Augmented Generation (RAG) systems.
## The Problem with Generic MCP Database Access

Generic MCP PostgreSQL servers provide LLMs with a `query` tool that allows them to:
- Explore database schemas
- Formulate SQL queries based on natural language questions
- Execute those queries against the database
While this approach works, it has several limitations for real-world applications:
- Cognitive Load: The LLM must understand the entire database schema
- Inefficiency: Multiple SQL queries are often needed to answer a single question
- Security Concerns: Raw SQL access requires careful prompt engineering to prevent injection attacks
- Performance: Complex queries may be inefficient if the LLM doesn't understand the database's indexing strategy
- Domain Knowledge Gap: The LLM lacks understanding of business rules and domain-specific constraints
## About the RAGmonsters Dataset
RAGmonsters is an open dataset specifically designed for testing and demonstrating Retrieval-Augmented Generation (RAG) systems. It contains information about fictional monsters with rich attributes, abilities, and relationships - making it perfect for natural language querying demonstrations.
The PostgreSQL version of RAGmonsters provides a well-structured relational database with multiple tables and relationships, including:
- Monsters with various attributes (attack power, defense, health, etc.)
- Abilities that monsters can possess
- Elements (fire, water, earth, etc.) with complex relationships
- Habitats where monsters can be found
- Evolution chains and relationships between monsters
This rich, interconnected dataset is ideal for demonstrating the power of domain-specific APIs versus generic SQL access.
## Our Solution: Domain-Specific MCP API
This project demonstrates how to build a custom MCP server that provides a higher-level, domain-specific API for the RAGmonsters dataset. Instead of exposing raw SQL capabilities, our MCP server offers purpose-built functions that:
- Abstract Database Complexity: Hide the underlying schema and SQL details
- Provide Domain-Specific Operations: Offer functions that align with business concepts
- Optimize for Common Queries: Implement efficient query patterns for frequently asked questions
- Enforce Business Rules: Embed domain-specific logic and constraints
- Improve Security: Limit the attack surface by removing direct SQL access
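To make the idea concrete, here is a minimal sketch of how such a domain-specific layer might translate high-level tool arguments into a single parameterized SQL statement. The table and column names (`monsters`, `monster_weaknesses`, `elements`) are illustrative assumptions, not the actual RAGmonsters schema:

```javascript
// Sketch of a domain-specific query builder: it turns the tool's high-level
// arguments into one parameterized SQL statement, so the LLM never sees the
// schema and user input never reaches raw SQL. Table/column names here are
// hypothetical, not the real RAGmonsters schema.
const SORTABLE = { attackPower: 'm.attack_power', defense: 'm.defense' };

function buildMonstersQuery({ vulnerableTo, sortBy, limit = 10 } = {}) {
  const params = [];
  let sql = 'SELECT m.id, m.name FROM monsters m';
  if (vulnerableTo) {
    params.push(vulnerableTo);
    sql += ` JOIN monster_weaknesses w ON w.monster_id = m.id` +
           ` JOIN elements e ON e.id = w.element_id AND e.name = $${params.length}`;
  }
  if (sortBy) {
    // Whitelist sortable fields: user input is never interpolated into SQL.
    const col = SORTABLE[sortBy];
    if (!col) throw new Error(`unsupported sort field: ${sortBy}`);
    sql += ` ORDER BY ${col} DESC`;
  }
  params.push(limit);
  sql += ` LIMIT $${params.length}`;
  return { sql, params };
}
```

Because the only values that reach the database go through bind parameters (`$1`, `$2`, ...) and the sort column comes from a whitelist, the injection surface of a raw `query` tool disappears.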
### Example: Domain-Specific API vs. Generic SQL
**Generic MCP PostgreSQL approach:**

User: "What are the top 3 monsters with the highest attack power that are vulnerable to fire?"

LLM (must understand the schema, joins, and SQL syntax):

1. First query to explore the schema
2. Second query to find monsters by attack power
3. Third query to find vulnerabilities
4. Final query to join and filter the results
**Our custom MCP server approach:**

User: "What are the top 3 monsters with the highest attack power that are vulnerable to fire?"

LLM (uses our domain-specific API):

1. A single call: `getMonsters({ vulnerableTo: "fire", sortBy: "attackPower", limit: 3 })`
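The filter/sort/limit semantics of that single call can be sketched with an in-memory stand-in. The monsters and field values below are made up for illustration; the real server would answer with one SQL query against the RAGmonsters database instead:

```javascript
// In-memory sketch of the getMonsters handler's filter/sort/limit logic.
// Dataset and field values are invented for illustration only.
const MONSTERS = [
  { name: 'Emberwing',  attackPower: 88, vulnerableTo: ['water'] },
  { name: 'Stonehorn',  attackPower: 95, vulnerableTo: ['fire'] },
  { name: 'Mosswalker', attackPower: 70, vulnerableTo: ['fire'] },
  { name: 'Tidecaller', attackPower: 81, vulnerableTo: ['fire', 'earth'] },
];

function getMonsters({ vulnerableTo, sortBy, limit } = {}) {
  let result = MONSTERS;
  if (vulnerableTo) result = result.filter(m => m.vulnerableTo.includes(vulnerableTo));
  if (sortBy) result = [...result].sort((a, b) => b[sortBy] - a[sortBy]);
  if (limit) result = result.slice(0, limit);
  return result;
}

// The example question resolves in one call:
// getMonsters({ vulnerableTo: 'fire', sortBy: 'attackPower', limit: 3 })
```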
## Project Structure

```
├── .env.example            # Example environment variables
├── package.json            # Node.js project configuration
├── README.md               # This documentation
├── img/                    # Images for documentation
├── scripts/
│   ├── testMcpServer.js    # Test script for the MCP server
│   └── testLogger.js       # Logger for the test script
└── src/
    ├── index.js            # Main application server
    ├── mcp-server/         # Custom MCP server implementation with FastMCP
    │   ├── index.js        # Server entry point
    │   ├── tools/          # Domain-specific tools
    │   │   ├── index.js    # Tool registration
    │   │   └── monsters.js # Monster-related operations
    │   └── utils/          # Helper utilities
    │       └── logger.js   # Logging functionality
    ├── llm.js              # LangChain integration for the LLM
    └── public/             # Web interface files
        └── index.html      # Chat interface
```
## Features
- Custom MCP Server with FastMCP: High-performance domain-specific API for RAGmonsters data
- Optimized Queries: Pre-built efficient database operations
- Business Logic Layer: Domain rules and constraints embedded in the API
- Structured Response Format: Consistent JSON responses for LLM consumption
- Comprehensive Logging: Detailed logging for debugging and monitoring
- Test Suite: Scripts to verify server functionality
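As an illustration of the structured response format, a `getMonsters` result could be shaped like the following (the field names here are hypothetical, chosen only to show the consistent-JSON idea):

```json
{
  "monsters": [
    { "id": 42, "name": "Emberwing", "attackPower": 88, "habitat": "Volcanic Ridge" }
  ],
  "count": 1,
  "offset": 0
}
```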
## Planned Features
- LangChain.js Integration: For LLM interactions
- Web Interface: Simple chat interface to interact with the data
- Deployment on Clever Cloud: Easy deployment instructions
## Benefits of This Approach
- Improved Performance: Optimized queries and caching strategies
- Better User Experience: More accurate and faster responses
- Reduced Token Usage: LLM doesn't need to process complex SQL or schema information
- Enhanced Security: No direct SQL access means reduced risk of injection attacks
- Maintainability: Changes to the database schema don't require retraining the LLM
- Scalability: Can handle larger and more complex databases
## Getting Started

### Installation
1. Clone this repository
2. Install dependencies:

   ```bash
   npm install
   ```

3. Copy `.env.example` to `.env` and configure your PostgreSQL connection string
4. Run the test script:

   ```bash
   node scripts/testMcpServer.js
   ```
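The exact variable names are defined in `.env.example`; a typical configuration might look like this (the names below are placeholders, not confirmed by this repository):

```bash
# Hypothetical variable names; check .env.example for the real ones
DATABASE_URL=postgresql://user:password@localhost:5432/ragmonsters
OPENAI_API_KEY=sk-...
PORT=8080
```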
## Available Tools
The MCP server provides the following tools:
- `getMonsters`: Get a list of monsters with optional filtering, sorting, and pagination
  - Parameters: `filters` (category, habitat, rarity), `sort` (field, direction), `limit`, `offset`
- `getMonsterById`: Get detailed information about a specific monster by ID
  - Parameters: `monsterId`
- `add`: Simple utility that adds two numbers (for testing)
  - Parameters: `a`, `b`
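For reference, MCP clients invoke these tools through JSON-RPC `tools/call` messages. A hypothetical `getMonsterById` invocation might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "getMonsterById",
    "arguments": { "monsterId": 42 }
  }
}
```

FastMCP handles this transport layer, so the server code only implements the tool handlers themselves.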
## Prerequisites
- Node.js 23 or later
- PostgreSQL database with RAGmonsters data
- Access to an LLM API (e.g., OpenAI)
- FastMCP package (included in dependencies)
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- RAGmonsters for the sample dataset
- Model Context Protocol for the MCP specification
- FastMCP for the high-performance MCP implementation
- Clever Cloud for hosting capabilities