
Meta Prompt MCP Server
A server that transforms a standard Language Model into a dynamic multi-agent system where the model simulates both a Conductor (project manager) and Experts (specialized agents) to tackle complex problems through a collaborative workflow.
Tools
ready_to_answer
Use this tool when you have already obtained or verified the final solution with at least two independent experts and are ready to present your final answer.
expert_model
Use this tool to communicate with an expert. Args:
- name: The name of the expert to communicate with. Required.
- instructions: The instructions to send to the expert. Required.
- output: The answer from the expert based on the instructions. Required.
- iteration: The number of experts you have consulted so far. Start with 1.
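For illustration, here is a minimal sketch (in Python) of the arguments the Conductor might pass in a single expert_model call; the expert name, instructions, and output shown are hypothetical, not taken from the project.

```python
# Hypothetical example of one expert_model consultation (iteration 1).
expert_call = {
    "name": "Expert Python Programmer",
    "instructions": "Write a function that checks whether a string is a palindrome.",
    "output": "def is_palindrome(s: str) -> bool:\n    return s == s[::-1]",
    "iteration": 1,
}

# Once at least two independent experts agree on the solution, the Conductor
# calls ready_to_answer to present the verified final answer.
```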
README
Meta Prompt MCP
This project is an implementation of the Meta-Prompting technique from the paper "Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding".
At its core, this MCP transforms a standard Language Model (LM) into a dynamic, multi-agent system without the complex setup. It works by having the LM adopt two key roles:
- The Conductor: A high-level project manager that analyzes a complex problem, breaks it down into smaller, logical subtasks, and delegates them.
- The Experts: Specialized agents (e.g., "Python Programmer," "Code Reviewer," "Creative Writer") that are "consulted" by the Conductor to execute each subtask.
The magic is that this entire collaborative workflow is simulated within a single LM. The Conductor and Experts are different modes of operation guided by a sophisticated system prompt, allowing the model to reason, act, and self-critique its way to a more robust and accurate solution. It's like having an automated team of AI specialists at your disposal, all powered by one model.
Demo
Getting Started
1. Clone the Repository
First, clone this repository to your local machine.
git clone https://github.com/tisu19021997/meta-prompt-mcp-server.git
cd meta-prompt-mcp-server
2. Install uv
This project uses uv, an extremely fast Python package manager from Astral. If you don't have it installed, you can do so with one of the following commands.
Note: run which uv to find the path of your uv installation; you will need it for the client configuration below.
macOS / Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows (PowerShell):
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
For more details, see the official uv installation guide.
Usage
To use this Meta Prompt MCP server, you need to configure your client (e.g., Cursor, Claude Desktop) to connect to it. Make sure to replace the placeholder paths with the actual paths on your machine.
Cursor
Add the following configuration to your mcp.json settings:
"meta-prompting": {
"command": "path/to/your/uv",
"args": [
"--directory",
"path/to/your/meta-prompt-mcp",
"run",
"mcp-meta-prompt"
]
}
Claude Desktop
Add the following configuration to your claude_desktop_config.json settings:
"meta-prompting": {
"command": "path/to/your/uv",
"args": [
"--directory",
"path/to/your/meta-prompt-mcp",
"run",
"mcp-meta-prompt"
]
}
Activating the Meta-Prompt Workflow
Important: To leverage the full power of this MCP, always start your request by invoking the meta_model_prompt prompt from the meta-prompting server (then fill in the query with your prompt; see the Demo video). This is the official entry point that activates the Conductor/Expert workflow. Once the prompt is added, simply provide your problem statement.
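For context, a prompt like meta_model_prompt could be registered on the server roughly as sketched below. This is only an illustration, assuming the fastmcp package's prompt decorator; the real server ships a much longer Conductor/Expert system prompt.

```python
from fastmcp import FastMCP

mcp = FastMCP("meta-prompting")

# Hypothetical sketch: the prompt takes the user's query and wraps it in the
# meta-prompting scaffold that activates the Conductor/Expert workflow.
@mcp.prompt()
def meta_model_prompt(query: str) -> str:
    return (
        "You are the Conductor. Break the problem into subtasks, consult "
        "specialized experts via the expert_model tool, and call "
        "ready_to_answer once at least two independent experts agree.\n\n"
        f"Problem: {query}"
    )
```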
How it Differs from the Paper
The core methodology in the original paper involves a two-step process for expert consultation:
- The "conductor" model generates instructions for an expert.
- A separate, independent LM instance (the "expert") is invoked with only those instructions to provide a response. This ensures the expert has "fresh eyes."
This implementation simplifies the process into a single LLM call. The conductor model generates the expert's name, instructions, and the expert's complete output within a single tool call. This is a significant difference that makes the process faster and less expensive, but it deviates from the "fresh eyes" principle of the original research.
Limitations
The expert_model tool in this MCP server is designed to use the ctx.sample() function to properly simulate a second, independent expert model call, as described in the paper. However, this function is not yet implemented in most MCP clients (such as Cursor and Claude Desktop).
Due to this limitation, the server includes a fallback mechanism. When ctx.sample() is unavailable, the expert_model tool simply returns the output content that was generated by the conductor model in the tool call. This means the expert's response is part of the conductor's single generation, rather than a true, independent consultation.
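Concretely, the fallback might look roughly like the sketch below (assuming the fastmcp package's Context.sample API; this illustrates the behavior described above, not the project's verbatim code).

```python
from fastmcp import Context, FastMCP

mcp = FastMCP("meta-prompting")

@mcp.tool()
async def expert_model(
    name: str, instructions: str, output: str, iteration: int, ctx: Context
) -> str:
    """Consult an expert, falling back to the conductor-generated output."""
    try:
        # Preferred path: ask the client to run a fresh completion so the
        # expert sees only the instructions ("fresh eyes", as in the paper).
        result = await ctx.sample(instructions)
        return result.text
    except Exception:
        # Fallback: clients like Cursor and Claude Desktop do not support
        # sampling yet, so return the output the conductor already generated.
        return output
```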
Comparison
Below are two conversations in which I asked Claude to implement the Meta Prompt MCP Server itself, with and without Meta Prompt MCP.
Some artifacts are missing from the conversations, but you can see that the implementation with Meta Prompt MCP is much better; it also performed a kind of self-review by consulting a QA expert.
- Claude Conversations:
References
- Suzgun, M., & Kalai, A. T. (2024). Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding. arXiv:2401.12954.