Agile Team MCP Server
Enables users to leverage a team of specialized AI agent personas like Business Analysts and Product Managers through a unified interface across multiple LLM providers. It streamlines agile development workflows by providing tools for model discovery, unified prompt delivery, and role-specific decision making.
A team of agent personas wrapped in an MCP server, able to leverage massive compute at scale by wrapping various LLM providers and performing activities as agile team personas.
Features
- Model Wrapping: Send prompts to multiple LLM models with a unified interface
- Provider/Model Correction: Automatically correct and validate provider and model names
- File Support: Send prompts from files and save responses to files
- Provider/Model Discovery: List available providers and models
- Persona Tools: Specialized personas like Business Analyst, Product Manager, Spec Writer, and Team Decision Maker
Setup
Installation
# Clone and install
git clone https://github.com/danielscholl/agile-team-mcp-server.git
cd agile-team-mcp-server
uv sync
# Install
uv pip install -e .
# Run tests to verify installation
uv run pytest
Environment Configuration
Create and edit your .env file with your API keys:
# Create environment file from template
cp .env.sample .env
Required API keys in your .env file:
# Required API keys
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here # For Google Gemini models
GROQ_API_KEY=your_groq_api_key_here
DEEPSEEK_API_KEY=your_deepseek_api_key_here
OLLAMA_HOST=http://localhost:11434
# Optional model configuration
DEFAULT_MODEL=openai:gpt-4o-mini
DEFAULT_TEAM_MODELS=["openai:gpt-4.1","anthropic:claude-3-7-sonnet","gemini:gemini-2.5-pro"]
DEFAULT_DECISION_MAKER_MODEL=openai:gpt-4o-mini
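Note that DEFAULT_TEAM_MODELS packs a JSON array into a single environment variable. A minimal sketch of how such a value can be parsed (this is illustrative client-side handling, not the server's actual code):

```python
import json
import os

# Illustrative only: seed the variable with the documented default so the
# sketch is self-contained.
os.environ.setdefault(
    "DEFAULT_TEAM_MODELS",
    '["openai:gpt-4.1","anthropic:claude-3-7-sonnet","gemini:gemini-2.5-pro"]',
)

# The JSON array decodes to a plain list of "provider:model" strings.
team_models = json.loads(os.environ["DEFAULT_TEAM_MODELS"])
default_model = os.environ.get("DEFAULT_MODEL", "openai:gpt-4o-mini")

print(team_models)
# ['openai:gpt-4.1', 'anthropic:claude-3-7-sonnet', 'gemini:gemini-2.5-pro']
```

Because the value must survive shell quoting and JSON embedding (see the `.mcp.json` examples below, where the quotes are escaped), keeping it as a compact JSON string is the safest form.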
MCP Server Configuration
To use this MCP server directly in other projects, either use the install buttons for VS Code or edit the .mcp.json file directly.
Clients tend to have slightly different configurations.
Configure for Claude.app
{
"mcpServers": {
"agile-team": {
"command": "uvx",
"args": [
"--from",
"git+https://github.com/danielscholl/agile-team-mcp-server@main",
"agile-team"
],
"env": {
"OPENAI_API_KEY": "<YOUR_OPENAI_KEY>",
"ANTHROPIC_API_KEY": "<YOUR_ANTHROPIC_KEY>",
"GEMINI_API_KEY": "<YOUR_GEMINI_KEY>",
"GROQ_API_KEY": "<YOUR_GROQ_KEY>",
"DEEPSEEK_API_KEY": "<YOUR_DEEPSEEK_KEY>",
"OLLAMA_HOST": "http://localhost:11434",
"DEFAULT_MODEL": "openai:gpt-4o-mini",
"DEFAULT_TEAM_MODELS": "[\"openai:gpt-4.1\",\"anthropic:claude-3-7-sonnet\",\"gemini:gemini-2.5-pro\"]",
"DEFAULT_DECISION_MAKER_MODEL": "openai:gpt-4o-mini"
}
}
}
}
Configure for Claude.code
You can set up Agile Team with Claude Code easily by importing it:
claude mcp add-from-claude-desktop
Note: "--directory" should be set to the path of the source code if it is not in the current directory.
# Copy this JSON configuration
{
"command": "uvx",
"args": ["--from", "git+https://github.com/danielscholl/agile-team-mcp-server@main", "agile-team"],
"env": {
"DEFAULT_MODEL": "openai:gpt-4o-mini",
"DEFAULT_TEAM_MODELS": "[\"openai:gpt-4.1\",\"anthropic:claude-3-7-sonnet\",\"gemini:gemini-2.5-pro\"]",
"DEFAULT_DECISION_MAKER_MODEL": "openai:gpt-4o-mini"
}
}
# Then run this command in Claude Code
claude mcp add agile-team "$(pbpaste)"
To remove the configuration later:
claude mcp remove agile-team
Available LLM Providers
| Provider | Short Prefix | Full Prefix | Example Usage |
|---|---|---|---|
| OpenAI | o | openai | o:gpt-4o-mini |
| Anthropic | a | anthropic | a:claude-3-5-haiku |
| Google Gemini | g | gemini | g:gemini-2.5-pro-exp-03-25 |
| Groq | q | groq | q:llama-3.1-70b-versatile |
| DeepSeek | d | deepseek | d:deepseek-coder |
| Ollama | l | ollama | l:llama3.1 |
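Short prefixes expand to the full provider name before a request is routed. A minimal sketch of that expansion, using the mapping from the table above (this mirrors the documented behavior, not the server's actual implementation):

```python
# Alias table mirroring the provider table above.
PROVIDER_ALIASES = {
    "o": "openai",
    "a": "anthropic",
    "g": "gemini",
    "q": "groq",
    "d": "deepseek",
    "l": "ollama",
}

def expand_provider(model_ref: str) -> str:
    """Expand a short provider prefix, e.g. 'o:gpt-4o-mini' -> 'openai:gpt-4o-mini'."""
    provider, _, model = model_ref.partition(":")
    # Full prefixes pass through unchanged.
    return f"{PROVIDER_ALIASES.get(provider, provider)}:{model}"

print(expand_provider("o:gpt-4o-mini"))        # openai:gpt-4o-mini
print(expand_provider("gemini:gemini-2.5-pro"))  # gemini:gemini-2.5-pro
```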
Usage
Command Line
Run the server directly:
uv run agile-team
With MCP Client
With a compatible MCP client, you can connect to the server:
mcp use agile-team
Available Prompts
Interactive conversation starters and guided workflows to help you discover and use server capabilities.
List MCP Assets
Get a comprehensive overview of all server capabilities including tools, personas, providers, and workflows.
Parameters: None required
Usage:
# Get complete server capability overview
list_mcp_assets
Returns: Comprehensive markdown documentation including:
- All available tools with parameters and examples
- Supported LLM providers with shortcuts and usage examples
- Agent personas (Business Analyst, Product Manager, Spec Writer, Decision Maker)
- Quick start workflows for agile team processes
- Advanced usage patterns and best practices
- Pro tips for model selection and workflow optimization
This prompt provides a self-documenting overview of the entire agile-team MCP server, making it easy to discover capabilities and get started with productive workflows.
Available Tools
List Available Options
Tools to discover available LLM providers and their supported models.
List Providers Tool
Lists all supported LLM providers and their shortcut prefixes.
Parameters: None required
Examples:
# Simple example
list_providers_tool
List Models Tool
Lists all available models for a specific provider.
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| provider | The provider to list models for (e.g., "openai", "anthropic") | required |
Examples:
# Simple example with full provider name
list_models_tool: "openai"
# Using provider shortcode
list_models_tool: "a" # Lists Anthropic models
Send Prompts to Models
Send text prompts directly to LLM models and get their responses.
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| text | The prompt text to send to the models | required |
| models_prefixed_by_provider | List of models in format "provider:model" | openai:gpt-4o-mini |
Features:
- Send prompts to one or multiple models simultaneously
- Use model suffixes for special behaviors:
  - :4k (or other numbers) for thinking token budgets
  - :high for increased reasoning effort (OpenAI only)
Examples:
# Simple example
prompt_tool: "Create a plan for implementing user authentication"
# Complex example with multiple models and options
prompt_tool: "Analyze the trade-offs between microservices and monoliths" ["openai:gpt-4.1:high", "anthropic:claude-3-7-sonnet:4k"]
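A "provider:model" reference can carry an optional suffix, as in the example above. A hypothetical parser for this format, following the suffix semantics described in the feature list (:4k-style thinking budgets, :high reasoning effort); the exact server behavior may differ:

```python
def parse_model_ref(ref: str):
    """Split 'provider:model[:suffix]' into its parts plus an options dict."""
    parts = ref.split(":")
    provider, model = parts[0], parts[1]
    options = {}
    for suffix in parts[2:]:
        if suffix == "high":
            # Increased reasoning effort (OpenAI only, per the feature list).
            options["reasoning_effort"] = "high"
        elif suffix.endswith("k") and suffix[:-1].isdigit():
            # Thinking token budget, e.g. '4k' -> 4096 tokens (assumed scaling).
            options["thinking_tokens"] = int(suffix[:-1]) * 1024
    return provider, model, options

print(parse_model_ref("anthropic:claude-3-7-sonnet:4k"))
# ('anthropic', 'claude-3-7-sonnet', {'thinking_tokens': 4096})
print(parse_model_ref("openai:gpt-4.1:high"))
# ('openai', 'gpt-4.1', {'reasoning_effort': 'high'})
```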
Work with Files
Process prompts from files and save responses to files for batch processing.
From File Tool
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| file_path | Path to the file containing the prompt | required |
| models_prefixed_by_provider | List of models in format "provider:model" | openai:gpt-4o-mini |
Examples:
# Simple example
prompt_from_file_tool: "prompts/function.md"
# Complex example with specific model
prompt_from_file_tool: "prompts/function.md" ["anthropic:claude-3-7-sonnet-20250219"]
From File to File Tool
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| file_path | Path to the file containing the prompt | required |
| models_prefixed_by_provider | List of models in format "provider:model" | openai:gpt-4o-mini |
| output_path | Full path for the output file | Generated based on input |
| output_dir | Directory for response files | input file's directory/responses |
| output_extension | File extension for output files | md |
Examples:
# Simple example
prompt_from_file2file_tool: "prompts/uv_script.md"
# Complex example with specific model, output path and custom extension
prompt_from_file2file_tool: "prompts/diagram.md" ["anthropic:claude-3-7-sonnet"] "prompts/responses/architecture_diagram.md"
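When no output_path is given, the defaults above imply a responses folder next to the input file with an .md extension. A sketch of that derivation; the exact naming scheme (here, input stem plus model) is an assumption for illustration:

```python
from pathlib import Path

def default_output_path(file_path: str, model: str, extension: str = "md") -> Path:
    """Derive a default output path: <input dir>/responses/<stem>_<model>.<ext>."""
    src = Path(file_path)
    # Colons are awkward in filenames, so flatten "provider:model" (assumed).
    safe_model = model.replace(":", "_")
    return src.parent / "responses" / f"{src.stem}_{safe_model}.{extension}"

print(default_output_path("prompts/uv_script.md", "openai:gpt-4o-mini").as_posix())
# prompts/responses/uv_script_openai_gpt-4o-mini.md
```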
Team Decision Making
Use multiple models as team members to generate different solutions, then have a decision maker model evaluate and choose the best approach.
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| from_file | Path to the file containing the prompt | required |
| models_prefixed_by_provider | List of team member models | ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet", "gemini:gemini-2.5-pro"] |
| persona_dm_model | Model for making the decision | openai:gpt-4o-mini |
| output_path | Full path for the output document | Generated based on input |
| output_dir | Directory for response files | input file's directory/responses |
| output_extension | File extension for output files | md |
| persona_prompt | Custom decision maker prompt | Default template |
Examples:
# Simple example
persona_dm_tool: "prompts/decision.md"
# Complex example with custom team and decision maker model
persona_dm_tool: "prompts/decision.md" ["o:gpt-4.1", "a:claude-3-7-sonnet", "g:gemini-2.5-pro-preview-03-25"] persona_dm_model="o:o3" "prompts/responses/final_decision.md"
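The team-decision flow above can be sketched as: fan the prompt out to each team member model, collect their proposals, then hand the whole set to the decision-maker model. This is a conceptual outline only; call_model is a stub standing in for real provider calls:

```python
def call_model(model: str, prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return f"[{model}]: proposed solution for: {prompt}"

def team_decision(prompt: str, team_models: list[str], decision_model: str) -> str:
    # Step 1: each team member produces an independent proposal.
    proposals = {m: call_model(m, prompt) for m in team_models}
    # Step 2: bundle the proposals into a single ballot document.
    ballot = "\n\n".join(f"## {m}\n{text}" for m, text in proposals.items())
    # Step 3: the decision maker evaluates the ballot and picks an approach.
    return call_model(decision_model, f"Pick the best approach:\n{ballot}")

result = team_decision(
    "Design a caching layer",
    ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet"],
    "openai:gpt-4o-mini",
)
print(result.startswith("[openai:gpt-4o-mini]"))  # True
```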
Business Analyst Persona
Generate detailed business analysis using a specialized Business Analyst persona, with optional team-based decision making.
Capabilities:
- Creating detailed project briefs and requirement documents
- Analyzing business needs and market opportunities
- Defining MVP scope and feature prioritization
- Identifying target audiences and user personas
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| from_file | Path to the file containing business requirements | required |
| models_prefixed_by_provider | Models to use in format "provider:model" | openai:gpt-4o-mini |
| output_path | Full path for the output document | Generated based on input |
| output_dir | Directory for response files | input file's directory/responses |
| output_extension | File extension for output files | md |
| use_decision_maker | Whether to use team decision making | false |
| decision_maker_models | Models for team members if using decision maker | ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet", "gemini:gemini-2.5-pro"] |
| decision_maker_model | Model for final decision making | openai:gpt-4o-mini |
Examples:
# Simple example
persona_ba_tool: "prompts/concept.md" "prompts/responses/project-brief.md"
# Complex example with team-based decision making
persona_ba_tool: "prompts/concept.md" use_decision_maker=true decision_maker_model="o:o4-mini" "prompts/responses/project-brief.md"
Product Manager Persona
Generate comprehensive product management plans using a specialized Product Manager persona, with optional team-based decision making.
Capabilities:
- Creating detailed product plans with prioritized features and clear timelines
- Developing product vision and strategy
- Performing market and competitive analysis
- Defining user stories and requirements
- Managing cross-functional team collaboration
- Implementing data-driven decision making
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| from_file | Path to the file containing the product requirements | required |
| models_prefixed_by_provider | Models to use in format "provider:model" | openai:gpt-4o-mini |
| output_path | Full path for the output document | Generated based on input |
| output_dir | Directory for response files | input file's directory/responses |
| output_extension | File extension for output files | md |
| use_decision_maker | Whether to use team decision making | false |
| decision_maker_models | Models for team members if using decision maker | ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet", "gemini:gemini-2.5-pro"] |
| decision_maker_model | Model for final decision making | openai:gpt-4o-mini |
| pm_prompt | Custom Product Manager prompt template | Default template |
| decision_maker_prompt | Custom decision maker prompt template | Default template |
Examples:
# Simple example
persona_pm_tool: "prompts/responses/project-brief.md" "prompts/responses/project-prd.md"
# Complex example with team-based decision making
persona_pm_tool: "prompts/responses/project-brief.md" use_decision_maker=true decision_maker_model="o:gpt-4o-mini" "prompts/responses/project-prd.md"
Spec Writer Persona
Generate clear, developer-ready specification documents from PRDs, project briefs, or user requests using a specialized Spec Writer persona.
Capabilities:
- Producing technical specifications from PRDs or project briefs
- Defining step-by-step implementation instructions for developers and AI agents
- Creating comprehensive specifications with architectural patterns and validation criteria
- Defining tool behavior, CLI structure, directory layout, and testing plans
- Using focused, reproducible examples to communicate architectural patterns
- Ensuring each spec includes validation steps to verify implementation
Parameters:
| Parameter | Description | Default Value |
|---|---|---|
| from_file | Path to the file containing requirements or PRD | required |
| models_prefixed_by_provider | Models to use in format "provider:model" | openai:gpt-4o-mini |
| output_path | Full path for the output document | Generated based on input |
| output_dir | Directory for response files | input file's directory/responses |
| output_extension | File extension for output files | md |
| use_decision_maker | Whether to use team decision making | false |
| decision_maker_models | Models for team members if using decision maker | ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet", "gemini:gemini-2.5-pro"] |
| decision_maker_model | Model for final decision making | openai:gpt-4o-mini |
| sw_prompt | Custom Spec Writer prompt template | Default template |
| decision_maker_prompt | Custom decision maker prompt template | Default template |
Examples:
# Simple example - generate a specification from a PRD
persona_sw_tool: "prompts/responses/project-prd.md" "prompts/responses/project-spec.md"
# Complex example with team-based decision making
persona_sw_tool: "prompts/responses/project-prd.md" use_decision_maker=true decision_maker_model="o:gpt-4o-mini" "prompts/responses/project-spec.md"