Telegram MCP Server
A Model Context Protocol (MCP) server that enables AI assistants (like Kilo Code) to ask you questions via Telegram and wait for your responses. This creates a "human-in-the-loop" workflow where the AI can request decisions, approvals, or specific input during long-running tasks.
Features
- 🤖 Interactive AI Workflow: AI can pause and ask you questions via Telegram
- 📱 Button Support: Present multiple choice options as clickable buttons
- ⏱️ Long Polling: Waits up to 2 minutes for your response
- 🔒 Secure: Uses environment variables for credentials
- 🎯 Simple Integration: Works with any MCP-compatible AI assistant
- 🐳 Docker Support: Run natively or in a container
Prerequisites
- Python 3.10+ (for native installation)
- OR Docker and Docker Compose (for containerized installation)
- A Telegram account
- A Telegram Bot Token (from @BotFather)
Installation Methods
You can run this MCP server in two ways:
- Native Python Installation - Run directly on your system
- Docker Installation - Run in a container (recommended for production)
Native Installation
1. Create a Telegram Bot
- Open Telegram and search for @BotFather
- Send /newbot and follow the prompts
- Name your bot (e.g., "MyDevBot")
- Copy the HTTP API Token provided
- Important: Send /start to your new bot so it can message you
2. Find Your Telegram User ID (Optional)
- Search for @userinfobot on Telegram
- Send it any message
- Copy your User ID
3. Clone and Setup
# Clone the repository
git clone https://github.com/yourusername/telegram-mcp-server.git
cd telegram-mcp-server
# Create virtual environment
python3 -m venv venv
# Activate virtual environment
source venv/bin/activate # On Linux/Mac
# OR
venv\Scripts\activate # On Windows
# Install dependencies
pip install -r requirements.txt
4. Configuration
Create a .env file from the sample:
cp .env.sample .env
Edit .env and add your credentials:
TELEGRAM_BOT_TOKEN=your_bot_token_here
TELEGRAM_USER_ID=your_user_id_here # Optional - bot will auto-detect if not set
⚠️ Security Note: The .env file is gitignored to protect your credentials.
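For illustration, here is roughly what python-dotenv does with that file: each line is a KEY=VALUE pair, with blank lines and #-comments ignored. This hand-rolled parser (the parse_env_file name is illustrative; the real server just calls load_dotenv()) is only a sketch of the format:

```python
def parse_env_file(path: str) -> dict:
    """Minimal illustration of the .env format python-dotenv reads:
    one KEY=VALUE per line; blank lines and #-comments are ignored."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Drop trailing inline comments like "...  # Optional"
            values[key.strip()] = value.split("#", 1)[0].strip()
    return values
```

In the actual server, load_dotenv() followed by os.getenv("TELEGRAM_BOT_TOKEN") achieves the same result.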
5. Test the Server Locally
# Activate virtual environment
source venv/bin/activate
# Run the server
python telegram_server.py
You should see:
╭──────────────────────────────────────────────────────────────────────────────╮
│ ▄▀▀ ▄▀█ █▀▀ ▀█▀ █▀▄▀█ █▀▀ █▀█ │
│ █▀ █▀█ ▄▄█ █ █ ▀ █ █▄▄ █▀▀ │
│ FastMCP 2.14.2 │
│ 🖥 Server: Telegram Human Loop │
╰──────────────────────────────────────────────────────────────────────────────╯
Press Ctrl+C to stop the test.
Docker Installation
1. Configure Environment
cp .env.sample .env
# Edit .env with your credentials
2. Build and Run
# Build the Docker image
docker-compose build
# Start the container in detached mode
docker-compose up -d
3. View Logs
docker-compose logs -f telegram-mcp
4. Stop the Server
docker-compose down
Docker Commands Reference
Building:
# Build the image
docker-compose build
# Build without cache (force rebuild)
docker-compose build --no-cache
Running:
# Start in detached mode (background)
docker-compose up -d
# Start in foreground (see logs directly)
docker-compose up
# Start and rebuild if needed
docker-compose up -d --build
Monitoring:
# View logs
docker-compose logs -f telegram-mcp
# View last 100 lines of logs
docker-compose logs --tail=100 telegram-mcp
# Check container status
docker-compose ps
# Execute commands inside the container
docker-compose exec telegram-mcp python --version
Stopping and Cleaning:
# Stop the container
docker-compose stop
# Stop and remove containers
docker-compose down
# Stop, remove containers, and remove volumes
docker-compose down -v
# Remove all (containers, networks, images)
docker-compose down --rmi all
Restart:
# Restart the container
docker-compose restart
# Rebuild and restart from scratch
docker-compose down
docker-compose build --no-cache
docker-compose up -d
Development:
# Open a shell in the running container
docker-compose exec telegram-mcp /bin/bash
# Test the server locally (without Docker)
source venv/bin/activate && python telegram_server.py
Connecting to MCP Clients
For Kilo Code (or similar MCP clients)
Method 1: Using the Startup Script (Recommended)
- Open your MCP settings file:
  - Location: ~/.config/Code/User/globalStorage/rooveterinaryinc.roo-cline/settings/mcpSettings.json
  - Or use your client's UI: Settings → MCP Servers → Edit Configuration
- Add this configuration (replace /absolute/path/to/telegram-mcp-server with your actual installation path):
{
"mcpServers": {
"telegram": {
"command": "bash",
"args": ["/absolute/path/to/telegram-mcp-server/run.sh"]
}
}
}
Example: If you cloned to /home/user/projects/telegram-mcp-server, use:
{
"mcpServers": {
"telegram": {
"command": "bash",
"args": ["/home/user/projects/telegram-mcp-server/run.sh"]
}
}
}
- Restart your MCP client or reload the MCP servers
Note: An example configuration is provided in mcp-config.example.json for reference.
Method 2: Direct Python Execution
{
"mcpServers": {
"telegram": {
"command": "/absolute/path/to/telegram-mcp-server/venv/bin/python",
"args": ["/absolute/path/to/telegram-mcp-server/telegram_server.py"]
}
}
}
Method 3: Using Docker
Option A: Docker Compose Exec
{
"mcpServers": {
"telegram": {
"command": "docker-compose",
"args": [
"-f",
"/absolute/path/to/telegram-mcp-server/docker-compose.yml",
"exec",
"-T",
"telegram-mcp",
"python",
"telegram_server.py"
],
"cwd": "/absolute/path/to/telegram-mcp-server"
}
}
}
Option B: Docker Run
{
"mcpServers": {
"telegram": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"--env-file",
"/absolute/path/to/telegram-mcp-server/.env",
"telegram-mcp-server",
"python",
"telegram_server.py"
]
}
}
}
Available Tools
Once connected, your AI assistant will have access to these tools:
- ask_human(question, options, wait, timeout_seconds, allow_custom) - Send a question to Telegram
  - wait=True (default): Blocks until you respond or the timeout expires
  - wait=False: Sends the question and returns immediately (non-blocking mode)
  - options: Optional list of button choices
  - allow_custom=True (default): Adds a "Custom answer" button when options are provided
  - Works in both blocking and non-blocking modes
- get_telegram_response(mark_as_read) - Retrieve your latest Telegram message
  - Use after ask_human(wait=False) to get your response
  - mark_as_read=True (default): Won't retrieve the same message twice
- send_telegram_notification(message) - Send one-way status updates
- For progress reports, completion notifications, or status updates
- Doesn't expect a response
- Perfect for keeping you informed during long-running tasks
- list_telegram_messages(limit) - View recent conversation history
- Shows last 5-20 messages for context
- Useful for checking if you've already replied
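The interplay between ask_human(wait=False), get_telegram_response, and mark_as_read can be pictured as the server remembering the id of the last message it handed back. This is a hypothetical sketch; the class and attribute names are illustrative, not the server's actual internals:

```python
class MessageStore:
    """Illustrative model of mark_as_read: remember the last id handed out."""

    def __init__(self):
        self.messages = []      # (message_id, text) pairs, oldest first
        self.last_read_id = 0

    def add(self, message_id: int, text: str) -> None:
        self.messages.append((message_id, text))

    def latest(self, mark_as_read: bool = True):
        """Return the newest unread message text, or None if nothing new."""
        unread = [(mid, text) for mid, text in self.messages
                  if mid > self.last_read_id]
        if not unread:
            return None
        mid, text = unread[-1]
        if mark_as_read:
            self.last_read_id = mid  # same message won't be returned twice
        return text
```

With mark_as_read=False the same message can be previewed repeatedly; with the default, a second call returns nothing until you send a new reply.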
Usage Examples
Example 1: Simple Question
You say to your AI:
"I'm going to grab coffee. If you need to know which database to use, ask me via Telegram."
AI calls:
ask_human(question="Should I use PostgreSQL or MongoDB for this project?")
What happens:
- Your phone buzzes with a Telegram message
- You reply: "PostgreSQL"
- AI receives "PostgreSQL" and continues
Example 2: Multiple Choice with Buttons
AI calls:
ask_human(
question="How should I structure the authentication?",
options=["JWT", "Session Cookies", "OAuth2", "Skip for now"]
)
What happens:
- You receive a Telegram message with 4 clickable buttons
- You tap "JWT"
- AI receives "JWT" instantly
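Under the hood, button options map onto the Telegram Bot API's inline keyboard: each option becomes an InlineKeyboardButton carrying callback_data, wrapped in a reply_markup payload. A hedged sketch of building that payload (the helper name and the "__custom__" sentinel are assumptions, not the server's actual code):

```python
def build_reply_markup(options, allow_custom=True):
    """Build a Telegram inline_keyboard payload, one button per row.

    Note: the Bot API limits callback_data to 64 bytes, so long option
    texts would need a shorter identifier in a real implementation.
    """
    rows = [[{"text": opt, "callback_data": opt}] for opt in options]
    if allow_custom:
        rows.append([{"text": "✏️ Custom answer (type below)",
                      "callback_data": "__custom__"}])
    return {"inline_keyboard": rows}
```

The resulting dict is sent as the reply_markup field of a sendMessage call; tapping a button delivers its callback_data back to the server as your answer.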
Example 3: Approval Workflow
You say to your AI:
"Refactor the entire codebase, but ask me before making any breaking changes."
AI calls:
ask_human(
question="I want to rename `getUserData()` to `fetchUser()`. This will break 23 files. Proceed?",
options=["Yes, proceed", "No, skip this", "Show me the files first"]
)
Example 4: Non-Blocking Mode for Complex Questions
AI asks without waiting:
ask_human(
question="Please review this 500-line refactor and provide detailed feedback",
wait=False
)
What happens:
- You receive the question on Telegram
- You take your time to review (no timeout)
- When ready, you tell the AI: "I've answered on Telegram"
- AI retrieves your answer: get_telegram_response()
Example 5: Progress Notifications
AI sends status updates:
send_telegram_notification("🚀 Starting database migration...")
send_telegram_notification("✅ Step 1/5 complete: Schema created")
send_telegram_notification("🎉 Migration complete! All 1,247 records migrated successfully.")
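Each notification corresponds to one call to the Telegram Bot API's sendMessage method. A minimal sketch of the request the server would issue (the helper name is illustrative):

```python
def build_send_message_request(token: str, chat_id: str, text: str):
    """Return the (url, payload) pair for one Bot API sendMessage call."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload
```

The server would then POST the payload with its HTTP client, e.g. httpx.post(url, json=payload), and treat an "ok": true response as success.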
How It Works
- AI calls ask_human() with a question and optional button choices
- Server sends a Telegram message to your configured chat
- Long polling loop checks Telegram every 2 seconds for your response
- You respond via text or button click
- Server returns your answer to the AI
- AI continues with your input
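The polling step above can be sketched as a generic wait loop. This is illustrative only: the real server polls the Bot API's getUpdates method, abstracted here as a fetch_reply callable:

```python
import time

def wait_for_reply(fetch_reply, timeout_seconds=120, poll_interval=2.0):
    """Call fetch_reply() every poll_interval seconds until it returns a
    non-None answer or timeout_seconds elapse."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        answer = fetch_reply()  # e.g. one getUpdates call against the Bot API
        if answer is not None:
            return answer
        time.sleep(poll_interval)
    return "Timeout: User did not respond in time..."
```

The 120-second default and the 2-second poll interval match the behaviour described in this README.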
Timeout Behavior
- Default timeout: 120 seconds (2 minutes)
- Configurable via the timeout_seconds parameter in ask_human()
- If you don't respond in time, the AI receives: "Timeout: User did not respond in time..."
Non-Blocking Mode
For complex questions that require extended thinking time, use non-blocking mode to avoid timeouts:
1. AI asks without waiting: ask_human("Please review this architecture and provide feedback", wait=False)
2. You receive the question and can take as long as needed to think and respond
3. When ready, tell the AI you've replied: "I've answered on Telegram"
4. AI retrieves your answer: answer = get_telegram_response()
Custom Answers with Buttons
When you provide button options, the server automatically adds a "✏️ Custom answer" button (unless allow_custom=False). This lets you:
- Click a button for quick selection
- OR type your own custom response
Example:
ask_human(
"Which database?",
options=["PostgreSQL", "MongoDB", "MySQL"],
wait=False
)
You'll see 4 buttons:
- PostgreSQL
- MongoDB
- MySQL
- ✏️ Custom answer (type below)
Best Practices
Use Non-Blocking Mode (wait=False) when:
- Questions require code review or deep analysis
- You might be away from your device
- The decision requires research or consultation
- You need more than 2 minutes to respond
Use Blocking Mode (wait=True) when:
- Questions are simple yes/no decisions
- You're actively working with the AI
- Quick responses are expected
- Using button options for multiple choice
Use Notifications (send_telegram_notification) for:
- Progress updates during long-running tasks
- Task completion notifications
- Error or warning alerts
- Status updates that don't require a response
- Keeping yourself informed while away from the computer
Project Structure
telegram-mcp-server/
├── telegram_server.py # Main MCP server code
├── .env # Your credentials (gitignored)
├── .env.sample # Template for environment variables
├── .gitignore # Protects secrets
├── requirements.txt # Python dependencies
├── run.sh # Startup script (executable)
├── mcp-config.example.json # Example MCP configuration (customize for your setup)
├── Dockerfile # Docker image definition
├── docker-compose.yml # Docker Compose configuration
├── .dockerignore # Docker build exclusions
└── README.md # This file
Troubleshooting
"Command 'python' not found"
Solution: Use python3 instead, or install the symlink:
sudo apt install python-is-python3
The run.sh script already uses the correct Python from the virtual environment.
"Error: Could not find a Telegram Chat ID"
Solution: Make sure you've sent /start to your bot at least once.
"Permission denied: ./run.sh"
Solution: Make the script executable:
chmod +x run.sh
"Timeout: User did not respond in time"
Solutions:
- Respond faster (within 2 minutes)
- Increase the timeout via the timeout_seconds parameter in the ask_human() call
- Use non-blocking mode (wait=False) for complex questions
MCP Server Not Showing in Client
Solutions:
- Check the MCP settings file path is correct
- Verify the absolute path in your configuration
- Restart your MCP client completely
- Check client logs for connection errors
Docker Container Won't Start
Check logs:
docker-compose logs telegram-mcp
Environment Variables Not Loading
Ensure .env file exists and is properly formatted:
cat .env
Permission Issues with Docker
If you encounter permission issues, ensure the .env file is readable:
chmod 644 .env
Advanced Configuration
Changing the Default Timeout
You can customize the timeout per question:
# Short timeout for quick questions
ask_human("Proceed?", options=["Yes", "No"], timeout_seconds=30)
# Longer timeout for thoughtful questions
ask_human("Which approach?", timeout_seconds=300) # 5 minutes
Restricting to Specific User
The server uses TELEGRAM_USER_ID from .env. Only messages from this user ID will be accepted.
Using Multiple Bots
Create separate directories with different .env files and register each as a different MCP server:
{
"mcpServers": {
"telegram-work": {
"command": "bash",
"args": ["/path/to/work-bot/run.sh"]
},
"telegram-personal": {
"command": "bash",
"args": ["/path/to/personal-bot/run.sh"]
}
}
}
Security Best Practices
- ✅ Never commit .env - It's already in .gitignore
- ✅ Set TELEGRAM_USER_ID - Prevents unauthorized users from controlling your AI
- ✅ Use private Telegram bots - Don't share your bot with others
- ✅ Use Docker secrets for sensitive data in production
- ✅ Regularly update the base Python image for security patches
Production Deployment
For production deployment:
- Remove development volume mounts from docker-compose.yml
- Use environment variables or a secrets manager instead of the .env file
- Set up proper logging and monitoring
- Configure restart policies appropriately
Example production docker-compose.yml:
version: '3.8'
services:
telegram-mcp:
build:
context: .
dockerfile: Dockerfile
container_name: telegram-mcp-server
environment:
- TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
- TELEGRAM_USER_ID=${TELEGRAM_USER_ID}
- PYTHONUNBUFFERED=1
restart: always
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
Dependencies
- fastmcp (2.14.2+) - MCP server framework
- httpx (0.28.1+) - Async HTTP client for Telegram API
- python-dotenv (1.2.1+) - Environment variable management
See requirements.txt for full list.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - Use freely!
Support
For issues with:
- MCP Protocol: Check Model Context Protocol docs
- FastMCP: Visit FastMCP documentation
- Telegram Bot API: See Telegram Bot API docs
Changelog
v1.0.0 (2026-01-02)
- Initial release
- Support for blocking and non-blocking question modes
- Button support for multiple choice questions
- Progress notification system
- Docker support
- Comprehensive documentation