# Telegram Bridge MCP

*Unblock your agent workflow through Telegram*

A Model Context Protocol server that bridges AI assistants to a Telegram bot — enabling two-way messaging, interactive confirmations, live status updates, and automatic voice transcription.
Works with any MCP-compatible AI host: VS Code Copilot, Claude Desktop, and others.
> [!NOTE]
> **Pre-release:** This project is functional but has not yet been widely tested in production. Expect rough edges and possible breaking changes.
## What it does
Once configured, your AI assistant can:
- Send messages to your Telegram chat — plain text, formatted Markdown, photos
- Ask questions and wait for your reply — as free text or button choices
- Post live status updates — an in-place checklist that updates as tasks progress
- React to messages — emoji reactions instead of noise text
- Transcribe voice messages — speak your reply; it arrives as text
- Send and receive files — send documents/photos from disk or URL; receive any file type and download on demand
- Receive all of this in real time — long-polling, no webhooks, no public URL needed
## Prerequisites
- **Node.js 18+** — [nodejs.org](https://nodejs.org)
- **pnpm** — install once via:

  ```sh
  npm install -g pnpm
  ```

If you prefer npm, you can substitute every pnpm command with its npm equivalent (`npm install`, `npm run build`, etc.). The project works with either.
## Quick Start
### 1. Clone and install

```sh
git clone https://github.com/electricessence/Telegram-Bridge-MCP.git
cd Telegram-Bridge-MCP
pnpm install
pnpm build
```
### 2. Create a Telegram bot

Open Telegram, message **@BotFather**, and run `/newbot`. Copy the token it gives you.
### 3. Pair the bot to your account

```sh
pnpm pair
```
This interactive wizard:
- Verifies your bot token
- Generates a one-time pairing code
- Waits for you to send that code to your bot in Telegram
- Captures your user ID and chat ID
- Writes everything to `.env`
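The pairing handshake boils down to a one-time shared secret. Here is a minimal sketch of that flow, not the wizard's actual implementation (the helper names `makePairingCode` and `tryPair` are hypothetical):

```typescript
import { randomBytes } from "node:crypto";

// Generate a short one-time code for the user to send to the bot
// (hypothetical; the real wizard's code format may differ).
function makePairingCode(): string {
  return randomBytes(4).toString("hex").toUpperCase();
}

// When a Telegram message arrives, accept the sender only if the text
// matches the outstanding code, then capture the IDs written to .env.
function tryPair(
  expected: string,
  msg: { text: string; from: { id: number }; chat: { id: number } }
): { userId: number; chatId: number } | null {
  if (msg.text.trim().toUpperCase() !== expected) return null;
  return { userId: msg.from.id, chatId: msg.chat.id };
}
```

Because the code is single-use and must arrive from inside Telegram, the wizard learns your user ID and chat ID without you having to look them up manually.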
### 4. Configure your MCP host

**VS Code** — add to `.vscode/mcp.json`:
```json
{
  "servers": {
    "telegram": {
      "type": "stdio",
      "command": "node",
      "args": ["dist/index.js"],
      "cwd": "/absolute/path/to/telegram-bridge-mcp",
      "env": {
        "BOT_TOKEN": "YOUR_TOKEN",
        "ALLOWED_USER_ID": "YOUR_USER_ID",
        "ALLOWED_CHAT_ID": "YOUR_CHAT_ID"
      }
    }
  }
}
```
**Claude Desktop** — add to `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "telegram": {
      "command": "node",
      "args": ["/absolute/path/to/telegram-bridge-mcp/dist/index.js"],
      "env": {
        "BOT_TOKEN": "YOUR_TOKEN",
        "ALLOWED_USER_ID": "YOUR_USER_ID",
        "ALLOWED_CHAT_ID": "YOUR_CHAT_ID"
      }
    }
  }
}
```
### 5. Start a session

Paste the contents of `LOOP-PROMPT.md` into your AI assistant's chat. It will connect, announce itself over Telegram, and wait for your instructions.
## Tools

### High-level (use these 99% of the time)
| Tool | What it does |
|---|---|
| `get_agent_guide` | Loads the behavioral guide — call this at session start |
| `set_topic` | Sets a default title prepended to all outbound messages as `[Title]` — e.g. `[Refactor Agent]`. Useful when multiple VS Code instances share one Telegram chat so you can tell which agent sent what. Pass an empty string to clear. |
| `notify` | Silent or audible notification with title, body, and severity |
| `ask` | Sends a question; blocks until you reply with text |
| `choose` | Sends a question with buttons; blocks until you tap one |
| `send_confirmation` | Yes/No prompt wired to `wait_for_callback_query` |
| `update_status` | Live in-place checklist — updates as steps complete |
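To make `update_status`'s "in-place checklist" concrete: the server edits a single message's text as steps complete. A hypothetical renderer for that message body (the exact format is an assumption, not taken from this project) might look like:

```typescript
type Step = { label: string; done: boolean };

// Build the checklist text; the server would pass this string to
// Telegram's editMessageText each time a step's state changes,
// so one message updates in place instead of flooding the chat.
function renderChecklist(title: string, steps: Step[]): string {
  const lines = steps.map((s) => `${s.done ? "✅" : "⬜"} ${s.label}`);
  return [title, ...lines].join("\n");
}
```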
### Messaging

`send_message` · `edit_message_text` · `forward_message` · `delete_message` · `pin_message` · `send_chat_action` · `show_typing` · `cancel_typing`

### Files

`send_document` · `send_photo` · `send_video` · `send_audio` · `send_voice` · `download_file`

### Interaction primitives

`wait_for_message` · `wait_for_callback_query` · `answer_callback_query`
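A blocking primitive like `wait_for_message` amounts to parking a promise until the long-polling loop delivers the next update. A simplified sketch of that pattern (assumed, not the server's actual code):

```typescript
type Message = { text: string; from: { id: number } };

// One pending waiter at a time; the long-poll loop calls deliver()
// for each inbound update that passes the user-ID filter.
let pending: ((msg: Message) => void) | null = null;

// The tool call resolves only when a message arrives — this is what
// makes ask/choose "block until you reply".
function waitForMessage(): Promise<Message> {
  return new Promise((resolve) => {
    pending = resolve;
  });
}

function deliver(msg: Message): void {
  if (pending) {
    const resolve = pending;
    pending = null; // clear before resolving so a new waiter can register
    resolve(msg);
  }
}
```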
### Info & utilities

`get_me` · `get_chat` · `set_commands` · `set_reaction` · `get_updates` · `restart_server`
`set_commands` — registers (or clears) the bot's slash-command menu in the active chat. Pass `[{command, description}, ...]` to show commands in Telegram's autocomplete; pass `[]` to remove the menu.
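For example, a `set_commands` payload might look like this (the command names here are illustrative, not commands this project defines):

```typescript
// Each entry becomes one row in Telegram's "/" autocomplete menu.
// Telegram requires lowercase command names of 1–32 characters.
const commands = [
  { command: "status", description: "Show the agent's current task list" },
  { command: "pause", description: "Pause the agent until further notice" },
];

// Passing an empty array instead removes the menu entirely.
const clearMenu: typeof commands = [];
```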
## Resources
Three guides are available as MCP resources — any MCP client can read them directly:
| Resource URI | Contents |
|---|---|
| `telegram-bridge-mcp://agent-guide` | Behavioral guide for AI assistants |
| `telegram-bridge-mcp://setup-guide` | Full bot setup walkthrough |
| `telegram-bridge-mcp://formatting-guide` | Markdown/MarkdownV2/HTML reference |
## Security
The server enforces a strict two-layer security model:
- `ALLOWED_USER_ID` — Inbound updates from any other user are silently discarded before the assistant ever sees them. Prevents message injection.
- `ALLOWED_CHAT_ID` — Outbound tool calls to any other chat are rejected immediately. Prevents misdirected messages.
The server is designed for single-user, single-chat use — `chat_id` is never a tool parameter; it is resolved from config transparently.
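The two layers can be sketched as a pair of guard functions (a simplified illustration under the config names above, not the server's actual code):

```typescript
// Loaded once from .env; placeholder values for illustration.
const ALLOWED_USER_ID = 123456789;
const ALLOWED_CHAT_ID = 123456789;

// Layer 1: drop inbound updates from anyone but the paired user,
// before the assistant ever sees them.
function acceptUpdate(update: { from: { id: number } }): boolean {
  return update.from.id === ALLOWED_USER_ID;
}

// Layer 2: every outbound send resolves its destination from config.
// Because chat_id is never a tool parameter, a misbehaving tool call
// cannot redirect messages to another chat.
function resolveChatId(): number {
  return ALLOWED_CHAT_ID;
}
```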
See `SETUP.md` for the full security model and threat analysis.
## Voice Transcription
All message-receiving tools (`wait_for_message`, `ask`, `choose`, `get_updates`) automatically transcribe voice messages using a local Whisper model via `@huggingface/transformers` (ONNX Runtime).
- No external API calls
- No ffmpeg required
- Model weights are downloaded once on first use and cached locally
Configure via environment variables:

```sh
WHISPER_MODEL=onnx-community/whisper-base   # default
WHISPER_CACHE_DIR=/path/to/cache            # optional
```
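A resolver for these settings might look like the following; the helper name and fallback behavior are assumptions, and only the variable names and default model come from above:

```typescript
// Hypothetical helper: read the Whisper settings from the environment,
// falling back to the documented default model.
function whisperConfig(env: Record<string, string | undefined>) {
  return {
    model: env.WHISPER_MODEL ?? "onnx-community/whisper-base",
    cacheDir: env.WHISPER_CACHE_DIR, // undefined → the library's default cache dir
  };
}
```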
## Development

```sh
pnpm build     # Compile TypeScript
pnpm dev       # Watch mode
pnpm test      # Run tests
pnpm coverage  # Test coverage report
pnpm pair      # Re-run pairing wizard
```
## License

MIT — see [LICENSE](LICENSE).