Problem: You Want a Personal AI Bot on Telegram — Without Writing a Server
Most Telegram bot tutorials drop you into Python boilerplate, webhook servers, and deployment configs before you've sent a single message.
n8n removes all of that. You wire a Telegram trigger to an AI node, add memory, and you have a working chatbot in one workflow — no code, no server to maintain.
You'll learn:
- How to create a Telegram bot and connect it to n8n
- How to route messages through an OpenAI GPT-4o node
- How to add conversation memory so the bot remembers context
Time: 30 min | Difficulty: Beginner
Why n8n Is the Right Tool for This
A Telegram bot needs three things: receive a message, think, reply. n8n maps this directly to three nodes. You don't need to handle HTTP webhooks, manage state, or deploy anything — n8n's cloud handles the listener, and its built-in AI nodes handle the LLM call.
If you're self-hosting n8n, the Telegram trigger uses a webhook that n8n registers automatically when you activate the workflow.
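That registration is, under the hood, a single call to the Bot API's `setWebhook` method. A minimal sketch of the URL n8n constructs (the webhook path below is illustrative; n8n generates its own per-workflow path):

```python
import urllib.parse

def build_set_webhook_url(token: str, public_base_url: str, webhook_path: str) -> str:
    """Build the Telegram setWebhook call that n8n issues on activation.

    `webhook_path` is a placeholder here; n8n generates its own path per workflow.
    """
    target = f"{public_base_url}/{webhook_path}"
    query = urllib.parse.urlencode({"url": target})
    return f"https://api.telegram.org/bot{token}/setWebhook?{query}"
```

This is also why the HTTPS requirement exists: Telegram will only deliver updates to a webhook URL it can reach over TLS.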
Solution
Step 1: Create a Telegram Bot with BotFather
Open Telegram and search for @BotFather.
Send this command:
/newbot
BotFather will ask for:
- Bot name — the display name (e.g., `My AI Assistant`)
- Bot username — must end in `bot` (e.g., `myai_helper_bot`)
After confirming, BotFather returns your bot token:
1234567890:AAFxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Copy this token. You'll need it in Step 3.
If it fails:
- Username already taken → Try a more unique handle, e.g., `yourname_ai_bot`
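You can sanity-check the token before touching n8n: every Bot API method lives at `https://api.telegram.org/bot<token>/<method>`, and `getMe` returns the bot's identity if the token is valid. A quick sketch:

```python
import json
import urllib.request

def api_url(token: str, method: str) -> str:
    # Every Bot API method hangs off this one URL pattern.
    return f"https://api.telegram.org/bot{token}/{method}"

def get_me(token: str) -> dict:
    # getMe answers with {"ok": true, "result": {...}} for a valid token.
    with urllib.request.urlopen(api_url(token, "getMe")) as resp:
        return json.load(resp)
```

If `get_me` returns `"ok": false` or a 401, the token was copied wrong; fix that before wiring it into n8n.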
Step 2: Create a New Workflow in n8n
In your n8n dashboard, click + New Workflow.
Name it something descriptive: Telegram AI Chatbot.
You'll build three nodes in sequence:
Telegram Trigger → AI Agent (OpenAI) → Telegram (Send Message)
Step 3: Add the Telegram Trigger Node
Click the + canvas button and search for Telegram Trigger.
In the node settings:
- Credential: Click Create new credential, paste your BotFather token, and save
- Updates: Select `message` (fires on every incoming message)
- Additional fields → Chat ID: Leave empty to accept messages from all chats
Click Test trigger, then send your bot a message in Telegram (e.g., "hello"). The trigger node should show the incoming payload.
Expected output in the node:
{
  "message": {
    "chat": { "id": 123456789 },
    "text": "hello",
    "from": { "first_name": "Mark" }
  }
}
If nothing arrives:
- Make sure you started a conversation with your bot first (send `/start` in Telegram)
- On self-hosted n8n, confirm your instance is publicly reachable over HTTPS — Telegram requires it for webhooks
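The expressions you'll write in the next steps just walk this payload. The same extraction in plain Python, using the sample update above, shows exactly which fields matter:

```python
def extract_message(update: dict):
    """Pull (chat_id, text) out of a Telegram update, or None for non-text updates."""
    message = update.get("message", {})
    chat_id = message.get("chat", {}).get("id")
    text = message.get("text")
    if chat_id is None or text is None:
        return None  # stickers, photos, group joins, etc. carry no "text" field
    return chat_id, text
```

`chat.id` routes the reply back to the right conversation; `text` is what the AI reads.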
Step 4: Add the AI Agent Node
Click + after the Telegram Trigger and search for AI Agent.
Configure it:
- Chat Model: Select OpenAI Chat Model → create an OpenAI credential with your API key
- Model: `gpt-4o` (or `gpt-4o-mini` for lower cost)
- Input: Click the expression editor for the User Message field and map it to `{{ $json.message.text }}`
- System Prompt: Write your bot's personality here. Example:

  You are a helpful assistant. Keep responses concise and friendly. Reply only in the language the user writes in.
Leave all other settings at their defaults for now.
Step 5: Add Conversation Memory
Without memory, the bot treats every message as the first message. The AI Agent node has a built-in memory option.
In the AI Agent node, scroll to Memory and select Window Buffer Memory.
- Session ID: Map it to the Telegram chat ID so each conversation has its own memory: `{{ $('Telegram Trigger').item.json.message.chat.id }}`
- Context Window Length: `10` (keeps the last 10 message pairs — enough for most conversations)
This stores conversation history in n8n's memory during the workflow session. For persistent memory across restarts, switch to the Postgres or Redis memory node later.
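Conceptually, a window buffer is just a per-session list capped at the last N message pairs, keyed by the session ID — here, the chat ID. A minimal sketch (an illustration of the idea, not n8n's implementation):

```python
from collections import defaultdict, deque

class WindowBufferMemory:
    """Keep the last `window` user/assistant message pairs per session."""

    def __init__(self, window: int = 10):
        # 2 * window slots: each pair is one user message plus one assistant reply.
        self.sessions = defaultdict(lambda: deque(maxlen=2 * window))

    def add(self, session_id, role: str, content: str) -> None:
        self.sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id) -> list:
        return list(self.sessions[session_id])
```

Because the buffer is keyed per chat, two users talking to the bot never see each other's context — which is exactly why the Session ID expression maps to `message.chat.id`.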
Step 6: Add the Telegram Send Message Node
Click + after the AI Agent and search for Telegram.
Select the Send Message action.
Configure it:
- Credential: Select the same Telegram credential from Step 3
- Chat ID: Map to the incoming chat ID: `{{ $('Telegram Trigger').item.json.message.chat.id }}`
- Text: Map to the AI Agent's output: `{{ $json.output }}`
This sends the AI's reply back to the same chat the message came from.
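The Send Message node wraps the Bot API's `sendMessage` method. Preparing that request yourself looks roughly like this (only the request is built here, since actually sending it needs a live token):

```python
import urllib.parse

def build_send_message(token: str, chat_id: int, text: str):
    """Prepare the sendMessage POST that the Telegram node issues."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    body = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    return url, body
```

Note that the only routing information Telegram needs is `chat_id` — the same value the trigger received, which is why both expressions in this workflow point back at it.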
Step 7: Activate the Workflow
Click Save, then toggle the workflow to Active in the top-right corner.
n8n registers the Telegram webhook automatically. Your bot is now live.
Verification
Send your bot a message in Telegram:
What's the capital of Japan?
You should see a reply from the bot within 2–3 seconds:
Tokyo is the capital of Japan.
Then test memory:
You: My name is Mark.
Bot: Nice to meet you, Mark!
You: What's my name?
Bot: Your name is Mark.
If the second reply works, memory is wired correctly.
Going Further
Once the base bot works, these are the most useful next steps:
- Add a system prompt per user — use a Set node before the AI Agent to build a dynamic system prompt based on `$json.message.from.username`.
- Handle non-text messages — add an IF node after the Telegram Trigger to check that `$json.message.text` exists, and reply with "I only understand text messages" otherwise.
- Add tools to the agent — the AI Agent node supports tool nodes (HTTP Request, Google Sheets, Code). Wire in a tool to let the bot look up data or take actions.
- Persist memory to Postgres — replace Window Buffer Memory with the Postgres Chat Memory node and point it at a database. Conversations survive n8n restarts.
What You Learned
- The Telegram Trigger → AI Agent → Telegram Send Message pattern is the core of any n8n chatbot
- Chat ID is the key to routing replies back correctly and scoping memory per conversation
- Window Buffer Memory gives context within a session; switch to Postgres for persistence
- n8n handles webhook registration automatically on workflow activation
Limitation: Window Buffer Memory resets when n8n restarts. For production bots with paying users, wire in a persistent memory store before going live.
Tested on n8n 1.85.0, OpenAI GPT-4o, Telegram Bot API 7.x — n8n Cloud and self-hosted Docker