Problem: Polling Supabase Wastes Resources and Misses Events
If you're checking Supabase for new rows on a schedule, you're either missing events between polls or hammering your database every 30 seconds. The n8n Supabase node with real-time triggers solves both problems — your workflow fires the moment a row changes.
You'll learn:
- How to connect n8n to Supabase using API credentials
- How to trigger n8n workflows on INSERT, UPDATE, and DELETE via Supabase Realtime
- How to use the Supabase node for reads, writes, and upserts inside a workflow
Time: 20 min | Difficulty: Intermediate
Why Polling Fails for Event-Driven Workflows
Supabase Realtime streams row-level changes from PostgreSQL's logical replication (the WAL) as WebSocket events. Polling hits the REST API on a fixed interval — you either miss fast-moving events or create unnecessary load.
Symptoms of the polling problem:
- Workflow runs every minute but the event happened 58 seconds ago
- Duplicate processing when two rows insert before the next poll
- High Supabase API usage on the free tier
The solution: use a Supabase Realtime trigger in n8n to fire immediately on database changes.
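To see why the event-driven model wins, here is a conceptual sketch of the Realtime subscription a trigger like n8n's maintains, written against the supabase-js v2 API. The channel name and `handle` function are illustrative, not n8n's actual internals.

```javascript
// Conceptual sketch of a Realtime subscription; not n8n's internals.
const changeConfig = {
  event: 'INSERT', // or 'UPDATE' / 'DELETE' / '*'
  schema: 'public',
  table: 'orders',
};

// With a live supabase-js v2 client, the wiring would look like:
//   supabase
//     .channel('orders-changes')
//     .on('postgres_changes', changeConfig, (payload) => handle(payload))
//     .subscribe();

function handle(payload) {
  // payload.new carries the inserted row; no polling interval involved
  return `received ${payload.eventType} on ${payload.table}`;
}

console.log(handle({ eventType: 'INSERT', table: 'orders', new: { id: 42 } }));
```

The key difference from polling: the server pushes each change the moment it commits, so latency is network round-trip time rather than half your polling interval on average.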
Solution
Step 1: Get Your Supabase Credentials
You need two things from your Supabase project dashboard:
- Project URL — found under Settings → API → Project URL
- Service Role Key — found under Settings → API → Project API Keys → service_role
Use the service_role key, not the anon key. The anon key respects row-level security, which will silently block n8n from reading data it should access.
Project URL: https://xyzabc.supabase.co
Service Role Key: eyJhbGciOiJIUzI1NiIsInR5cCI6...
Keep the service role key secret — it bypasses all RLS policies.
Step 2: Add Supabase Credentials in n8n
- Open n8n → Credentials → New Credential → Supabase
- Paste your Project URL and Service Role Key
- Click Test — you should see Connection successful
- Save as Supabase Production (or match your environment)
If the test fails, double-check you used the service_role key, not anon.
Step 3: Enable Realtime on Your Table
Supabase Realtime is off by default per table. Enable it before wiring up the trigger.
In your Supabase dashboard:
- Go to Database → Replication
- Under Supabase Realtime, find your table (e.g., orders)
- Toggle INSERT, UPDATE, and DELETE as needed
- Click Save
Alternatively, enable it via SQL:
-- Enable Realtime for the orders table
ALTER PUBLICATION supabase_realtime ADD TABLE orders;
Verify it's active:
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'supabase_realtime';
Step 4: Create the n8n Trigger Workflow
- In n8n, create a new workflow
- Add a Supabase Trigger node as the first node
- Configure it:
| Field | Value |
|---|---|
| Credential | Supabase Production |
| Table | orders (or your table name) |
| Events | INSERT (select what you need) |
| Schema | public |
- Click Listen for Test Event
- In your Supabase Table Editor, insert a test row
You should see the payload appear in n8n within 1–2 seconds:
{
"type": "INSERT",
"table": "orders",
"schema": "public",
"record": {
"id": 42,
"customer_id": 7,
"status": "pending",
"total": 129.99,
"created_at": "2026-03-09T10:22:00Z"
},
"old_record": null
}
For UPDATE events, old_record contains the row before the change — useful for detecting which fields changed.
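For instance, a small helper in an n8n Code node can diff the two objects to list exactly which fields changed. This `changedFields` function is a hypothetical helper, not a built-in; the field names match the payload shown above.

```javascript
// Given a Realtime UPDATE payload, list the fields whose values changed.
function changedFields(record, oldRecord) {
  return Object.keys(record).filter(
    (key) => JSON.stringify(record[key]) !== JSON.stringify(oldRecord?.[key])
  );
}

const payload = {
  type: 'UPDATE',
  record: { id: 42, status: 'shipped', total: 129.99 },
  old_record: { id: 42, status: 'pending', total: 129.99 },
};

console.log(changedFields(payload.record, payload.old_record)); // → [ 'status' ]
```

This is handy for routing: an IF node downstream can branch only when `status` is among the changed fields, ignoring cosmetic updates.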
Step 5: Act on the Trigger Data
Wire a Supabase node after the trigger to read related data, or connect any other n8n node.
Example: Fetch the customer record when a new order arrives
Add a Supabase node with this config:
| Field | Value |
|---|---|
| Operation | Get Many |
| Table | customers |
| Filters | id = {{ $json.record.customer_id }} |
Expression for Filter Value:
{{ $json.record.customer_id }}
This pulls the customer row associated with the new order, making it available to downstream nodes (Slack, email, AI agent, etc.).
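Under the hood, a filter like `id = <value>` maps onto PostgREST's `eq` operator in the REST query string. The helper below is a rough sketch of that mapping for intuition; the exact way the n8n node builds its requests is an assumption here.

```javascript
// Sketch: how an "id = <value>" filter maps to PostgREST's eq operator.
// The helper and URL shape are illustrative; the n8n node builds this for you.
function buildFilterQuery(table, column, value) {
  return `/rest/v1/${table}?${column}=eq.${encodeURIComponent(value)}`;
}

const triggerJson = { record: { customer_id: 7 } };
console.log(buildFilterQuery('customers', 'id', triggerJson.record.customer_id));
// → /rest/v1/customers?id=eq.7
```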
Step 6: Write Back to Supabase
To update a row after processing (e.g., mark an order as notified):
Add another Supabase node:
| Field | Value |
|---|---|
| Operation | Update |
| Table | orders |
| Matching Column | id |
| Matching Value | {{ $('Supabase Trigger').item.json.record.id }} |
Set the columns to update:
{
"status": "notified",
"notified_at": "{{ $now.toISO() }}"
}
Use Upsert instead of Update when you want to insert if the row doesn't exist yet — common in sync workflows.
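Conceptually, upsert means "update the matching row if it exists, otherwise insert it first." The toy in-memory model below illustrates that difference in semantics; it is not how the Supabase node is implemented.

```javascript
// Toy model of Update vs Upsert semantics against a keyed row store.
function update(rows, match, changes) {
  const row = rows.get(match);
  if (!row) return false; // Update: no-op when the row is missing
  Object.assign(row, changes);
  return true;
}

function upsert(rows, match, changes) {
  if (!rows.has(match)) rows.set(match, { id: match }); // Upsert: create first
  return update(rows, match, changes);
}

const rows = new Map();
console.log(update(rows, 42, { status: 'notified' })); // false: nothing to update
console.log(upsert(rows, 42, { status: 'notified' })); // true: row created, then updated
console.log(rows.get(42)); // { id: 42, status: 'notified' }
```

In a sync workflow this matters: with plain Update, a row that hasn't been mirrored yet is silently skipped, while Upsert guarantees it exists after the node runs.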
Verification
Insert a row directly in the Supabase SQL editor:
INSERT INTO orders (customer_id, status, total)
VALUES (7, 'pending', 129.99);
In n8n, you should see:
- The trigger fires within 2 seconds
- The execution log shows the full record payload
- Any downstream nodes (Slack, email, update) complete successfully
Check the execution history under Executions to confirm the payload shape and catch any downstream errors.
Production Considerations
Realtime connection limits: The Supabase Free tier allows 200 concurrent Realtime connections. Each active n8n workflow with a Realtime trigger holds one persistent WebSocket connection.
Missed events on restart: If n8n restarts, events that occurred during downtime are not replayed. For critical workflows, add a fallback scheduled poll to catch any gap. A 5-minute cron that checks for unprocessed rows (e.g., notified_at IS NULL) covers edge cases.
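The fallback poll should be idempotent, so rows already handled by the Realtime path are never processed twice. Here is a sketch of the selection logic; column names follow this article's schema, and the two-minute grace period is an illustrative choice.

```javascript
// Pick rows the Realtime trigger may have missed: unnotified and older than
// a grace period, so rows currently mid-flight aren't double-processed.
function rowsToReprocess(rows, now, graceMs = 2 * 60 * 1000) {
  return rows.filter(
    (r) => r.notified_at === null && now - Date.parse(r.created_at) > graceMs
  );
}

const now = Date.parse('2026-03-09T10:30:00Z');
const rows = [
  { id: 1, notified_at: '2026-03-09T10:05:00Z', created_at: '2026-03-09T10:04:00Z' },
  { id: 2, notified_at: null, created_at: '2026-03-09T10:10:00Z' }, // missed
  { id: 3, notified_at: null, created_at: '2026-03-09T10:29:30Z' }, // still in-flight
];
console.log(rowsToReprocess(rows, now).map((r) => r.id)); // → [ 2 ]
```

The grace period is what keeps the poll from racing the Realtime path: a row inserted seconds ago is probably still being handled by the trigger workflow.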
Filter at the database level: Don't trigger on every row change and filter in n8n — use Supabase's row filter in the Realtime config to narrow the event stream early:
-- Make UPDATE and DELETE events carry the full previous row (old_record)
ALTER TABLE orders REPLICA IDENTITY FULL;
Then set a row filter on the Realtime subscription — e.g., total=gt.100 in Supabase's filter syntax — so only orders over $100 are broadcast.
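As a sanity check on what such a filter means, the tiny evaluator below models the server-side predicate for the `gt` operator only. It is a conceptual illustration of the filter semantics, not Supabase's implementation.

```javascript
// What a server-side filter like "total=gt.100" means: only events whose
// row satisfies the predicate are broadcast. Models the "gt" case only.
function matchesFilter(row, filter) {
  const [column, expr] = filter.split('=');
  const [op, raw] = expr.split('.');
  if (op === 'gt') return row[column] > Number(raw);
  throw new Error(`operator ${op} not modeled in this sketch`);
}

console.log(matchesFilter({ total: 129.99 }, 'total=gt.100')); // true: broadcast
console.log(matchesFilter({ total: 49.5 }, 'total=gt.100'));   // false: filtered out
```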
Error handling: Add an Error Trigger node in a separate workflow to catch failed executions and alert via Slack or PagerDuty. Silent failures in event-driven workflows are hard to detect otherwise.
What You Learned
- Use service_role credentials — anon will silently fail on RLS-protected tables
- Supabase Realtime must be explicitly enabled per table before the n8n trigger works
- old_record in UPDATE events lets you diff before/after state without extra queries
- Add a scheduled fallback poll to handle events missed during n8n downtime
When not to use this approach: If you need to process events in strict order with guaranteed delivery, consider Supabase + a queue (like pgmq or Redis Streams) instead of direct Realtime. Realtime is fire-and-forget — it doesn't retry on n8n failure.
Tested on n8n 1.85, Supabase JS v2, PostgreSQL 15, self-hosted n8n on Docker