Problem: Connecting n8n to PostgreSQL Without Breaking Prod
Most n8n PostgreSQL tutorials stop at "add credentials, run a SELECT." That's not enough for real workflows. You need parameterized queries to avoid SQL injection, error handling for failed inserts, and transaction support when multiple writes must succeed together.
This guide builds three production-ready patterns: query-on-trigger, bulk insert with error handling, and a polling workflow that reacts to new database rows.
You'll learn:
- How to configure the PostgreSQL node with connection pooling
- Parameterized queries using n8n's `$json` expression syntax
- A full workflow: webhook → transform → PostgreSQL insert → Slack notify
- How to poll for new rows without a cron job per table
Time: 20 min | Difficulty: Intermediate
Why This Happens
n8n's PostgreSQL node wraps node-postgres (pg). It's stable, but the GUI hides options that matter in production: SSL mode, connection pool size, and how placeholders map to n8n expressions.
Common failure points:
- `SSL connection required` errors on hosted PostgreSQL (Supabase, Neon, Railway)
- Query results returning `[Object]` instead of row data (wrong output mode)
- Duplicate inserts when a workflow retries after a partial failure
Step 1: Add PostgreSQL Credentials
In n8n, go to Credentials → New → Postgres.
| Field | What to enter |
|---|---|
| Host | Your DB host (e.g., db.yourproject.supabase.co) |
| Port | 5432 (default) |
| Database | Your database name |
| User | DB user with least-privilege access |
| Password | DB password |
| SSL | Require for any hosted provider |
Connection test: n8n runs `SELECT 1` — green means the credential works.
If SSL fails:
Set SSL to Require and toggle Ignore SSL Issues only for local dev. Never disable SSL checks in production.
If connection times out:
Hosted PostgreSQL providers (Supabase, Neon) use connection poolers on port 6543, not 5432. Use the pooler connection string from your provider's dashboard.
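Under the hood, those credential fields map onto a node-postgres pool config. Here is a minimal sketch of the equivalent settings; the host, user, and pool size are hypothetical placeholders, not values from this guide:

```javascript
// Rough node-postgres (pg) Pool equivalent of the n8n credential above.
// Host, user, and pool size are placeholder values.
const poolConfig = {
  host: 'db.yourproject.supabase.co',
  port: 6543,                        // pooler port; direct connections use 5432
  database: 'postgres',
  user: 'n8n_app',                   // least-privilege user
  password: process.env.PGPASSWORD,
  ssl: { rejectUnauthorized: true }, // "Require" with certificate checks kept on
  max: 4,                            // keep the pool small behind a pooler
};
```

Keeping `rejectUnauthorized: true` is the programmatic analogue of not toggling Ignore SSL Issues in production.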
Step 2: Run a Parameterized SELECT
Drag a Postgres node onto the canvas. Set Operation to Execute Query.
SELECT id, email, created_at
FROM users
WHERE status = $1
AND created_at > $2
ORDER BY created_at DESC
LIMIT 50;
In the Query Parameters field (comma-separated expressions):
{{ $json.status }}, {{ $json.since_date }}
n8n maps $1 → first parameter, $2 → second. This is the correct pattern — never interpolate values directly into the SQL string.
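To make the mapping concrete, here is the same evaluation in plain JavaScript; the `item` object is a stand-in for one n8n input item:

```javascript
// Stand-in for one n8n item; $json resolves to these fields at runtime.
const item = { status: 'active', since_date: '2026-01-01' };

// n8n evaluates each comma-separated expression into a positional
// parameter array: $1 is params[0], $2 is params[1].
const text = 'SELECT id, email, created_at FROM users WHERE status = $1 AND created_at > $2';
const params = [item.status, item.since_date];

// The driver sends text and params separately, so a malicious value like
// "'; DROP TABLE users; --" stays inert data instead of executable SQL.
```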
Output mode matters:
Set Return All to true if you expect more than one row. With the default setting, n8n returns only the first row and silently drops the rest.
Expected output:
[
{ "id": 42, "email": "user@example.com", "created_at": "2026-03-08T14:22:00Z" },
{ "id": 43, "email": "other@example.com", "created_at": "2026-03-08T15:01:00Z" }
]
Step 3: Insert Rows from Webhook Data
This pattern handles the most common n8n + PostgreSQL use case: receive a webhook payload, validate it, write it to the database.
The workflow
Webhook → Set (normalize fields) → Postgres (insert) → IF (success check) → Slack
Postgres Insert node config
Operation: Insert
Table: orders
Columns: map each column to an expression
| Column | Value |
|---|---|
| customer_id | {{ $json.body.customer_id }} |
| amount_cents | {{ Math.round($json.body.amount * 100) }} |
| status | pending |
| created_at | {{ new Date().toISOString() }} |
Return Fields: check Return All Columns to get the inserted row back (including the generated id).
-- What n8n generates under the hood:
INSERT INTO orders (customer_id, amount_cents, status, created_at)
VALUES ($1, $2, $3, $4)
RETURNING *;
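The Set-node normalization that feeds this insert amounts to the transform below, sketched as plain JavaScript with `body` standing in for `$json.body`:

```javascript
// Normalize a webhook payload into the orders row described above.
function normalizeOrder(body) {
  return {
    customer_id: body.customer_id,
    amount_cents: Math.round(body.amount * 100), // integer cents avoids float drift
    status: 'pending',
    created_at: new Date().toISOString(),
  };
}
```

For example, `normalizeOrder({ customer_id: 1, amount: 49.99 })` produces `amount_cents: 4999`, sidestepping the classic `49.99 * 100 === 4998.999…` floating-point trap.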
Error handling
For failures, either enable the Postgres node's error output and route it to Slack, or set the workflow's Error Workflow to one that starts with an Error Trigger node. Either way, post to a Slack #ops-alerts channel with {{ $json.message }} (the pg error text).
Common insert errors and fixes:
- `null value in column violates not-null constraint` → Add a Set node before the insert to provide defaults
- `duplicate key value violates unique constraint` → Use `INSERT ... ON CONFLICT DO NOTHING` via Execute Query instead of the Insert operation
- `invalid input syntax for type uuid` → Validate UUIDs in the Set node with `{{ $json.id.match(/^[0-9a-f-]{36}$/) ? $json.id : null }}`
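The UUID guard from the last fix can live as a standalone function; this sketch uses a stricter 8-4-4-4-12 pattern than the catch-all `[0-9a-f-]{36}` regex in the expression above:

```javascript
// Return the id if it is a canonical UUID, else null, so bad input
// fails validation instead of erroring inside the insert.
function validUuidOrNull(id) {
  const uuidRe = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  return typeof id === 'string' && uuidRe.test(id) ? id : null;
}
```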
Step 4: Poll for New Rows (No External Trigger Needed)
When your database doesn't emit events (no logical replication, no pg_notify), use a polling workflow to detect new rows.
The pattern
Schedule Trigger (every 1 min) → Postgres (SELECT new rows) → IF (has rows?) → Process each row → Postgres (mark processed)
SELECT only unprocessed rows
SELECT id, payload, created_at
FROM events
WHERE processed_at IS NULL
ORDER BY created_at ASC
LIMIT 100;
Mark rows as processed after handling
After your processing nodes, add a second Postgres node:
Operation: Execute Query
UPDATE events
SET processed_at = NOW()
WHERE id = ANY($1::int[]);
Query Parameters:
{{ $input.all().map(i => i.json.id) }}
This batches all IDs in a single UPDATE instead of N separate queries — critical at any real volume.
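What that expression does, in plain JavaScript (the `items` array stands in for the node's incoming items):

```javascript
// Collect every incoming item's id into one array, then bind it as a
// single parameter for ANY($1::int[]).
const items = [{ json: { id: 7 } }, { json: { id: 9 } }, { json: { id: 12 } }];
const ids = items.map(i => i.json.id); // → [7, 9, 12]
```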
If the workflow fails mid-run: rows without processed_at will be retried on the next poll. Design your downstream logic to be idempotent, or add a processing_started_at column and lock rows with SELECT ... FOR UPDATE SKIP LOCKED.
Step 5: Use Transactions for Multi-Table Writes
When you need to write to two tables atomically, use Execute Query with explicit transaction control.
BEGIN;
INSERT INTO accounts (user_id, balance_cents)
VALUES ($1, $2);
INSERT INTO ledger (account_id, type, amount_cents, note)
VALUES (currval('accounts_id_seq'), 'credit', $2, $3);
COMMIT;
Query Parameters:
{{ $json.user_id }}, {{ $json.initial_balance_cents }}, {{ $json.note }}
n8n sends this as a single query string. If either INSERT fails, PostgreSQL aborts the transaction and the COMMIT never takes effect. One caveat: node-postgres rejects multiple statements in a parameterized query ("cannot insert multiple commands into a prepared statement"), so if you hit that error, drop the explicit BEGIN/COMMIT and set the node's Query Batching option to Transaction; n8n then wraps the statements in a transaction for you.
Verification
Run your workflow manually and confirm end-to-end:
# Trigger the test webhook
curl -X POST https://your-n8n-instance.com/webhook/orders \
-H "Content-Type: application/json" \
-d '{"customer_id": 1, "amount": 49.99}'
Then verify the row exists in PostgreSQL:
SELECT * FROM orders ORDER BY created_at DESC LIMIT 1;
You should see: the new row with a generated id and status = 'pending'.
Check execution logs in n8n under Executions — each node shows its input/output JSON. If a Postgres node shows [] output, the query ran but returned no rows (not an error).
What You Learned
- SSL must be set to Require for all hosted PostgreSQL providers
- Always use `$1`, `$2` placeholders; never string interpolation in SQL
- Set Return All to `true` on SELECT operations or you'll silently drop rows
- Batch UPDATE with `ANY($1::int[])` instead of looping single-row updates
- `SELECT ... FOR UPDATE SKIP LOCKED` is the right pattern for concurrent workers
Limitation: n8n's built-in Postgres node doesn't support prepared statements across executions — connection pooling reuses connections but re-parses queries each time. For very high throughput (>500 inserts/sec), call a FastAPI or Hono endpoint that manages its own pg pool instead.
Tested on n8n 1.85, node-postgres 8.13, PostgreSQL 16, Supabase and self-hosted Postgres on Docker