Problem: Generating Content at Scale Is Slow and Repetitive
You have 500 product descriptions to write. Or a weekly blog pipeline that pulls trending topics, drafts posts, and pushes them to your CMS. Doing this manually — or in a fragile Python script — doesn't scale.
n8n + the OpenAI API solves this. You get a visual workflow that triggers on a schedule or webhook, calls GPT-4o for generation, handles retries, and routes output to wherever you need it: Google Sheets, Notion, WordPress, Slack.
You'll learn:
- How to connect n8n to the OpenAI API with credential management
- How to build a reusable content generation workflow with dynamic prompts
- How to add error handling, retry logic, and output routing for production use
Time: 25 min | Difficulty: Intermediate
Why This Happens
n8n has shipped a native OpenAI node since v1.22, but most tutorials stop at "send a prompt, get a reply." Production content workflows need more: dynamic prompts built from upstream data, rate limit handling, output parsing, and conditional routing based on the generated content.
What you're building:
- A trigger (schedule, webhook, or Google Sheets row)
- A prompt builder that injects dynamic variables
- An OpenAI node calling `gpt-4o` with a structured system prompt
- An output parser that extracts clean text
- A destination node (Sheets, Notion, WordPress REST API, or Slack)
Solution
Step 1: Install or Access n8n
If you're running n8n locally via Docker:
```shell
# Start n8n with persistent data volume
docker run -d \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8nio/n8n:latest
```
Open http://localhost:5678 and create your account.
If you're on n8n Cloud, log in at app.n8n.cloud — skip Docker entirely.
Expected: n8n dashboard loads. You see the canvas editor.
Step 2: Add Your OpenAI Credential
- In n8n, go to Settings → Credentials → New Credential
- Search for OpenAI
- Paste your API key from platform.openai.com/api-keys
- Click Save — n8n encrypts and stores it

```
Credential name: OpenAI Production
API Key:         sk-proj-...
```
If it fails:
- `401 Unauthorized` → Your key is invalid or has been revoked. Check platform.openai.com/usage to confirm the account is active.
- `403 Forbidden` → Your key lacks the `model:read` scope. Generate a new key with full permissions.
Step 3: Create the Workflow Canvas
Click New Workflow. You'll build this node chain:
[Trigger] → [Set Variables] → [OpenAI] → [Parse Output] → [Destination]
Add a Schedule Trigger node first (or a Webhook node if you want on-demand generation):
- Schedule Trigger: run every day at 8am UTC
- Webhook Trigger: POST to `/webhook/content-gen` with a JSON body
For this tutorial, use Manual Trigger while building — switch to Schedule or Webhook when ready for production.
Step 4: Build the Prompt with a Set Node
Add a Set node after the trigger. This is where you construct the dynamic prompt before passing it to OpenAI.
Configure these fields:

| Field name | Value |
|---|---|
| `topic` | `={{ $json.topic \|\| "the future of local LLMs" }}` |
| `content_type` | `={{ $json.content_type \|\| "blog introduction" }}` |
| `word_count` | `={{ $json.word_count \|\| 200 }}` |
| `system_prompt` | You are a technical content writer for a developer blog. Write clearly and specifically. No filler phrases like "In today's world". Use short paragraphs. |
| `user_prompt` | Write a {{ $json.content_type }} about {{ $json.topic }} in approximately {{ $json.word_count }} words. |
The || fallback means the workflow works even when fields aren't passed from upstream — useful for testing.
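The same fallback-with-defaults pattern, sketched outside n8n in plain Python (the field names and defaults mirror the Set node table above; `build_prompt_vars` is an illustrative helper, not part of n8n):

```python
# Sketch of the Set node's fallback logic: each field falls back to a
# default when the trigger payload doesn't supply it (like JS ||).
DEFAULTS = {
    "topic": "the future of local LLMs",
    "content_type": "blog introduction",
    "word_count": 200,
}

def build_prompt_vars(payload: dict) -> dict:
    """Merge an incoming trigger payload with defaults, then render the prompts."""
    vars = {k: payload.get(k) or v for k, v in DEFAULTS.items()}
    vars["system_prompt"] = (
        "You are a technical content writer for a developer blog. "
        "Write clearly and specifically."
    )
    vars["user_prompt"] = (
        f"Write a {vars['content_type']} about {vars['topic']} "
        f"in approximately {vars['word_count']} words."
    )
    return vars

# An empty payload still yields a complete prompt, mirroring the || fallbacks.
print(build_prompt_vars({})["user_prompt"])
```

Calling it with a partial payload overrides only the supplied fields, which is exactly why the workflow stays testable before any trigger is wired up.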
Step 5: Configure the OpenAI Node
Add an OpenAI node. Set:
- Credential: the key you saved in Step 2
- Resource: Chat
- Operation: Message a Model
- Model: `gpt-4o` (or `gpt-4o-mini` to cut costs by ~15x)
- System Message: `={{ $json.system_prompt }}`
- User Message: `={{ $json.user_prompt }}`
- Max Tokens: `600` (adjust for your word count targets)
- Temperature: `0.7` (lower = more consistent, higher = more creative)
This configuration is equivalent to the following Chat Completions request body:

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "{{ system_prompt }}" },
    { "role": "user", "content": "{{ user_prompt }}" }
  ],
  "max_tokens": 600,
  "temperature": 0.7
}
```
Enable "Return Full Response" if you need token usage data for cost tracking.
If it fails:
- `429 Too Many Requests` → You're hitting OpenAI's rate limit. Add a Wait node (2–5 seconds) before the OpenAI node, or enable the built-in retry in the node's settings (3 retries, 1s backoff).
- `context_length_exceeded` → Your prompt is too long. Trim the `system_prompt` or reduce `word_count`.
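If you ever swap the OpenAI node for a Code or HTTP Request node, you have to implement the retry yourself. A minimal sketch of retry-with-exponential-backoff in Python (`call` is a placeholder for your API function; the delays mirror the node's 1s-backoff setting):

```python
import time

def with_retries(call, max_retries=3, base_delay=1.0):
    """Retry a flaky API call with exponential backoff,
    similar to the OpenAI node's built-in retry setting."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: let the error path handle it
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In practice you would catch only retryable errors (429s, timeouts) and let 4xx validation errors fail immediately.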
Step 6: Parse the Output
The OpenAI node returns a JSON object. The generated text lives at:
{{ $json.message.content }}
Add a Set node after OpenAI to extract and clean the output:
| Field name | Value |
|---|---|
| `generated_content` | `={{ $json.message.content.trim() }}` |
| `tokens_used` | `={{ $json.usage.total_tokens }}` |
| `model_used` | `={{ $json.model }}` |
| `generated_at` | `={{ new Date().toISOString() }}` |
Now $json.generated_content holds your clean text for the next node.
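Capturing `tokens_used` also lets you estimate spend per item. A rough Python sketch: the $0.15/1M input-token price for `gpt-4o-mini` is the figure this article quotes; the $0.60/1M output-token price is an assumption, so verify both against OpenAI's pricing page:

```python
# Rough per-item cost estimate for gpt-4o-mini.
# $0.15 per 1M input tokens is quoted in this article; the $0.60 per 1M
# output tokens figure is an assumption -- verify against OpenAI's pricing page.
PRICE_IN_PER_M = 0.15
PRICE_OUT_PER_M = 0.60

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one generation, split by input vs output tokens."""
    return (prompt_tokens * PRICE_IN_PER_M
            + completion_tokens * PRICE_OUT_PER_M) / 1_000_000

# e.g. a run with ~100 prompt tokens and ~200 completion tokens
print(f"${estimate_cost(100, 200):.6f} per item")
```

The split matters because output tokens cost several times more than input tokens; `usage.prompt_tokens` and `usage.completion_tokens` from the full response give you both numbers.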
Step 7: Route Output to a Destination
Connect a destination node. Three common options:
Option A — Google Sheets (log + review queue):
Add a Google Sheets node:
- Operation: Append Row
- Sheet: your content log sheet
- Columns: map `topic`, `generated_content`, `tokens_used`, `generated_at`
Option B — Notion (content database):
Add a Notion node:
- Operation: Create Page
- Database: your content pipeline DB
- Title: `={{ $json.topic }}`
- Content block: `={{ $json.generated_content }}`
Option C — HTTP Request to WordPress REST API:
```
POST https://yoursite.com/wp-json/wp/v2/posts
Authorization: Basic base64(username:app_password)

{
  "title": "{{ $json.topic }}",
  "content": "{{ $json.generated_content }}",
  "status": "draft"
}
```
Use n8n's HTTP Request node with the WordPress App Password (Settings → Users → Application Passwords).
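That `Authorization` header is just the base64 encoding of `username:app_password`. A quick Python sketch of building it (the credentials here are placeholders):

```python
import base64

def basic_auth_header(username: str, app_password: str) -> str:
    """Build the Basic auth header WordPress expects for Application Passwords."""
    token = base64.b64encode(f"{username}:{app_password}".encode()).decode()
    return f"Basic {token}"

# Placeholder credentials -- substitute your WP user and Application Password.
print(basic_auth_header("user", "pass"))
```

n8n's HTTP Request node does this for you if you pick Basic Auth as the authentication type, but it's useful to know what's on the wire when debugging a 401 from WordPress.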
Step 8: Add Error Handling
In production, OpenAI calls fail. Add a fallback path so your workflow doesn't silently drop items.
- On the OpenAI node, click the ... menu → Add Error Output
- Connect the error output to a Slack node (or Gmail node)
- Configure it to send:
```
Content generation failed.
Topic: {{ $json.topic }}
Error: {{ $json.error.message }}
Workflow: n8n content pipeline
```
Now you get alerted on every failure instead of losing data.
Verification
Run the workflow manually with a test payload:
```json
{
  "topic": "pgvector vs Qdrant for production RAG",
  "content_type": "blog introduction",
  "word_count": 150
}
```
You should see:
- The OpenAI node turns green with a response
- `generated_content` contains 130–170 words of clean text
- Your destination node (Sheets, Notion, etc.) shows the new row or page
- `tokens_used` shows a number between 200 and 400
Check token usage at platform.openai.com/usage to confirm API calls are registering.
Scaling the Workflow for Batch Generation
To process a list of topics in one run, add a Google Sheets or Airtable node as the trigger instead of Manual/Schedule. Read all rows where status = "pending", then use a SplitInBatches node set to batch size 1 — this processes each topic sequentially and avoids parallel rate limit hits.
[Sheets: Read Rows] → [SplitInBatches (size=1)] → [Set Variables] → [OpenAI] → [Parse] → [Sheets: Update Row status="done"]
Add a Wait node (2 seconds) between SplitInBatches and OpenAI to stay under OpenAI's TPM (tokens per minute) limit on Tier 1 accounts.
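The shape of that batch loop, sketched in Python (the `generate` and `mark_done` callables stand in for the OpenAI and Sheets nodes; they are illustrations, not n8n APIs):

```python
import time

def process_pending(rows, generate, mark_done, delay=2.0):
    """Process rows one at a time with a pause between API calls,
    mirroring SplitInBatches(size=1) followed by a Wait node."""
    results = []
    for row in rows:
        if row.get("status") != "pending":
            continue  # skip rows already generated
        content = generate(row["topic"])   # stands in for the OpenAI node
        mark_done(row, content)            # stands in for the Sheets update
        results.append(content)
        time.sleep(delay)                  # stay under the TPM limit
    return results
```

Sequential processing trades speed for reliability: one slow loop that finishes beats a parallel fan-out that trips the rate limiter halfway through.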
What You Learned
- n8n's OpenAI node handles auth, retries, and response parsing — no SDK needed
- Dynamic prompts via Set nodes make one workflow reusable across content types
- Error outputs on the OpenAI node are essential for production — silent failures are worse than loud ones
- `gpt-4o-mini` at $0.15/1M input tokens is the right default for high-volume generation; switch to `gpt-4o` only for complex reasoning tasks
Limitation: n8n's OpenAI node doesn't support streaming responses. For real-time output to a UI, you'll need a direct HTTP Request node to `api.openai.com/v1/chat/completions` with `"stream": true` instead.
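If you do go the direct-HTTP route, the streamed response arrives as server-sent events: lines of the form `data: {json}`, each carrying a content delta. A minimal parser sketch (the sample lines follow the Chat Completions streaming shape as I understand it; double-check the field names against OpenAI's API reference):

```python
import json

def collect_stream(lines):
    """Concatenate content deltas from Chat Completions SSE lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # ignore blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel that ends the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

# Simulated stream chunks in the shape the API emits:
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))
```

In a real client you would iterate over the HTTP response line by line and flush each delta to the UI as it arrives instead of joining at the end.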
Tested on n8n 1.82, OpenAI API gpt-4o-2024-11-20, Docker on Ubuntu 24.04 and n8n Cloud