Problem: Notion Notes Pile Up Faster Than You Can Organize Them
You dump raw notes into Notion — meeting transcripts, research snippets, bookmarks — but tagging, summarizing, and filing them manually doesn't scale. By day three, your inbox database is a graveyard.
n8n can watch your Notion inbox, pass each new page through an LLM, and write the summary, tags, and destination database back — without you touching it.
You'll learn:
- How to trigger an n8n workflow when a new Notion page appears
- How to send page content to Claude or GPT-4o and parse structured output
- How to write AI-generated fields back to Notion automatically
Time: 30 min | Difficulty: Intermediate
Why Manual Tagging Fails at Scale
Notion's built-in AI costs $10/month per seat and offers no custom logic. If you want to route a note to a specific database based on its content, or apply your own taxonomy, you need a workflow layer outside Notion.
n8n gives you that layer: it polls Notion's API, lets you run arbitrary code or call any LLM, and writes results back. You own the logic and the cost.
What this workflow does:
- Triggers every 5 minutes on new pages in an "Inbox" database
- Sends page content to an LLM with a structured prompt
- Parses the JSON output: `summary`, `tags[]`, `category`
- Updates the Notion page with those fields and moves it out of Inbox
Prerequisites
- n8n self-hosted (Docker) or n8n Cloud account
- Notion account with API access enabled
- An LLM API key (OpenAI or Anthropic)
- Basic familiarity with n8n's canvas
Solution
Step 1: Create Your Notion Databases
You need two databases before touching n8n.
Inbox database — where raw notes land:
| Property | Type |
|---|---|
| Name | Title |
| Content | Text |
| Status | Select: Inbox, Processed |
| Summary | Text |
| Tags | Multi-select |
| Category | Select |
Create it in Notion: New page → /database → Table. Add the properties above.
Copy the database ID from the URL:
https://notion.so/yourworkspace/[DATABASE_ID]?v=...
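If you want to double-check the ID before pasting it into n8n, a small helper like this can pull it out of either URL form (a sketch; `extractDatabaseId` is a hypothetical name, not part of the Notion or n8n API):

```javascript
// Hypothetical helper to sanity-check the database ID copied from a URL.
// Notion IDs are 32 hex characters, sometimes shown in dashed UUID form.
function extractDatabaseId(url) {
  // Strip dashes first so dashed UUIDs match the same 32-char pattern
  const match = url.replace(/-/g, '').match(/[0-9a-f]{32}/i);
  return match ? match[0] : null;
}
```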
Step 2: Set Up a Notion Integration
Notion Settings → Connections → Develop or manage integrations → New integration
- Name: `n8n-ai-processor`
- Capabilities: Read content ✅, Update content ✅, Insert content ✅
- Copy the Internal Integration Token
Then share your Inbox database with this integration:
Open database → ••• menu → Add connections → n8n-ai-processor
Step 3: Configure Notion Credentials in n8n
In n8n:
Settings → Credentials → New → Notion API
Paste your Internal Integration Token. Name it Notion Main.
Then add your LLM credential:
Settings → Credentials → New → OpenAI API (or Anthropic)
Step 4: Build the Trigger Node
Create a new workflow. Add the first node:
- Node: `Schedule Trigger`
- Interval: Every 5 minutes
Next, add a Notion node to fetch unprocessed pages:
- Node: `Notion`
- Operation: `Get Many` (Database Items)
- Database ID: paste your Inbox database ID
- Filter: `{ "property": "Status", "select": { "equals": "Inbox" } }`
- Page Size: 10
This returns up to 10 unprocessed pages per run. At scale, increase the page size or add a loop.
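Under the hood, this operation maps to Notion's database query endpoint (`POST /v1/databases/{id}/query`). A minimal sketch of the request body it builds; `buildInboxQuery` is a hypothetical helper for illustration:

```javascript
// Sketch of the query body behind the Get Many operation.
function buildInboxQuery(pageSize = 10) {
  return {
    // Only pages whose Status select is still "Inbox"
    filter: { property: 'Status', select: { equals: 'Inbox' } },
    page_size: pageSize,
  };
}
```

You would send this body with an `Authorization: Bearer <token>` header and `Notion-Version: 2022-06-28`; n8n does both for you from the stored credential.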
Step 5: Extract Page Content
Notion's API returns properties but not the page body. Add a second Notion node to get full content:
- Node: `Notion`
- Operation: `Get` (Page)
- Page ID: `{{ $json.id }}`
Then add a Code node to flatten the block content into plain text:
```javascript
// Notion blocks come as an array — join them into one string for the LLM
const blocks = $input.all();
const text = blocks
  .map(b => {
    const block = b.json;
    // Handle paragraph, heading, and bulleted-list blocks
    const richText = block.paragraph?.rich_text
      || block.heading_1?.rich_text
      || block.heading_2?.rich_text
      || block.bulleted_list_item?.rich_text
      || [];
    return richText.map(t => t.plain_text).join('');
  })
  .filter(line => line.trim().length > 0)
  .join('\n');

return [{ json: { pageText: text, pageId: $('Notion1').first().json.id } }];
```
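One caveat worth knowing: Notion returns block children in pages of up to 100, so very long notes need a cursor loop. A sketch, with `fetchPage` standing in for the actual `GET /v1/blocks/{id}/children` call:

```javascript
// Notion paginates block children at 100 per request; follow next_cursor
// until has_more is false. fetchPage is a placeholder for the real API call.
async function getAllBlocks(fetchPage) {
  const blocks = [];
  let cursor;
  do {
    const page = await fetchPage(cursor); // { results, has_more, next_cursor }
    blocks.push(...page.results);
    cursor = page.has_more ? page.next_cursor : undefined;
  } while (cursor);
  return blocks;
}
```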
Step 6: Call the LLM with a Structured Prompt
Add an OpenAI node (or HTTP Request node for Anthropic):
- Node: `OpenAI`
- Operation: `Message a Model`
- Model: `gpt-4o` (or `claude-sonnet-4-20250514` via HTTP)
- System prompt:

```
You are a knowledge management assistant. Analyze the note and return ONLY valid JSON — no markdown fences, no explanation.
Schema:
{
  "summary": "2–3 sentence summary of the note",
  "tags": ["tag1", "tag2", "tag3"],
  "category": "one of: Research | Meeting | Task | Reference | Idea"
}
```

- User message: `{{ $json.pageText }}`
- Max tokens: 400
Why JSON-only in the system prompt: LLMs occasionally wrap output in markdown code fences. Instructing them not to — and validating in the next step — is more reliable than regex stripping.
Step 7: Parse LLM Output
Add a Code node to safely parse the JSON response:
````javascript
// LLMs sometimes return fenced JSON despite instructions — strip it defensively
const raw = $input.first().json.message.content;
const cleaned = raw.replace(/```json|```/g, '').trim();

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (e) {
  // If parsing fails, log and skip — don't break the workflow
  return [{ json: { error: 'parse_failed', raw, pageId: $('Code').first().json.pageId } }];
}

return [{
  json: {
    summary: parsed.summary || '',
    tags: parsed.tags || [],
    category: parsed.category || 'Reference',
    pageId: $('Code').first().json.pageId
  }
}];
````
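To see why the defensive strip matters, here is the same strip-and-parse applied to a deliberately fenced response (a standalone sketch, runnable outside n8n):

````javascript
// A fenced LLM response, as models sometimes emit despite instructions
const raw = '```json\n{"summary":"s","tags":["a","b"],"category":"Idea"}\n```';
// Remove the fences, then parse the remaining JSON
const parsed = JSON.parse(raw.replace(/```json|```/g, '').trim());
````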
Step 8: Write Results Back to Notion
Add a final Notion node:
- Node: `Notion`
- Operation: `Update` (Page)
- Page ID: `{{ $json.pageId }}`
- Properties to update:

| Property | Value |
|---|---|
| Summary | `{{ $json.summary }}` |
| Tags | `{{ $json.tags }}` |
| Category | `{{ $json.category }}` |
| Status | Processed |
Setting Status to Processed removes the page from the next poll cycle.
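For reference, the Update node translates that table into a `properties` payload for Notion's `PATCH /v1/pages/{page_id}` endpoint. A sketch of roughly what it sends (`buildUpdateProperties` is a hypothetical helper; property names must match your database exactly):

```javascript
// Sketch of the Notion properties payload behind the Update (Page) node.
function buildUpdateProperties(result) {
  return {
    Summary: { rich_text: [{ text: { content: result.summary } }] },
    Tags: { multi_select: result.tags.map((t) => ({ name: t })) },
    Category: { select: { name: result.category } },
    Status: { select: { name: 'Processed' } },
  };
}
```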
Verification
Add one test note to your Notion Inbox database:
Name: "Meeting notes: backend sync 2026-03-10"
Content: "Discussed moving auth to Edge Functions. Need to benchmark latency vs current Node setup. Follow up with devops team by Friday."
Status: Inbox
Run the workflow manually in n8n (click Test Workflow).
You should see:
- The page fetched in the Notion trigger node
- LLM output in the OpenAI node: a valid JSON block
- The Notion update node returning `200 OK`
Check Notion — the page should now have:
- A 2–3 sentence summary
- Tags like `["auth", "edge-functions", "devops"]`
- Category: `Meeting`
- Status: `Processed`
If the update node returns 404:
- Confirm the integration is shared with the database (Step 2)
- Verify the page ID is being passed correctly through the Code node
If tags aren't saving:
- Notion's Multi-select property expects an array of `{"name": "tag"}` objects, not a plain string array — update your Notion node mapping to `{{ $json.tags.map(t => ({ name: t })) }}`
Scaling This Workflow
Once the base workflow runs cleanly, two common extensions:
Route to different databases by category:
Add an IF or Switch node after the parse step. If category === "Task", update a Tasks database instead of Inbox. If category === "Meeting", append to a Meetings database.
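A minimal sketch of that routing logic as a Code node; the database IDs are placeholders you would replace with your own:

```javascript
// Sketch of a Code node mapping category to a destination database.
// The IDs below are placeholders, not real database IDs.
const ROUTES = {
  Task: 'TASKS_DATABASE_ID',
  Meeting: 'MEETINGS_DATABASE_ID',
};

function routeByCategory(category) {
  // Anything without a dedicated home stays in the Inbox database
  return ROUTES[category] || 'INBOX_DATABASE_ID';
}
```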
Handle large pages:
Notion pages with 2,000+ words will exceed GPT-4o's useful summary range. Add a Code node before the LLM call to truncate pageText to 3,000 characters:
```javascript
// Truncate long pages before the LLM call to keep the prompt bounded
const truncated = $json.pageText.slice(0, 3000);
return [{ json: { ...$json, pageText: truncated } }];
```
What You Learned
- Notion's API separates page properties from block content — you need two API calls to get both
- Structured JSON prompts are more reliable when paired with defensive parsing in the next node
- Setting a `Status` field is the simplest way to prevent double-processing in a polling workflow
Limitation: This workflow polls every 5 minutes. For real-time processing, switch the trigger to a Notion webhook — available on Notion's Business plan and above.
Tested on n8n 1.82, Notion API 2022-06-28, GPT-4o (March 2026), Ubuntu 24.04