n8n AI Agent Workflow: Connect GPT-4 to Any API

Build an n8n AI Agent that calls GPT-4 and connects to any REST API. Step-by-step setup with tool nodes, memory, and error handling.

Problem: GPT-4 Can Answer Questions, But Can't Take Action

Calling GPT-4 directly returns text. It can't check your database, submit a form, or hit a webhook on its own. To build a workflow that actually does something — create a ticket, look up a customer, send a Slack message — you need an agent that can use tools.

n8n's AI Agent node solves this. It gives GPT-4 a set of callable tools (HTTP requests, database queries, any n8n node) and lets the model decide which ones to call based on the user's prompt.

You'll learn:

  • How to wire up n8n's AI Agent node with GPT-4o
  • How to expose any REST API as a callable tool
  • How to add memory so the agent can hold a multi-turn conversation
  • How to handle tool errors without breaking the entire workflow

Time: 25 min | Difficulty: Intermediate


Why n8n's AI Agent Node Works Differently

The standard n8n Chat node sends a single message to GPT-4 and returns the response. The AI Agent node runs a loop:

User message
     │
     ▼
GPT-4 decides: "I need to call a tool"
     │
     ▼
n8n executes the tool node (HTTP, DB, etc.)
     │
     ▼
Result goes back to GPT-4
     │
     ▼
GPT-4 decides: "I have enough to respond"
     │
     ▼
Final answer returned to user

GPT-4 calls tools in sequence or in parallel until it can generate a final answer. You control which tools are available — the model picks when and how to use them.
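The loop above can be sketched in plain Python. This is a conceptual model, not n8n's internals — `decide` stands in for the GPT-4 call, and the scripted tool/answer sequence is a stub:

```python
# Conceptual sketch of the agent loop — not n8n's implementation.
# `decide` stands in for a GPT-4 call that either requests a tool
# or produces a final answer.

def run_agent(user_message, tools, decide, max_steps=10):
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        action = decide(history)  # model sees the full history each turn
        if action["type"] == "final":
            return action["content"]
        # model asked for a tool: execute it, feed the result back
        result = tools[action["tool"]](**action["args"])
        history.append({"role": "tool", "name": action["tool"],
                        "content": result})
    raise RuntimeError("agent exceeded max_steps without a final answer")

# Stubbed example: one lookup tool, plus a scripted "model" that
# calls the tool once and then answers.
tools = {"get_github_user": lambda username: f"profile of {username}"}
script = iter([
    {"type": "tool", "tool": "get_github_user",
     "args": {"username": "torvalds"}},
    {"type": "final", "content": "Summary of torvalds' profile."},
])
answer = run_agent("Who is torvalds?", tools, decide=lambda h: next(script))
```

The `max_steps` guard matters: without it, a confused model can loop on tool calls forever, burning tokens. n8n's agent node enforces a similar iteration cap.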

n8n version used: 1.78.x (self-hosted Docker) and n8n Cloud.


Solution

Step 1: Set Up Your n8n Instance

If you're already running n8n, skip to Step 2.

# Run n8n with Docker — persists data to a local volume.
# (The old N8N_BASIC_AUTH_* variables were removed in n8n 1.0;
# auth is handled by the owner account you create on first launch.)
docker run -d \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n:latest

Open http://localhost:5678 and create the owner account when prompted.

Expected output: n8n dashboard loads with an empty workflow canvas.

If it fails:

  • Port already in use → Replace -p 5678:5678 with -p 5679:5678 and open on 5679
  • Permission denied on volume → Add --user $(id -u):$(id -g) before the image name

Step 2: Create a New Workflow and Add the Trigger

  1. Click + New Workflow
  2. Click the + node button and search for Chat Trigger
  3. Add it — this creates a simple chat interface to test the agent

The Chat Trigger exposes a /webhook/chat endpoint that accepts { "chatInput": "your message" }. It also renders a test UI at the trigger's webhook URL while the workflow is open.


Step 3: Add the AI Agent Node

  1. Click + after the Chat Trigger
  2. Search for AI Agent and add it
  3. In the node settings, set:
    • Agent: OpenAI Functions Agent (uses GPT-4's native function-calling)
    • Prompt: {{ $json.chatInput }} (passes the user's message from the trigger)

Connect the Chat Trigger output to the AI Agent input.

At this point the agent has no model and no tools. The next two steps fix that.


Step 4: Attach GPT-4o as the Language Model

  1. Inside the AI Agent node, find the Chat Model sub-node slot and click + Add
  2. Select OpenAI Chat Model
  3. Click Create New Credential and paste your OpenAI API key
  4. Set Model: gpt-4o (or gpt-4o-mini to cut costs during testing)
  5. Set Temperature: 0 for deterministic tool use; raise to 0.3 for conversational tone

# Credential fields
Name: OpenAI (prod)
API Key: sk-...

Save the credential. The model slot in the AI Agent node turns green.


Step 5: Add a REST API as a Tool

This is where n8n's AI Agent shines. Any HTTP request becomes a tool GPT-4 can call.

Example: look up a GitHub user's public profile

  1. Inside the AI Agent node, click + Add Tool
  2. Select HTTP Request Tool
  3. Configure it:
    • Tool Name: get_github_user
    • Description: Fetch a GitHub user's profile by username. Use when the user asks about a GitHub account.
    • Method: GET
    • URL: https://api.github.com/users/{{ $fromAI("username", "The GitHub username to look up") }}

The $fromAI() expression is the key. It tells n8n to extract the username parameter from GPT-4's tool call. The second argument is the description GPT-4 sees — make it precise so the model fills it correctly.

Add headers:

{
  "Accept": "application/vnd.github+json",
  "User-Agent": "n8n-agent"
}

  4. Click Save

Test it now: Click Test Workflow, open the Chat Trigger's test UI, and type:

What's the GitHub profile for torvalds?

You should see GPT-4 call get_github_user with username: "torvalds", receive the JSON response, and summarize it.

If it fails:

  • $fromAI is not a function → You're on n8n < 1.68. Pull the latest image (docker pull docker.n8n.io/n8nio/n8n:latest), then remove and re-run the container
  • Tool called but wrong parameter → Rewrite the $fromAI() description to be more explicit
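Under the hood, the tool name, description, and each $fromAI() parameter become a function definition in the model's function-calling payload. Roughly — the exact schema n8n emits may differ, this is a sketch following the OpenAI function format:

```python
# Rough sketch of the function schema derived from the tool config.
# Field names follow the OpenAI function-calling format; the exact
# payload n8n sends may differ.

def tool_schema(name, description, params):
    """params: {param_name: description} pairs, one per $fromAI()."""
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {p: {"type": "string", "description": d}
                           for p, d in params.items()},
            "required": list(params),
        },
    }

schema = tool_schema(
    "get_github_user",
    "Fetch a GitHub user's profile by username. "
    "Use when the user asks about a GitHub account.",
    {"username": "The GitHub username to look up"},
)
```

This is why the $fromAI() description matters so much: it's the only documentation the model gets for that parameter.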

Step 6: Add a Second Tool — POST to Any API

Tools aren't limited to GET requests. Here's a tool that creates a Slack message via webhook:

  1. Add another HTTP Request Tool inside the AI Agent node
  2. Configure:
    • Tool Name: send_slack_message
    • Description: Send a message to the team Slack channel. Use when the user wants to notify the team or post an update.
    • Method: POST
    • URL: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
    • Body (JSON): see below
{
  "text": "{{ $fromAI('message', 'The message text to send to Slack') }}"
}

GPT-4 now has two tools: one that reads data, one that writes. It will chain them when needed — for example, "Look up torvalds on GitHub and post a summary to Slack" calls both tools in sequence automatically.


Step 7: Add Memory for Multi-Turn Conversations

Without memory, every message starts a fresh context. The agent forgets what was said two turns ago.

  1. Inside the AI Agent node, click + Add Memory
  2. Select Window Buffer Memory
  3. Set:
    • Session ID: {{ $('Chat Trigger').item.json.sessionId }} (n8n auto-generates this per browser session)
    • Context Window Length: 10 (keeps last 10 message pairs — enough for most conversations without blowing the context window)

The memory node stores conversation history in n8n's internal storage. Each new message in the same session appends to the history before the agent call.
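The windowing behavior is simple to picture: a bounded buffer of message pairs per session, where the oldest pair falls off once the window is full. A minimal sketch (n8n's actual storage layer differs):

```python
# Sketch of window-buffer memory: keep only the last N message
# pairs per session. Illustrates the windowing behavior, not
# n8n's storage implementation.
from collections import defaultdict, deque

class WindowBufferMemory:
    def __init__(self, window=10):
        # one bounded deque of (user, assistant) pairs per session
        self.sessions = defaultdict(lambda: deque(maxlen=window))

    def append(self, session_id, user_msg, assistant_msg):
        self.sessions[session_id].append((user_msg, assistant_msg))

    def context(self, session_id):
        return list(self.sessions[session_id])

mem = WindowBufferMemory(window=10)
for i in range(15):
    mem.append("session-a", f"question {i}", f"answer {i}")
# only the 10 most recent pairs survive; pairs 0-4 are gone
```

Note the per-session keying: two browser tabs get different session IDs, so their histories never mix.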


Step 8: Handle Tool Errors Gracefully

By default, if an HTTP tool returns a 4xx or 5xx, the entire workflow errors out. Fix this:

  1. Click on each HTTP Request Tool node
  2. Under Options → On Error, set to Continue (returns the error response as JSON instead of throwing)
  3. In the tool Description, add: If this tool returns an error, tell the user what went wrong and suggest an alternative.

GPT-4 will now receive the error response and handle it conversationally instead of the workflow crashing.
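In effect, "On Error: Continue" turns exceptions into data. A sketch of the pattern, with a stubbed failing API in place of a real HTTP call:

```python
# Sketch of "On Error: Continue": instead of raising, the tool
# returns the error as JSON, so it flows back into the agent
# loop as an observation the model can react to.

def call_tool_safely(tool_fn, **kwargs):
    try:
        return {"ok": True, "result": tool_fn(**kwargs)}
    except Exception as exc:  # 4xx/5xx, timeout, bad input...
        return {"ok": False, "error": str(exc)}

def flaky_api(username):
    # stub standing in for an HTTP request that fails
    raise RuntimeError("502 Bad Gateway")

response = call_tool_safely(flaky_api, username="torvalds")
# the model sees {"ok": False, "error": "502 Bad Gateway"} and can
# explain the failure or try another tool, instead of the whole
# workflow erroring out
```

The instruction you added to the tool description is the other half of this: the error payload only helps if the model has been told what to do with it.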


Verification

With the workflow active (toggle Active in the top-right), send a test message:

curl -X POST https://your-n8n-instance.com/webhook/chat \
  -H "Content-Type: application/json" \
  -d '{"chatInput": "Look up the GitHub user kelseyhightower and send a summary to Slack"}'

You should see: A JSON response with output containing GPT-4's final message, and a Slack notification appearing in your channel.

Check the execution log:

  1. Go to Executions in the left sidebar
  2. Open the latest run
  3. Click through each node — you'll see the tool calls GPT-4 made, the raw API responses, and the final answer

Expected execution path: Chat Trigger → AI Agent → get_github_user (tool call) → send_slack_message (tool call) → AI Agent (final response)


Production Considerations

Rate limits: GPT-4o has a default TPM limit. If you're running many concurrent agent workflows, add a Wait node between retries and set the HTTP tool's timeout to 30 seconds.

Cost control: Each tool call round-trip costs tokens — input + output + tool definitions. A 3-tool agent with 10-message memory uses roughly 2,000–4,000 tokens per turn at gpt-4o rates. Switch to gpt-4o-mini for high-volume non-critical workflows.
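As a back-of-envelope check on that estimate — the per-million-token prices below are assumptions for illustration only; check OpenAI's pricing page for current rates:

```python
# Back-of-envelope cost per agent turn. The prices below are
# ASSUMPTIONS for illustration — verify against OpenAI's current
# pricing before relying on these numbers.
PRICE_PER_M_INPUT = 2.50    # USD per 1M input tokens (assumed)
PRICE_PER_M_OUTPUT = 10.00  # USD per 1M output tokens (assumed)

def cost_per_turn(input_tokens, output_tokens):
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# a 3-tool agent turn near the top of the 2,000-4,000 token range:
c = cost_per_turn(input_tokens=3500, output_tokens=500)
# c is about $0.014 per turn — small per request, but it compounds
# quickly at thousands of turns per day
```

Memory multiplies this: every remembered message pair is re-sent as input tokens on each turn, so a 10-pair window roughly adds the whole recent conversation to every request.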

Tool descriptions are prompts: The model decides which tool to call based entirely on your description. Vague descriptions cause wrong tool calls or unnecessary calls. Write descriptions like you're explaining to a junior developer which function to use and when.

Memory storage: Window Buffer Memory uses n8n's SQLite by default. For production with multiple concurrent users, switch to Redis Memory or Postgres Memory to avoid write contention.


What You Learned

  • The AI Agent node runs a tool-calling loop — GPT-4 decides which tools to invoke
  • $fromAI() extracts typed parameters from GPT-4's tool call into your HTTP request
  • Window Buffer Memory scopes conversation history per session ID
  • Tool error handling belongs in the node's On Error setting and in the tool description
  • Tool descriptions are the most important thing to get right — they directly control model behavior

When NOT to use the AI Agent node: If your workflow always follows the same path (no branching based on user intent), use a regular n8n workflow with a direct OpenAI node call instead. Agents add latency and token cost for every tool-calling loop — only worth it when the model needs to make decisions about which steps to run.

Tested on n8n 1.78.2, Docker on Ubuntu 24.04 and n8n Cloud. GPT-4o API as of March 2026.