Problem: Your Flowise Agent Can't Access External Data or Logic
Out-of-the-box Flowise tools cover common cases — web search, calculators, file readers. But as soon as you need to hit an internal API, transform data in a specific way, or run business logic before returning a result, the built-in nodes fall short.
The Custom Tool node solves this. It lets you write plain JavaScript that the agent calls like any other tool — with full access to fetch, arguments, and return values.
You'll learn:
- How the Flowise Custom Tool node works under the hood
- How to write a JavaScript function that fetches live data
- How to wire the tool into an agent and test it end-to-end
Time: 20 min | Difficulty: Intermediate
Why Custom Tool Nodes Exist
Flowise agents use LangChain's tool-calling pattern. When an LLM decides it needs information, it emits a structured tool call — name, input arguments — and Flowise routes that call to the matching node.
A Custom Tool node is just a named JavaScript function with a schema. The LLM sees the name and description, decides when to call it, passes arguments that match your schema, and Flowise runs your code. The return value goes back to the LLM as a tool result.
This means anything you can do in a Node.js-like environment — fetch, string manipulation, JSON parsing, math — is available to your agent.
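Concretely, the structured call the LLM emits looks roughly like this (a sketch of the tool-calling pattern, not Flowise's exact internal format):

```javascript
// Illustrative shape of an LLM tool call: a name plus arguments that
// match the tool's schema. Field names here are for explanation only.
const toolCall = {
  name: "get_current_weather",
  arguments: { city: "Osaka", units: "metric" },
};

// Flowise matches toolCall.name to a registered tool, runs its function
// with the arguments, and feeds the returned string back to the LLM.
console.log(`${toolCall.name}(${JSON.stringify(toolCall.arguments)})`);
// → get_current_weather({"city":"Osaka","units":"metric"})
```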
Solution
Step 1: Open the Custom Tool Panel
In your Flowise instance, click Tools in the left sidebar, then + Add New.
You'll see three fields to fill in before writing any code:
- Name — what the LLM calls when it invokes this tool (no spaces, use underscores)
- Description — a plain-English sentence telling the LLM when to use it
- Schema — a JSON Schema object defining the input arguments
The description is the most important field. The LLM decides whether to use the tool based entirely on this text. Be specific about what the tool returns and under what conditions it should be called.
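For example, compare a vague description with one that states what comes back and when to call (wording here is illustrative):

```
Weak:   "Weather tool."
Better: "Returns the current temperature, conditions, and humidity for a
         given city. Use this whenever the user asks about present weather."
```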
Step 2: Define the Tool Schema
Click Schema and add your input parameters as a JSON Schema object. This example builds a tool that fetches current weather for a city.
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "The city name to get weather for, e.g. 'Tokyo'"
    },
    "units": {
      "type": "string",
      "enum": ["metric", "imperial"],
      "description": "Temperature unit. Use metric for Celsius, imperial for Fahrenheit."
    }
  },
  "required": ["city"]
}
Keep required fields minimal. Optional fields with defaults give the LLM flexibility without forcing it to always specify every argument.
Step 3: Write the JavaScript Function
In the JavaScript Function editor, write an async function. Flowise injects your schema arguments as named variables automatically — no need to parse them yourself.
// $city and $units are injected by Flowise from the schema
const city = $city;
const units = $units || "metric";
const API_KEY = $vars.OPENWEATHER_API_KEY; // pulled from Flowise env vars
const url = `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&units=${units}&appid=${API_KEY}`;
const response = await fetch(url);
// Always check for non-2xx before parsing — fetch doesn't throw on 4xx
if (!response.ok) {
  return `Error: Could not fetch weather for "${city}". Status ${response.status}.`;
}
const data = await response.json();
const temp = data.main.temp;
const description = data.weather[0].description;
const humidity = data.main.humidity;
const unitLabel = units === "metric" ? "°C" : "°F";
// Return plain text — LLMs handle prose better than raw JSON
return `${city}: ${temp}${unitLabel}, ${description}, humidity ${humidity}%.`;
Three patterns to always follow:
- Use `$vars.YOUR_VAR_NAME` for secrets; never hardcode API keys in the function body
- Return a plain string, not a JSON object; the LLM parses prose more reliably
- Return an error string on failure instead of throwing; an uncaught throw breaks the agent chain without a useful message
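The last two patterns combine into a small wrapper you can reuse across tools. A minimal sketch (the helper name is mine, not a Flowise or LangChain API):

```javascript
// Hypothetical helper: always resolves to a value the tool can turn into
// a readable string, and never throws, whether the failure is an HTTP
// error status or a network-level error.
async function safeFetchJson(url, what) {
  try {
    const res = await fetch(url);
    if (!res.ok) return { error: `Error: ${what} failed with status ${res.status}.` };
    return { data: await res.json() };
  } catch (err) {
    return { error: `Error: ${what} failed (${err.message}).` };
  }
}

// Usage inside a tool body:
//   const { data, error } = await safeFetchJson(url, "weather lookup");
//   if (error) return error;
```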
Step 4: Add Environment Variables
In Flowise settings → Variables, add your secrets as environment variables.
OPENWEATHER_API_KEY = your_key_here
These become available inside any Custom Tool function via $vars.VARIABLE_NAME. They're stored encrypted and never exposed to the LLM's context window.
Step 5: Wire the Tool Into an Agent
In your agent canvas:
- Add a Tool Agent node (or an existing agent you've built)
- Add your new Custom Tool from the Tools dropdown
- Connect it to your LLM and memory nodes as usual
[Chat Input]
│
▼
[Tool Agent] ──uses──▶ [Custom Weather Tool]
│ │
│ fetch() → OpenWeather API
│ │
◀──── tool result ────────┘
│
▼
[Chat Output]
The agent now decides autonomously when to call the weather tool based on the user's message and the tool's description.
Step 6: Add Multi-Step Logic (Optional)
Custom Tool functions aren't limited to a single API call. You can chain requests, transform data, or add conditional logic.
// Example: enrich weather with a UV index lookup
const city = $city;
const units = $units || "metric";
const API_KEY = $vars.OPENWEATHER_API_KEY;
// Step 1: Get coordinates
const geoUrl = `https://api.openweathermap.org/geo/1.0/direct?q=${encodeURIComponent(city)}&limit=1&appid=${API_KEY}`;
const geoRes = await fetch(geoUrl);
if (!geoRes.ok) return `Error: City "${city}" not found.`;
const [geo] = await geoRes.json();
if (!geo) return `Error: No location data for "${city}".`;
// Step 2: Get weather + UV using coordinates
const weatherUrl = `https://api.openweathermap.org/data/2.5/weather?lat=${geo.lat}&lon=${geo.lon}&units=${units}&appid=${API_KEY}`;
const weatherRes = await fetch(weatherUrl);
if (!weatherRes.ok) return `Error: Weather fetch failed. Status ${weatherRes.status}.`;
const weather = await weatherRes.json();
const temp = weather.main.temp;
const description = weather.weather[0].description;
const unitLabel = units === "metric" ? "°C" : "°F";
return `${city} (${geo.country}): ${temp}${unitLabel}, ${description}. Coordinates: ${geo.lat.toFixed(2)}, ${geo.lon.toFixed(2)}.`;
Each await suspends the function — Flowise handles the async lifecycle for you.
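The geo → weather chain above must stay sequential, since the second request needs coordinates from the first. But when two lookups are independent of each other, you can run them concurrently. A sketch with stand-in functions:

```javascript
// Stand-ins for two independent API lookups (illustrative, not real APIs).
async function lookupTemp(city) { return `${city}: 21°C`; }
async function lookupAqi(city) { return `${city}: AQI 40`; }

// Promise.all starts both lookups at once and waits for both, instead of
// paying for two round-trips back to back with sequential awaits.
async function report(city) {
  const [temp, aqi] = await Promise.all([lookupTemp(city), lookupAqi(city)]);
  return `${temp}, ${aqi}`;
}
```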
Verification
Save the tool and open the agent's built-in chat panel. Send a message that should trigger the tool:
What's the weather like in Osaka right now?
You should see:
- The agent's reasoning trace shows `tool_call: get_current_weather` with `{"city": "Osaka"}`
- The tool result appears as a system message in the trace
- The final response incorporates the live data naturally
To inspect the raw tool call and return value, enable Verbose mode in the agent node settings. This logs every tool input and output to the Flowise console — essential for debugging schema mismatches.
If the tool never fires:
- LLM ignores the tool → Rewrite the description to be more specific about trigger conditions
- `$vars.X` returns undefined → Double-check the variable name matches exactly (case-sensitive)
- fetch fails silently → Add `console.log(response.status)` inside the function; check Flowise server logs
What You Learned
- The Custom Tool node is a named async JavaScript function; anything `fetch`-able is now available to your agent
- Tool descriptions drive LLM behavior more than schemas do; write them for the model, not for developers
- `$vars` keeps secrets out of the code and out of the context window
- Returning plain strings over JSON makes tool results more reliable across different LLMs
Limitation: Custom Tool functions run in Flowise's server process. They're not sandboxed — avoid running untrusted user input as code, and set timeouts on long-running fetches explicitly using AbortController.
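A minimal timeout wrapper using AbortController might look like this (the 5-second cap is an example value, not a Flowise default):

```javascript
// Sketch: abort a fetch that takes longer than `ms` milliseconds so a
// slow upstream API can't hang the tool indefinitely.
async function fetchWithTimeout(url, ms = 5000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    // fetch rejects with an AbortError if the signal fires first
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```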
Tested on Flowise 2.2.x, Node.js 20, self-hosted via Docker