Flowise Monitoring: Logs and Analytics Dashboard Setup

Set up Flowise logging and analytics to track chatflow usage, debug LLM errors, and visualize token costs in a self-hosted dashboard.

Problem: Flowise Runs Blind in Production

You've deployed a Flowise chatbot and users are hitting it — but when something breaks you have no idea why. Token costs are climbing with no attribution. Slow responses are hard to reproduce. The default Flowise UI shows conversations, but not latency, error rates, or per-chatflow cost breakdowns.

You'll learn:

  • How to enable and read Flowise's built-in execution logs
  • How to connect Flowise to LangSmith for deep LLM tracing
  • How to expose a Grafana dashboard with token usage and error rates from Flowise's SQLite/Postgres data

Time: 25 min | Difficulty: Intermediate


Why Flowise Doesn't Log by Default

Flowise stores chat messages in its database, but LLM chain traces — the per-step breakdown of prompts, tool calls, and token counts — are not persisted unless you configure a callback handler. Without it, you see the final answer but not what happened inside the chain.

Symptoms of missing observability:

  • Users report wrong answers; you can't reproduce the exact prompt that caused it
  • OpenAI bill spikes but you can't identify which chatflow is responsible
  • Response times are inconsistent; no baseline to compare against

Solution

Step 1: Enable Flowise Debug Logging

Flowise reads the DEBUG and LOG_LEVEL environment variables to control log output and verbosity. Set them before starting the server.

# Docker Compose — add to your flowise service environment block
environment:
  - DEBUG=true
  - LOG_LEVEL=debug        # options: error | warn | info | debug
  - LOG_PATH=/root/.flowise/logs   # writes logs to a mounted volume

If you're running Flowise with npm:

# Set env vars before starting
DEBUG=true LOG_LEVEL=debug npx flowise start

Restart the service and tail the log file:

tail -f ~/.flowise/logs/flowise.log

Expected output: Each chatflow execution now logs chain steps, tool invocations, and raw LLM prompts/responses at the debug level.

If it fails:

  • No log file created → Confirm LOG_PATH directory exists and the Flowise process has write permission
  • Only error-level logs appear → Verify LOG_LEVEL=debug is actually passed into the container (docker inspect <container> | grep LOG_LEVEL)
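Once debug logs are flowing, a quick first triage is to count entries per level before digging into individual lines. A minimal Python sketch, assuming a `timestamp [LEVEL]: message` line format — the sample lines below are illustrative, so check a few lines of your actual flowise.log first, since the exact format can vary by Flowise version:

```python
import re
from collections import Counter

# Illustrative sample lines -- in practice, read from open("flowise.log").
sample = """\
2025-01-01 10:00:01 [INFO]: Server listening on port 3000
2025-01-01 10:00:05 [DEBUG]: Running chatflow 1234
2025-01-01 10:00:06 [ERROR]: RateLimitError from OpenAI
2025-01-01 10:00:07 [DEBUG]: Chain step completed
"""

LEVEL_RE = re.compile(r"\[(\w+)\]")  # grabs INFO / DEBUG / ERROR / WARN

def count_levels(lines):
    """Count log lines per level; lines without a [LEVEL] tag are skipped."""
    counts = Counter()
    for line in lines:
        m = LEVEL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(count_levels(sample.splitlines()))
# Counter({'DEBUG': 2, 'INFO': 1, 'ERROR': 1})
```

A sudden jump in ERROR counts between two time windows is usually the fastest signal of where to start reading.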

Step 2: Persist Logs to a Volume (Docker)

Debug logs are useless if they vanish when the container restarts. Mount /root/.flowise to a named volume so the database and logs survive together:

# docker-compose.yml
version: "3.8"
services:
  flowise:
    image: flowiseai/flowise:latest
    ports:
      - "3000:3000"
    environment:
      - DEBUG=true
      - LOG_LEVEL=debug
      - LOG_PATH=/root/.flowise/logs
      - DATABASE_PATH=/root/.flowise
    volumes:
      - flowise_data:/root/.flowise       # persists DB and logs together
    restart: unless-stopped

volumes:
  flowise_data:

Bring the stack up and confirm the first log lines:

docker compose up -d
docker compose logs -f flowise

Step 3: Connect Flowise to LangSmith for Chain Tracing

For production observability — per-step latency, token counts, input/output diffs — connect Flowise to LangSmith. Flowise has native support via environment variables.

# Add these to your Flowise environment
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
LANGCHAIN_API_KEY=ls__your_api_key_here
LANGCHAIN_PROJECT=flowise-production      # logical grouping in LangSmith UI

Get your API key at smith.langchain.com → Settings → API Keys.

Restart Flowise. Now every chatflow execution creates a trace in LangSmith automatically — no code changes required inside your chatflows.
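To generate a first trace without opening the chat UI, you can call Flowise's Prediction API directly. A standard-library sketch — the chatflow ID is a placeholder (copy yours from the chatflow's API endpoint dialog), and the URL assumes the default port from the compose file above:

```python
import json
import urllib.request

FLOWISE_URL = "http://localhost:3000"   # your Flowise host
CHATFLOW_ID = "your-chatflow-id"        # placeholder -- use your real chatflow ID

def build_request(question: str) -> urllib.request.Request:
    """Build a POST to Flowise's prediction endpoint for one chatflow."""
    url = f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}"
    payload = json.dumps({"question": question}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

# To actually send it (requires Flowise running):
#   with urllib.request.urlopen(build_request("ping for tracing test")) as resp:
#       print(json.loads(resp.read())["text"])
```

Each call like this should produce one new trace under your LANGCHAIN_PROJECT in the LangSmith UI.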

What you get in LangSmith:

  • Full chain tree: retriever → prompt → LLM → output parser
  • Per-step latency in milliseconds
  • Token counts per LLM call (prompt + completion + total)
  • Input and output for every node

If traces aren't appearing:

  • AuthenticationError → API key is wrong or expired; regenerate in LangSmith settings
  • Traces appear but missing nodes → Some custom Flowise nodes don't emit LangChain callbacks; this is a known limitation for nodes built without the BaseCallbackHandler interface

Step 4: Query Flowise's Database for Usage Metrics

Flowise stores every chat message in its SQLite database (or Postgres if you configured it). You can query it directly to build usage reports.

SQLite — quick local query:

# Find the DB file
ls ~/.flowise/*.sqlite

# Open with sqlite3
sqlite3 ~/.flowise/database.sqlite

-- Token usage isn't stored per-message in Flowise's default schema,
-- but conversation volume by chatflow is:
SELECT
  chatflowid,
  COUNT(*) as total_messages,
  MIN(createdDate) as first_message,
  MAX(createdDate) as last_message
FROM chat_message
GROUP BY chatflowid
ORDER BY total_messages DESC;

Postgres setup (recommended for production):

# Switch Flowise to Postgres by adding these env vars
DATABASE_TYPE=postgres
DATABASE_HOST=your-postgres-host
DATABASE_PORT=5432
DATABASE_NAME=flowise
DATABASE_USER=flowise_user
DATABASE_PASSWORD=your_password

Postgres unlocks concurrent writes, proper indexing, and direct connection from Grafana.
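If you don't already run Postgres, a minimal compose service alongside Flowise might look like the fragment below — the service name, credentials, and volume name are illustrative, and DATABASE_HOST in the Flowise env vars above would then be `postgres`:

```yaml
# docker-compose.yml fragment -- illustrative names and credentials
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_DB=flowise
      - POSTGRES_USER=flowise_user
      - POSTGRES_PASSWORD=your_password
    volumes:
      - pg_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pg_data:
```

Flowise creates its tables on first startup, so no manual schema migration is needed when switching.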


Step 5: Build a Grafana Dashboard

Connect Grafana to your Flowise Postgres database to visualize conversation volume, error rates from logs, and latency.

Install Grafana (Docker):

docker run -d \
  --name grafana \
  -p 3001:3000 \
  -e GF_SECURITY_ADMIN_PASSWORD=admin \
  grafana/grafana:latest

Add the Postgres data source:

  1. Open Grafana at http://localhost:3001
  2. Go to Connections → Data Sources → Add → PostgreSQL
  3. Enter your Flowise Postgres credentials
  4. Set SSL Mode to disable for local dev, require for production
  5. Click Save & Test — you should see "Database Connection OK"

Create a messages-per-chatflow panel:

-- Panel query: messages per chatflow per day
SELECT
  DATE_TRUNC('day', "createdDate") AS time,
  "chatflowid",
  COUNT(*) AS messages
FROM chat_message
WHERE "createdDate" >= NOW() - INTERVAL '30 days'
GROUP BY 1, 2
ORDER BY 1;

Set visualization to Time series, group by chatflowid. This gives you a per-chatflow volume chart over the last 30 days.

Error rate panel (from log file via Loki — optional):

If you want to chart LLM errors over time, ship your Flowise log file to Loki and query it in Grafana. This is optional but powerful for catching RateLimitError spikes.

# promtail scrape config (add to a promtail config that already points its client at Loki)
scrape_configs:
  - job_name: flowise
    static_configs:
      - targets:
          - localhost
        labels:
          job: flowise
          __path__: /root/.flowise/logs/flowise.log
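With logs in Loki, an error-rate panel is a single LogQL query. A sketch counting matching lines in 5-minute windows — the `RateLimitError` match string is an example; substitute whatever your error lines actually contain:

```logql
sum(count_over_time({job="flowise"} |= "RateLimitError" [5m]))
```

Add this as a Loki-backed panel next to the Postgres panels, and rate-limit spikes line up visually with conversation volume.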

Verification

Run a test message through any active chatflow, then check each layer:

# 1. Confirm the log entry appeared
grep "LLMResult" ~/.flowise/logs/flowise.log | tail -5

# 2. Confirm the DB row was written
sqlite3 ~/.flowise/database.sqlite \
  "SELECT id, chatflowid, createdDate FROM chat_message ORDER BY createdDate DESC LIMIT 3;"

In LangSmith, navigate to your project (flowise-production) — the trace for the test message should appear within 5 seconds.

In Grafana, your messages panel should increment by 1 for the chatflow you tested.

You should see: All three layers (file log, DB row, LangSmith trace) reflect the same execution within 10 seconds of sending the message.


What You Learned

  • LOG_LEVEL=debug + LOG_PATH gives you raw chain logs without any code changes
  • LangSmith is the fastest path to per-step token counts and latency — zero chatflow modifications needed
  • Flowise's Postgres schema is queryable directly; Grafana on top gives you dashboards in minutes
  • SQLite is fine for dev, but switch to Postgres before production: Grafana has no built-in SQLite data source (only a community plugin), and concurrent writes degrade SQLite performance under load

Limitation: Flowise doesn't natively store per-call token counts in its own database. For cost attribution per chatflow, LangSmith is currently the only zero-config option. Alternatively, wrap your OpenAI node with a custom tool node that logs to your own table.
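If you go the custom-logging route, the table itself is small. A sketch of the schema and insert path, using an in-memory SQLite DB for illustration — the table and column names here are my own, not part of Flowise's schema:

```python
import sqlite3

# In-memory DB for illustration; point this at your own reporting DB in practice.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE token_usage (
        id INTEGER PRIMARY KEY,
        chatflowid TEXT NOT NULL,
        prompt_tokens INTEGER NOT NULL,
        completion_tokens INTEGER NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def log_usage(chatflowid: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Record one LLM call's token counts; call this from your custom node."""
    conn.execute(
        "INSERT INTO token_usage (chatflowid, prompt_tokens, completion_tokens)"
        " VALUES (?, ?, ?)",
        (chatflowid, prompt_tokens, completion_tokens),
    )
    conn.commit()

# Simulated calls from two chatflows
log_usage("support-bot", 512, 128)
log_usage("support-bot", 640, 96)
log_usage("docs-qa", 300, 80)

# Per-chatflow cost attribution query
rows = conn.execute("""
    SELECT chatflowid, SUM(prompt_tokens + completion_tokens) AS total_tokens
    FROM token_usage
    GROUP BY chatflowid
    ORDER BY total_tokens DESC
""").fetchall()
print(rows)  # [('support-bot', 1376), ('docs-qa', 380)]
```

Multiply `total_tokens` by your model's per-token price in the query (or in Grafana) to turn this into a dollar-cost panel.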

Tested on Flowise 2.2.x, Docker 27, Grafana 11, LangSmith SDK v0.1.x, Ubuntu 24.04