LangGraph Studio: Visual Debugger for Agent Graphs

Use LangGraph Studio to visualize, step-debug, and replay AI agent graphs locally. Stop guessing what your agent is doing between nodes.

Problem: You Can't See What Your Agent Is Doing

Your LangGraph agent runs, produces a wrong answer, and you have no idea which node failed, which edge was taken, or what the state looked like mid-run. Adding print() everywhere is painful and you still can't replay specific steps.

LangGraph Studio solves this. It's a local desktop IDE that renders your graph as a live diagram, lets you step through node execution, inspect state at every checkpoint, and replay any run from any point.

You'll learn:

  • How to connect LangGraph Studio to an existing agent project
  • How to step through a run node by node and inspect state diffs
  • How to use time-travel replay to rerun from a specific checkpoint

Time: 20 min | Difficulty: Intermediate


Why Standard Logging Isn't Enough

LangGraph agents are stateful graphs. A single user turn can hit 10+ nodes, fork into parallel branches, and loop back through a router. When something goes wrong, you need to know:

  • Which node produced the bad state key
  • Which conditional edge fired (and why)
  • What the full state snapshot looked like at each transition

Print statements give you fragments. LangGraph Studio gives you the whole graph, animated in real time, with state diffs at every node boundary.

Symptoms that send you here:

  • Agent ends in the wrong node or hits END too early
  • A state key is None when it shouldn't be — and you don't know which node dropped it
  • Agent loops infinitely and you can't tell which edge is misfiring

How LangGraph Studio Works

Studio connects to a running LangGraph API server (backed by langgraph-cli) via a local HTTP endpoint. Your graph definition stays in your project — Studio just reads it and visualises the execution trace stored in the checkpointer.

Your Code  ──▶  langgraph-cli dev server  ──▶  LangGraph Studio (desktop app)
                  (port 2024, local)              reads graph + checkpoints

Every node execution writes a checkpoint. Studio reads those checkpoints to render the animated graph and lets you fork a new run from any checkpoint — that's the time-travel feature.


Solution

Step 1: Install Prerequisites

You need Python 3.11+ and uv (recommended). The langgraph dev server used in this guide runs in-process and does not require Docker; Docker only comes into play if you later switch to langgraph up, which runs the production-style server in a container.

# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify the install
uv --version


Step 2: Add langgraph-cli to Your Project

# In your agent project directory
uv add "langgraph-cli[inmem]" --dev

# Verify
uv run langgraph --version
# langgraph-cli 0.1.x

If you're using pip instead, install inside a virtual environment rather than forcing a system-wide install:

pip install -U "langgraph-cli[inmem]"

The [inmem] extra pulls in the in-process server runtime and in-memory checkpointer that langgraph dev uses; without it, the dev server (and therefore time-travel) won't work locally.


Step 3: Create langgraph.json

Studio needs a config file at your project root to find your graph entry point.

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./my_agent/graph.py:graph"
  },
  "env": ".env"
}

The graphs key maps a name (shown in Studio's sidebar) to a Python module path and the variable name of your compiled graph. If your file is src/agent.py and your graph is compiled_graph, use "./src/agent.py:compiled_graph".

Common mistake: pointing to the StateGraph builder instead of the compiled graph. You need the result of .compile(), not the builder object itself.

# graph.py

from langgraph.graph import StateGraph, END
from .nodes import call_model, run_tool, should_continue
from .state import AgentState

builder = StateGraph(AgentState)
builder.add_node("model", call_model)
builder.add_node("tools", run_tool)
builder.set_entry_point("model")
builder.add_conditional_edges("model", should_continue, {"tools": "tools", "end": END})
builder.add_edge("tools", "model")

# ✅ Point langgraph.json at this variable, not at `builder`
graph = builder.compile()

Step 4: Start the Dev Server

uv run langgraph dev

Expected output:

Ready!
- API: http://localhost:2024
- Docs: http://localhost:2024/docs
- LangGraph Studio Web UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024

The server hot-reloads when you edit your Python files — no restart needed during development.

If it fails:

  • Port 2024 already in use → lsof -i :2024 to find and kill the process, then retry
  • Error: graph not found → Check the module path in langgraph.json matches your file structure exactly
  • Import errors on startup → Make sure the [inmem] extra is installed; langgraph dev needs it for the in-process runtime

Step 5: Open LangGraph Studio

Click the Studio URL printed by langgraph dev, or navigate to:

https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024

Studio opens in your browser and connects to your local server. You'll see your graph rendered as a node-edge diagram immediately.

For the standalone desktop app (macOS only):

# Download from the LangChain releases page
# Requires macOS 13+ (Ventura or later)
open /Applications/LangGraph\ Studio.app
# Enter: http://localhost:2024 in the server URL field

Step 6: Run and Inspect Your Agent

In Studio's left panel, find the Input section. Paste a test input matching your graph's input schema and click Run.
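What that input looks like depends entirely on your state schema. For a messages-based state like the AgentState used earlier, a minimal test input might be:

```json
{
  "messages": [
    {"role": "user", "content": "hello"}
  ]
}
```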

You'll see nodes light up as they execute. Click any node after it finishes to open the State Inspector on the right — it shows the full state snapshot at that point, with a diff view highlighting what changed.

Before node "model":               After node "model":
{                                  {
  messages: [HumanMessage(...)],     messages: [HumanMessage(...),
  tool_calls: null                              AIMessage(tool_calls=[...])],
}                                    tool_calls: [{name: "search", ...}]
                                   }

The diff is colour-coded: green for added keys, red for removed, yellow for modified. This is where you'll spot exactly which node corrupted a state key.


Step 7: Use Time-Travel to Replay From a Checkpoint

If a run went wrong, you don't need to rerun from the start. In the Thread panel, find the checkpoint at the node just before the failure. Click Fork from here.

Studio creates a new thread forked at that checkpoint. You can edit the state directly in the inspector before resuming — useful for testing "what if this key had a different value."

# You can also do this via the API if you prefer CLI
curl -X POST http://localhost:2024/threads/{thread_id}/runs \
  -H "Content-Type: application/json" \
  -d '{
    "assistant_id": "agent",
    "checkpoint_id": "checkpoint-abc123",
    "input": null
  }'

This replays execution from that exact checkpoint forward, using the (optionally modified) state.


Verification

With the dev server running, confirm Studio can read your graph via the API:

curl http://localhost:2024/assistants/search \
  -H "Content-Type: application/json" \
  -d '{"limit": 10}' | python3 -m json.tool

You should see: a JSON array containing your graph name ("agent" or whatever you set in langgraph.json) with a graph_id field.

Run a test invocation directly via the API to confirm the checkpoint system works:

# Create a thread
THREAD=$(curl -s -X POST http://localhost:2024/threads \
  -H "Content-Type: application/json" -d '{}' | python3 -c "import sys,json; print(json.load(sys.stdin)['thread_id'])")

# Run the graph
curl -X POST http://localhost:2024/threads/$THREAD/runs \
  -H "Content-Type: application/json" \
  -d '{"assistant_id": "agent", "input": {"messages": [{"role": "user", "content": "hello"}]}}'

# List checkpoints for that thread
curl http://localhost:2024/threads/$THREAD/history | python3 -m json.tool

You should see: a list of checkpoint objects, one per node execution, each with a values field containing the full state snapshot.


Production Considerations

Studio is a local development tool only — it connects to localhost and is not designed for remote or production use. For production observability, pair LangGraph with LangSmith tracing, which captures the same node-level execution data but persists it in a hosted dashboard and supports filtering across thousands of runs.

A note on performance: the in-memory checkpointer used in langgraph dev stores all state in RAM. For graphs with large state objects (e.g., base64 images, long document chunks), Studio can become slow to render diffs. If this happens, trim state keys that don't need to persist — store large blobs externally and keep only references in graph state.
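One way to implement that trimming, sketched with an in-memory dict standing in for real external storage; the helper names here are invented for this example:

```python
import hashlib

# Stand-in for external storage (object store, disk, database)
BLOB_STORE: dict[str, bytes] = {}

def put_blob(data: bytes) -> str:
    """Store a large blob externally and return a short reference key."""
    key = hashlib.sha256(data).hexdigest()[:16]
    BLOB_STORE[key] = data
    return key

def ingest_image(state: dict) -> dict:
    """Node-style function: swap raw bytes for a reference before the
    value ever reaches a checkpoint."""
    ref = put_blob(state.pop("raw_image"))
    return {"image_ref": ref}

update = ingest_image({"raw_image": b"\x89PNG...pretend image bytes..."})
print(update)
```

Downstream nodes resolve image_ref back through the store only when they actually need the bytes, so checkpoints stay small and Studio's diff view stays fast.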

Interrupt points are another powerful Studio feature worth knowing. Add interrupt_before=["tools"] to your .compile() call and Studio will pause execution before that node fires, letting you inspect and edit state before approving the next step — effectively a human-in-the-loop breakpoint.

graph = builder.compile(
    # Pause before "tools" fires — Studio will show an "Approve" button
    interrupt_before=["tools"]
)

What You Learned

  • langgraph dev runs a local API server that Studio connects to over HTTP — your graph code stays in your project
  • The langgraph.json config maps graph names to compiled graph variables — point it at .compile() output, not the builder
  • Time-travel replay forks a new thread from any checkpoint, with optional state edits before resuming
  • interrupt_before turns Studio into a human-in-the-loop debugger, pausing before any node you specify

Limitation: Studio's web UI requires connecting to smith.langchain.com to load the frontend, even though your graph data stays local. If you're working in an air-gapped environment, use the macOS desktop app instead.

Tested on langgraph-cli 0.1.55, LangGraph 0.2.x, Python 3.12, macOS 15 and Ubuntu 24.04