Problem: Your Agent Acts Without Asking
LangGraph agents run autonomously — which is great until they delete a database row, send an email, or execute a trade without checking with you first. You need a way to pause execution mid-graph, surface a question to the user, and resume from exactly where you stopped.
That's what interrupt() solves.
You'll learn:
- How interrupt() pauses a graph node and surfaces a value to the caller
- How to resume execution with Command(resume=...) after human input
- How to wire this into a real agent with tool approval
Time: 20 min | Difficulty: Intermediate
Why interrupt() Exists
Before LangGraph 0.2, human-in-the-loop required manually splitting graphs into pre- and post-approval subgraphs. It worked but was brittle — state had to be serialized across two separate invocations, and resuming from the right node required careful bookkeeping.
interrupt() replaces that pattern. It's a Python function you call inside any node. The graph suspends at that point, checkpoints its state, and returns control to the caller. When you call .invoke() again with a Command(resume=value), the graph picks up from the interrupted node with your input injected.
Key requirement: You must use a checkpointer. Without one, there's no persisted state to resume from.
How interrupt() Works
Node A ──▶ Node B (interrupt()) ──▶ Node C
              │                        ▲
        suspends here                  │
              │                        │
      caller gets value                │
              │                        │
     human provides input              │
              │                        │
              └──── Command(resume) ───┘
The graph stores its full state in the checkpointer at the moment interrupt() is called. On resume, execution re-enters the interrupted node from the top — the node function runs again from its first line, but this time interrupt() returns the resume value instead of suspending. Keep side effects (API calls, writes) after the interrupt() call so they don't run twice.
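This re-run behavior can be made concrete with a small pure-Python simulation. Note this is an illustrative sketch, not LangGraph's implementation; `FakeRuntime` and `GraphPaused` are invented names:

```python
class GraphPaused(Exception):
    """Simulates the graph suspending at interrupt()."""
    def __init__(self, value):
        self.value = value

class FakeRuntime:
    def __init__(self):
        self.resume_value = None  # populated by the simulated "Command(resume=...)"

    def interrupt(self, value):
        if self.resume_value is None:
            raise GraphPaused(value)   # first pass: suspend and surface the value
        return self.resume_value       # second pass: return the human's input

runtime = FakeRuntime()
calls = []

def approval_node():
    calls.append("entered")            # runs on BOTH passes
    answer = runtime.interrupt({"question": "Approve?"})
    return {"approved": answer == "yes"}

try:
    approval_node()                    # first invocation: suspends
except GraphPaused as paused:
    surfaced = paused.value            # caller sees the question

runtime.resume_value = "yes"           # human answers
result = approval_node()               # node re-runs from the top

assert surfaced == {"question": "Approve?"}
assert result == {"approved": True}
assert calls == ["entered", "entered"]  # the node body ran twice
```

The last assertion is the point: anything above the `interrupt()` call executes on both passes.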
Solution
Step 1: Install Dependencies
# LangGraph 0.2+ includes interrupt() — earlier versions don't have it
pip install "langgraph>=0.2.0" langchain-openai
Verify:
python -c "from langgraph.types import interrupt; print('ok')"
Expected: ok
Step 2: Build a Node That Interrupts
from langgraph.types import interrupt
from langgraph.graph import StateGraph, END
from typing import TypedDict
class AgentState(TypedDict):
    action: str
    approved: bool
    result: str

def approval_node(state: AgentState) -> AgentState:
    # interrupt() pauses the graph and returns the value to the caller
    # When resumed, it returns whatever was passed to Command(resume=...)
    human_response = interrupt({
        "question": f"Approve this action: '{state['action']}'?",
        "options": ["yes", "no"],
    })
    return {"approved": human_response == "yes"}

def execute_node(state: AgentState) -> AgentState:
    if not state["approved"]:
        return {"result": "Action cancelled by user."}
    # Safe to execute — human approved
    return {"result": f"Executed: {state['action']}"}
# Build the graph
builder = StateGraph(AgentState)
builder.add_node("approval", approval_node)
builder.add_node("execute", execute_node)
builder.set_entry_point("approval")
builder.add_edge("approval", "execute")
builder.add_edge("execute", END)
Step 3: Add a Checkpointer
from langgraph.checkpoint.memory import MemorySaver
# MemorySaver stores state in-process — good for dev and testing
# For production, use PostgresSaver or RedisSaver
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
For production persistence, swap in the Postgres checkpointer:
pip install "langgraph-checkpoint-postgres"
from langgraph.checkpoint.postgres import PostgresSaver

# from_conn_string is a context manager — use `with`, and call setup()
# once to create the checkpoint tables
with PostgresSaver.from_conn_string("postgresql://user:pass@localhost/db") as checkpointer:
    checkpointer.setup()
    graph = builder.compile(checkpointer=checkpointer)
Step 4: Run the Graph and Handle the Interrupt
# thread_id ties all invocations together — use one ID per conversation
config = {"configurable": {"thread_id": "thread-001"}}
initial_state = {"action": "DELETE all records from orders table", "approved": False, "result": ""}
# First invocation — will hit interrupt() and suspend
result = graph.invoke(initial_state, config=config)
# result contains an '__interrupt__' key (a tuple of Interrupt objects)
# rather than the final state
print(result)
# {'__interrupt__': (Interrupt(value={'question': "Approve this action: 'DELETE all records...'?", 'options': ['yes', 'no']}, ...),)}
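In application code you usually want to pull the payload out of the invoke result before prompting anyone. A small helper for that (a hypothetical convenience, not a LangGraph API; the `.value` fallback covers versions that wrap payloads in Interrupt objects):

```python
def pending_interrupt(result: dict):
    """Return the first interrupt payload if the graph paused, else None."""
    interrupts = result.get("__interrupt__")
    if not interrupts:
        return None
    first = interrupts[0]
    # Interrupt objects expose .value; raw payload dicts pass through as-is
    return getattr(first, "value", first)

payload = pending_interrupt({"__interrupt__": ({"question": "Approve?"},)})
assert payload == {"question": "Approve?"}
assert pending_interrupt({"result": "done"}) is None  # finished run: nothing pending
```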
Step 5: Resume With Human Input
from langgraph.types import Command
# Simulate user saying "no"
# Command(resume=...) injects the value into the waiting interrupt() call
final_result = graph.invoke(Command(resume="no"), config=config)
print(final_result["result"])
# Action cancelled by user.
If the user approves:
final_result = graph.invoke(Command(resume="yes"), config=config)
print(final_result["result"])
# Executed: DELETE all records from orders table
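The invoke/check/resume cycle generalizes into a driver loop. Here is a sketch of one (an assumed helper, not a LangGraph API): `make_resume` wraps the human's answer for re-invocation, so with the real library you would pass `lambda v: Command(resume=v)`; the stub graph exists only to make the example self-contained.

```python
def run_with_approvals(graph, state, config, ask, make_resume):
    """Drive a graph to completion, resuming through any interrupts.

    ask:         maps an interrupt payload to the human's answer
    make_resume: wraps the answer for re-invocation; with the real
                 library, pass lambda v: Command(resume=v)
    """
    result = graph.invoke(state, config=config)
    while "__interrupt__" in result:
        payload = result["__interrupt__"][0]
        # Interrupt objects carry .value; plain payloads pass through
        answer = ask(getattr(payload, "value", payload))
        result = graph.invoke(make_resume(answer), config=config)
    return result

# Demo with a stub graph that pauses once, then finishes
class StubGraph:
    def __init__(self):
        self.paused = True
    def invoke(self, value, config=None):
        if self.paused:
            self.paused = False
            return {"__interrupt__": ({"question": "Approve?"},)}
        return {"result": f"resumed with {value}"}

out = run_with_approvals(StubGraph(), {}, {}, lambda p: "yes", lambda v: v)
assert out == {"result": "resumed with yes"}
```

The loop shape also handles graphs with multiple sequential interrupts: each resume either finishes the run or surfaces the next question.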
Step 6: Full Agent with Tool Approval
Here's a realistic pattern — an agent that pauses before calling any destructive tool:
from langgraph.types import interrupt, Command
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict, Literal
class ToolState(TypedDict):
    tool_name: str
    tool_args: dict
    approved: bool
    tool_result: str | None

DESTRUCTIVE_TOOLS = {"delete_records", "send_email", "execute_trade"}

def should_interrupt(state: ToolState) -> Literal["interrupt_node", "execute_node"]:
    # Route to approval only for destructive tools
    if state["tool_name"] in DESTRUCTIVE_TOOLS:
        return "interrupt_node"
    return "execute_node"

def interrupt_node(state: ToolState) -> ToolState:
    response = interrupt({
        "message": f"Tool '{state['tool_name']}' requires approval.",
        "args": state["tool_args"],
    })
    return {"approved": response == "approve"}

def execute_node(state: ToolState) -> ToolState:
    # For non-destructive tools, approved defaults to True
    if not state.get("approved", True):
        return {"tool_result": "Blocked by user."}
    # Call your actual tool here
    return {"tool_result": f"Tool {state['tool_name']} ran with args {state['tool_args']}"}
builder = StateGraph(ToolState)
builder.add_node("interrupt_node", interrupt_node)
builder.add_node("execute_node", execute_node)
# Route destructive tools through approval; everything else executes directly
builder.set_conditional_entry_point(should_interrupt)
builder.add_edge("interrupt_node", "execute_node")
builder.add_edge("execute_node", END)
graph = builder.compile(checkpointer=MemorySaver())
Verification
config = {"configurable": {"thread_id": "test-001"}}
# Kick off a destructive tool call
state = {
    "tool_name": "send_email",
    "tool_args": {"to": "ceo@company.com", "subject": "Q4 financials"},
    "approved": False,
    "tool_result": None,
}
result = graph.invoke(state, config=config)
# Should be an interrupt, not a final result
assert "__interrupt__" in result, "Expected interrupt — not received"
print("✅ Graph paused at interrupt")
# Resume with approval
final = graph.invoke(Command(resume="approve"), config=config)
assert "send_email" in final["tool_result"]
print("✅ Graph resumed and tool executed:", final["tool_result"])
Expected output:
✅ Graph paused at interrupt
✅ Graph resumed and tool executed: Tool send_email ran with args {'to': 'ceo@company.com', 'subject': 'Q4 financials'}
What You Learned
- interrupt(value) suspends a node and returns value to the caller — the graph's state is checkpointed at that moment
- Command(resume=value) re-enters the graph at the interrupted node, with interrupt() now returning value
- A checkpointer is required — without it, there is no state to resume from
- thread_id in the config is what links the first call to the resume call — always use a stable, unique ID per session
When NOT to use interrupt(): For high-frequency agents (hundreds of calls/min), every interrupt adds latency and checkpointer I/O. Batch approvals or use async patterns instead. Also avoid interrupt() inside parallel branches (Send API) — behavior is undefined when multiple branches interrupt simultaneously.
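One way to batch: gather the pending payloads from many paused threads, put them in front of a reviewer once, and map the decisions back to thread IDs for resumption. A minimal sketch of the bookkeeping (hypothetical helper names; the actual resume step would still be one `graph.invoke(Command(resume=...), config)` per thread):

```python
def collect_decisions(pending, decide_batch):
    """Map one batch review onto many paused threads.

    pending:      list of (thread_id, payload) pairs gathered from paused graphs
    decide_batch: reviews all payloads at once, returns one answer per payload
    Returns {thread_id: answer} for the caller to feed into Command(resume=...).
    """
    answers = decide_batch([payload for _, payload in pending])
    return {tid: answer for (tid, _), answer in zip(pending, answers)}

pending = [
    ("t-1", {"tool": "send_email"}),
    ("t-2", {"tool": "execute_trade"}),
]
decisions = collect_decisions(pending, lambda payloads: ["approve"] * len(payloads))
assert decisions == {"t-1": "approve", "t-2": "approve"}
```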
Tested on LangGraph 0.2.55, Python 3.12, macOS and Ubuntu 24.04