Problem: Your LangGraph Agent Can't Call Your Own Code
You've built a LangGraph agent, but it only uses built-in tools. You want it to call your own Python functions — a database query, an internal API, a custom calculation — and you're not sure how to wire that in cleanly.
The ToolNode class makes this straightforward, but the setup involves a few non-obvious steps: decorating functions correctly, binding tools to the LLM, and routing graph edges so the agent loops until it's done.
You'll learn:
- How to turn any Python function into a LangGraph-compatible tool
- How to configure `ToolNode` and connect it in your graph
- How to handle tool errors without crashing the agent loop
Time: 20 min | Difficulty: Intermediate
Why ToolNode Exists
LangGraph separates decision-making (the LLM node) from action-taking (the tool node). The LLM outputs a tool_call message; ToolNode intercepts it, runs the matching function, and returns the result as a ToolMessage.
Without this separation, you'd have to manually parse tool call JSON, dispatch to functions, and format results — every time. ToolNode handles all of that.
```
User message
     │
     ▼
  [agent] ──── tool_call? ──── Yes ──▶ [tools] ──▶ back to [agent]
     │
     No (final answer)
     ▼
    END
```
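To make concrete what `ToolNode` automates, here is a minimal sketch of that manual dispatch in plain Python, with no LangGraph involved. `dispatch_tool_call` is a hypothetical helper written for illustration, not a library API; the `tool_call` dict mirrors the shape tool-calling LLMs emit.

```python
# A sketch of the dispatch work ToolNode does for you:
# look up the named function, run it, wrap the result.

def get_product_price(product_id: str) -> str:
    prices = {"prod_001": "$29.99"}
    return prices.get(product_id, "not found")

# Registry mapping tool names to callables
TOOLS = {"get_product_price": get_product_price}

def dispatch_tool_call(tool_call: dict) -> dict:
    """Run the named tool and wrap the result like a ToolMessage."""
    func = TOOLS[tool_call["name"]]
    result = func(**tool_call["args"])
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}

msg = dispatch_tool_call(
    {"name": "get_product_price", "args": {"product_id": "prod_001"}, "id": "call_1"}
)
print(msg["content"])  # $29.99
```

`ToolNode` does this same lookup-run-wrap cycle for every tool call in the last message, plus schema validation and error handling.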
Solution
Step 1: Install Dependencies
```bash
# LangGraph 0.2+ required for ToolNode
pip install langgraph langchain-core langchain-openai
```
Verify:
```bash
python -c "import langgraph; print(langgraph.__version__)"
# Expected: 0.2.x or higher
```
Step 2: Define Custom Tools with @tool
The @tool decorator from langchain_core does three things: registers the function name, uses the docstring as the tool description the LLM sees, and validates inputs via type hints.
```python
from langchain_core.tools import tool

@tool
def get_product_price(product_id: str) -> str:
    """Look up the current price of a product by its ID.

    Returns the price as a formatted string, e.g. '$29.99'.
    Returns 'not found' if the product ID does not exist.
    """
    # Replace with your real data source
    prices = {
        "prod_001": "$29.99",
        "prod_002": "$149.00",
        "prod_003": "$9.99",
    }
    return prices.get(product_id, "not found")

@tool
def calculate_discount(price: str, discount_percent: float) -> str:
    """Calculate the final price after applying a percentage discount.

    Args:
        price: Price string like '$29.99'
        discount_percent: Discount as a float, e.g. 20.0 for 20%

    Returns the discounted price as a formatted string.
    """
    # Strip the dollar sign and compute
    amount = float(price.replace("$", ""))
    discounted = amount * (1 - discount_percent / 100)
    return f"${discounted:.2f}"
```
Two rules for good tool docstrings:
- Describe what the tool does, not what its code contains — the LLM reads this to decide when to use it.
- Document all edge cases in the return value (e.g., "Returns 'not found' if...") so the LLM can reason about failures.
Step 3: Bind Tools to the LLM
The LLM needs to know which tools exist before it can emit tool_call messages. .bind_tools() injects the tool schemas into every call.
```python
from langchain_openai import ChatOpenAI

tools = [get_product_price, calculate_discount]
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# bind_tools attaches JSON schemas so the LLM knows what to call
llm_with_tools = llm.bind_tools(tools)
```
Step 4: Build the Agent Node and ToolNode
```python
from langgraph.graph import StateGraph, MessagesState, END
from langgraph.prebuilt import ToolNode

# ToolNode takes the same tools list — it handles dispatch automatically
tool_node = ToolNode(tools)

def agent_node(state: MessagesState):
    """Call the LLM with the current message history."""
    response = llm_with_tools.invoke(state["messages"])
    # Return the new message to append to state
    return {"messages": [response]}
```
MessagesState is a built-in LangGraph state schema that manages a messages list for you. No custom state class needed for most tool-calling agents.
Step 5: Add Conditional Routing
The agent loops back to itself after each tool call. It only exits when the LLM returns a plain message with no tool calls.
```python
def should_continue(state: MessagesState) -> str:
    """Route to tool_node if there are tool calls, otherwise end."""
    last_message = state["messages"][-1]
    # AIMessage has a tool_calls attribute when the LLM wants to call a tool
    if last_message.tool_calls:
        return "tools"
    return END
```
Step 6: Assemble the Graph
```python
graph_builder = StateGraph(MessagesState)

# Add nodes
graph_builder.add_node("agent", agent_node)
graph_builder.add_node("tools", tool_node)

# Set entry point
graph_builder.set_entry_point("agent")

# agent → tools (if tool calls) or END (if done)
graph_builder.add_conditional_edges(
    "agent",
    should_continue,
    {"tools": "tools", END: END},
)

# tools always routes back to agent
graph_builder.add_edge("tools", "agent")

graph = graph_builder.compile()
```
Step 7: Run the Agent
```python
from langchain_core.messages import HumanMessage

result = graph.invoke({
    "messages": [HumanMessage(content="What's the price of prod_001? Apply a 15% discount.")]
})

# Print the final assistant message
print(result["messages"][-1].content)
```
Expected output:
```
The price of prod_001 is $29.99. After applying a 15% discount, the final price is $25.49.
```
The agent made two tool calls in sequence — get_product_price, then calculate_discount — before producing the final answer.
Handling Tool Errors
In LangGraph 0.2+, `ToolNode` catches exceptions raised inside tools by default (`handle_tool_errors=True`) and returns the error text as `ToolMessage` content, so the LLM can see the failure and recover. Setting the flag explicitly documents that intent; pass `handle_tool_errors=False` if you would rather have exceptions propagate and fail the whole graph invocation.

```python
# ToolNode catches exceptions and returns the error string to the LLM
# instead of raising, letting the agent decide how to respond
tool_node = ToolNode(tools, handle_tool_errors=True)
```
For custom error messages per tool, raise a ToolException:
```python
from langchain_core.tools import tool, ToolException

@tool
def get_product_price(product_id: str) -> str:
    """Look up the current price of a product by its ID."""
    prices = {"prod_001": "$29.99"}
    if product_id not in prices:
        # This message is returned to the LLM as the tool result
        raise ToolException(f"Product '{product_id}' does not exist in the catalog.")
    return prices[product_id]
```
Verification
Run this end-to-end test to confirm your graph works:
```python
from langchain_core.messages import HumanMessage

test_cases = [
    "What is the price of prod_002?",
    "What is the price of prod_999?",  # Tests error handling
    "What is prod_003's price after a 50% discount?",
]

for query in test_cases:
    result = graph.invoke({"messages": [HumanMessage(content=query)]})
    print(f"Q: {query}")
    print(f"A: {result['messages'][-1].content}\n")
```
You should see three responses: a price lookup, a graceful "not found" message, and a discount calculation.
Production Considerations
Async tools: For I/O-bound tools (HTTP calls, DB queries), define them with async def and use graph.ainvoke(). ToolNode supports async tools natively.
```python
import httpx
from langchain_core.tools import tool

@tool
async def fetch_live_price(product_id: str) -> str:
    """Fetch the live price from the pricing API."""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.example.com/prices/{product_id}")
        return response.json()["price"]
```
Token costs: Each tool result is added to the message history. Long tool outputs (e.g., raw API responses) inflate context fast. Trim or summarize tool outputs before returning them.
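One cheap mitigation is to clip long outputs inside the tool before they ever enter the history. The `truncate_tool_output` helper below is a hypothetical example written for this article, not a LangGraph utility:

```python
def truncate_tool_output(text: str, max_chars: int = 2000) -> str:
    """Clip long tool output so it doesn't blow up the context window."""
    if len(text) <= max_chars:
        return text
    # Keep the head and flag the cut so the LLM knows data was dropped
    dropped = len(text) - max_chars
    return text[:max_chars] + f"\n[truncated {dropped} chars]"

short = truncate_tool_output("small result")   # returned unchanged
long_out = truncate_tool_output("x" * 5000)    # clipped, with a notice appended
```

Call it as the last step of any tool that returns raw API payloads; for structured data, summarizing the relevant fields is usually better than blind truncation.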
Tool timeouts: ToolNode does not enforce timeouts. Wrap long-running tools with asyncio.wait_for or a thread timeout to prevent the agent from hanging indefinitely.
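A minimal sketch of that guard for an async tool, using only `asyncio.wait_for`; `slow_lookup` is a stand-in for a real external call, and returning a timeout notice (rather than raising) lets the LLM react to it:

```python
import asyncio

async def slow_lookup(product_id: str) -> str:
    """Stand-in for a slow external call."""
    await asyncio.sleep(5)
    return "$29.99"

async def lookup_with_timeout(product_id: str, timeout: float = 1.0) -> str:
    """Return the result, or a timeout notice the LLM can react to."""
    try:
        return await asyncio.wait_for(slow_lookup(product_id), timeout=timeout)
    except asyncio.TimeoutError:
        return f"Lookup for '{product_id}' timed out after {timeout}s."

print(asyncio.run(lookup_with_timeout("prod_001", timeout=0.1)))
# Lookup for 'prod_001' timed out after 0.1s.
```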
What You Learned
- `@tool` turns any Python function into a schema-aware LangGraph tool using its type hints and docstring
- `ToolNode` handles all dispatch, result formatting, and error catching — no manual JSON parsing needed
- The `should_continue` conditional edge creates the ReAct loop: the agent runs until it produces a message with no tool calls
- `handle_tool_errors=True` keeps tool exceptions from crashing the graph and lets the LLM recover gracefully
When NOT to use ToolNode: If your agent only ever calls one tool deterministically, a direct edge from the agent node to a custom function node is simpler. ToolNode is built for dynamic dispatch where the LLM chooses which tool to call.
Tested on LangGraph 0.2.28, LangChain Core 0.3.x, Python 3.12, OpenAI gpt-4o-mini