Commit 3fb6b5a

(docs): add examples on recursion counter access and proactive handling in graph-api (#1166)

## Overview

Expanded docs for accessing and handling the recursion counter in LangGraph graphs, enabling developers to implement proactive recursion management before hitting limits.

## Type of change

**Type:** Update existing documentation

## Checklist

- [x] I have read the [contributing guidelines](README.md)
- [x] I have tested my changes locally using `docs dev`
- [x] All code examples have been tested and work correctly
- [x] I have used **root relative** paths for internal links
- [n/a] I have updated navigation in `src/docs.json` if needed
- [ ] I have gotten approval from the relevant reviewers

## Additional notes

This PR extends the existing "Recursion limit" section in the Graph API documentation with detailed guidance on accessing `config.metadata.langgraph_step` and implementing proactive recursion handling.

### What's added

1. **How it works** - Explains where the step counter is stored and how the recursion limit check works in both Python (`config["metadata"]["langgraph_step"]`) and TypeScript (`config.metadata.langgraph_step`).
2. **Accessing the current step counter** - Simple code examples showing how to read the step counter within node functions.
3. **Proactive recursion handling** - Complete working examples demonstrating:
   - Checking whether execution is approaching the limit (e.g., an 80% threshold)
   - Routing to fallback nodes before hitting the limit
   - Full graph setup with conditional edges for graceful degradation
4. **Proactive vs reactive approaches** - Side-by-side comparison including:
   - Code examples of both proactive monitoring and reactive error catching
   - A comparison table highlighting detection timing, handling location, and control-flow differences
   - Lists of advantages for each approach, with a recommendation for proactive handling
5. **Other available metadata** - Documents additional metadata fields available in config (node, triggers, path, checkpoint namespace).

### Motivation

Currently, the documentation explains the recursion limit configuration but doesn't cover how developers can access the current step counter or implement proactive handling strategies. This leads developers to discover only reactive error handling (catching `GraphRecursionError`) rather than implementing graceful degradation patterns within their graphs. This addition enables better user experiences by allowing graphs to complete normally with partial results rather than throwing exceptions.

Co-authored-by: Lauren Hirata Singh <lauren@langchain.dev>
1 parent 1d73a8f commit 3fb6b5a

1 file changed: src/oss/langgraph/graph-api.mdx (+329, -0 lines)

@@ -1121,6 +1121,335 @@ await graph.invoke(inputs, {

:::

### Accessing and handling the recursion counter

:::python
The current step counter is accessible in `config["metadata"]["langgraph_step"]` within any node, allowing for proactive recursion handling before hitting the recursion limit. This enables you to implement graceful degradation strategies within your graph logic.
:::

:::js
The current step counter is accessible in `config.metadata.langgraph_step` within any node, allowing for proactive recursion handling before hitting the recursion limit. This enables you to implement graceful degradation strategies within your graph logic.
:::

#### How it works

:::python

The step counter is stored in `config["metadata"]["langgraph_step"]`. LangGraph computes a stop value from the step at which execution starts: `stop = start_step + recursion_limit + 1`. If the graph is still running once the current step exceeds `stop`, LangGraph raises a `GraphRecursionError`.

:::

:::js

The step counter is stored in `config.metadata.langgraph_step`. LangGraph computes a stop value from the step at which execution starts: `stop = startStep + recursionLimit + 1`. If the graph is still running once the current step exceeds `stop`, LangGraph raises a `GraphRecursionError`.

:::

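The comparison above is plain arithmetic and can be sketched in isolation. This is an illustrative sketch of the check as described, not LangGraph's actual source; `exceeds_limit` and `start_step` are hypothetical names:

:::python

```python
def exceeds_limit(start_step: int, current_step: int, recursion_limit: int) -> bool:
    # stop is fixed when execution begins; the counter is compared against it each super-step
    stop = start_step + recursion_limit + 1
    return current_step > stop

# With the default recursion_limit of 25 and a start step of 0, stop is 26
assert not exceeds_limit(0, 26, 25)
assert exceeds_limit(0, 27, 25)
```

:::
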
#### Accessing the current step counter

You can access the current step counter within any node to monitor execution progress.

:::python

```python
from langchain_core.runnables import RunnableConfig

def my_node(state: dict, config: RunnableConfig) -> dict:
    current_step = config["metadata"]["langgraph_step"]
    print(f"Currently on step: {current_step}")
    return state
```

:::

:::js

```typescript
import { RunnableConfig } from "@langchain/core/runnables";

async function myNode(state: any, config: RunnableConfig): Promise<any> {
  const currentStep = config.metadata?.langgraph_step;
  console.log(`Currently on step: ${currentStep}`);
  return state;
}
```

:::

#### Proactive recursion handling

You can check the step counter and proactively route to a different node before hitting the limit. This allows for graceful degradation within your graph.

:::python

```python
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph, END

def reasoning_node(state: dict, config: RunnableConfig) -> dict:
    current_step = config["metadata"]["langgraph_step"]
    recursion_limit = config["recursion_limit"]  # always present, defaults to 25

    # Check if we're approaching the limit (e.g., 80% threshold)
    if current_step >= recursion_limit * 0.8:
        return {
            **state,
            "route_to": "fallback",
            "reason": "Approaching recursion limit"
        }

    # Normal processing
    return {"messages": state["messages"] + ["thinking..."]}

def fallback_node(state: dict, config: RunnableConfig) -> dict:
    """Handle cases where the recursion limit is approaching"""
    return {
        **state,
        "messages": state["messages"] + ["Reached complexity limit, providing best effort answer"]
    }

def route_based_on_state(state: dict) -> str:
    if state.get("route_to") == "fallback":
        return "fallback"
    elif state.get("done"):
        return END
    return "reasoning"

# Build graph
graph = StateGraph(dict)
graph.add_node("reasoning", reasoning_node)
graph.add_node("fallback", fallback_node)
graph.add_conditional_edges("reasoning", route_based_on_state)
graph.add_edge("fallback", END)
graph.set_entry_point("reasoning")

app = graph.compile()
```

:::

:::js

```typescript
import { RunnableConfig } from "@langchain/core/runnables";
import { StateGraph, START, END } from "@langchain/langgraph";

interface State {
  messages: string[];
  route_to?: string;
  reason?: string;
  done?: boolean;
}

async function reasoningNode(
  state: State,
  config: RunnableConfig
): Promise<Partial<State>> {
  const currentStep = config.metadata?.langgraph_step ?? 0;
  const recursionLimit = config.recursionLimit!; // always present, defaults to 25

  // Check if we're approaching the limit (e.g., 80% threshold)
  if (currentStep >= recursionLimit * 0.8) {
    return {
      ...state,
      route_to: "fallback",
      reason: "Approaching recursion limit"
    };
  }

  // Normal processing
  return {
    messages: [...state.messages, "thinking..."]
  };
}

async function fallbackNode(
  state: State,
  config: RunnableConfig
): Promise<Partial<State>> {
  return {
    ...state,
    messages: [
      ...state.messages,
      "Reached complexity limit, providing best effort answer"
    ]
  };
}

function routeBasedOnState(state: State): string {
  if (state.route_to === "fallback") {
    return "fallback";
  } else if (state.done) {
    return END;
  }
  return "reasoning";
}

// Build graph (each channel uses last-value semantics)
const graph = new StateGraph<State>({
  channels: {
    messages: null,
    route_to: null,
    reason: null,
    done: null
  }
})
  .addNode("reasoning", reasoningNode)
  .addNode("fallback", fallbackNode)
  .addEdge(START, "reasoning")
  .addConditionalEdges("reasoning", routeBasedOnState)
  .addEdge("fallback", END);

const app = graph.compile();
```

:::

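The 80% threshold used above is ordinary arithmetic and can be unit-tested on its own. A minimal sketch, where `approaching_limit` is a hypothetical helper rather than a LangGraph API:

:::python

```python
def approaching_limit(current_step: int, recursion_limit: int, threshold: float = 0.8) -> bool:
    # True once the step counter reaches the chosen fraction of the limit
    return current_step >= recursion_limit * threshold

# With the default limit of 25, the fallback route triggers at step 20
assert approaching_limit(20, 25)
assert not approaching_limit(19, 25)
```

:::
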
#### Proactive vs reactive approaches

There are two main approaches to handling recursion limits: proactive (monitoring within the graph) and reactive (catching errors externally).

:::python

```python
from langchain_core.runnables import RunnableConfig
from langgraph.errors import GraphRecursionError

# Proactive approach (recommended)
def agent_with_monitoring(state: dict, config: RunnableConfig) -> dict:
    """Proactively monitor and handle recursion within the graph"""
    current_step = config["metadata"]["langgraph_step"]
    recursion_limit = config["recursion_limit"]

    # Early detection - route to internal handling
    if current_step >= recursion_limit - 2:  # 2 steps before the limit
        return {
            **state,
            "status": "recursion_limit_approaching",
            "final_answer": "Reached iteration limit, returning partial result"
        }

    # Normal processing
    return {"messages": state["messages"] + [f"Step {current_step}"]}

# Reactive approach (fallback)
try:
    result = graph.invoke(initial_state, {"recursion_limit": 10})
except GraphRecursionError:
    # Handle externally after graph execution fails
    result = fallback_handler(initial_state)
```

:::

:::js

```typescript
import { RunnableConfig } from "@langchain/core/runnables";
import { GraphRecursionError } from "@langchain/langgraph";

interface State {
  messages: string[];
  status?: string;
  final_answer?: string;
}

// Proactive approach (recommended)
async function agentWithMonitoring(
  state: State,
  config: RunnableConfig
): Promise<Partial<State>> {
  const currentStep = config.metadata?.langgraph_step ?? 0;
  const recursionLimit = config.recursionLimit!;

  // Early detection - route to internal handling
  if (currentStep >= recursionLimit - 2) { // 2 steps before the limit
    return {
      ...state,
      status: "recursion_limit_approaching",
      final_answer: "Reached iteration limit, returning partial result"
    };
  }

  // Normal processing
  return {
    messages: [...state.messages, `Step ${currentStep}`]
  };
}

// Reactive approach (fallback)
try {
  const result = await graph.invoke(initialState, { recursionLimit: 10 });
} catch (error) {
  if (error instanceof GraphRecursionError) {
    // Handle externally after graph execution fails
    const result = await fallbackHandler(initialState);
  } else {
    throw error;
  }
}
```

:::

The key differences between these approaches are:

| Approach | Detection | Handling | Control flow |
|----------|-----------|----------|--------------|
| Proactive (using `langgraph_step`) | Before limit reached | Inside graph via conditional routing | Graph continues to completion node |
| Reactive (catching `GraphRecursionError`) | After limit exceeded | Outside graph in try/catch | Graph execution terminated |

**Proactive advantages:**

- Graceful degradation within the graph
- Can save intermediate state in checkpoints
- Better user experience with partial results
- Graph completes normally (no exception)

**Reactive advantages:**

- Simpler implementation
- No need to modify graph logic
- Centralized error handling

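The two approaches also compose: a graph can route internally near the limit while the caller still catches `GraphRecursionError` as a last resort. A minimal sketch of that control flow with a stand-in `invoke` loop (hypothetical; not a LangGraph API):

:::python

```python
class GraphRecursionError(Exception):
    """Stand-in for langgraph.errors.GraphRecursionError."""

def invoke(state: dict, recursion_limit: int) -> dict:
    # Stand-in for graph.invoke: loop until done or the limit is hit,
    # with a proactive check two steps before the limit.
    step = 0
    while not state.get("done"):
        step += 1
        if step >= recursion_limit - 2:
            return {**state, "final_answer": "partial result"}
        if step > recursion_limit:
            # Only reachable if the proactive check above is removed
            raise GraphRecursionError
    return state

# The reactive catch remains around the call as a last resort
try:
    result = invoke({"messages": []}, recursion_limit=10)
except GraphRecursionError:
    result = {"final_answer": "fallback"}

# The proactive branch fires first, so the exception path is never taken here
assert result["final_answer"] == "partial result"
```

:::
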
#### Other available metadata

:::python

Along with `langgraph_step`, the following metadata is also available in `config["metadata"]`:

```python
def inspect_metadata(state: dict, config: RunnableConfig) -> dict:
    metadata = config["metadata"]

    print(f"Step: {metadata['langgraph_step']}")
    print(f"Node: {metadata['langgraph_node']}")
    print(f"Triggers: {metadata['langgraph_triggers']}")
    print(f"Path: {metadata['langgraph_path']}")
    print(f"Checkpoint NS: {metadata['langgraph_checkpoint_ns']}")

    return state
```

:::

:::js

Along with `langgraph_step`, the following metadata is also available in `config.metadata`:

```typescript
async function inspectMetadata(
  state: any,
  config: RunnableConfig
): Promise<any> {
  const metadata = config.metadata;

  console.log(`Step: ${metadata?.langgraph_step}`);
  console.log(`Node: ${metadata?.langgraph_node}`);
  console.log(`Triggers: ${metadata?.langgraph_triggers}`);
  console.log(`Path: ${metadata?.langgraph_path}`);
  console.log(`Checkpoint NS: ${metadata?.langgraph_checkpoint_ns}`);

  return state;
}
```

:::

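Since all of these keys share the `langgraph_` prefix, they are easy to collect in one pass when logging. A minimal sketch over a plain dict with made-up sample values (at run time the real values come from `config["metadata"]`):

:::python

```python
# Sample metadata dict; real values are provided by LangGraph at run time
metadata = {
    "langgraph_step": 3,
    "langgraph_node": "reasoning",
    "langgraph_triggers": ["start:reasoning"],
    "thread_id": "abc-123",  # user-supplied configurable keys also land in metadata
}

# Keep only the LangGraph-provided keys
langgraph_keys = {k: v for k, v in metadata.items() if k.startswith("langgraph_")}
assert sorted(langgraph_keys) == ["langgraph_node", "langgraph_step", "langgraph_triggers"]
```

:::
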
## Visualization
It's often nice to be able to visualize graphs, especially as they get more complex. LangGraph comes with several built-in ways to visualize graphs. See [this how-to guide](/oss/langgraph/use-graph-api#visualize-your-graph) for more info.
