Problem: A Task Fails Halfway Through a Long Crew Run
Your crew ran 6 tasks. Task 5 hit an API timeout. Now you're staring at a failed run — and the only option you know is crew.kickoff(), which restarts everything from scratch.
That wastes time, burns API credits, and re-runs work that already succeeded.
You'll learn:
- How CrewAI's replay feature works and what it stores
- How to resume a crew from any failed task using a kickoff ID
- How to list past runs and target a specific task index
Time: 12 min | Difficulty: Intermediate
Why This Happens
CrewAI stores the output of every task in a local run log tied to a kickoff_id. When a run fails mid-way, that log is preserved. Without replay, you'd have no way to reference it — so you restart from zero.
The replay feature reads that log and picks up execution from the task you specify, injecting the prior tasks' outputs as context.
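The checkpoint-and-resume idea is easy to see in miniature. The sketch below is an illustration, not CrewAI's actual internals: `run_tasks` and the in-memory `run_log` dict are hypothetical stand-ins for CrewAI's persisted run log.

```python
run_log = {}  # kickoff_id -> {task_index: output}, stand-in for the persisted log

def run_tasks(tasks, kickoff_id, start_index=0):
    """Run tasks from start_index, injecting logged prior outputs as context."""
    log = run_log.setdefault(kickoff_id, {})
    context = [log[i] for i in range(start_index)]  # reuse completed work
    for i in range(start_index, len(tasks)):
        output = tasks[i](context)  # each task sees all prior outputs
        log[i] = output
        context.append(output)
    return context[-1]

def research(ctx):
    return "data"

def outline(ctx):
    return "outline from " + ctx[-1]

attempts = {"n": 0}
def draft(ctx):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise TimeoutError("API timeout")  # simulate the mid-run failure
    return "report from " + ctx[-1]

tasks = [research, outline, draft]
try:
    run_tasks(tasks, "run-1")  # fails at task 2; tasks 0 and 1 are logged
except TimeoutError:
    pass

result = run_tasks(tasks, "run-1", start_index=2)  # resume skips tasks 0 and 1
```

Resuming at index 2 reuses the logged outputs of tasks 0 and 1 instead of recomputing them, which is the same trade replay makes against CrewAI's persisted log.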
Symptoms:
- Mid-run exception (RateLimitError, TimeoutError, tool failure)
- Crew exits with a partial result
- Re-running crew.kickoff() repeats completed tasks unnecessarily
Solution
Step 1: Confirm You're on CrewAI 0.80+
Replay was stabilized in 0.80. Earlier versions have the CLI command but the task indexing is unreliable.
pip show crewai | grep Version
# Expected: Version: 0.80.0 or higher
# Upgrade if needed
pip install --upgrade crewai
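You can also check the version from inside Python, which is handy in a CI gate or a setup script. `meets_minimum` is a small helper written for this article; it compares version components numerically, since a plain string comparison would rank 0.100.0 below 0.80.0 (it also ignores any pre-release suffix, so adjust if you run release candidates):

```python
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(installed: str, minimum: str = "0.80.0") -> bool:
    """Compare dotted version strings numerically, not lexically."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split(".")[:3])
    return parse(installed) >= parse(minimum)

try:
    print("replay supported:", meets_minimum(version("crewai")))
except PackageNotFoundError:
    print("crewai is not installed")
```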
Step 2: Run Your Crew Normally First
Replay requires at least one prior kickoff to have generated a run log. If you haven't run the crew yet, do so now.
# main.py
from crewai import Crew, Agent, Task, Process
researcher = Agent(
    role="Research Analyst",
    goal="Find accurate data on the given topic",
    backstory="You are a meticulous analyst with 10 years of experience.",
    verbose=True,
)

writer = Agent(
    role="Technical Writer",
    goal="Produce a clear, structured report",
    backstory="You turn dense research into readable summaries.",
    verbose=True,
)

research_task = Task(
    description="Research the latest benchmarks for open-source LLMs in 2026.",
    expected_output="A bullet list of top 5 models with benchmark scores.",
    agent=researcher,
)

outline_task = Task(
    description="Create a report outline based on the research findings.",
    expected_output="A structured outline with 5 sections.",
    agent=writer,
)

draft_task = Task(
    description="Write the full report from the outline.",
    expected_output="A 600-word report in markdown.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, outline_task, draft_task],
    process=Process.sequential,
    verbose=True,
)
result = crew.kickoff()
print(result)
python main.py
After a successful (or failed) run, CrewAI logs the kickoff ID in your terminal output:
[2026-03-09 14:22:01][INFO]: Run ID: a3f7c120-84b1-4e2d-b9f3-d90c1a5e6f22
Step 3: List All Past Kickoff IDs
If you missed the ID in the terminal, retrieve it with the CLI:
crewai log-tasks-outputs
Expected output:
Kickoff ID: a3f7c120-84b1-4e2d-b9f3-d90c1a5e6f22
Task 0 [research_task] — ✅ completed
Task 1 [outline_task] — ✅ completed
Task 2 [draft_task] — ❌ failed
This shows exactly which tasks succeeded and where the run broke. Task indices are zero-based.
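If you want to script this step, a small helper (hypothetical, written for this article) can pull the first failed index out of the CLI text. The exact log format may vary between CrewAI versions, so adjust the pattern if your output differs:

```python
import re

# Sample text in the format shown above
SAMPLE = """\
Kickoff ID: a3f7c120-84b1-4e2d-b9f3-d90c1a5e6f22
Task 0 [research_task] — ✅ completed
Task 1 [outline_task] — ✅ completed
Task 2 [draft_task] — ❌ failed
"""

def first_failed_index(cli_output: str):
    """Return the zero-based index of the first task marked failed, or None."""
    for line in cli_output.splitlines():
        match = re.match(r"\s*Task (\d+)\b.*failed\s*$", line)
        if match:
            return int(match.group(1))
    return None

print(first_failed_index(SAMPLE))  # → 2
```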
Step 4: Replay From the Failed Task
Use the CLI to replay starting from task index 2 (the failed one):
# Syntax: crewai replay -n <task_index> --kickoff-id <id>
crewai replay -n 2 --kickoff-id a3f7c120-84b1-4e2d-b9f3-d90c1a5e6f22
CrewAI loads the outputs of tasks 0 and 1 from the run log, then re-executes from task 2 onward. You skip the two completed tasks entirely.
If you want to replay from the very beginning of a specific run:
crewai replay -n 0 --kickoff-id a3f7c120-84b1-4e2d-b9f3-d90c1a5e6f22
Step 5: Replay Programmatically (Optional)
If you prefer to trigger replay from Python rather than the CLI — useful for automated retry logic:
kickoff_id = "a3f7c120-84b1-4e2d-b9f3-d90c1a5e6f22"
task_index = 2 # resume from here
# CrewAI reads prior task outputs from the run log automatically
# when you pass replay_task_index to kickoff
result = crew.kickoff(
    inputs={},
    replay_task_index=task_index,
    kickoff_id=kickoff_id,
)
print(result)
This is the same operation as the CLI, but lets you wrap it in retry logic or hook it into a monitoring workflow.
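For example, you could wrap the call in a hypothetical `replay_with_retry` helper that backs off on transient errors. The exception types worth catching depend on your LLM provider's client library; TimeoutError and ConnectionError stand in for them here:

```python
import time

def replay_with_retry(replay_fn, max_attempts=3, base_delay=1.0):
    """Call replay_fn, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return replay_fn()
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

# With the crew above, the callable would be something like:
# replay_with_retry(lambda: crew.kickoff(
#     inputs={}, replay_task_index=2, kickoff_id=kickoff_id))
```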
Step 6: Handle Inputs That Affect All Tasks
If your crew takes runtime inputs (the inputs dict passed to kickoff()), pass the same values on replay. Replay reuses stored task outputs but still needs inputs for template interpolation in tasks that re-run.
inputs = {
    "topic": "open-source LLMs 2026",
    "word_count": 600,
}

result = crew.kickoff(
    inputs=inputs,
    replay_task_index=2,
    kickoff_id=kickoff_id,
)
If you pass different inputs, tasks that re-run will use the new values — tasks loaded from the log keep their original outputs.
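The interpolation step behaves like Python's str.format (a simplified illustration, not CrewAI's exact mechanism): placeholders in a task description are filled from the inputs dict each time a task actually executes, which is why re-run tasks still need the values.

```python
# A task description with {placeholders}, as in CrewAI task templates
description = "Write a {word_count}-word report on {topic}."
inputs = {"topic": "open-source LLMs 2026", "word_count": 600}

# Only tasks that re-run perform this substitution; logged tasks keep
# their already-rendered outputs
print(description.format(**inputs))
# → Write a 600-word report on open-source LLMs 2026.
```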
Verification
After replay finishes, confirm the run completed all tasks:
crewai log-tasks-outputs
You should see all tasks marked ✅ completed under the same kickoff ID.
To confirm the output is correct, inspect the final task result:
# The return value of kickoff() is the last task's output
print(result.raw)
What You Learned
- CrewAI logs task outputs per kickoff ID — replay reads this log to skip completed work
- crewai log-tasks-outputs shows task status per run so you can target the right index
- Task indices are zero-based; replay re-executes the specified index and all tasks after it
- Programmatic replay via replay_task_index enables automated retry pipelines
Limitation: Replay only works for sequential crews (Process.sequential). Hierarchical process crews with a manager agent don't yet support mid-run replay — the manager re-orchestrates task order dynamically, so there's no stable index to resume from.
When NOT to use replay: If the failure was caused by bad agent output in an earlier task (not a transient error), replay from the failed task will likely fail again. In that case, fix the task description or agent config, then replay from the task that produced bad output.
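If you automate replay, a simple guard can keep deterministic failures from looping. This is a hypothetical sketch; pick the exception types that your provider's client actually raises for transient conditions:

```python
# Errors that are plausibly transient and safe to auto-replay
TRANSIENT = (TimeoutError, ConnectionError)

def should_auto_replay(exc: BaseException) -> bool:
    """Auto-replay only transient failures; others need a human fix first."""
    return isinstance(exc, TRANSIENT)
```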
Tested on CrewAI 0.80.0, Python 3.12, macOS and Ubuntu 24.04