CrewAI Hierarchical Process: Manager and Worker Agents

Build a CrewAI hierarchical agent system with a manager that delegates tasks to specialized workers. Full code, patterns, and pitfalls.

Problem: Your CrewAI Agents Work in Parallel But Nobody's in Charge

The default Process.sequential runs agents one after another. That works for pipelines. It breaks down when tasks are interdependent, dynamic, or need a coordinator that can re-delegate based on results.

The hierarchical process gives you a manager agent that reads the goal, breaks it into subtasks, dispatches them to worker agents, and synthesizes the final output — without you hardcoding the flow.

You'll learn:

  • How to configure Process.hierarchical with a manager LLM
  • How to define worker agents the manager can delegate to
  • How task routing actually works under the hood
  • When hierarchical beats sequential (and when it doesn't)

Time: 20 min | Difficulty: Intermediate


Why This Happens

Sequential crews execute tasks in order — agent 1 finishes, then agent 2 starts. There's no coordination layer. If agent 2's work depends on which path agent 1 took, sequential falls apart.

Hierarchical process adds a manager agent that CrewAI creates automatically (or you define yourself). The manager receives the goal, decides which worker agents to use, in what order, and with what inputs. Workers report back. The manager decides if the result is sufficient or if re-delegation is needed.

Symptoms that you need hierarchical:

  • Tasks are conditionally ordered (do X only if Y found something)
  • You have specialized agents that shouldn't always run
  • A single agent is doing too much because there's no coordinator

How the Hierarchical Process Works

Before writing code, here's the execution model:

User Goal
    │
    ▼
Manager Agent (LLM-powered)
    │
    ├──delegates──▶ Research Agent ──result──▶ Manager
    │
    ├──delegates──▶ Analysis Agent ──result──▶ Manager
    │
    └──delegates──▶ Writer Agent ──result──▶ Manager
                                                │
                                                ▼
                                        Final Output

The manager isn't a human-in-the-loop. It's an LLM call with a system prompt that says: "You are a manager. Here are your workers and their capabilities. Delegate to achieve the goal." CrewAI handles the tool calling that lets the manager invoke workers.
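
You can picture that control flow as plain Python. The sketch below is NOT CrewAI's implementation — the real manager is an LLM choosing workers via tool calls, and the "review" step is a judgment call, not a string check — but the loop shape is the same: delegate, review, re-delegate if needed, synthesize.

```python
# Toy illustration of the hierarchical control flow (not CrewAI internals).
# A keyword check stands in for the manager LLM's judgment.

def research(goal: str) -> str:
    return f"research notes on {goal}"

def write(notes: str) -> str:
    return f"report based on: {notes}"

WORKERS = {"research": research, "write": write}

def manager(goal: str) -> str:
    # 1. Delegate research  2. Review the result  3. Re-delegate if weak
    # 4. Hand off to the writer  5. Return the synthesized output
    notes = WORKERS["research"](goal)
    if "notes" not in notes:  # crude stand-in for the manager's review
        notes = WORKERS["research"](goal + " (retry with more detail)")
    return WORKERS["write"](notes)

print(manager("LLM inference frameworks"))
# → report based on: research notes on LLM inference frameworks
```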


Solution

Step 1: Install CrewAI with Tools

# CrewAI 0.80+ required for stable hierarchical support
pip install "crewai[tools]>=0.80.0"

# Verify
python -c "import crewai; print(crewai.__version__)"

Expected: 0.80.x or higher


Step 2: Define Your Worker Agents

Workers are standard CrewAI agents. The key is writing role and goal clearly — the manager LLM reads these to decide who to delegate to.

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, FileReadTool

search_tool = SerperDevTool()

# Worker 1: Research specialist
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, up-to-date information on any topic using web search",
    backstory=(
        "You specialize in finding primary sources and synthesizing information "
        "quickly. You always cite where data comes from."
    ),
    tools=[search_tool],
    verbose=True,
    allow_delegation=False,  # Workers don't sub-delegate
)

# Worker 2: Data analyst
analyst = Agent(
    role="Data Analyst",
    goal="Analyze research findings and identify patterns, risks, and opportunities",
    backstory=(
        "You take raw research and extract actionable insights. "
        "You flag contradictions and data quality issues."
    ),
    verbose=True,
    allow_delegation=False,
)

# Worker 3: Technical writer
writer = Agent(
    role="Technical Writer",
    goal="Produce clear, structured reports from analysis results",
    backstory=(
        "You turn complex analysis into readable output. "
        "You use headers, bullet points, and concrete examples."
    ),
    verbose=True,
    allow_delegation=False,
)

Critical: Set allow_delegation=False on workers. If workers can also delegate, you risk delegation loops where agents hand work back and forth until the iteration limit is hit. Only the manager should delegate.
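
To see why role and goal wording matters so much, consider what the manager actually reads. The exact prompt template is a CrewAI internal, but conceptually your role/goal strings are rendered into a coworker list — they are the only routing signal the manager gets. A simplified sketch:

```python
# Sketch of how worker metadata becomes the manager's routing context.
# The real prompt template is a CrewAI internal; this is illustrative only.

workers = [
    {"role": "Senior Research Analyst",
     "goal": "Find accurate, up-to-date information on any topic using web search"},
    {"role": "Data Analyst",
     "goal": "Analyze research findings and identify patterns, risks, and opportunities"},
]

def render_coworkers(workers: list[dict]) -> str:
    """Render the worker roster the manager LLM chooses from."""
    lines = ["You can delegate to these coworkers:"]
    for w in workers:
        lines.append(f"- {w['role']}: {w['goal']}")
    return "\n".join(lines)

print(render_coworkers(workers))
```

A vague goal like "Help with tasks" gives the manager nothing to route on; a specific one like the research goal above makes the delegation decision obvious.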


Step 3: Define Tasks (Keep Them Loosely Scoped)

In hierarchical mode, tasks describe the overall goal — not the exact steps. The manager fills in the steps.

research_task = Task(
    description=(
        "Research the current state of {topic}. "
        "Find: key players, recent developments in the last 6 months, "
        "and 3 concrete data points with sources."
    ),
    expected_output=(
        "A structured research brief with: summary (3 sentences), "
        "key players (list), recent developments (list with dates), "
        "data points (each with source URL)."
    ),
    agent=researcher,  # Suggested agent — manager can override
)

analysis_task = Task(
    description=(
        "Analyze the research brief on {topic}. "
        "Identify the top 3 opportunities and top 2 risks. "
        "Flag any data that seems unreliable."
    ),
    expected_output=(
        "Analysis report with: opportunities (3, each with rationale), "
        "risks (2, each with mitigation suggestion), "
        "data quality notes."
    ),
    agent=analyst,
    context=[research_task],  # Receives researcher's output
)

writing_task = Task(
    description=(
        "Write a 500-word executive brief on {topic} "
        "based on the research and analysis. "
        "Audience: non-technical decision makers."
    ),
    expected_output=(
        "Executive brief with: headline, 3-paragraph summary, "
        "opportunities section, risks section, recommended next steps."
    ),
    agent=writer,
    context=[research_task, analysis_task],
)
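
The {topic} placeholders above are filled from the inputs dict you pass to kickoff(). The substitution behaves roughly like Python's str.format over the description and expected_output strings, so every placeholder needs a matching key (this sketch shows the idea, not CrewAI's exact interpolation code):

```python
# Roughly how kickoff(inputs=...) fills task placeholders (illustrative).
description = (
    "Research the current state of {topic}. "
    "Find: key players, recent developments in the last 6 months."
)

inputs = {"topic": "open-source LLM inference frameworks 2026"}
rendered = description.format(**inputs)  # a missing key raises KeyError
print(rendered)
```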

Step 4: Assemble the Hierarchical Crew

import os
from crewai import LLM

# LiteLLM reads OPENAI_API_KEY from the environment
assert "OPENAI_API_KEY" in os.environ, "Set OPENAI_API_KEY first"

# The manager LLM — use a capable model here.
# GPT-4o and Claude 3.5 Sonnet both work well as managers.
# CrewAI 0.80+ routes model calls through LiteLLM, so use crewai.LLM
# rather than a LangChain wrapper.
manager_llm = LLM(
    model="gpt-4o",
    temperature=0.1,  # Low temp = more consistent delegation decisions
)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,  # This is the key change
    manager_llm=manager_llm,       # Required for hierarchical
    verbose=True,
    memory=True,                   # Lets manager remember context across delegations
)

# Run it
result = crew.kickoff(inputs={"topic": "open-source LLM inference frameworks 2026"})
print(result)

Step 5: Use a Custom Manager Agent (Optional)

Instead of the auto-generated manager, you can define your own. This gives you control over the manager's persona and tools.

# Custom manager — CrewAI equips it with delegation tools for each worker
manager_agent = Agent(
    role="Project Manager",
    goal=(
        "Coordinate research, analysis, and writing agents to produce "
        "a complete, accurate executive brief. Verify each deliverable "
        "meets the expected output before moving on."
    ),
    backstory=(
        "You've managed technical research projects for 10 years. "
        "You know when to push for more detail and when to move forward."
    ),
    llm=manager_llm,
    allow_delegation=True,  # Manager MUST have delegation enabled
    verbose=True,
)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,
    manager_agent=manager_agent,  # Use manager_agent instead of manager_llm
    verbose=True,
    memory=True,
)

Use manager_llm for quick setup. Use manager_agent when you need the manager to follow specific instructions or have a defined persona.


Verification

# Test with a simple topic first to verify delegation is happening
test_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.hierarchical,
    manager_llm=manager_llm,
    verbose=True,
)

result = test_crew.kickoff(inputs={"topic": "Python uv package manager"})

# In the verbose output, look for the manager invoking its
# delegation tool — e.g. "Delegate work to coworker" calls
# targeting "Senior Research Analyst", then "Technical Writer"
print(result.raw)

You should see delegation lines in the verbose output. If you see agents running sequentially without manager logs, check that process=Process.hierarchical is set and manager_llm or manager_agent is provided.

If it fails:

  • ValueError: manager_llm is required → Add manager_llm= to your Crew
  • Infinite loop / max iterations hit → Check workers have allow_delegation=False
  • Manager skips agents → Make agent role and goal more descriptive so the manager knows when to use them

Hierarchical vs Sequential: When to Use Each

Scenario                                    Use
Fixed pipeline, always same order           sequential
Tasks depend on previous task's content     hierarchical
Some agents only needed conditionally       hierarchical
Debugging — you need predictable flow       sequential
Multi-domain tasks needing coordination     hierarchical
Simple 2-agent handoff                      sequential

Hierarchical adds latency because the manager makes extra LLM calls. For straightforward pipelines, sequential is faster and cheaper. Use hierarchical when you need the routing intelligence.


Production Considerations

Token cost: Each manager decision is an LLM call. A 3-worker crew with 3 tasks generates roughly 6–9 manager calls on top of worker calls. Budget accordingly.
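
For budgeting, you can treat manager overhead as a few extra LLM calls per task (delegation plus review). The per-task multiplier below is an assumption to tune against your own verbose logs, not a CrewAI constant:

```python
# Back-of-envelope manager-call estimate.
# Assumption: 2-3 manager LLM calls per task (delegate + review, sometimes retry).

def manager_call_range(num_tasks: int, calls_per_task=(2, 3)) -> tuple[int, int]:
    """Return the (low, high) estimate of manager LLM calls for a crew run."""
    lo, hi = calls_per_task
    return num_tasks * lo, num_tasks * hi

low, high = manager_call_range(3)
print(f"Expect roughly {low}-{high} manager LLM calls for 3 tasks")
# → Expect roughly 6-9 manager LLM calls for 3 tasks
```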

Manager model choice: The manager needs strong instruction-following and tool-use capability. GPT-4o mini works for simple crews. For complex delegation, use GPT-4o or Claude 3.5 Sonnet. A weak manager produces circular or incomplete delegation.

Memory: Set memory=True in production. Without it, the manager re-reads the full task list on every delegation decision; with it, context carries across delegations, which can cut repeated-context token cost substantially (note that memory itself adds embedding calls).

Max iterations: CrewAI caps how many reasoning steps each agent can take per task via the max_iter parameter on Agent. If the manager gets stuck, it hits the limit and returns a partial result. Raise max_iter on the manager (and workers) for complex tasks; max_rpm on the Crew only throttles request rate and does not raise the iteration cap.


What You Learned

  • Process.hierarchical + manager_llm is the minimum config to enable manager-worker routing
  • Set allow_delegation=False on all worker agents to prevent delegation loops
  • Clear role and goal descriptions on workers are what the manager uses to route tasks
  • Custom manager agents give you persona and instruction control; auto-generated managers are faster to set up
  • Hierarchical adds LLM call overhead — use it when routing logic justifies the cost

Tested on CrewAI 0.80.4, Python 3.12, OpenAI GPT-4o as manager LLM