CrewAI 1.10.1 New Features: What Changed in 2026

CrewAI 1.10.1 ships Gemini GenAI upgrades, A2A Jupyter support, MCP tool fixes, and security patches. Here's what changed and how to upgrade.

What's New in CrewAI 1.10.1

CrewAI 1.10.1 dropped on March 4, 2026. It's a focused release — one headline feature (Gemini GenAI upgrade), several A2A and MCP bug fixes, and a round of security patches. No breaking API changes.

If you're running 1.10.0, upgrade immediately. That version was yanked from PyPI for misbehaving on CrewAI AMP. 1.10.1 is the stable replacement.

You'll learn:

  • What the Gemini GenAI upgrade actually changes
  • How A2A now works in Jupyter and non-main threads
  • Which MCP tool bugs were fixed
  • How to upgrade safely from 1.9.x or earlier

Time: 10 min | Difficulty: Intermediate


Why 1.10.0 Was Yanked

Before diving into features, context matters.

1.10.0 was yanked from PyPI shortly after release because it caused misbehavior when running on CrewAI AMP (the managed platform). The fix landed in 1.10.1 the same day. If you have crewai==1.10.0 pinned anywhere, replace it with 1.10.1 now.

pip show crewai | grep Version
# If you see 1.10.0, upgrade:
pip install --upgrade crewai

Gemini GenAI Upgrade

This is the headline feature. CrewAI upgraded its Gemini provider to the latest google-genai SDK, which enables two things that were broken or missing before:

Parallel function response grouping. When a Gemini model called multiple tools in one turn, each tool response was sent back as a separate Content object, which caused parsing failures in some multi-tool workflows. Responses are now grouped into a single Content object, the shape the Gemini API expects.
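The grouping change can be pictured with plain dictionaries (a schematic illustration, not the real google-genai types):

```python
def group_function_responses(responses):
    """Merge per-tool response parts into one content object, instead of
    emitting one content object per tool response."""
    return {
        "role": "tool",
        "parts": [
            {"function_response": {"name": r["name"], "response": r["result"]}}
            for r in responses
        ],
    }

# Two tools called in the same model turn:
grouped = group_function_responses([
    {"name": "search_tool", "result": {"hits": 3}},
    {"name": "price_tool", "result": {"usd": 42.0}},
])
print(len(grouped["parts"]))  # 2 -- both responses live in one content object
```

The old behavior amounted to one top-level object per response; the fix collapses them into a single object with multiple parts.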

Thinking model thought output. Gemini 2.5 Pro and other "thinking" models emit intermediate reasoning tokens. CrewAI now surfaces these in the agent output instead of silently discarding them. You'll see thought blocks appear in verbose mode.

from crewai import Agent, LLM

agent = Agent(
    role="Research Analyst",
    goal="Analyze market trends",
    backstory="Expert in financial data",
    llm=LLM(
        model="gemini/gemini-2.5-pro",
        temperature=0.2
    ),
    verbose=True  # thought output now visible here
)

Expected output in verbose mode:

[Thought]: I need to first identify the key data points...
[Action]: search_tool
[Result]: ...

A2A: Jupyter and Non-Main Thread Fixes

CrewAI's Agent-to-Agent (A2A) protocol — which lets agents delegate tasks to remote agents over HTTP — had two runtime environment bugs fixed in this release.

Jupyter Support

A2A previously crashed in Jupyter notebooks with:

RuntimeError: This event loop is already running.

The fix detects a running event loop and handles async A2A calls correctly. You can now use A2AClientConfig from a notebook cell without patching the event loop yourself.

# Now works in Jupyter without nest_asyncio workarounds
from crewai import Agent, Crew, Task
from crewai.a2a import A2AClientConfig

agent = Agent(
    role="Research Coordinator",
    goal="Delegate research tasks",
    backstory="Expert at coordinating specialized agents",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://your-agent.example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10
    )
)

Non-Main Thread Telemetry

Signal handlers can only be registered from the main thread in Python. When CrewAI ran inside a thread (common in FastAPI background tasks or Celery workers), telemetry setup raised:

ValueError: signal only works in main thread of the main interpreter

The fix skips signal handler registration when not in the main thread. No code changes needed on your end.
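The underlying pattern is the standard main-thread guard; a minimal sketch of the idea (not CrewAI's actual code):

```python
import signal
import threading

def install_handlers():
    """Register a SIGINT handler, but only on the main thread --
    signal.signal() raises ValueError anywhere else."""
    if threading.current_thread() is not threading.main_thread():
        return False  # worker thread: skip registration instead of crashing
    signal.signal(signal.SIGINT, signal.default_int_handler)
    return True

# Simulate a FastAPI/Celery-style worker thread:
results = []
t = threading.Thread(target=lambda: results.append(install_handlers()))
t.start()
t.join()
print(results)  # [False] -- registration skipped, no ValueError raised
```

Before the fix, the `signal.signal()` call ran unconditionally, which is what produced the ValueError in threaded contexts.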


MCP Tool Fixes

Two MCP-related bugs were resolved:

Complex schema $ref pointers. MCP tools that used nested $ref references in their JSON schemas failed to parse correctly. This affected any MCP server that defines shared schema components. The fix resolves $ref pointers before building the tool definition.
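A resolver for local $ref pointers looks roughly like this (an illustrative sketch, not CrewAI's implementation; it handles only local, acyclic references):

```python
def resolve_refs(schema, root=None):
    """Recursively inline local '$ref' pointers (e.g. '#/$defs/Item')
    so the tool definition contains no unresolved references.
    No cycle detection -- circular refs would recurse forever."""
    root = root if root is not None else schema
    if isinstance(schema, dict):
        if "$ref" in schema:
            target = root
            for key in schema["$ref"].lstrip("#/").split("/"):
                target = target[key]
            return resolve_refs(target, root)
        return {k: resolve_refs(v, root) for k, v in schema.items()}
    if isinstance(schema, list):
        return [resolve_refs(item, root) for item in schema]
    return schema

# A tool schema with a shared component, as an MCP server might define it:
schema = {
    "$defs": {"Item": {"type": "object",
                       "properties": {"id": {"type": "string"}}}},
    "type": "object",
    "properties": {"items": {"type": "array",
                             "items": {"$ref": "#/$defs/Item"}}},
}
resolved = resolve_refs(schema)
print(resolved["properties"]["items"]["items"]["type"])  # "object"
```

After resolution, the nested `$ref` is replaced by the full `Item` definition, which is the form a tool-definition builder can consume directly.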

Load MCP tools when agent tools are None. If you created an agent without explicit tools and then added MCP server tools at the crew level, those tools weren't being injected. They now load correctly regardless of the agent's initial tool state:

from crewai import Agent, Crew, Task
from crewai_tools import MCPServerAdapter  # supplies the MCP tools (server wiring omitted in this snippet)

# This pattern now works: agent starts with tools=None
agent = Agent(
    role="Data Retriever",
    goal="Fetch data using MCP tools",
    backstory="Specialist in structured data retrieval",
    llm="gpt-4o"
    # No tools= here
)

task = Task(
    description="Retrieve the Q1 sales figures",
    expected_output="Sales data as JSON",
    agent=agent
)

# MCP tools inject correctly at crew level
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)

Tool Error Handling

Three fixes improved how tools and their errors flow through the agent loop:

Double event scope pop. A tool error could cause the internal event bus to pop its scope twice, corrupting telemetry and sometimes causing downstream errors. Fixed.

Tool error injection as observations. Previously, a tool error would sometimes halt the agent entirely. Now errors are injected back as observations, letting the agent decide how to recover — retry, use a different tool, or report the failure gracefully.
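The recovery pattern can be sketched generically (illustrative only, not CrewAI's internals; `flaky_search` is a made-up tool):

```python
def run_tool(tool, *args):
    """Execute a tool; on failure, return the error as an observation
    string instead of raising, so the agent loop keeps going."""
    try:
        return f"Observation: {tool(*args)}"
    except Exception as exc:
        return f"Observation: tool '{tool.__name__}' failed: {exc}"

def flaky_search(query):
    raise TimeoutError("upstream API timed out")

obs = run_tool(flaky_search, "Q1 sales")
print(obs)
# Observation: tool 'flaky_search' failed: upstream API timed out
```

The key point is that the exception becomes model-visible text: the agent reads the failure as just another observation and can choose its next step.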

Name collision resolution. If two tools had the same name (common when mixing MCP and platform tools), one would silently overwrite the other. CrewAI now detects collisions and raises a clear error at startup.
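Collision detection at startup amounts to a registry check; a sketch of the idea (not CrewAI's code):

```python
def build_tool_registry(tools):
    """Index tools by name, raising at startup when two tools share a
    name instead of letting one silently overwrite the other."""
    registry = {}
    for tool in tools:
        name = tool["name"]
        if name in registry:
            raise ValueError(
                f"Duplicate tool name '{name}': rename one of the tools "
                "before starting the crew."
            )
        registry[name] = tool
    return registry

# Mixing tool sources can produce two tools named "search":
tools = [{"name": "search"}, {"name": "fetch"}, {"name": "search"}]
try:
    build_tool_registry(tools)
except ValueError as e:
    error = str(e)
print(error)
```

Failing fast at startup beats the old behavior, where the overwritten tool only surfaced as confusing misbehavior at run time.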


Security: pypdf 6.7.4

pypdf was upgraded from 4.x to 6.7.4, resolving several Dependabot security alerts in the dependency chain. If you use CrewAI's file processing tools (PDFs especially), this is a meaningful patch.

# Verify after upgrade
pip show pypdf | grep Version
# Should show: Version: 6.7.4

How to Upgrade

# Upgrade to 1.10.1
pip install "crewai==1.10.1"

# Or with extras (A2A, tools, Google GenAI)
pip install "crewai[tools,google-genai,a2a]==1.10.1"

# Verify
python -c "import crewai; print(crewai.__version__)"
# Expected: 1.10.1

If you're on uv:

uv add "crewai==1.10.1"

If you're upgrading from 1.9.x or earlier: check the CrewAI migration docs for any breaking changes introduced in 1.x. The 1.0 release restructured A2AConfig — it's now split into A2AClientConfig and A2AServerConfig. The old class still works but logs a deprecation warning.
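The backward-compatible behavior described above follows the standard deprecation-shim pattern; a generic model of it (illustrative stand-ins only; the Sketch suffix marks these as not the real CrewAI classes):

```python
import warnings

class A2AClientConfigSketch:
    """Stand-in for the new client config (illustrative only)."""
    def __init__(self, endpoint, timeout=120, max_turns=10):
        self.endpoint = endpoint
        self.timeout = timeout
        self.max_turns = max_turns

class A2AConfigSketch(A2AClientConfigSketch):
    """Stand-in for the deprecated class: still builds a working config,
    but warns callers to migrate."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "A2AConfig is deprecated; use A2AClientConfig or "
            "A2AServerConfig instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cfg = A2AConfigSketch("https://agent.example.com")
print(caught[0].category.__name__)  # DeprecationWarning
```

In other words, old code keeps running on the new classes; the warning is your cue to switch the import before the alias is removed.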


Verification

# Quick smoke test after upgrade
python -c "
from crewai import Agent, Crew, Task, LLM
print('Import OK')

# Verify Gemini provider loads
llm = LLM(model='gemini/gemini-2.0-flash')
print('Gemini LLM init OK')
"

You should see:

Import OK
Gemini LLM init OK

What You Learned

  • 1.10.0 was yanked — 1.10.1 is the safe version to run
  • The Gemini upgrade fixes parallel tool calls and surfaces thinking model output
  • A2A now works in Jupyter and FastAPI/Celery thread contexts without workarounds
  • MCP tools load correctly even when agent.tools starts as None
  • Tool errors are now injected as observations instead of halting the agent

When to skip this update: If you're not using Gemini models and have no A2A or MCP integrations, this release has no user-facing changes for you. Still worth upgrading for the pypdf security fix.

Tested on CrewAI 1.10.1, Python 3.12, macOS and Ubuntu 24.04