Win RoboCup 2026: The Software Playbook

Practical software strategies for RoboCup 2026 competitors — vision, motion planning, and real-time decision-making that win matches.

Problem: Your Robot Is Smart but Still Losing

You've got working code. Your robot navigates, sees the ball, and shoots — but your team keeps getting knocked out in the group stage. The gap between "functional" and "competition-winning" is almost always software architecture.

You'll learn:

  • How to structure a ROS2-based perception-action pipeline that survives match chaos
  • Motion planning strategies that beat rule-based bots
  • Real-time decision loops that scale across all major RoboCup leagues

Time: 30 min | Level: Advanced


Why This Happens

Most teams nail individual components in isolation. The problem is integration under pressure: noisy cameras, partial occlusion, latency spikes, and opponents that adapt. Your architecture needs to handle all of this at once, every 33 ms (one frame at 30 fps).
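That 33 ms budget is worth instrumenting from day one. A minimal sketch (plain Python; the decorator and function names are illustrative, not part of any framework) that flags cycle overruns:

```python
import time

CYCLE_BUDGET_S = 1.0 / 30.0  # ~33 ms per cycle at 30 fps

def timed_cycle(fn):
    """Decorator sketch: warn whenever one perception -> plan -> act
    cycle exceeds the frame budget."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if elapsed > CYCLE_BUDGET_S:
            print(f"cycle overran budget: {elapsed * 1000:.1f} ms")
        return result
    return wrapper

@timed_cycle
def tick():
    # Stand-in for one full control cycle
    return "ok"
```

Wrap your top-level tick with something like this during development so latency regressions show up in logs, not in matches.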

Common symptoms:

  • Robot freezes when ball leaves camera frame
  • Motion planning recomputes every tick, causing oscillation
  • Decision logic works in practice but collapses in dynamic matches

Solution

Step 1: Lock In Your Perception Pipeline

The most common failure point. Don't run object detection on raw frames — preprocess aggressively.

# perception/ball_tracker.py

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped
import cv2, numpy as np

class BallTracker(Node):
    def __init__(self):
        super().__init__('ball_tracker')
        self.sub = self.create_subscription(Image, '/camera/raw', self.callback, 10)
        self.pub = self.create_publisher(PointStamped, '/ball/position', 10)
        
        # Kalman filter: predicts ball position when occluded
        self.kf = cv2.KalmanFilter(4, 2)
        self.kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        self.kf.transitionMatrix = np.array(
            [[1,0,1,0],[0,1,0,1],[0,0,1,0],[0,0,0,1]], dtype=np.float32
        )
        self.kf.processNoiseCov = np.eye(4, dtype=np.float32) * 0.03
        self.last_seen = None

    def callback(self, msg):
        frame = self.decode(msg)            # ROS Image -> OpenCV frame (helper)
        detected = self.detect_ball(frame)  # (x, y) pixel coords, or None (helper)

        if detected is not None:
            # OpenCV expects the measurement as a float32 column vector
            measurement = np.array([[detected[0]], [detected[1]]], dtype=np.float32)
            self.kf.correct(measurement)
            self.last_seen = self.get_clock().now()

        # Always predict — even when ball is out of view
        predicted = self.kf.predict()
        self.publish_position(predicted[:2].flatten())  # (x, y) estimate (helper)

Expected: /ball/position publishes at camera framerate even when the ball is temporarily hidden.

If it fails:

  • Jitter on detection: Increase processNoiseCov to smooth predictions
  • Lag under occlusion: Confirm Kalman is initialized with at least 5 real detections before relying on predictions
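The warm-up rule in the second bullet can be sketched as a small guard around the filter (illustrative; `WarmupGate` and its names are not part of OpenCV — `kf` is any object exposing `correct()` and `predict()`, such as the `cv2.KalmanFilter` from the node above):

```python
import numpy as np

MIN_DETECTIONS = 5  # real detections required before trusting predictions

class WarmupGate:
    """Sketch: only report Kalman predictions after the filter has been
    corrected with enough real detections; until then, pass through raw
    detections (or None) instead."""
    def __init__(self, kf, min_detections=MIN_DETECTIONS):
        self.kf = kf
        self.min_detections = min_detections
        self.corrections = 0

    def update(self, detected):
        """detected: (x, y) pixel coords, or None when the ball is hidden.
        Returns an (x, y) estimate, or None while still warming up."""
        if detected is not None:
            self.kf.correct(np.array([[detected[0]], [detected[1]]],
                                     dtype=np.float32))
            self.corrections += 1
        predicted = self.kf.predict()
        if self.corrections < self.min_detections:
            # Not enough evidence yet: fall back to raw detections only
            return detected
        return (float(predicted[0, 0]), float(predicted[1, 0]))
```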

Step 2: Build a Behavior Tree, Not a State Machine

State machines break when unexpected states stack. Behavior Trees (BTs) degrade gracefully and are composable — critical for multi-robot coordination.

Use py_trees or BehaviorTree.CPP (C++ recommended for latency-sensitive loops).

// behaviors/attack_sequence.cpp
// BehaviorTree.CPP v4
// ApproachBall derives from BT::ActionNodeBase so tick() may return RUNNING;
// robot_pose_, nav_goal_, and nav_client_ are members initialized elsewhere.

#include "behaviortree_cpp/bt_factory.h"

BT::NodeStatus ApproachBall::tick() {
    auto ball_pos = getInput<geometry_msgs::msg::Point>("ball_position");
    if (!ball_pos) return BT::NodeStatus::FAILURE;

    double dist = euclidean(robot_pose_, ball_pos.value());

    if (dist < 0.3) return BT::NodeStatus::SUCCESS;  // Close enough to kick

    // Move toward ball — motion planner handles obstacle avoidance
    nav_goal_.target_pose = to_pose(ball_pos.value());
    nav_client_->sendGoal(nav_goal_);
    return BT::NodeStatus::RUNNING;
}

Structure your tree like this:

Root (Sequence)
├── Selector: "Have possession?"
│   ├── IsBallControlled
│   └── Sequence: "Get ball"
│       ├── ApproachBall
│       └── InterceptBall
└── Selector: "Attack"
    ├── Shoot (if angle > 15°)
    └── Dribble (fallback)

Why this works: Each node returns SUCCESS, FAILURE, or RUNNING. The tree ticks every cycle. No transition logic to maintain.
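The tick contract is easy to see in a minimal pure-Python sketch (not a real BT library — py_trees and BehaviorTree.CPP implement the same semantics with far more machinery; the leaf stubs here are placeholders):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children in order; bails on the first non-SUCCESS child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Ticks children in order; bails on the first non-FAILURE child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Leaf:
    """Wraps a zero-argument callable returning a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Mirrors the top of the tree above: try possession, else go get the ball.
have_ball = lambda: Status.FAILURE   # IsBallControlled (stub)
approach  = lambda: Status.RUNNING   # ApproachBall (stub)
root = Sequence(
    Selector(
        Leaf(have_ball),             # "Have possession?"
        Sequence(Leaf(approach)),    # "Get ball"
    ),
    # the "Attack" selector would hang here as a second child
)
```

Ticking `root` while the robot is still approaching yields `RUNNING` all the way up, with no explicit transition table anywhere.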


Step 3: Motion Planning with DWA + Obstacle Prediction

Nav2's default DWB controller (the DWA successor) works but doesn't anticipate moving obstacles. Add a velocity obstacle layer.

# planner/velocity_obstacle.py

import numpy as np

def compute_vo(robot_pos, robot_vel, obstacle_pos, obstacle_vel, time_horizon=2.0):
    """
    Velocity-obstacle cone test: returns True if the current relative
    velocity leads to a collision with a moving obstacle within
    time_horizon seconds. ROBOT_RADIUS, OBSTACLE_RADIUS, and
    angle_between() are defined elsewhere in the module.
    """
    rel_pos = obstacle_pos - robot_pos
    rel_vel = robot_vel - obstacle_vel

    # Cone apex at the robot; half-angle set by combined radii / distance
    combined_radius = ROBOT_RADIUS + OBSTACLE_RADIUS
    dist = np.linalg.norm(rel_pos)

    if dist < combined_radius:
        return True  # Already overlapping — stop

    # Obstacle unreachable within the horizon at the current closing speed
    if dist - combined_radius > np.linalg.norm(rel_vel) * time_horizon:
        return False

    half_angle = np.arcsin(combined_radius / dist)
    cone_direction = rel_pos / dist

    # Relative velocity inside the cone means a collision is predicted
    angle_to_vel = angle_between(rel_vel, cone_direction)
    return angle_to_vel < half_angle

Integrate this as a custom Nav2 controller plugin to get real-time opponent avoidance without replanning from scratch each cycle.
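One way such a plugin could use the cone test is to filter sampled candidate velocities, DWA-style. A self-contained sketch (the cone test is inlined; the radii, sample counts, and `best_velocity` itself are illustrative assumptions, not Nav2 API):

```python
import numpy as np

ROBOT_RADIUS = 0.18      # illustrative radii in metres
OBSTACLE_RADIUS = 0.18

def collides(robot_pos, cand_vel, obs_pos, obs_vel, horizon=2.0):
    """Inlined velocity-obstacle cone test for one candidate velocity."""
    rel_pos = obs_pos - robot_pos
    rel_vel = cand_vel - obs_vel
    r = ROBOT_RADIUS + OBSTACLE_RADIUS
    dist = np.linalg.norm(rel_pos)
    if dist < r:
        return True
    if dist - r > np.linalg.norm(rel_vel) * horizon:
        return False  # too far to reach within the horizon
    speed = np.linalg.norm(rel_vel)
    if speed < 1e-6:
        return False  # not closing on the obstacle at all
    half_angle = np.arcsin(r / dist)
    cone_dir = rel_pos / dist
    cos_angle = np.clip(np.dot(rel_vel / speed, cone_dir), -1.0, 1.0)
    return np.arccos(cos_angle) < half_angle

def best_velocity(robot_pos, goal, obstacles, v_max=1.5, samples=16):
    """DWA-style sampling: among admissible candidates, pick the one
    closest to the straight-line velocity toward the goal.
    obstacles: list of (position, velocity) pairs."""
    to_goal = goal - robot_pos
    preferred = v_max * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    best, best_cost = None, np.inf
    for theta in np.linspace(0, 2 * np.pi, samples, endpoint=False):
        for speed in (0.5 * v_max, v_max):
            cand = speed * np.array([np.cos(theta), np.sin(theta)])
            if any(collides(robot_pos, cand, p, v) for p, v in obstacles):
                continue  # inside a forbidden velocity region — skip
            cost = np.linalg.norm(cand - preferred)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best
```

Because only the candidate scoring changes, this slots into an existing sampling loop without a full replan each cycle.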


Step 4: Multi-Robot Coordination via Shared Blackboard

For the Standard Platform and Mid-Size leagues: robots must coordinate without collisions and without duplicating tasks.

# coordination/team_blackboard.py
# Runs on a dedicated ROS2 node, publishes to all teammates via DDS

from dataclasses import dataclass
from typing import Dict
import rclpy
from rclpy.node import Node

@dataclass
class RobotRole:
    robot_id: int
    role: str          # "attacker" | "defender" | "goalkeeper"
    target: tuple      # (x, y) world coordinates
    confidence: float  # 0.0–1.0 — lower = yield to other robot

class TeamBlackboard(Node):
    def __init__(self):
        super().__init__('team_blackboard')
        self.roles: Dict[int, RobotRole] = {}
        self.timer = self.create_timer(0.1, self.rebalance_roles)  # 10Hz

    def rebalance_roles(self):
        # Assign roles based on field position + game state
        # Defender: robot nearest own goal
        # Attacker: robot nearest ball
        # Goalkeeper: fixed assignment, overridden only on penalty
        ...

Why 10Hz: Role rebalancing doesn't need to run at perception speed. 10Hz prevents flip-flopping while staying responsive to game state changes.
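The assignment rules in those comments can be sketched without ROS (illustrative helper, not part of the node above; positions are (x, y) tuples in world coordinates, own goal at the origin, keeper assignment fixed):

```python
import math

def assign_roles(robot_positions, ball_pos, own_goal=(0.0, 0.0), keeper_id=1):
    """Distance-based role assignment sketch: goalkeeper is fixed,
    attacker is the field player nearest the ball, everyone else
    defends (nearest to our goal first)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    roles = {keeper_id: "goalkeeper"}
    field_players = {rid: p for rid, p in robot_positions.items()
                     if rid != keeper_id}

    if field_players:
        attacker = min(field_players,
                       key=lambda r: dist(field_players[r], ball_pos))
        roles[attacker] = "attacker"
        remaining = {rid: p for rid, p in field_players.items()
                     if rid != attacker}
        for rid in sorted(remaining, key=lambda r: dist(remaining[r], own_goal)):
            roles[rid] = "defender"
    return roles
```

Hysteresis (e.g. requiring a margin before stealing the attacker role) is the usual next step to stop two robots swapping roles every tick.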


Verification

# Run your full stack against the SimSpark simulator
ros2 launch your_team sim_match.launch.py opponent:=base_team rounds:=5

# Check latency budget
ros2 topic hz /behavior_tree/tick
ros2 topic delay /ball/position

You should see:

  • BT tick rate ≥ 30Hz
  • Ball position delay < 40ms
  • Zero dropped goals due to planner oscillation in replay logs
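For that last check, one cheap replay-log heuristic is to count sign flips in the commanded angular velocity (a sketch; the threshold and the flat list-of-floats input format are assumptions about your logging):

```python
def count_direction_flips(angular_velocities, threshold=0.2):
    """Count sign flips in commanded angular velocity above a noise
    threshold. Many flips per second in a replay log usually means
    the planner is oscillating rather than committing to a path."""
    flips = 0
    prev_sign = 0
    for w in angular_velocities:
        if abs(w) < threshold:
            continue  # ignore near-zero commands as noise
        sign = 1 if w > 0 else -1
        if prev_sign and sign != prev_sign:
            flips += 1
        prev_sign = sign
    return flips
```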

What You Learned

  • Kalman filtering keeps perception alive through occlusion — don't publish only on detection
  • Behavior Trees beat state machines in competition because real matches generate unexpected states
  • Velocity obstacle planning prevents replanning latency from killing match performance
  • Team blackboard at 10Hz is fast enough for role assignment without instability

Limitations:

  • Velocity obstacle math assumes circular robots — adjust for humanoid leagues
  • BT tick rate depends on ROS2 executor configuration; use MultiThreadedExecutor for parallel subtrees
  • Shared blackboard requires reliable DDS — test on match WiFi conditions, not just LAN

When NOT to use this:

  • @Home and Industrial leagues prioritize manipulation and speech — behavior priority ordering changes significantly
  • Simulation League (2D) uses a completely different socket protocol — this stack doesn't apply there

Tested on ROS2 Humble + Iron, BehaviorTree.CPP v4, Ubuntu 22.04 LTS, Nav2 1.2.x

For league-specific rule changes in 2026, always cross-check against the official RoboCup Technical Committee releases at robocup.org.