Set Up Intel RealSense SDK with Python 3.14 in 12 Minutes

Get Intel RealSense depth cameras working with Python 3.14 using pip or build from source - includes macOS ARM64 support.

Problem: Python 3.14 Not Officially Supported Yet

You upgraded to Python 3.14 and pip install pyrealsense2 fails with "No matching distribution found" - the SDK officially supports only up to Python 3.13, so PyPI has no wheel for your interpreter.

You'll learn:

  • How to install pyrealsense2 on Python 3.14 (macOS ARM64 ready)
  • When to use pip vs building from source
  • How to verify your RealSense camera works

Time: 12 min | Level: Intermediate


Why This Happens

The official RealSense SDK supports Python 3.9-3.13 as of SDK v2.56.5. Python 3.14 wheels exist for macOS ARM64 via the community pyrealsense2-macosx package, but the main pyrealsense2 package on PyPI has no 3.14 wheels for Windows or Linux yet.

Common symptoms:

  • ERROR: No matching distribution found for pyrealsense2
  • Camera works in RealSense Viewer but not in Python scripts
  • Import succeeds but crashes on pipeline initialization
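
To see which wheel tags your interpreter actually accepts (and whether cp314 is among them), pip's debug subcommand can help. Note that pip marks this command as experimental, so its output format may change between releases:

```shell
# Show which wheel tags this interpreter accepts; a package with no
# matching tag triggers the "no matching distribution" error
python3 -m pip debug --verbose | grep "cp31" | head -5
```

On Python 3.14 you should see cp314 entries; if pyrealsense2 publishes no wheel with one of those tags for your platform, pip has nothing to install.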

Solution

Step 1: Check Your Platform

# Check Python version and platform
python3 --version
python3 -c "import platform; print(platform.machine())"

Expected: Shows Python 3.14.x and your architecture (arm64, x86_64, etc.)
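
If you'd rather script the decision than eyeball it, a small helper like the following sketch (the function name is illustrative, not part of the SDK) picks an install route from the interpreter version and platform:

```python
import platform
import sys

def suggest_install_route():
    """Pick an install route for pyrealsense2 (illustrative helper, not part of the SDK)."""
    if sys.version_info < (3, 14):
        # Official wheels cover Python 3.9-3.13
        return "pip install pyrealsense2"
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        # Community package ships cp314 wheels for Apple Silicon
        return "pip install pyrealsense2-macosx"
    # No Python 3.14 wheels for this platform yet
    return "build librealsense from source with -DBUILD_PYTHON_BINDINGS=true"

print(suggest_install_route())
```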


Step 2: Install Based on Platform

macOS (pip)

# Use the community-maintained package with Python 3.14 support
pip install pyrealsense2-macosx

Why this works: The pyrealsense2-macosx package has Python 3.14 wheels (cp314) available as of February 2026.


Windows/Linux (Build Required)

For non-macOS platforms, you'll need to build from source:

# Install build dependencies
sudo apt-get update
sudo apt-get install git cmake build-essential python3-dev

# Clone the SDK
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense

# Build with Python bindings
mkdir build && cd build
cmake .. -DBUILD_PYTHON_BINDINGS:bool=true \
         -DPYTHON_EXECUTABLE=$(which python3)
         
make -j$(nproc)
sudo make install

Expected: Build completes in 8-12 minutes depending on your CPU.

If it fails:

  • Error: "CMake 3.10 or higher required": Run pip install --upgrade cmake
  • Error: "Python.h not found": Install python3.14-dev package
  • Missing USB permissions: Add yourself to dialout group: sudo usermod -a -G dialout $USER (requires logout)

Step 3: Verify Installation

# test_realsense.py
import pyrealsense2 as rs

# Create a pipeline
pipeline = rs.pipeline()

# Start streaming with default config
# (raises RuntimeError if no camera is connected)
profile = pipeline.start()

try:
    # Wait for 5 frames to stabilize
    for i in range(5):
        frames = pipeline.wait_for_frames()
    
    # Get a single depth frame
    depth_frame = frames.get_depth_frame()
    
    if depth_frame:
        # Get dimensions
        width = depth_frame.get_width()
        height = depth_frame.get_height()
        
        # Query distance at center pixel
        center_distance = depth_frame.get_distance(width // 2, height // 2)
        
        print(f"✓ Camera initialized: {width}x{height}")
        print(f"✓ Center distance: {center_distance:.2f} meters")
    else:
        print("✗ No depth frame received")
        
finally:
    pipeline.stop()

Run it:

python3 test_realsense.py

You should see:

✓ Camera initialized: 640x480
✓ Center distance: 1.23 meters

Step 4: Common First Use Cases

Capture and Save Depth Image

import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()

# Configure streams
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    
    if not depth_frame or not color_frame:
        raise RuntimeError("Did not receive both depth and color frames")
    
    # Convert to numpy arrays
    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())
    
    # Save with OpenCV (PNG preserves the 16-bit depth values)
    import cv2
    cv2.imwrite('depth.png', depth_image)
    cv2.imwrite('color.png', color_image)
    
    print("✓ Saved frames: depth.png, color.png")
    
finally:
    pipeline.stop()

Why this approach: Uses numpy arrays for easy integration with OpenCV, PIL, or ML frameworks.


Align Depth to Color Stream

import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Create alignment primitive to align depth to color
align_to = rs.stream.color
align = rs.align(align_to)

pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    
    # Align frames
    aligned_frames = align.process(frames)
    aligned_depth = aligned_frames.get_depth_frame()
    color = aligned_frames.get_color_frame()
    
    if not aligned_depth or not color:
        raise RuntimeError("Did not receive aligned frames")
    
    # Now depth and color have same dimensions and are aligned
    depth_array = np.asanyarray(aligned_depth.get_data())
    color_array = np.asanyarray(color.get_data())
    
    print(f"✓ Aligned depth shape: {depth_array.shape}")
    print(f"✓ Color shape: {color_array.shape}")
    
finally:
    pipeline.stop()

Use case: Required for pixel-perfect depth overlay on RGB images for segmentation or object detection.
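
Keep in mind that the raw z16 values are in device-specific units, not meters. For whole-array work it's faster to multiply by the sensor's depth scale once than to call get_distance() per pixel. The scale comes from the device at runtime; the 0.001 below (1 unit = 1 mm, typical for D4xx cameras) is an assumption so the sketch runs without hardware:

```python
import numpy as np

# On a live device the scale comes from the sensor:
#   depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
# D4xx cameras typically report 0.001 (1 unit = 1 mm); assumed here.
depth_scale = 0.001

# Stand-in for np.asanyarray(aligned_depth.get_data())
depth_raw = np.array([[0, 500], [1230, 4000]], dtype=np.uint16)

# Vectorized conversion to meters; raw 0 means "no depth" at that pixel
depth_meters = depth_raw.astype(np.float32) * depth_scale
valid = depth_raw > 0

print(f"{valid.sum()} of {depth_raw.size} pixels have depth data")
```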


Verification

Test camera enumeration:

import pyrealsense2 as rs

ctx = rs.context()
devices = ctx.query_devices()

print(f"Found {len(devices)} RealSense device(s):")
for dev in devices:
    print(f"  - {dev.get_info(rs.camera_info.name)}")
    print(f"    Serial: {dev.get_info(rs.camera_info.serial_number)}")
    print(f"    Firmware: {dev.get_info(rs.camera_info.firmware_version)}")

You should see: Your camera model (D435, D455, etc.) with serial number and firmware version.


What You Learned

  • Python 3.14 support exists for macOS ARM64 via pyrealsense2-macosx
  • Windows/Linux requires building from source until official wheels arrive
  • Use rs.align() to match depth and color frame dimensions
  • The SDK provides numpy-compatible arrays for easy CV integration

Limitations:

  • Official Python 3.14 wheels for Windows/Linux not available yet (as of Feb 2026)
  • Building from source takes 10+ minutes and requires 2GB+ free space
  • Some older RealSense models (SR300, R200) have limited Python support

Troubleshooting

"No device connected"

# Linux: Check USB permissions
lsusb | grep Intel
sudo chmod 666 /dev/bus/usb/XXX/YYY  # Replace XXX/YYY with your device

# Or install udev rules permanently
# (the rules file ships in the librealsense repo's config/ directory)
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && sudo udevadm trigger

"Unable to find module"

# Verify installation path
python3 -c "import pyrealsense2; print(pyrealsense2.__file__)"

# If empty, check PYTHONPATH
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.14/site-packages

Firmware Updates

# Check firmware version
rs-fw-update -l

# Update if needed (requires RealSense SDK tools)
rs-fw-update -f firmware.bin

Download latest firmware from: https://dev.intelrealsense.com/docs/firmware-releases


SDK Version Compatibility

Python Version | pyrealsense2 | pyrealsense2-macosx | SDK Version
---------------|--------------|---------------------|------------
3.14           | ❌ (build)   | ✅ (pip)            | 2.56.5
3.13           | ✅           | ✅                  | 2.56.5
3.12           | ✅           | ✅                  | 2.55.2+
3.11           | ✅           | ✅                  | 2.53.0+
3.10           | ✅           | ✅                  | 2.50.0+

Note: SDK v2.56.5 officially supports Python 3.9-3.13. Python 3.14 support is available via community packages or building from source.
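
For scripts that should fail fast on unsupported interpreters, the table can be encoded as a lookup. The values below are transcribed from the table above; treat them as a snapshot, not an authoritative matrix:

```python
import sys

# Minimum SDK version per CPython minor release, per the table above
MIN_SDK = {
    (3, 10): "2.50.0",
    (3, 11): "2.53.0",
    (3, 12): "2.55.2",
    (3, 13): "2.56.5",
    (3, 14): "2.56.5",  # community wheels or a source build only
}

def required_sdk(major, minor):
    """Return the minimum SDK version for a Python release, or raise if unknown."""
    try:
        return MIN_SDK[(major, minor)]
    except KeyError:
        raise RuntimeError(f"No known pyrealsense2 support for Python {major}.{minor}")

print(required_sdk(3, 14))  # 2.56.5
```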


Performance Tips

1. Use hardware-accelerated processing:

# Enable post-processing filters
# (recommended order: decimation -> spatial -> temporal)
decimation = rs.decimation_filter()  # downsamples; default factor 2 halves resolution
spatial = rs.spatial_filter()        # edge-preserving smoothing
temporal = rs.temporal_filter()      # reduces flicker across frames

depth_frame = temporal.process(
    spatial.process(
        decimation.process(depth_frame)
    )
)

2. Control framerate for CPU efficiency:

# 30 FPS is default, lower for battery savings
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 15)

3. Disable auto-exposure in controlled lighting:

device = pipeline.get_active_profile().get_device()
depth_sensor = device.first_depth_sensor()

# Disable auto-exposure first, then set a fixed exposure (microseconds)
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 8500)

Tested on Python 3.14.0, SDK 2.56.5, macOS 15 ARM64, Ubuntu 24.04, RealSense D435/D455