RPI 5, Ubuntu Desktop 25.10, HailoRT Driver 4.23, RTSP camera

Just documenting what I have done.

I have a RPI 5 running Ubuntu Desktop 25.10 (I probably could have used Ubuntu Server), now with a custom Frigate container running HailoRT Driver 4.23.

I had earlier compiled the HailoRT Driver 4.23.0 from https://github.com/hailo-ai/hailort-drivers (v5.1.1) and HailoRT from https://github.com/hailo-ai/hailort (v5.1.1) (hailortcli, libhailort.so.4.23.0, the Python bindings, and the GStreamer plugin) from source. In each repo I did a git switch Hailo8, as I have Hailo8 hardware. I verified the HailoRT driver and HailoRT build (as best I could) working on my RPi 5 under Ubuntu 25.10, using a few of the examples provided.
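One detail that matters for everything below: HailoRT requires the userspace library and the kernel driver to report the same version, or it refuses to run. A stand-in sketch of that check, with hard-coded example values (on a real system the strings would come from the libhailort soname and the loaded hailo_pci module; exact paths vary):

```shell
# Illustrative only: HailoRT enforces an exact version match between the
# userspace library and the kernel driver. Values here are stand-ins.
lib_ver="4.23.0"   # e.g. taken from libhailort.so.4.23.0
drv_ver="4.23.0"   # e.g. reported by the loaded hailo_pci driver
if [ "$lib_ver" = "$drv_ver" ]; then
  echo "versions match"
else
  echo "driver mismatch: lib=$lib_ver drv=$drv_ver"
fi
```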

I tried mounting these compiled 4.23.0 pieces (kernel driver, .so library, and binary) into the latest released Frigate image in a Docker container. No joy; I ran into the driver-mismatch error.
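For reference, that failed attempt looked roughly like this compose fragment. The host paths and in-container paths here are assumptions for illustration, not the exact ones I used:

```yaml
# Roughly what I tried (paths hypothetical): bind-mounting my locally
# built 4.23.0 artifacts over the ones baked into the released image.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      - ./build/libhailort.so.4.23.0:/usr/lib/libhailort.so.4.23.0:ro
      - ./build/hailortcli:/usr/local/bin/hailortcli:ro
    devices:
      - /dev/hailo0:/dev/hailo0
```

Overlaying files this way still leaves the image's own HailoRT expecting its original version, hence the mismatch error.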

I then decided to clone the Frigate source code, https://github.com/blakeblackshear/frigate (v18?), and tried to determine how the HailoRT drivers were being loaded. Reviewing the Dockerfile in ../frigate/docker/main, I found that the Hailo drivers were being downloaded by a simple shell script, install_hailort.sh, with the driver version selected by a hailo_version variable. I traced the URL the script downloads and, looking at the Frigate GitHub, noticed that HailoRT Driver 4.23.0 had been uploaded just four (4) days prior. It was a simple change to hailo_version=4.23.0 to specify the Hailo driver I wanted for my Hailo device.
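The one-line change can be scripted. The block below uses a throwaway stand-in for docker/main/install_hailort.sh so it is runnable anywhere; in the real repo you edit the existing script (whose exact contents and quoting may differ):

```shell
# Hypothetical stand-in for docker/main/install_hailort.sh, so the
# one-line edit is demonstrable here; in the real checkout you modify
# the existing script instead of creating one.
script=install_hailort.sh
printf 'hailo_version="4.22.0"\n' > "$script"

# The actual change: pin hailo_version to the release you need.
sed -i 's/^hailo_version=.*/hailo_version="4.23.0"/' "$script"
grep '^hailo_version=' "$script"
```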

A quick Grok prompt to determine custom Frigate build steps:

  1. cd frigate
  2. make local (default target)

That was it, so I started the build. Note: as I had previously compiled the HailoRT driver and HailoRT binaries following the build-from-source instructions on the Hailo site, I already had the necessary build tools installed. As it was, the Frigate build was uneventful for me. I don’t know whether I actually needed any build tools installed locally, as Frigate’s make local may be self-contained and download everything it needs to build.

The build took ~2449.8 s (about 41 min) on my RPi 5 (8 GB) with the Hailo8 attached. I had no expectation of how long it would take.

make local
echo 'VERSION = "0.17.0-b5d2f86a"' > frigate/version.py
echo 'VITE_GIT_COMMIT_HASH=b5d2f86a' > web/.env

After the custom Frigate build finished, the custom image was loaded and available locally in my Docker installation.
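If your build ends up under a tag other than frigate:latest, only the image line in the compose file below needs to change. The tag name here is hypothetical:

```yaml
services:
  frigate:
    image: frigate-custom:hailo423  # hypothetical local tag for the custom build
```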

My Frigate container is up and running, I have uploaded pictures to support detection, and it is training itself. So it is a win for me. It definitely needs more training, and there may be additional configuration and optimization needed, but again, just getting to this point is a win. Below are my configuration files, in case they help anyone.

As always have fun.

My Frigate compose file:

Standalone Frigate – run ONLY with docker compose on picam (the host with Hailo-8)

Do NOT use docker stack deploy (Swarm mode blocks proper device passthrough)

services:
  frigate:
    image: frigate:latest # ← Replace with your local custom image tag if different (e.g. frigate-custom:hailo423)
    container_name: frigate
    privileged: true # Required for reliable Hailo-8 PCIe access on my Pi 5 (per Grok)
    shm_size: 512mb
    ports:
      - "5000:5000" # Web UI – accessible from LAN
      - "8554:8554" # RTSP restream
      - "8555:8555/tcp" # WebRTC
      - "8555:8555/udp" # WebRTC
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /opt/frigate/config:/config
      - /media/frigate:/media/frigate
      - /opt/frigate/models:/config/models
      - /tmp/cache:/tmp/cache
    environment:
      FRIGATE_MQTT_HOST: ${MQTT_HOST}
      FRIGATE_MQTT_USER: ${MQTT_USER}
      FRIGATE_MQTT_PASSWORD: ${MQTT_PASSWORD}
      FRIGATE_RTSP_USER: ${RTSP_USER}
      FRIGATE_RTSP_PASSWORD: ${RTSP_PASSWORD}
      TZ: America/Los_Angeles
    devices:
      - /dev/hailo0:/dev/hailo0 # Proper Hailo-8 passthrough
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/api/version"]
      interval: 60s
      timeout: 10s
      retries: 3

My Frigate config.yaml file (Redacted):

cat config/config.yaml
mqtt:
  enabled: true
  host:
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user:
  password:
  stats_interval: 60
tls:
  enabled: false
auth:
  reset_admin_password: false
  enabled: true
  failed_login_rate_limit: 1/second;5/minute;20/hour
  trusted_proxies:
    # This is the subnet for the internal docker compose network; may not be needed
    - 172.18.0.0/16
    # Local LAN
    - 192.168.1.0/24
database:
  path: /config/frigate.db
go2rtc:
  streams:
    frontcam: # Matches camera name for best live view (MSE/WebRTC)
      - rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@:554/cam/realmonitor?channel=1&subtype=0
      - ffmpeg:rtsp_cam#audio=opus
ffmpeg:
  hwaccel_args: preset-rpi-64-h264 # Global for Pi 5 decoding
detectors:
  hailo:
    type: hailo8l
    device: PCIe
record:
  enabled: true
  alerts:
    retain:
      days: 15
  detections:
    retain:
      days: 15
  continuous:
    days: 0
  motion:
    days: 7
snapshots:
  enabled: true
  timestamp: false
  crop: false
  height: 750
  retain:
    default: 15
cameras:
  frontcam:
    enabled: true
    detect:
      enabled: true # Enabled to run Hailo detection
      width: 2688
      height: 1520
      fps: 5
    ffmpeg:
      output_args:
        record: preset-record-generic-audio-copy
      inputs:
        - path: rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@:554/cam/realmonitor?channel=1&subtype=0
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
    live:
      streams:
        frontcam: frontcam
    objects:
      track:
        - person
        - dog
        - bicycle
        - car
        - motorcycle
    motion:
      mask: # Your masks will differ, or you may not have any set; these are custom
        - 0.786,0.333,0.862,0.038,0.984,0.067,0.949,0.545
        - 0.908,0.75,0.977,0.828,0.988,0.643,0.93,0.597
    zones: # Your zones will differ, or you may not have any set; these are custom
      Driveway:
        coordinates: 0.069,0.973,0.934,0.97,0.721,0.258,0.29,0.324
        loitering_time: 0
      Frontyard:
        coordinates: 0.129,0.373,0.26,0.392,0.048,0.984,0.003,0.971,0.005,0.51
        loitering_time: 0
      CulDeSac:
        coordinates: 0.291,0.311,0.743,0.242,0.799,0.219,0.802,0.174,0.711,0.083,0.739,0.046,0.
        loitering_time: 0
      Sidewalk:
        coordinates: 0.656,0.037,0.537,0.047,0.392,0.097,0.307,0.102,0.207,0.184,0.164,
        loitering_time: 0
    review:
      alerts:
        required_zones:
          - CulDeSac
          - Frontyard
          - Driveway
face_recognition:
  enabled: true # Default but make it explicit
  # Optional tuning (usually not needed):
  # min_area: 5000 # pixels - ignore very small faces
  # detection_threshold: 0.7 # confidence threshold
  # model_size: small # or 'large' - small is faster, large is more accurate
detect:
  enabled: true
version: 0.17-0

Greetings,
Appreciate you sharing all this. Is there any way I could trouble you to create a GitHub repo containing these files, or to use the preformatted text style within the post for them? I'd be guessing at all the indents otherwise.

Thanks in advance and no worries if not.

X