Object detection on RPi 5 + AI HAT (26 TOPS) from an RTSP stream with HW acceleration

Hello! I need some help building a script that takes an H.265 RTSP camera stream and does object detection.
I need the stream hardware-decoded and fed into the detection pipeline. (I understand that, for now, you can only do this with ffmpeg, as the system on the RPi 5 ships an older, stable version of GStreamer (1.22) that lacks features from the latest release (1.26).)
And I think you need a bridge between ffmpeg and detection, like one of these (a sketch of option 2 follows the list):
1. RTSP stream → ffmpeg (decodes video) → Linux kernel (/dev/video10) → OpenCV (reads it like a webcam);
2. RTSP stream → ffmpeg (decodes and writes to stdout) → Python subprocess pipe → NumPy (reshapes the raw data into an image).
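This is roughly what I'm attempting for option 2. A minimal sketch: the camera URL is a placeholder, and I'm assuming the Raspberry Pi OS ffmpeg build picks the hardware HEVC decoder on its own (flags may need adjusting for your build):

import subprocess
import numpy as np

W, H = 640, 640  # matches my model input (640, 640, 3), NHWC, UINT8

cmd = [
    "ffmpeg",
    "-rtsp_transport", "tcp",            # TCP tends to be more robust than UDP
    "-i", "rtsp://192.168.1.10/stream",  # placeholder camera URL
    "-vf", f"scale={W}:{H}",             # resize to the model input size
    "-pix_fmt", "rgb24",                 # raw RGB bytes, one frame after another
    "-f", "rawvideo",
    "-",                                 # write raw frames to stdout
]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=W * H * 3)

while True:
    raw = proc.stdout.read(W * H * 3)    # read exactly one frame's worth of bytes
    if len(raw) < W * H * 3:
        break                            # stream ended or ffmpeg exited
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((H, W, 3))
    # frame is now a (640, 640, 3) UINT8 array, ready to feed to the detector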
I can't get this to work.
I have my own compiled model:
Shape: (640, 640, 3)
Layout: NHWC
Dtype: UINT8
Thanks for any help/suggestion!

Hello,

I'm doing that: reading from one RTMP stream and sending the inference output to another RTMP stream.

You only have to change input.server and output.server to suit your setup:

rtmpsrc location=input.server ! flvdemux ! queue ! decodebin ! queue ! videoconvert ! queue ! videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=yolov8s ! queue ! hailooverlay ! queue ! videoconvert ! x264enc tune=zerolatency bitrate=1000 speed-preset=superfast ! flvmux streamable=true name=mux ! rtmpsink location=output.server

Thank you for your answer. Could you tell me where exactly I need to change the input/output.server?
And what script do I run?

Yes. At the beginning of the command, where it says rtmpsrc location=input.server, change input.server to the RTMP URL of your camera, e.g. rtmpsrc location=rtmp://ip_of_camera/channel/1. At the end you have the output part of the pipeline; I have rtmpsink location=output.server because I'm sending the inference output to a streaming server. If you want to see it in a window on your screen, change it to fpsdisplaysink video-sink=autovideosink instead.

Yes, I understood where I have to change the location=rtsp://192.168…
But in what file, or what script, or where?

Script? Any one you want; that was just the pipeline part. Simply put it in a script and run it ;) For example, something like the sketch below.
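A minimal Python wrapper along these lines should do it (a sketch; the pipeline string is the one from the earlier post, with the placeholder camera URL and my paths):

import subprocess

# The same GStreamer pipeline as above, as one string; swap in your own
# RTMP URL and .hef/.so paths.
PIPELINE = (
    "rtmpsrc location=rtmp://ip_of_camera/channel/1 ! flvdemux ! queue ! "
    "decodebin ! queue ! videoconvert ! queue ! videoscale ! "
    "video/x-raw,format=RGB,width=640,height=640 ! queue ! "
    "synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! "
    "hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=yolov8s ! "
    "queue ! hailooverlay ! queue ! videoconvert ! "
    "fpsdisplaysink video-sink=autovideosink"
)

# gst-launch-1.0 takes the pipeline as ordinary arguments, so splitting on
# whitespace is enough here (no element property contains spaces).
subprocess.run(["gst-launch-1.0"] + PIPELINE.split())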

If you want to run it directly in the terminal (without any script), you simply write gst-launch-1.0 in front of the pipeline and boom, you have it working!

For example, if you want to run it with the default video and see it on screen like in the demo, you run something like this in the terminal:

gst-launch-1.0 filesrc location="/usr/local/hailo/resources/videos/example.mp4" ! queue ! decodebin ! queue ! videoconvert ! queue ! videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=filter ! queue ! ! hailooverlay ! queue ! videoconvert ! fpsdisplaysink video-sink=autovideosink

To receive the source from the camera stream (change the source RTMP URL at the beginning of the command to yours) and see it on the screen:

gst-launch-1.0 rtmpsrc location=rtmp://camera.ip/channel/name ! flvdemux ! queue ! decodebin ! queue ! videoconvert ! queue ! videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=filter ! queue ! hailooverlay ! queue ! videoconvert ! fpsdisplaysink video-sink=autovideosink

To receive the RTMP camera stream and send it to another RTMP streaming server (OBS, Twitch, etc.):

gst-launch-1.0 rtmpsrc location=rtmp://camera.ip/channel/name ! flvdemux ! queue ! decodebin ! queue ! videoconvert ! queue ! videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=filter ! queue ! hailooverlay ! queue ! videoconvert ! x264enc tune=zerolatency bitrate=1000 speed-preset=superfast ! flvmux streamable=true name=mux ! rtmpsink location=rtmp://output.server/channel/name

Sorry, this has a typo; the correct one is:

gst-launch-1.0 filesrc location="/usr/local/hailo/resources/videos/example.mp4" ! queue ! decodebin ! queue ! videoconvert ! queue ! videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=filter ! queue ! hailooverlay ! queue ! videoconvert ! fpsdisplaysink video-sink=autovideosink

I extracted this from the demo script detection.py in hailo-rpi5-examples. What I haven't found yet is how to make personalized inferences in a useful way…

Yes, you can edit the callback in the script to detect only the labels you want, as in the example with 'person' (see the sketch below). If you run the script with the '--use-frame' parameter, a second window opens showing only the boxes you configured in the callback. But I don't know how to use the image I see in that second window, because no matter what I write in the callback, the output sink is always the 'full inference', not the one the callback draws…
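For reference, the label filtering in the callback looks roughly like this (a sketch based on the structure of detection.py in hailo-rpi5-examples; the hailo helper names are from my reading of the examples and may differ between versions):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import hailo

def app_callback(pad, info, user_data):
    # Called once per buffer flowing through the pipeline.
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    # Pull the detections that the Hailo elements attached to the buffer.
    roi = hailo.get_roi_from_buffer(buffer)
    for detection in roi.get_objects_typed(hailo.HAILO_DETECTION):
        if detection.get_label() == "person":  # keep only the label you care about
            print(f"person {detection.get_confidence():.2f}")

    return Gst.PadProbeReturn.OK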

OK, I made it work using the GStreamer command and my own model for person detection.
But this only works if I switch my camera to H.264. And I assume that, since it's using GStreamer, this is not hardware accelerated.
The final script should be able to detect a person → save a photo of the detection → increment a counter.

Yes; if you look at the RTMP version, it is converted to H.264 prior to sending it to the other RTMP stream.

What do you mean by hardware accelerated? All the scripts I saw in the Hailo repo use GStreamer.
If you want to know how the Hailo NPU is doing, you can run export HAILO_MONITOR=1 before the command (or the script), then open another terminal and run hailortcli monitor; you will see the hardware usage of the NPU.

About the counter: if you're using a YOLO-based model, it should be compatible with the post-processing .so file used, so you can add hailotracker before the hailooverlay, like this one (a counting sketch follows the pipeline):

gst-launch-1.0 filesrc location="/usr/local/hailo/resources/videos/example.mp4" ! queue ! decodebin ! queue ! videoconvert ! queue ! videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! synchailonet hef-path=/home/pi/hailo_rpi_programs/resources/yolov8s.hef ! queue ! hailofilter so-path=/opt/resources/libyolo_hailortpp_postprocess.so function-name=filter ! queue ! hailotracker name=hailo_tracker class-id=1 kalman-dist-thr=0.8 iou-thr=0.9 init-iou-thr=0.7 keep-new-frames=2 keep-tracked-frames=15 keep-lost-frames=2 keep-past-metadata=False qos=False ! queue name=hailo_tracker_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! hailooverlay ! queue ! videoconvert ! fpsdisplaysink video-sink=autovideosink
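Once hailotracker is in the pipeline, each tracked detection carries a unique ID, so the callback only has to remember the IDs it has already counted and can save one snapshot per new person. A sketch in the style of detection.py (the hailo helper names and the frame access via user_data are my assumptions from reading the examples):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import hailo
import cv2  # only used to save snapshots

seen_ids = set()  # track IDs we have already counted

def app_callback(pad, info, user_data):
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    roi = hailo.get_roi_from_buffer(buffer)
    for detection in roi.get_objects_typed(hailo.HAILO_DETECTION):
        if detection.get_label() != "person":
            continue
        # hailotracker attaches a unique-ID object to each tracked detection
        track = detection.get_objects_typed(hailo.HAILO_UNIQUE_ID)
        if not track:
            continue
        track_id = track[0].get_id()
        if track_id not in seen_ids:
            seen_ids.add(track_id)
            print(f"person count: {len(seen_ids)}")
            # Assumption: with --use-frame the example stashes the current
            # frame on user_data; save a snapshot for each new track ID.
            frame = getattr(user_data, "frame", None)
            if frame is not None:
                cv2.imwrite(f"person_{track_id}.jpg", frame)

    return Gst.PadProbeReturn.OK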