Stream inference

Hello,

Just got my hands on an RPi 5 8 GB with the Hailo-8, and I was wondering if it is possible to do object detection using an RTMP stream as the video source?

I adapted the detection script using the command from the GStreamer documentation:

gst-launch-1.0 rtmpsrc location=rtmp://input.rtmp.server/live/livestream 

But the furthest I'm able to get is:

HailoNet Error: gst_pad_push failed with status = -4

Setting up the environment...
Setting up the environment for hailo-tappas-core...
TAPPAS_VERSION is 3.31.0. Proceeding...
You are in the venv_hailo_rpi5_examples virtual environment.
TAPPAS_POST_PROC_DIR set to /usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes
DEVICE_ARCHITECTURE is set to: HAILO8
Running yolov8s
gst-launch-1.0 rtmpsrc location=rtmp://input.rtmp.server/live/livestream ! flvdemux name=demux demux.video ! decodebin ! queue max-size-buffers=20 max-size-bytes=0 max-size-time=0 ! videoscale ! queue max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! videoconvert n-threads=3 ! queue max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! hailonet hef-path=/home/pi/hailo-rpi5-examples/resources/yolov8s.hef batch-size=1 output-format-type=HAILO_FORMAT_TYPE_FLOAT32 nms-score-threshold=0.3 nms-iou-threshold=0.45 output-format-type=HAILO_FORMAT_TYPE_FLOAT32 ! queue max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! hailofilter function-name=yolov8s so-path=/home/pi/hailo-rpi5-examples/venv_hailo_rpi5_examples/lib/python3.11/site-packages/resources/libyolo_hailortpp_postprocess.so qos=false ! queue max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! hailooverlay ! queue max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! videoconvert n-threads=3 ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=false text-overlay=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
Redistribute latency...
Redistribute latency...
Redistribute latency...
0:00:01.115150380 11191   0x55a7ef1de0 WARN           basetransform gstbasetransform.c:1371:gst_base_transform_setcaps:<videoconvert1> transform could not transform video/x-raw, width=(int)640, height=(int)640, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)16/9, framerate=(fraction)30/1, format=(string)RGB, colorimetry=(string)1:1:0:0 in anything we support
0:00:01.115295435 11191   0x55a7ef1de0 WARN           basetransform gstbasetransform.c:1371:gst_base_transform_setcaps:<videoconvert1> transform could not transform video/x-raw, width=(int)640, height=(int)640, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)16/9, framerate=(fraction)30/1, format=(string)RGB, colorimetry=(string)1:1:0:0 in anything we support
0:00:01.115406806 11191   0x55a7ef1de0 WARN           basetransform gstbasetransform.c:1371:gst_base_transform_setcaps:<videoconvert1> transform could not transform video/x-raw, width=(int)640, height=(int)640, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)16/9, framerate=(fraction)30/1, format=(string)RGB, colorimetry=(string)1:1:0:0 in anything we support
0:00:01.115523862 11191   0x55a7ef1de0 WARN           basetransform gstbasetransform.c:1371:gst_base_transform_setcaps:<videoconvert1> transform could not transform video/x-raw, width=(int)640, height=(int)640, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)16/9, framerate=(fraction)30/1, format=(string)RGB, colorimetry=(string)1:1:0:0 in anything we support
0:00:01.115635862 11191   0x55a7ef1de0 WARN           basetransform gstbasetransform.c:1371:gst_base_transform_setcaps:<videoconvert1> transform could not transform video/x-raw, width=(int)640, height=(int)640, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)16/9, framerate=(fraction)30/1, format=(string)RGB, colorimetry=(string)1:1:0:0 in anything we support
0:00:01.115747066 11191   0x55a7ef1de0 WARN           basetransform gstbasetransform.c:1371:gst_base_transform_setcaps:<videoconvert1> transform could not transform video/x-raw, width=(int)640, height=(int)640, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)16/9, framerate=(fraction)30/1, format=(string)RGB, colorimetry=(string)1:1:0:0 in anything we support
HailoNet Error: gst_pad_push failed with status = -4
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Setting pipeline to NULL ...

Can someone help me a bit? I'm completely new; I just learned what a pipeline is 5 minutes ago :sweat_smile:

I recommend dividing the problem into smaller parts and solving them one by one. Remove the Hailo-related elements and get a minimal RTMP pipeline running first; see the sketch below. ChatGPT is a good tool to help you with generic GStreamer questions.
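For example, something like this should just display the incoming video (Python GI bindings; it is the same pipeline string you would give to gst-launch-1.0, with your own stream URL):

# Minimal sketch: play the RTMP stream with no Hailo elements, just to
# confirm the source side works before adding inference back in.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PIPELINE = (
    'rtmpsrc location="rtmp://input.rtmp.server/live/livestream" ! '
    "flvdemux ! decodebin ! videoconvert ! autovideosink"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

# Block until an error or end-of-stream, then clean up.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg and msg.type == Gst.MessageType.ERROR:
    err, debug = msg.parse_error()
    print("Pipeline error:", err, debug)
pipeline.set_state(Gst.State.NULL)

Once that plays reliably, add the videoscale/videoconvert caps, and only then put the hailonet/hailofilter/hailooverlay elements back in.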

Probably yes; however, the number of video streams / FPS will be limited compared to a more powerful CPU.

Deleted the message by mistake…

Thanks for your answer, but ChatGPT is not a panacea; it helps up to a point, but after a few questions it starts to repeat the same pipelines over and over again.

I've asked ChatGPT, Claude, DeepSeek, Mistral, Manus, and after several days trying all the AIs I know I decided to stop wasting time and ask people, only to receive "ask the AI" as the answer…

this is sad

Managed to ‘see’ the stream but I’m unable to ‘filter’ the inference

I managed to 'see' it with this command:

gst-launch-1.0 \
  rtmpsrc location="rtmp://input.rtmp.server/live/livestream" ! \
  flvdemux ! decodebin ! videoconvert ! videoscale ! \
  video/x-raw,format=RGB,width=640,height=640 ! \
  synchailonet hef-path=./../resources/yolov8s.hef ! \
  hailofilter so-path=../venv_hailo_rpi5_examples/lib/python3.11/site-packages/resources/libyolo_hailortpp_postprocess.so config-path=config.json ! \
  hailooverlay ! \
  videoconvert ! autovideosink

In the config file I wrote this to make it only detect ‘person’:

{
  "detection_threshold": 0.5,
  "max_boxes": 200,
  "labels": [
    "person"
  ],
  "filter_classes": [0]
}

But it keeps drawing detection boxes on other objects it detects, not only on persons. What am I doing wrong?
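For reference, if you later switch to the Python detection example instead of gst-launch, another place to filter by label is the app callback. A rough sketch, assuming the hailo Python bindings used in hailo-rpi5-examples; note that remove_object() being exposed in Python is an assumption, and whether removed detections also disappear from the overlay depends on where the callback sits relative to hailooverlay:

# Sketch: keep only "person" detections on the buffer's ROI inside a
# pad-probe / app callback (hailo bindings as used in hailo-rpi5-examples).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import hailo

def app_callback(pad, info, user_data):
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    roi = hailo.get_roi_from_buffer(buffer)
    for det in roi.get_objects_typed(hailo.HAILO_DETECTION):
        if det.get_label() != "person":
            # Assumption: remove_object() is available in the Python bindings.
            roi.remove_object(det)
        else:
            print("person", det.get_confidence())
    return Gst.PadProbeReturn.OK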

OKOKOK

I was able to do (more or less) what I was trying to do:

Receive an RTMP stream, run inference on it, and send the video with the inference overlay back to another RTMP server:

gst-launch-1.0 \
  rtmpsrc location="rtmp://input.rtmp.server/live/livestream" ! \
  flvdemux ! decodebin ! \
  queue ! videoconvert ! queue ! \
  videoscale ! video/x-raw,format=RGB,width=640,height=640 ! queue ! \
  synchailonet hef-path=./../resources/yolov8s.hef ! queue ! \
  hailofilter \
      so-path=../venv_hailo_rpi5_examples/lib/python3.11/site-packages/resources/libyolo_hailortpp_postprocess.so \
      function-name=yolov8s \
      config-path=./config.json ! \
  queue ! hailooverlay ! queue ! videoscale ! video/x-raw,format=RGB,width=1920,height=1080 ! \
  videoconvert ! x264enc tune=zerolatency bitrate=30000 speed-preset=superfast ! \
  flvmux streamable=true name=mux ! \
  rtmpsink location="rtmp://output.rtmp.server/live/livestream"

I would like to use something like object_detection.py from Hailo-Application-Code-Examples or detection.py from the hailo-rpi5-examples.

But I'm unable to make it work. I changed the toolbox to 'open' the RTMP stream and resize it to 640x640 before inference (roughly like the sketch just below), and added a few debug lines to check whether the connection was made, what the received resolution is, etc.
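A simplified sketch of that kind of change (placeholder names, OpenCV's FFmpeg backend; not the actual toolbox code):

import cv2

# Open the RTMP stream and resize each frame to the model's 640x640 input.
STREAM_URL = "rtmp://input.rtmp.server/live/livestream"

cap = cv2.VideoCapture(STREAM_URL, cv2.CAP_FFMPEG)
if not cap.isOpened():
    raise RuntimeError(f"Could not open RTMP stream: {STREAM_URL}")

while True:
    ok, frame = cap.read()                        # e.g. 1920x1080 BGR frame
    if not ok:
        break
    model_input = cv2.resize(frame, (640, 640))   # what the yolov8s HEF expects
    # ... hand model_input to the inference code ...

cap.release()

The debug output then looks like this: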

2025-06-26 23:44:12.425 | INFO     | common.toolbox:init_input_source:115 - Attempting to open RTMP stream: rtmp://input.rtmp.server/live/livestream
2025-06-26 23:44:23.098 | INFO     | common.toolbox:init_input_source:121 - RTMP stream opened successfully.
2025-06-26 23:44:23.286 | INFO     | common.toolbox:visualize:462 - FFmpeg started for RTMP streaming to rtmp://output.rtmp.server/live/livestream at 1920x1080.
2025-06-26 23:44:23.288 | DEBUG    | common.toolbox:preprocess_from_cap:338 - Raw frame 0 read: Shape=(1080, 1920, 3), Dtype=uint8, Size=6220800
2025-06-26 23:44:23.301 | DEBUG    | common.toolbox:preprocess_from_cap:358 - Preprocessed frame 1 details: Shape=(640, 640, 3), Dtype=uint8, Size=1228800
2025-06-26 23:44:23.302 | DEBUG    | common.hailo_inference:run:197 - Frame 0 in batch ready for Hailo inference. Shape=(640, 640, 3), Dtype=uint8, Size=1228800
2025-06-26 23:44:23.362 | DEBUG    | common.toolbox:preprocess_from_cap:338 - Raw frame 1 read: Shape=(1080, 1920, 3), Dtype=uint8, Size=6220800
2025-06-26 23:44:23.373 | DEBUG    | common.toolbox:preprocess_from_cap:358 - Preprocessed frame 2 details: Shape=(640, 640, 3), Dtype=uint8, Size=1228800
2025-06-26 23:44:23.380 | DEBUG    | common.toolbox:preprocess_from_cap:338 - Raw frame 2 read: Shape=(1080, 1920, 3), Dtype=uint8, Size=6220800

[HailoRT] [error] CHECK failed - Trying to get buffer as view for 'yolov8s/yolov8_nms_postprocess', while it is not configured as view
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/hailo_platform/pyhailort/pyhailort.py", line 3286, in run_async
    cpp_job = self._configured_infer_model.run_async(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
hailo_platform.pyhailort._pyhailort.HailoRTStatusException: 6

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/pi/hailo-rpi5-examples/ejemplos/manus/rtmp_test.py", line 174, in <module>
    main()
  File "/home/pi/hailo-rpi5-examples/ejemplos/manus/rtmp_test.py", line 168, in main
    infer(args.net, args.input, args.output_rtmp_url, args.batch_size, args.labels,
  File "/home/pi/hailo-rpi5-examples/ejemplos/manus/rtmp_test.py", line 118, in infer
    hailo_inference.run()
  File "/home/pi/hailo-rpi5-examples/ejemplos/manus/common/hailo_inference.py", line 211, in run
    job = configured_infer_model.run_async(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/hailo_platform/pyhailort/pyhailort.py", line 3285, in run_async
    with ExceptionWrapper():
  File "/usr/lib/python3/dist-packages/hailo_platform/pyhailort/pyhailort.py", line 118, in __exit__
    self._raise_indicative_status_exception(value)
  File "/usr/lib/python3/dist-packages/hailo_platform/pyhailort/pyhailort.py", line 166, in _raise_indicative_status_exception
    raise self.create_exception_from_status(error_code) from libhailort_exception
hailo_platform.pyhailort.pyhailort.HailoRTInvalidOperationException: Invalid operation. See hailort.log for more information


Does anyone know what CHECK failed - Trying to get buffer as view for 'yolov8s/yolov8_nms_postprocess', while it is not configured as view means?
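For context: in the HailoRT async API, each output on the bindings is normally given a pre-allocated numpy buffer with set_buffer() before run_async(), and the 'view' in the message seems to refer to that buffer. Below is a minimal sketch of that flow, with API names as used in the Hailo Python examples and an illustrative output buffer; it is not verified against this script, and the exact cause would still need checking in hailort.log.

# Minimal sketch of the HailoRT async-inference flow. The output buffer
# shape/dtype below is illustrative and must match the model's NMS output.
import numpy as np
from hailo_platform import VDevice, FormatType

HEF_PATH = "resources/yolov8s.hef"   # illustrative path

with VDevice() as vdevice:
    infer_model = vdevice.create_infer_model(HEF_PATH)
    infer_model.output().set_format_type(FormatType.FLOAT32)

    with infer_model.configure() as configured:
        bindings = configured.create_bindings()

        # Input: one preprocessed 640x640 RGB frame.
        frame = np.zeros((640, 640, 3), dtype=np.uint8)
        bindings.input().set_buffer(frame)

        # Output: give the NMS output a pre-allocated buffer too
        # (assumption: this is what "configured as view" refers to).
        out_buf = np.empty(infer_model.output().shape, dtype=np.float32)
        bindings.output().set_buffer(out_buf)

        configured.wait_for_async_ready(timeout_ms=10000)
        job = configured.run_async([bindings], lambda completion_info: None)
        job.wait(timeout_ms=10000)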