How to record a Hailo-8 movie using detection.py?

Hello, I spent a few hours searching but I can’t find a command to record a Hailo-8 video with the detected object frames drawn on it. I’m using a Raspberry Pi.
Thank you in advance for your help.


Hello @kontakt,

If what you need is to save the output to a file, such as a .mp4, you can do it using several approaches.

I will share with you a possible solution by modifying the hailo_rpi_common.py file (GitHub Source Code of hailo_rpi_common.py). In this example, the functions you need to modify are as follows:

DISPLAY_PIPELINE Function

def DISPLAY_PIPELINE(video_sink='xvimagesink', sync='true', show_fps='false', name='hailo_display'):
    display_pipeline = (
        f'{QUEUE(name=f"{name}_hailooverlay_q")} ! '
        f'hailooverlay name={name}_hailooverlay ! '
        f'{QUEUE(name=f"{name}_videoconvert_q")} ! '
        f'videoconvert name={name}_videoconvert n-threads=2 qos=false ! '
        f'{QUEUE(name=f"{name}_tee_q")} ! '
        # Split the overlaid stream into two branches.
        f'tee name={name}_tee '
        # Branch 1: live on-screen display.
        f'{name}_tee. ! {QUEUE(name=f"{name}_display_q")} ! '
        f'fpsdisplaysink name={name} video-sink={video_sink} sync={sync} text-overlay={show_fps} signal-fps-measurements=true '
        # Branch 2: H.264 encoding and MP4 muxing to a file on disk.
        f'{name}_tee. ! {QUEUE(name=f"{name}_file_q")} ! '
        f'videoconvert ! x264enc tune=zerolatency ! '
        f'mp4mux streamable=true fragment-duration=1 ! '
        f'filesink location=output.mp4 '
    )
    return display_pipeline
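To see what the recording branch adds, here is a dependency-free sketch that builds just that part of the pipeline string. The `QUEUE` stand-in is a simplified version of the real helper in hailo_rpi_common.py (the real one exposes more buffer options), and `recording_branch` is a hypothetical helper for illustration:

```python
def QUEUE(name):
    # Simplified stand-in for the QUEUE helper in hailo_rpi_common.py.
    return f'queue name={name} max-size-buffers=3 max-size-bytes=0 max-size-time=0'

def recording_branch(tee_name, location='output.mp4'):
    """Build the file-recording branch attached to an existing tee."""
    queue_name = f'{tee_name}_file_q'
    return (
        f'{tee_name}. ! {QUEUE(queue_name)} ! '
        f'videoconvert ! x264enc tune=zerolatency ! '
        f'mp4mux streamable=true fragment-duration=1 ! '
        f'filesink location={location} '
    )

branch = recording_branch('hailo_display_tee')
print(branch)
```

Printing the string before launching the pipeline is a quick way to sanity-check the element names and properties that GStreamer will parse.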

shutdown Function

def shutdown(self, signum=None, frame=None):
    print("Shutting down... Hit Ctrl-C again to force quit.")
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    
    # Send EOS (End of Stream) event to the pipeline
    self.pipeline.send_event(Gst.Event.new_eos())
    
    # Allow some time for the EOS event to propagate
    time.sleep(1)
    
    self.pipeline.set_state(Gst.State.PAUSED)
    GLib.usleep(100000)  # 0.1-second delay

    self.pipeline.set_state(Gst.State.READY)
    GLib.usleep(100000)  # 0.1-second delay

    self.pipeline.set_state(Gst.State.NULL)
    GLib.idle_add(self.loop.quit)

Explanation of the Changes

In the GStreamer pipeline:

  1. A tee element is used to split the output into two branches:

    • One branch for display using xvimagesink.
    • Another branch to create an .mp4 file using filesink.
  2. Additional elements such as videoconvert, x264enc, and mp4mux were included to ensure the output is correctly encoded and stored as an .mp4 file.

In addition, the shutdown function was modified to close the pipeline more gracefully, so that resources are released correctly.


I hope you find this example useful. If you have more questions or need more help, don’t hesitate to ask!

Best regards,

Oscar Mendez
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.ai
Developers wiki: Hailo AI Platform Guide
Website: www.ridgerun.ai


Hello @oscar.mendez,

I tried what you suggested with a CSI-connected camera as the input.

It works fine, but with a file such as an .mp4 as the input, it doesn’t. Do you have an idea of how it could work?

Hello @thomas38,

Have you made any modifications to the SOURCE_PIPELINE function or any other part of the code? If possible, could you share an example of how you execute the pipeline and the output you get in the console?

Best regards,

Oscar Mendez

Hello @oscar.mendez,

I’ve implemented the exact changes you suggested. When I run:

python basic_pipelines/detection.py --input rpi

The live detection window from my Raspberry Pi camera is displayed correctly, and an output file is generated, which I can watch afterward. However, the performance is a bit laggy, running at approximately 10 FPS instead of 30.

When I run:

python basic_pipelines/detection.py --input resources/video/arc.mp4

The detection on the video is displayed (also laggy), and the output file is created successfully. I can open and play it with the detection overlay while the script is still running. However, at the end of execution, I encounter the following error and I can’t open the file anymore:

QoS message received from autovideosink0-actual-sink-xvimage
QoS message received from autovideosink0-actual-sink-xvimage
QoS message received from autovideosink0-actual-sink-xvimage
End-of-stream

(Hailo Detection App:19962): GStreamer-CRITICAL **: 17:13:43.751: gst_segment_do_seek: assertion 'segment->format == format' failed
Video rewound successfully. Restarting playback...

Do you have any insights on what might be causing this issue? Could it be related to the way GStreamer handles the video pipeline?

Thanks in advance for your help!

Hello @thomas38,

The error you’re experiencing could have several causes, such as timestamp desynchronization, an unexpected segment format, EOS before seek, etc. One possible solution is to run the command with the GST_DEBUG option enabled, to get a more detailed log that will help you pinpoint the issue more accurately.
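For example, you can raise GStreamer’s verbosity from Python before the pipeline is built (the level and file name below are just illustrative choices; category filters such as `qtmux:6` also work):

```python
import os

# GST_DEBUG is read when GStreamer initializes, so set it before
# the app imports/initializes Gst. Level 3 = FIXME/WARNING and above.
os.environ["GST_DEBUG"] = "3"
# Optional: redirect the debug log to a file instead of stderr.
os.environ["GST_DEBUG_FILE"] = "gst_debug.log"

print(os.environ["GST_DEBUG"])
```

Equivalently, you can export these variables in the shell before running detection.py.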

Additionally, if you only need to play the video once and want the pipeline to automatically terminate when it’s finished, you can modify the on_eos function like this:

def on_eos(self):
    self.shutdown()

This way, the pipeline will shut down completely once the video ends, without needing to perform a rewind.
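The control flow, sketched without any GStreamer dependencies (the class and method names here are illustrative stand-ins, not the exact ones in the examples repo):

```python
class AppSketch:
    """Minimal stand-in showing the EOS -> shutdown flow."""

    def __init__(self):
        self.running = True

    def on_eos(self):
        # End of stream: tear the pipeline down instead of
        # rewinding for looped playback.
        self.shutdown()

    def shutdown(self):
        # The real shutdown sends EOS, steps the pipeline through
        # PAUSED -> READY -> NULL, and quits the GLib main loop.
        self.running = False

app = AppSketch()
app.on_eos()
print(app.running)  # False
```

Letting the pipeline reach NULL cleanly also gives mp4mux a chance to finalize the file, which matters for the playback error above.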

I hope you find it helpful. Let me know if you need more help.

Regards,

Oscar Mendez

Hello @oscar.mendez,

Indeed, your suggestion of modifying the on_eos function works. However, when the output video is processed, not all the frames end up in the video; the output video is stuttering. Do you know what I can do to get a constant 10 FPS, for example?