Inference Performance Issue of Hailo-8L on RPi5

Why is it that:

  • When I run inference using the method from Hailo-Application-Code-Examples on GitHub (specifically, the python/streaming/yolox_stream_inference.py code), I get roughly 30 FPS at 60–70% CPU usage.

  • Whereas when I run the code from hailo-rpi5-examples on GitHub (specifically, basic_pipelines/detection.py), I get roughly 60 FPS at 20–30% CPU usage.

Is the significant difference caused by the Python API not being well-optimized?

I used the yolov8n model.


Both examples were tested with the same model.

Hi @joy.yen,
The Hailo-Application-Code-Examples repo is generic: it is compatible with the Pi, but it wasn't specifically designed for it. With hailo-rpi5-examples it's the other way around: it is designed to run on the Pi, though it will probably also run well on other platforms.


In addition to what @Nadav said, please note that in hailo-rpi5-examples the post-processing is added to the HEF, and the pipeline runs on GStreamer, which is implemented in C/C++. The Hailo-Application-Code-Examples example does its post-processing in Python with TensorFlow, which consumes more CPU. A sketch of the GStreamer-based approach follows below.
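To make the difference concrete, here is a minimal sketch of the kind of pipeline hailo-rpi5-examples builds. It is not the repo's actual code: the element names come from Hailo's TAPPAS GStreamer plugins (hailonet, hailofilter, hailooverlay), and the source element and file paths are placeholders.

```python
# Minimal sketch of a GStreamer pipeline in the spirit of
# basic_pipelines/detection.py. Assumptions: the TAPPAS GStreamer plugins
# (hailonet, hailofilter, hailooverlay) are installed, and the HEF /
# post-process .so paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_str = (
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "hailonet hef-path=/path/to/yolov8n.hef ! "       # inference on the Hailo-8L
    "hailofilter so-path=/path/to/postprocess.so ! "  # post-processing in compiled C/C++
    "hailooverlay ! videoconvert ! fpsdisplaysink"
)
pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or error, then tear the pipeline down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The key design point: everything between the source and the sink runs in compiled plugin code; Python only assembles and controls the pipeline. That matches the low CPU usage you are seeing with hailo-rpi5-examples.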


First of all, thank you for your response.

I have another question:
Is it possible to modify the Hailo-Application-Code-Examples so that the post-processing is executed on the device instead of on the host? Are there any relevant examples of this? Or is it already integrated into the Python API?

Thank you for taking the time to respond.

Hi, please note that I've edited my previous response.
Regarding performing post-processing (PP) on the HEF:
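To illustrate the general direction, here is a minimal sketch using the Hailo Dataflow Compiler's Python API. The file names and the NMS config path are placeholders, and the exact nms_postprocess arguments depend on your model and SDK version, so treat this as a starting point rather than a drop-in recipe.

```python
# Hedged sketch: attaching NMS post-processing to the model at compile
# time, so the host no longer decodes boxes in Python/TensorFlow.
# Assumptions: the Hailo Dataflow Compiler (hailo_sdk_client) is installed,
# "yolov8n_quantized.har" is an already-quantized HAR, and the JSON NMS
# config path is a placeholder.
from hailo_sdk_client import ClientRunner

runner = ClientRunner(har="yolov8n_quantized.har")

# The nms_postprocess model-script command folds box decoding + NMS into
# the compiled artifact; engine=cpu runs that step inside HailoRT's C++
# runtime instead of in host-side Python.
runner.load_model_script(
    'nms_postprocess("yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)\n'
)

hef_binary = runner.compile()
with open("yolov8n_with_nms.hef", "wb") as f:
    f.write(hef_binary)
```

With a HEF compiled this way, the output streams should already yield decoded detections, so the TensorFlow post-processing step in yolox_stream_inference.py can in principle be skipped.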
