When I run inference using the method from Hailo-Application-Code-Examples on GitHub (specifically, the python/streaming/yolox_stream_inference.py code), I get roughly 30 FPS at 60~70% CPU usage.
By contrast, when I use the code from hailo-rpi5-examples on GitHub (specifically, basic_pipelines/detection.py), I get about 60 FPS at only 20~30% CPU usage.
Is the significant difference caused by the Python API not being well-optimized?
Hi @joy.yen,
The Hailo-Application-Code-Examples repo is generic: while it is compatible with the Pi, it wasn't specifically designed for it. With hailo-rpi5-examples it's the other way around: it was designed to run on the Pi, though it will likely also perform well on other platforms.
In addition to what @Nadav said, please note that in hailo-rpi5-examples the post-processing is added to the HEF, and the pipeline itself runs on GStreamer, whose elements execute in C/C++.
The Hailo-Application-Code-Examples example runs its post-processing in Python with TensorFlow on the host, which is why it consumes considerably more CPU.
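To make the CPU-cost point concrete, here is a minimal sketch of the kind of work a host-side detection post-process has to repeat for every frame: a greedy non-maximum suppression pass in pure NumPy. This is an illustrative example, not the actual code from either repo; the function name, box format, and threshold are my assumptions.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45):
    """Greedy NMS sketch (boxes are x1, y1, x2, y2); returns kept indices.

    Illustrative only -- a stand-in for the per-frame host-side work a
    Python post-process performs, not Hailo's actual implementation.
    """
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the current best box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # discard heavily overlapping boxes
    return keep

# Two overlapping detections plus one separate one: NMS keeps two boxes.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]
```

Running a loop like this in the Python interpreter for every frame (on top of decoding the raw network outputs) is what drives up host CPU usage, whereas a pipeline that embeds the post-process in the HEF or in compiled GStreamer elements avoids it.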
I have another question:
Is it possible to modify the Hailo-Application-Code-Examples so that the post processing is executed on the device instead of on the host? Are there any relevant examples for this? Or is it already integrated into the Python API?