Precision Drop After Converting ONNX to HEF on Hailo-8L

I am encountering an issue while running inference on a Raspberry Pi with Hailo-8L. I am using a YOLOv8n-based model from Hailo Application Code Examples.

Problem Description
I trained a model using YOLOv8n and converted it to ONNX format. The precision remains high after the ONNX conversion. However, after converting the ONNX model to HEF using the Hailo AI Software Suite (inside a Docker environment) and running inference on Hailo-8L, the precision drops significantly, and the object detection results become incorrect.

I am unsure what is causing this issue. Could it be due to a problem during the model conversion process?

Conversion Steps
Here are my conversion steps:

  1. Optimize Model
  • No errors were reported during optimization.
  • Command Line:
hailomz optimize --hw-arch hailo8l --har /local/workspace/yolov8s.har yolov8s --calib-path /local/shared_with_docker/images/train --classes 8
  2. Compile Model
  • Some errors occurred during compilation. Could these errors be affecting model precision?
  • Command Line:
hailomz compile yolov8s --hw-arch hailo8l --har /local/workspace/yolov8s.har --classes 8

Has anyone encountered a similar issue? Any suggestions on troubleshooting or improving the conversion process?

Thanks in advance for your help!

Welcome to the Hailo Community!

First, I would recommend working through the tutorials in the Hailo AI Software Suite, especially the Layer Noise Analysis tutorial, which shows you how to analyze accuracy issues.

It looks like you are pointing to your full training dataset for calibration. More does not always mean better. Try a smaller set of images for calibration, e.g. 64. They should be representative of the dataset, so use a good mix.
The more advanced optimization algorithms will require more images.
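As a starting point, you could build the small calibration set by randomly sampling the training images. This is a minimal sketch; the paths `images/train` and `calib_64` are placeholders for your own directories, not anything the Hailo tooling requires:

```python
import random
import shutil
from pathlib import Path

def sample_calibration_set(src: Path, dst: Path, n: int = 64, seed: int = 0) -> int:
    """Copy a random sample of n images from src into dst.

    Returns the number of images actually copied (may be less than n
    if the source directory is small).
    """
    dst.mkdir(parents=True, exist_ok=True)
    images = sorted(
        p for p in src.iterdir() if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
    )
    picked = random.Random(seed).sample(images, k=min(n, len(images)))
    for p in picked:
        shutil.copy(p, dst / p.name)
    return len(picked)

if __name__ == "__main__":
    # Placeholder paths -- adjust to where your dataset actually lives.
    copied = sample_calibration_set(Path("images/train"), Path("calib_64"))
    print(f"copied {copied} calibration images")
```

You would then pass the new directory to `--calib-path` instead of the full training set.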

This is expected behavior of the compilation process. The compiler tries to allocate your network onto the chip. This is a kind of puzzle where some allocations do not work, and the compiler will try other options.

Hi @Wang_Chuan
How did you determine that precision dropped a lot? Did you evaluate the mAP on a dataset and compare it to the original mAP? Or did you just qualitatively look at a few images?
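For a quick quantitative check (short of a full mAP evaluation), you can match predicted boxes to ground-truth boxes at IoU ≥ 0.5 and compare precision/recall before and after quantization. This is a simplified sketch, not the COCO mAP protocol: boxes are assumed to be `(x1, y1, x2, y2)` tuples, and per-class and confidence-threshold handling are omitted:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truths at IoU >= thr."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

Running this over the same validation images with the ONNX output and the HEF output would tell you whether the drop is real and how large it is, rather than relying on a visual impression.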