I am encountering an issue while running inference on a Raspberry Pi with Hailo-8L. I am using a YOLOv8n-based model from Hailo Application Code Examples.
Problem Description
I trained a model using YOLOv8n and converted it to ONNX format. The precision remains high after the ONNX conversion. However, after converting the ONNX model to HEF using the Hailo AI Software Suite (inside a Docker environment) and running inference on Hailo-8L, the precision drops significantly, and the object detection results become incorrect.
I am unsure what is causing this issue. Could it be due to a problem during the model conversion process?
First, I would recommend working through the tutorials in the Hailo AI Software Suite, especially the Layer Noise Analysis tutorial, which shows you how to analyze accuracy issues.
It looks like you are pointing to your entire training dataset for calibration. More does not always mean better: try a smaller calibration set, e.g. 64 images. They should be representative of the dataset, so use a good mix.
For the more advanced optimization algorithms you will need more images.
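As a rough illustration of "a good mix" (this is plain Python, not part of the Hailo SDK; the function name and inputs are hypothetical), a class-balanced random subset can be drawn like this:

```python
import random

def pick_calibration_set(image_paths, labels, size=64, seed=0):
    """Pick a small, class-balanced calibration subset.

    image_paths: list of image file paths
    labels:      parallel list of class labels, used to keep the mix representative
    """
    rng = random.Random(seed)

    # Group image paths by class label.
    by_class = {}
    for path, label in zip(image_paths, labels):
        by_class.setdefault(label, []).append(path)

    # Shuffle each class pool, then pick round-robin so every class is covered.
    pools = {c: rng.sample(paths, len(paths)) for c, paths in by_class.items()}
    picked = []
    while len(picked) < size and any(pools.values()):
        for c in list(pools):
            if pools[c] and len(picked) < size:
                picked.append(pools[c].pop())
    return picked
```

The resulting list of 64 paths would then be loaded and preprocessed the same way as the inference inputs before being fed to the optimization step.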
This is expected behavior of the compilation process. The compiler tries to allocate your network onto the chip's resources; it is a kind of puzzle in which some allocations do not fit, so the compiler backtracks and tries other options.
Hi @Wang_Chuan
How did you determine that precision dropped a lot? Did you evaluate the mAP on a dataset and compare it to the original model's mAP, or did you just look qualitatively at a few images?
I used the Hailo-Application-Code-Examples to run inference on the same validation dataset (1000 images) and compared the results with the original model. I noticed that many bounding boxes were missing, and there were also some misclassifications.
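To turn that qualitative observation into a number, one option is to match the HEF model's detections against the original model's detections by IoU and count how many reference boxes go unmatched. A minimal sketch (the function names and the 0.5 threshold are my own choices, not from any Hailo tool):

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_missing(reference_boxes, hef_boxes, iou_thr=0.5):
    """Count reference detections with no HEF detection above iou_thr."""
    missing = 0
    for ref in reference_boxes:
        if not any(iou(ref, det) >= iou_thr for det in hef_boxes):
            missing += 1
    return missing
```

Running this per image over the 1000-image validation set gives a concrete "missing box" rate to compare against the ONNX model, which is easier to track across optimization settings than eyeballing images.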
I suspect that the environments for training the YOLO model, converting it to ONNX, and converting ONNX to HEF are different. I will try running all these steps in the same environment to see if that solves the issue.