I am encountering an issue while running inference on a Raspberry Pi with a Hailo-8L, using a YOLOv8n-based model from the Hailo Application Code Examples.
Problem Description
I trained a model using YOLOv8n and converted it to ONNX format. The precision remains high after the ONNX conversion. However, after converting the ONNX model to HEF using the Hailo AI Software Suite (inside a Docker environment) and running inference on Hailo-8L, the precision drops significantly, and the object detection results become incorrect.
I am unsure what is causing this issue. Could it be due to a problem during the model conversion process?
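For context, one thing I am checking on my side is that the preprocessing on the Pi matches the training pipeline, since a letterbox/normalization mismatch can look exactly like a quantization accuracy drop. This is only a minimal sketch of the YOLOv8-style letterbox resize I assume on both sides (function name and the cv2-free resize are mine, not from the Hailo examples):

```python
import numpy as np

def letterbox(img, new_shape=(640, 640), pad_value=114):
    """Resize keeping aspect ratio, pad the rest with gray (YOLOv8-style)."""
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)
    nh, nw = round(h * r), round(w * r)
    # nearest-neighbor resize via index mapping (avoids a cv2 dependency)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs]
    out = np.full((new_shape[0], new_shape[1], img.shape[2]), pad_value,
                  dtype=img.dtype)
    top = (new_shape[0] - nh) // 2
    left = (new_shape[1] - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out, r, (left, top)
```

If the HEF pipeline stretches the image instead of letterboxing it (or normalizes differently), the detections shift and scores collapse even though the weights are fine.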
Conversion Steps
Here are my conversion steps:
- Optimize Model
- No errors were reported during optimization.
- Command Line:
hailomz optimize --hw-arch hailo8l --har /local/workspace/yolov8s.har yolov8s --calib-path /local/shared_with_docker/images/train --classes 8
- Compile Model
- Some errors occurred during compilation. Could they be affecting the model's precision?
- Command Line:
hailomz compile yolov8s --hw-arch hailo8l --har /local/workspace/yolov8s.har --classes 8
Has anyone encountered a similar issue? Any suggestions on troubleshooting or improving the conversion process?
Thanks in advance for your help!