Problem With Model Optimization

@Nadav I’m using the DFC on Linux with the commands below to convert the ONNX to HEF, and I’m getting the same issue with yolov8n. Can you please tell me where I can add those quantization_params?

  1. hailo parser onnx yolov8n.onnx --net-name yolov8n --har-path yolov8n.har --start-node-names images --hw-arch hailo8l

  2. hailo optimize yolov8n.har --hw-arch hailo8l --use-random-calib-set calib_set.npy --output-har-path yolov8n_quantized_model.har

  3. hailo compiler yolov8n_quantized_model.har --hw-arch hailo8l --output-dir .
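For reference, quantization_params usually go into a model script (.alls) that is applied at the optimize step, either with the CLI’s model-script option (if your DFC version exposes one) or through the DFC Python API. Below is a minimal sketch using the Python `ClientRunner`; the layer names `conv42`/`conv53`/`conv63` are placeholders borrowed from the model zoo yolov8n script and will almost certainly differ for a custom parse, so check the names in your own HAR first.

```python
# Sketch: apply quantization_param via a model script, then optimize and save the HAR.
# Layer names below are placeholders -- look up the real ones in your parsed model.
import numpy as np
from hailo_sdk_client import ClientRunner

runner = ClientRunner(har="yolov8n.har", hw_arch="hailo8l")

# Model-script commands (same syntax as an .alls file):
alls = (
    "normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n"
    "quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])\n"
)
runner.load_model_script(alls)

calib_data = np.load("calib_set.npy")           # real, pre-processed images, shape (N, 640, 640, 3)
runner.optimize(calib_data)                     # quantize with the script applied
runner.save_har("yolov8n_quantized_model.har")  # then compile as in step 3 above
```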

Hi @Vivek_Malvi,
You’re using --use-random-calib-set; this will always give you wrong detections. Please use real images and the correct pre-processing code.
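As a sketch of what “real images plus the correct pre-processing” can look like for yolov8n, assuming a 640×640 RGB input and that normalization happens on-chip via the model script (so the calibration images stay in the 0–255 range); the folder name and image count are just examples:

```python
# Sketch: build calib_set.npy from real images with YOLO-style letterbox pre-processing.
import glob
import numpy as np
from PIL import Image

def letterbox(img, size=640, fill=114):
    """Resize keeping aspect ratio, then pad to size x size with a gray border."""
    w, h = img.size
    scale = size / max(w, h)
    img = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
    canvas = Image.new("RGB", (size, size), (fill, fill, fill))
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))
    return np.asarray(canvas, dtype=np.float32)

paths = sorted(glob.glob("calib_images/*.jpg"))[:1024]
calib = np.stack([letterbox(Image.open(p).convert("RGB")) for p in paths])
np.save("calib_set.npy", calib)  # then pass it with --calib-set-path, not --use-random-calib-set
```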

I tried that too. I used around 1000 images in the calibration set, but that didn’t help much.

Are you using a trained ONNX? Do the native results look good?

Yes, @Nadav. I’m using a trained ONNX. The native ONNX results are good; there are no issues with it. The issue only arises after the HEF conversion.
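One way to narrow this down is to compare the float (native) and quantized emulation outputs on the same HAR before compiling. A rough sketch, assuming the DFC Python API’s inference contexts are available in your version; `calib_set.npy` is reused here only as a convenient source of pre-processed images:

```python
# Sketch: run native vs. quantized emulation on the quantized HAR to localize the degradation.
import numpy as np
from hailo_sdk_client import ClientRunner, InferenceContext

runner = ClientRunner(har="yolov8n_quantized_model.har", hw_arch="hailo8l")
batch = np.load("calib_set.npy")[:8]  # a few pre-processed images

with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    native_out = runner.infer(ctx, batch)

with runner.infer_context(InferenceContext.SDK_QUANTIZED) as ctx:
    quant_out = runner.infer(ctx, batch)

# A large gap between the two points at the quantization step (calibration data or
# quantization_params), not at the compile step.
```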