I trained yolov8.pt on my own data and converted it to ONNX. When I tried to convert the ONNX model to a HEF, I got this error:
hailo_model_optimization.acceleras.utils.acceleras_exceptions.NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8s/conv63 due to unsupported required slope. Desired shift is 19.0, but op has only 8 data bits. This error raises when the data or weight range are not balanced. Mostly happens when using random calibration-set/weights, the calibration-set is not normalized properly or batch-normalization was not used during training
As the error message suggests, first check whether your model was trained with batch normalization. This is a general recommendation when training models that you want to run on any hardware that uses quantized data.
Second, make sure the model was actually trained on real data (not left with random weights), and use real images for the calibration set as well. Also make sure you trained the network for enough epochs.
Third, check that you normalized the calibration set.
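To illustrate the third point, here is a minimal sketch (not Hailo API code, just numpy) of building a calibration array whose normalization matches training. It assumes the model was trained with pixel values divided by 255, which is YOLOv8's default preprocessing; substitute whatever normalization your own training pipeline used:

```python
import numpy as np

def make_calib_set(images):
    """Build a calibration array matching the training-time preprocessing.

    `images` is a list of HxWx3 uint8 arrays. Here we only rescale to
    [0, 1] (assumed YOLOv8 default); in practice use the exact mean/std
    or scaling your training used, and use REAL images, not random data.
    """
    return np.stack([img.astype(np.float32) / 255.0 for img in images])

# toy arrays just to show shapes and value ranges; use real images in practice
imgs = [np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8) for _ in range(4)]
calib = make_calib_set(imgs)
print(calib.shape, calib.min() >= 0.0, calib.max() <= 1.0)
```

If the calibration images are fed in raw 0-255 while the model expects 0-1, the observed activation ranges are wrong by a factor of 255, which is exactly the kind of imbalance the error message describes.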
The reason is that our hardware uses quantized data and weights, so we compute with fewer bits while staying as close as possible to the result you would get with floating point. Quantization is much easier when the data has a limited range, which is normally the case with real data. With random data, that is often not the case.
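A toy sketch of why a limited range matters, using plain numpy uniform quantization (this is not the Hailo quantizer, just an illustration): a single outlier stretches the quantization range, so the 256 available 8-bit levels are spread thin and every value loses precision.

```python
import numpy as np

def quantize_error(x, bits=8):
    """Uniformly quantize x to 2**bits levels over its own range and
    return the mean absolute round-trip error."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels
    q = np.round((x - lo) / scale)   # integer codes 0..levels
    x_hat = q * scale + lo           # dequantized values
    return np.abs(x - x_hat).mean()

rng = np.random.default_rng(0)
narrow = rng.normal(0.0, 1.0, 100_000)  # well-behaved, limited range
wide = narrow.copy()
wide[0] = 1000.0                        # one outlier blows up the range

print(quantize_error(narrow), quantize_error(wide))
```

The version with the outlier shows a far larger mean error, which is the situation unbalanced data or weight ranges put the compiler in.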
Did you see/use the retraining docker from our Model Zoo?
My calibration images are a test set taken from my training dataset.
No errors occurred when I converted the sample yolov8s.onnx model with the same calibration images.
I followed that repo [GitHub - Hailo Model Zoo - Training - Yolov8]
As you said, it seems I made a mistake when training my model, but I don't know which part of the process was wrong.
Can you explain batch normalization in more detail?
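For reference, batch normalization standardizes each channel's activations to zero mean and unit variance, then applies a learned scale (gamma) and shift (beta); this keeps activation ranges bounded, which is what makes the later quantization step well-behaved. A minimal numpy sketch of the math (real frameworks such as PyTorch's BatchNorm2d also track running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch-normalize x of shape (N, C): standardize each channel
    over the batch, then scale by gamma and shift by beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# activations with an arbitrary mean/spread, e.g. a conv layer's output
x = np.random.default_rng(1).normal(5.0, 3.0, size=(256, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))  # ~0 and ~1 per channel
```

Without this per-channel standardization, different layers can produce wildly different activation ranges, and the quantizer then needs shifts larger than the 8 data bits allow, which is the NegativeSlopeExponentNonFixable failure above.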