YOLOv8m and YOLOv8l HEF compilation errors

Hi

I’m trying to convert custom-trained YOLOv8m and YOLOv8l models to HEF but I’m facing issues.

'.pt' to '.onnx'

from ultralytics import YOLO

# Load the custom-trained YOLOv8l model
model = YOLO("/home/sudhir/Documents/sudhir/runs/detect/jan9/weights/yolov8l_det.pt")

# Export the model to ONNX format
model.export(format="onnx", imgsz=640, opset=11)

Error:

[info] No shifts available for layer yolov8l/conv59/conv_op, using max shift instead. delta=0.2450
[info] No shifts available for layer yolov8l/conv57/conv_op, using max shift instead. delta=0.3101
[info] No shifts available for layer yolov8l/conv57/conv_op, using max shift instead. delta=0.1550
[info] No shifts available for layer yolov8l/conv56/conv_op, using max shift instead. delta=0.1112
[info] No shifts available for layer yolov8l/conv56/conv_op, using max shift instead. delta=0.0556
[info] No shifts available for layer yolov8l/conv58/conv_op, using max shift instead. delta=0.2432
[info] No shifts available for layer yolov8l/conv57/conv_op, using max shift instead. delta=0.1550
[info] No shifts available for layer yolov8l/conv56/conv_op, using max shift instead. delta=0.0556
[info] No shifts available for layer yolov8l/conv58/conv_op, using max shift instead. delta=0.1216
[info] No shifts available for layer yolov8l/conv56/conv_op, using max shift instead. delta=0.0556
[info] No shifts available for layer yolov8l/conv58/conv_op, using max shift instead. delta=0.1216
[info] No shifts available for layer yolov8l/conv29/conv_op, using max shift instead. delta=0.4032
[info] No shifts available for layer yolov8l/conv29/conv_op, using max shift instead. delta=0.2016
[info] No shifts available for layer yolov8l/conv43/conv_op, using max shift instead. delta=0.1203
[info] No shifts available for layer yolov8l/conv49/conv_op, using max shift instead. delta=0.5232
[info] No shifts available for layer yolov8l/conv49/conv_op, using max shift instead. delta=0.2616
[info] No shifts available for layer yolov8l/conv48/conv_op, using max shift instead. delta=0.5643
[info] No shifts available for layer yolov8l/conv48/conv_op, using max shift instead. delta=0.2821
[info] No shifts available for layer yolov8l/conv49/conv_op, using max shift instead. delta=0.2616
[info] No shifts available for layer yolov8l/conv56/conv_op, using max shift instead. delta=0.0556
[info] No shifts available for layer yolov8l/conv58/conv_op, using max shift instead. delta=0.1216
[info] No shifts available for layer yolov8l/conv57/conv_op, using max shift instead. delta=0.1550

hailo_model_optimization.acceleras.utils.acceleras_exceptions.NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8l/conv74 due to unsupported required slope. Desired shift is 9.0, but op has only 8 data bits. This error raises when the data or weight range are not balanced. Mostly happens when using random calibration-set/weights, the calibration-set is not normalized properly or batch-normalization was not used during training.

Alls file:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
model_optimization_config(calibration, calibset_size=128)
performance_param(compiler_optimization_level=max)

post_quantization_optimization(finetune, policy=enabled,
    loss_layer_names=[conv97, conv82, conv67, conv25, conv100, conv88, conv73, conv103, conv89, conv74],
    loss_types=[l2rel, l2rel, l2rel, l2rel, l2rel, l2rel, l2rel, ce, ce, ce],
    loss_factors=[1, 1, 1, 1, 2, 2, 2, 2, 2, 2], epochs=4, batch_size=2)

model_optimization_flavor(compression_level=0)
change_output_activation(conv74, sigmoid)
change_output_activation(conv89, sigmoid)
change_output_activation(conv103, sigmoid)
nms_postprocess("…/…/postprocess_config/yolov8l_nms_config.json", meta_arch=yolov8, engine=cpu)

How can I resolve this?

Hey @sudhir ,

Here are some suggestions for resolving the errors you’re hitting while converting your YOLOv8 model to a HEF:

1. Calibration with a Proper Dataset

  • Ensure your calibration dataset closely resembles the data the model will handle in deployment. This improves quantization accuracy and avoids imbalances in data or weights. Normalize the calibration set appropriately using:
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
  • Use the Hailo Dataflow Compiler’s calibration tooling to validate the dataset (a small calibration-set preparation sketch follows this list).
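A minimal sketch of how such a calibration set could be prepared (the calib_images/ folder and the output file name are assumptions, not part of your setup). Because the .alls script already applies normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0]) on-chip, the images are kept as raw 0–255 values:

import glob
import numpy as np
from PIL import Image

# Collect up to 128 deployment-like images (matches calibset_size=128 in the .alls script)
calib_images = []
for path in sorted(glob.glob("calib_images/*.jpg"))[:128]:
    # Resize to the export resolution (imgsz=640) and keep raw 0-255 RGB values;
    # normalization is handled by the normalization() command in the .alls file
    img = Image.open(path).convert("RGB").resize((640, 640))
    calib_images.append(np.asarray(img, dtype=np.float32))

calib_set = np.stack(calib_images)   # shape: (N, 640, 640, 3)
np.save("calib_set.npy", calib_set)  # feed this file to the optimization step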

2. Adjust Quantization and Fine-Tuning

  • Use post_quantization_optimization with specific loss layer names and factors. Based on your script:
post_quantization_optimization(
    finetune, 
    policy=enabled, 
    loss_layer_names=[conv97, conv82, conv67, conv25, conv100, conv88, conv73, conv103, conv89, conv74],
    loss_types=[l2rel, l2rel, l2rel, l2rel, l2rel, l2rel, l2rel, ce, ce, ce],
    loss_factors=[1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    epochs=4, 
    batch_size=2
)
  • This will help the model adapt to the quantization process more effectively.

3. Verify Model Structure

  • Ensure your ONNX model contains only layers supported by the Hailo Dataflow Compiler. Unsupported operations or improperly defined layers may result in errors. Use the Hailo Model Zoo and Dataflow Compiler guides for validation and optimization; a quick way to list the ONNX operator types is sketched below.
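Here is a small sketch that lists the operator types in the exported ONNX so they can be checked against the supported-layers list (it assumes the export wrote yolov8l_det.onnx next to the .pt file; adjust the path as needed):

import onnx
from collections import Counter

onnx_path = "/home/sudhir/Documents/sudhir/runs/detect/jan9/weights/yolov8l_det.onnx"
model = onnx.load(onnx_path)
onnx.checker.check_model(model)  # basic structural validation of the graph

# Count each operator type appearing in the graph
op_counts = Counter(node.op_type for node in model.graph.node)
for op, count in sorted(op_counts.items()):
    print(f"{op}: {count}")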

4. Inspect Batch Normalization

  • Verify that batch normalization layers were properly trained and included in your YOLOv8 model. Improperly trained normalization layers can lead to activation range issues during quantization.
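As a quick sanity check (a sketch, assuming the ultralytics YOLO wrapper exposes the underlying torch module as model.model), you can confirm that BatchNorm2d layers are present in the trained .pt and that their running statistics look reasonable before export:

import torch
from ultralytics import YOLO

model = YOLO("/home/sudhir/Documents/sudhir/runs/detect/jan9/weights/yolov8l_det.pt")

# Collect all BatchNorm2d modules from the underlying torch model
bn_layers = [m for m in model.model.modules() if isinstance(m, torch.nn.BatchNorm2d)]
print(f"Found {len(bn_layers)} BatchNorm2d layers")

# Inspect a few running statistics; extreme values can hint at unbalanced activation ranges
for bn in bn_layers[:5]:
    print(bn.running_mean.abs().max().item(), bn.running_var.max().item())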

5. Compiler and Performance Settings

  • Use the following settings to enhance performance:
performance_param(compiler_optimization_level=max)
  • These settings optimize the model for the Hailo hardware during compilation.

6. Output Activation Adjustments

  • Adjust the output activation for specific layers as needed to ensure compatibility:
change_output_activation(conv74, sigmoid)
change_output_activation(conv89, sigmoid)
change_output_activation(conv103, sigmoid)