Convert ONNX to HEF file

I have converted an ONNX model to a HEF file before, but my latest model fails. If I compile using my previous ONNX there is no problem, but the latest ONNX I exported keeps failing. I'm so frustrated with Hailo at this point, please help me.

(hailodfc) tia@LAPTOP-HGJ8C6MM:~$ hailomz compile yolov8s --ckpt=best_ir8.onnx --hw-arch hailo8l --calib-path train/images --classes 2 --performance
Start run for network yolov8s …
Initializing the hailo8l runner…
[info] Translation started on ONNX model yolov8s
[info] Restored ONNX model yolov8s (completion time: 00:00:00.71)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.13)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8s/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8s (completion time: 00:00:03.02)
[info] Saved HAR to: /home/tia/yolov8s.har
Using generic alls script found in /home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8s.alls because there is no specific hardware alls
Preparing calibration data…
[info] Loading model script commands to yolov8s from /home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8s.alls
[info] Loading model script commands to yolov8s from string
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's less data than the recommended amount (1024), and there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] Starting Mixed Precision
[info] Mixed Precision is done (completion time is 00:00:00.90)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100%|███████████████████████████████████████████████████████| 64/64 [01:41<00:00, 1.59s/entries]
[info] Statistics Collector is done (completion time is 00:01:45.92)
[info] Starting Fix zp_comp Encoding
[info] Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] No shifts available for layer yolov8s/conv40/conv_op, using max shift instead. delta=0.4800
[info] No shifts available for layer yolov8s/conv40/conv_op, using max shift instead. delta=0.2400
[warning] Reducing output bits of yolov8s/conv42 by 7.0 bits (More than half)
Traceback (most recent call last):
  File "/home/tia/hailodfc/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 250, in compile
    _ensure_optimized(runner, logger, args, network_info)
  File "/home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
    optimize_model(
  File "/home/tia/hailo_model_zoo/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 326, in optimize_model
    runner.optimize(calib_feed_callback)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2093, in optimize
    self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1935, in _optimize
    self._sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1045, in full_quantization
    self._full_acceleras_run(self.calibration_data, data_type)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1229, in _full_acceleras_run
    optimization_flow.run()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 306, in wrapper
    return func(self, *args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 326, in run
    step_func()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 124, in parent_wrapper
    func(self, *args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 345, in step1
    self.core_quantization()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 403, in core_quantization
    self._create_hw_params()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 451, in _create_hw_params
    create_hw_params.run()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/optimization_algorithm.py", line 50, in run
    return super().run()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/algorithm_base.py", line 150, in run
    self._run_int()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/create_hw_params/create_hw_params.py", line 340, in _run_int
    comp_to_retry = self._create_hw_params_component(matching_component_group)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/create_hw_params/create_hw_params.py", line 210, in _create_hw_params_component
    retry_negative_exp_list = self._hanlde_negative_exponent(layer, matching_component_group)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/create_hw_params/create_hw_params.py", line 226, in _hanlde_negative_exponent
    algo.run()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/optimization_algorithm.py", line 50, in run
    return super().run()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/algorithm_base.py", line 150, in run
    self._run_int()
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/neg_exponent_fixer/neg_exp_fixer.py", line 77, in _run_int
    l_fix = self.fix_output(l_fix)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/neg_exponent_fixer/neg_exp_fixer.py", line 123, in fix_output
    self._log_negative_exponent_shift(l_fix)
  File "/home/tia/hailodfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/neg_exponent_fixer/neg_exp_fixer.py", line 227, in _log_negative_exponent_shift
    raise NegativeSlopeExponentNonFixable(
hailo_model_optimization.acceleras.utils.acceleras_exceptions.NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8s/conv53 due to unsupported required slope. Desired shift is 8.0, but op has only 8 data bits. This error raises when the data or weight range are not balanced. Mostly happens when using random calibration-set/weights, the calibration-set is not normalized properly or batch-normalization was not used during training.

Hey @NUR_BINTI_RAHMAT,

Welcome to the Hailo Community!

This is an issue with how your latest ONNX export interacts with quantization during compilation, not with the compiler setup itself.

That error you’re seeing is pretty specific:

NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8s/conv53 due to unsupported required slope. Desired shift is 8.0, but op has only 8 data bits.

What's happening is that one of your convolution layers (conv53) needs a quantization shift that doesn't fit into the 8 data bits available, so there is no valid fixed-point encoding for its output. This usually comes down to a mismatch between your calibration data and the model's expectations - things like improper normalization, missing batch normalization during training, or heavily skewed weights.
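To see why an unbalanced range forces an impossible shift, here is a toy illustration (not Hailo's actual algorithm, just the underlying fixed-point arithmetic): the smaller a tensor's dynamic range, the larger the exponent needed to represent its scale, until it exceeds the available data bits.

```python
import numpy as np

# Toy model of fixed-point scaling: the shift is the power-of-two
# exponent needed to map a tensor's range onto signed 8-bit values.
def required_shift(values, data_bits=8):
    scale = np.max(np.abs(values)) / (2 ** (data_bits - 1) - 1)
    return int(-np.floor(np.log2(scale)))

rng = np.random.default_rng(0)
balanced = rng.normal(0.0, 1.0, 10_000)  # well-conditioned activations
skewed = balanced * 1e-3                 # e.g. inputs never scaled to 0-1
print(required_shift(balanced))          # modest shift, representable
print(required_shift(skewed))            # ~10 bits larger: may not fit
```

The skewed tensor needs roughly 10 extra bits of shift, which is exactly the kind of gap that produces "Desired shift is 8.0, but op has only 8 data bits".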

Here’s what I’d suggest trying:

First, check your calibration dataset:
You mentioned using 64 images, but you really want at least 1024 diverse, representative images. More importantly, make sure they’re preprocessed exactly the same way as during training - same normalization (0-1 range), color channel order (RGB vs BGR), resizing, etc. Using random or mismatched images almost always causes these quantization failures.
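As a minimal sketch of what "preprocessed exactly the same way" means (the details are assumptions here - match them to your actual training pipeline, and reuse its resizing/letterboxing):

```python
import numpy as np

# Convert a loader's BGR uint8 frame (e.g. from OpenCV) into the RGB,
# 0-1 float32 layout a typical YOLOv8 training pipeline uses.
def preprocess_for_calibration(img_bgr: np.ndarray) -> np.ndarray:
    img_rgb = img_bgr[..., ::-1]                  # BGR -> RGB
    return img_rgb.astype(np.float32) / 255.0     # uint8 0-255 -> float 0-1

# sanity check on a synthetic frame
frame = np.random.default_rng(1).integers(0, 256, (640, 640, 3), dtype=np.uint8)
calib = preprocess_for_calibration(frame)
```

Getting the channel order or value range wrong here is one of the most common causes of the quantization failure you hit.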

Re-export your ONNX model:
Make sure you’re exporting with fused BatchNorm layers. In PyTorch, this usually means setting your model to eval mode before export. If you fine-tuned the YOLO model, double-check you’re not accidentally exporting a partially trained version.

Try some workarounds:
Since you're already at optimization level 0, you could also try giving the problematic layers 16-bit precision via the model script instead of the default 8-bit. The Model Zoo's .alls files have examples of how to add such commands.
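If you go that route, a per-layer override in the .alls model script would look roughly like this (a sketch - verify the exact syntax against your Dataflow Compiler version's documentation; `yolov8s/conv53` is the layer named in your error):

```
quantization_param(yolov8s/conv53, precision_mode=a16_w16)
```

Append a line like this to the yolov8s.alls you're compiling with and rerun the compile.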

Quick next step:
Try re-exporting your best_ir8.onnx (making sure BatchNorms are fused), gather a proper calibration dataset of at least 1024 representative real samples, and run the compile again.

Hope this helps!