Need help converting ONNX to HEF

Hi all,

I’m currently working on converting a YOLOv8 ONNX model to HEF format using the Hailo toolchain.

I have the hailo_ai_sw Docker running. I trained the model on Google Colab, exported it to ONNX, and then tried to load it with hailomz_compile. However, it gave an error saying the model's ir_version is 11, which is higher than the supported version 10.

To work around this, I modified the model’s ir_version to 10 and saved it again. After that, it did load, but I ran into the following error:

===========================================================================
• inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 3), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 3), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 3), dtype=float32)']
• training=False
• kwargs=<class 'inspect._empty'>
(hailo_virtualenv) hailo@kunxianhuan
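
For reference, the ir_version change mentioned above was essentially this (file names are placeholders):

import onnx

# Load the exported model, overwrite the ir_version field, and save it back out.
model = onnx.load("best_75ep.onnx")
model.ir_version = 10  # this only relabels the file; the graph itself is unchanged
onnx.save(model, "best_75ep_ir10.onnx")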

Then, I tried training directly using the Hailo Model Zoo YOLOv8 training Docker with the same dataset. But for some reason, the training loss stays NaN across all epochs, so the model isn't learning.

Any guidance would be really appreciated!

Hey @Henry_Huang,

Issue 1: ir_version = 11 not supported by hailomz_compile

When you modify ir_version manually in the ONNX file, you're only changing a header field; it doesn't actually downgrade the graph or its opset versions, so the compatibility issue remains.

You need to export the model using ONNX opset 10 from your training environment:

yolo export format=onnx opset=10 imgsz=640 model=yourmodel.pt

Just make sure your Colab has compatible versions of ultralytics and onnx.
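
If you want to sanity-check what the export produced before running hailomz, a quick way to do it (the file name is just an example):

import onnx

# Print the fields relevant here: ir_version and the opset imports.
model = onnx.load("yourmodel.onnx")
print("ir_version:", model.ir_version)
print("opsets:", [(imp.domain or "ai.onnx", imp.version) for imp in model.opset_import])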

Issue 2: training loss stays NaN

Can you share the run log or more details? This could be caused by a few different things, such as:

  • Learning rate set too high for the new dataset
  • Dataset annotations with invalid values (like negative bounding boxes or bad labels)
  • Custom dataset format not matching what Hailo expects for YOLO format

If you can post the actual error or log, I can help narrow down what’s going wrong.
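
In the meantime, a quick self-check for the second point above (bad annotations): scan the YOLO txt labels for malformed or out-of-range values. This is only a sketch and the labels path is a placeholder, adjust it to your dataset layout:

from pathlib import Path

# YOLO txt labels: one "class x_center y_center width height" line per object,
# with the four coordinates normalized to [0, 1].
labels_dir = Path("datasets/mydata/labels/train")  # placeholder path
for txt in labels_dir.glob("*.txt"):
    for i, line in enumerate(txt.read_text().splitlines(), 1):
        parts = line.split()
        if len(parts) != 5 or not parts[0].isdigit():
            print(f"{txt}:{i} malformed line: {line!r}")
            continue
        try:
            coords = [float(v) for v in parts[1:]]
        except ValueError:
            print(f"{txt}:{i} non-numeric value: {line!r}")
            continue
        if any(v < 0.0 or v > 1.0 for v in coords):
            print(f"{txt}:{i} coordinate out of range: {line!r}")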

Hi Omria,

Thank you for the reply. I've tried that method with the command below on Colab:


!yolo export model=best_75ep.pt imgsz=640 format=onnx opset=10  

When I load it in the hailomz compiler, it still gives me the same error:

raise Exception(f"Encountered error during parsing: {err}") from None
Exception: Encountered error during parsing: Your model ir_version 11 is higher than the checker's (10).

Below is how I install and import ultralytics on Colab:

!pip install ultralytics==8.2.103 -q
from IPython import display
display.clear_output()
import ultralytics
ultralytics.checks()

Output:

Ultralytics YOLOv8.2.103 🚀 Python-3.11.12 torch-2.6.0+cu124 CUDA:0 (Tesla T4, 15095MiB)
Setup complete ✅ (2 CPUs, 12.7 GB RAM, 41.4/112.6 GB disk)

Oh ok, what ONNX version are you using in Colab? From onnx 1.18 onwards the exported ir_version is 11, and in that case the opset you pass doesn't matter, because the checker rejects the file on ir_version alone.

Try pinning an older ONNX release in your Colab environment, for example onnx==1.17.0 or earlier. Those releases write models with ir_version 10 or lower, which the Hailo parser accepts.
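
For example, something along these lines in Colab (the exact pin is an assumption, adjust to whatever resolves cleanly with your ultralytics version):

!pip install "onnx<1.18" -q
!yolo export model=best_75ep.pt imgsz=640 format=onnx opset=10

Then load the exported file again with onnx.load and check that ir_version now prints 10 or lower before moving it into the Hailo Docker.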