Compiling a custom ONNX to HEF on Hailo-8

Hi there! I ran into an issue while compiling my custom ONNX model to a HEF.
I ran the following command:

hailomz compile yolov8n --hw-arch hailo8 --har ./best.har --classes 1 --calib-path ./path/to/calibration/imgs/dir/ --ckpt ./weights/best.onnx

It failed with the following error:

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
  • inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 80), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 80), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 80), dtype=float32)']
  • training=False
  • kwargs=<class 'inspect._empty'>

But when I ran the following command:

hailomz compile yolov8n --hw-arch hailo8 --har ./yolov8n.har --classes 1 --calib-path ./path/to/calibration/imgs/dir/ --ckpt ./weights/best.onnx

It worked and generated a .hef file.
I am wondering why I have to use “yolov8n.har” instead of my custom .har file, which was generated with this command:

hailomz parse --hw-arch hailo8 --ckpt ./best.onnx yolov8n

Hi @js12459743,
Using yolov8n.har means that you’re not compiling your model, but most likely the default one. My guess is that, among other things, the nodes that connect the NN model to the NMS were changed, and that is why the tool is unable to connect them.
I would suggest either taking a look at the suggestions the parser makes for the output node names, or examining the net in Netron, identifying the nodes just before the NMS, and updating the network YAML accordingly.
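For reference, here is a minimal sketch of what that end-node mapping looks like in the Model Zoo config (hailo_model_zoo/cfg/networks/yolov8n.yaml, or the base yolov8 yaml it inherits from). The six Conv node names below are what a stock Ultralytics YOLOv8n export typically produces and are an assumption on my part; open your best.onnx in Netron and substitute the actual names. Each node corresponds to one of the six tensors listed in your error, a 64-channel box branch and a class branch per scale:

# Excerpt from the network yaml; node names assume a stock
# Ultralytics YOLOv8n export - verify yours in Netron first.
parser:
  nodes:
  - null                              # start node: keep the model's default input
  - - /model.22/cv2.0/cv2.0.2/Conv    # 80x80 box branch   -> (None, 80, 80, 64)
    - /model.22/cv3.0/cv3.0.2/Conv    # 80x80 class branch -> (None, 80, 80, 80)
    - /model.22/cv2.1/cv2.1.2/Conv    # 40x40 box branch
    - /model.22/cv3.1/cv3.1.2/Conv    # 40x40 class branch
    - /model.22/cv2.2/cv2.2.2/Conv    # 20x20 box branch
    - /model.22/cv3.2/cv3.2.2/Conv    # 20x20 class branch

After fixing the node names, re-run your hailomz parse command so the regenerated best.har picks up the correct end nodes, then compile with --har ./best.har as before.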