Hi there! I ran into an issue while compiling my custom ONNX model to a HEF.
I ran the following command:
hailomz compile yolov8n --hw-arch hailo8 --har ./best.har --classes 1 --calib-path ./path/to/calibration/imgs/dir/ --ckpt ./weights/best.onnx
and it failed with the error:
ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.
Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
• inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 80), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 80), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 80), dtype=float32)']
• training=False
• kwargs=<class 'inspect._empty'>
However, when I ran the following command instead:
hailomz compile yolov8n --hw-arch hailo8 --har ./yolov8n.har --classes 1 --calib-path ./path/to/calibration/imgs/dir/ --ckpt ./weights/best.onnx
it worked and generated a .hef file.
I am wondering why I have to use “yolov8n.har” instead of my own custom .har file, which was generated by the command:
hailomz parse --hw-arch hailo8 --ckpt ./best.onnx yolov8n