Error on my first compile attempt.

First time using the Hailo AI Kit, so I’m really excited (and dumb).

So the problem is that I was trying to compile a face-detection .onnx into a .hef file using the 4.21.0 SDK installed in my WSL setup (Docker version).

This was the command I ran:
hailomz compile yolov8s --ckpt=/local/workspace/yolov8_special.onnx --hw-arch hailo8l --classes 2 --performance --calib-path /local/workspace/calib

After that, things went really well until this error popped up:

Traceback (most recent call last):
  File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 248, in compile
    ensure_optimized(runner, logger, args, network_info)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in ensure_optimized
    optimize_model(
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 353, in optimize_model
    runner.optimize(calib_feed_callback)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2201, in optimize
    result = self._optimize(
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2020, in _optimize
    checkpoint_info = self.sdk_backend.full_quantization(
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1196, in full_quantization
    new_checkpoint_info = self.full_acceleras_run(
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1434, in full_acceleras_run
    new_checkpoint_info = self.optimization_flow_runner(optimization_flow, checkpoint_info)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 2088, in optimization_flow_runner
    optimization_flow.run()
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 239, in wrapper
    return func(self, *args, **kwargs)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 357, in run
    step_func()
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 154, in parent_wrapper
    self.build_model()
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 260, in build_model
    model.compute_output_shape(shapes)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1153, in compute_output_shape
    return self.compute_and_verify_output_shape(input_shape, verify_layer_inputs_shape=False)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1187, in compute_and_verify_output_shape
    layer_output_shape = layer.compute_output_shape(layer_input_shapes)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/keras/engine/base_layer.py", line 917, in compute_output_shape
    outputs = self(inputs, training=False)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_file71l3aryh.py", line 41, in tf__call
    outputs = ag__.converted_call(ag__.ld(self).call_core, (ag__.ld(inputs), ag__.ld(training)), dict(**ag__.ld(kwargs)), fscope)
  File "/tmp/__autograph_generated_file7e60k_i0.py", line 90, in tf__call_core
    ag__.if_stmt(ag__.ld(self).postprocess_type in [ag__.ld(PostprocessType).NMS, ag__.ld(PostprocessType).BBOX_DECODER], if_body_3, else_body_3, get_state_3, set_state_3, ('do_return', 'retval_'), 2)
  File "/tmp/__autograph_generated_file7e60k_i0.py", line 22, in if_body_3
    retval_ = ag__.converted_call(ag__.ld(self).bbox_decoding_and_nms_call, (ag__.ld(inputs),), dict(is_bbox_decoding_only=ag__.ld(self).postprocess_type == ag__.ld(PostprocessType).BBOX_DECODER), fscope)
  File "/tmp/__autograph_generated_filey10zt0kk.py", line 116, in tf__bbox_decoding_and_nms_call
    ag__.if_stmt(ag__.ld(self).meta_arch in [ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5, ag__.ld(NMSOnCpuMetaArchitectures).YOLOX], if_body_5, else_body_5, get_state_5, set_state_5, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_', 'inputs'), 4)
  File "/tmp/__autograph_generated_filey10zt0kk.py", line 113, in else_body_5
    ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5_SEG, if_body_4, else_body_4, get_state_4, set_state_4, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_'), 4)
  File "/tmp/__autograph_generated_filey10zt0kk.py", line 110, in else_body_4
    ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV8, if_body_3, else_body_3, get_state_3, set_state_3, ('decoded_bboxes', 'detection_score'), 2)
  File "/tmp/__autograph_generated_filey10zt0kk.py", line 69, in if_body_3
    (decoded_bboxes, detection_score) = ag__.converted_call(ag__.ld(self).yolov8_decoding_call, (ag__.ld(inputs),), dict(offsets=[0.5, 0.5]), fscope)
  File "/tmp/__autograph_generated_filezvw0788s.py", line 87, in tf__yolov8_decoding_call
    decoded_bboxes = ag__.converted_call(ag__.ld(tf).expand_dims, (ag__.ld(decoded_bboxes),), dict(axis=2), fscope)
ValueError: Exception encountered when calling layer "yolov8_nms_postprocess" (type HailoPostprocess).

in user code:

File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/base_hailo_none_nn_core_layer.py", line 45, in call  *
    outputs = self.call_core(inputs, training, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 123, in call_core  *
    is_bbox_decoding_only=self.postprocess_type == PostprocessType.BBOX_DECODER,
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 157, in bbox_decoding_and_nms_call  *
    decoded_bboxes, detection_score = self.yolov8_decoding_call(inputs, offsets=[0.5, 0.5])
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 375, in yolov8_decoding_call  *
    decoded_bboxes = tf.expand_dims(decoded_bboxes, axis=2)

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
  • inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 1), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 1), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 1), dtype=float32)']
  • training=False
  • kwargs=<class 'inspect._empty'>

I’d greatly appreciate any guidance on this problem.

Hey @Nguyen_Hung,

Welcome to the Hailo Community!

You’re absolutely on the right track here! The issue you’re hitting during the hailomz compile step is pretty common, and it usually comes down to a mismatch between your model and the post-processing expectations of the YOLOv8 decoding logic.

What’s happening with that error:

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

This is happening in:

decoded_bboxes = tf.expand_dims(decoded_bboxes, axis=2)

Basically, decoded_bboxes is coming back as None, which means the decoding function (yolov8_decoding_call) failed to produce a tensor. This usually happens when the decoder receives input tensors with unexpected shapes or names, or when something is misconfigured in the model's YAML file.
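For intuition, here's a two-line sketch (plain TensorFlow, not Hailo code) that reproduces the same failure mode, assuming only that decoded_bboxes ends up as None:

import tensorflow as tf

decoded_bboxes = None  # stand-in for what yolov8_decoding_call effectively returned
# Raises a ValueError complaining that None values are not supported
# (the exact wording differs between graph and eager mode)
tf.expand_dims(decoded_bboxes, axis=2)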

Root causes and how to fix them:

1. ONNX output format mismatch
The YOLOv8 decoder expects tensors with a very specific layout, naming, and shape. If your custom ONNX file doesn't match what Hailo expects, the post-process layers break.

Fix: Make sure your ONNX model is exported using Ultralytics’ CLI in the expected format:

yolo export model=best.pt format=onnx opset=11 imgsz=640
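If you'd rather export from Python, the equivalent call through the Ultralytics API looks like this (assuming best.pt is your trained checkpoint, as in the CLI line above):

from ultralytics import YOLO

model = YOLO("best.pt")
# Same as the CLI export: ONNX opset 11, fixed 640x640 input
model.export(format="onnx", opset=11, imgsz=640)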

2. Missing or incorrect YAML file
The YAML file tells the Model Zoo how to parse your network. If the output node names, layout, or metadata (like the postprocess type) are wrong or missing, compilation fails.

Fix: Use or copy hailo_model_zoo/cfg/networks/yolov8s.yaml and adjust it:

network:
  network_name: yolov8s
  outputs:
    - name: yolov8_nms_postprocess
      meta_arch: yolov8
      postprocess: nms

Make sure it maps correctly to the actual output nodes in your ONNX file.

Here’s what I’d try:

  1. Check your ONNX model inputs/outputs (a slightly fuller inspection sketch follows this list):

    python -c "import onnx; model = onnx.load('yolov8_special.onnx'); print([o.name for o in model.graph.output])"
    
  2. Update your YAML
    If the output names printed above don't match what your YAML references, update the YAML to reflect the correct names and postprocess type.
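Here's that fuller inspection sketch: it prints every graph input and output with its name and static shape, so you can line them up against your YAML (the .onnx path is the one from your compile command):

import onnx

model = onnx.load("/local/workspace/yolov8_special.onnx")

def dims(value_info):
    # dim_value is 0 for symbolic dims, so fall back to the symbolic name
    return [d.dim_value or d.dim_param for d in value_info.type.tensor_type.shape.dim]

for inp in model.graph.input:
    print("input :", inp.name, dims(inp))
for out in model.graph.output:
    print("output:", out.name, dims(out))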

Run the flow step by step instead of a direct compile; it gives you more information and control over each stage's parameters:

hailomz parse yolov8s \
  --ckpt /local/workspace/yolov8_special.onnx \
  --yaml /local/workspace/yolov8_custom.yaml \
  --output-dir /local/workspace/yolov8_parsed

hailomz optimize yolov8s \
  --har /local/workspace/yolov8_parsed/yolov8s.har \
  --yaml /local/workspace/yolov8_custom.yaml \
  --calib-path /local/workspace/calib \
  --output-dir /local/workspace/yolov8_optimized \
  --classes 2

hailomz compile yolov8s \
  --hef-name yolov8s_face \
  --hw-arch hailo8l \
  --optimized-model /local/workspace/yolov8_optimized/yolov8s_optimized.har \
  --output-dir /local/workspace/yolov8_hef
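Once the compile succeeds, you can sanity-check the resulting HEF with HailoRT's CLI (assuming hailortcli is installed, and that the output lands at the path below given the --hef-name above):

hailortcli parse-hef /local/workspace/yolov8_hef/yolov8s_face.hef

It should list the network group and the input/output streams, which is a quick way to confirm the output layout looks right.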

Also, FYI: we'll be adding face recognition to our apps soon, so if you want a ready-made HEF file, that might be an option down the road.