Cannot use hailomz to compile the model

I am using HailoRT 4.20, so I used hailo_model_zoo v2.14.

(.hailo_convert) jiahao@jiahao-pc:~/hat_detection/hailo_model_zoo$ hailomz parse --hw-arch hailo8 --ckpt ./yolov8n_best.onnx yolov8n
[info] No GPU chosen and no suitable GPU found, falling back to CPU.
<Hailo Model Zoo INFO> Start run for network yolov8n ...
<Hailo Model Zoo INFO> Initializing the runner...
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.04)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.13)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.41)
[info] According to recommendations, retrying parsing with end node names: ['/model.22/Concat_3'].
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.02)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.11)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/Concat_3'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.46)
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.02)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.13)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.48)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
Traceback (most recent call last):
  File "/home/jiahao/.hailo_convert/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 203, in parse
    parse_model(runner, network_info, ckpt_path=args.ckpt_path, results_dir=args.results_dir, logger=logger)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 146, in parse_model
    runner.save_har(results_dir / f"{network_info.network.network_name}.har")
AttributeError: 'NoneType' object has no attribute 'save_har'
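For anyone else hitting this `AttributeError` ('NoneType' object has no attribute 'save_har'): it means the parse step returned no runner object, which is typically a symptom of mismatched tool versions rather than a problem with the model itself. A quick sanity check before digging further (these are the standard Hailo CLI entry points; the expected version pairing shown in the comments is the one that ends up working later in this thread):

```shell
# Verify that the Model Zoo and Dataflow Compiler come from matching
# releases before parsing (Model Zoo v2.14 pairs with DFC 3.30).
hailomz --version   # expect: Hailo Model Zoo v2.14.0
hailo --version     # expect: Hailo Dataflow Compiler v3.30.0
```

If the two versions come from different releases, reinstall the matching DFC wheel into the same virtual environment before retrying the parse.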

(.hailo_convert) jiahao@jiahao-pc:~/hat_detection/hailo_model_zoo$ hailomz info mobilenet_v1
[info] No GPU chosen and no suitable GPU found, falling back to CPU.
<Hailo Model Zoo INFO> Start run for network mobilenet_v1 ...
<Hailo Model Zoo INFO> 
	task:                    classification
	input_shape:             224x224x3
	output_shape:            1001
	operations:              1.14G
	parameters:              4.22M
	framework:               tensorflow
	training_data:           imagenet train
	validation_data:         imagenet val
	eval_metric:             Accuracy (top1)
	full_precision_result:   70.97
	source:                  https://github.com/tensorflow/models/tree/v1.13.0/research/slim
	license_url:             https://github.com/tensorflow/models/blob/master/LICENSE

Even when I use the default yolov8n, it does not work.
(.hailo_convert) jiahao@jiahao-pc:~/hat_detection$ hailomz parse --hw-arch hailo8 yolov8n
[info] No GPU chosen and no suitable GPU found, falling back to CPU.
<Hailo Model Zoo INFO> Start run for network yolov8n ...
<Hailo Model Zoo INFO> Initializing the runner...
yolov8n.zip: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.5M/10.5M [00:03<00:00, 3.20MB/s]
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.04)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.14)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.58)
[info] According to recommendations, retrying parsing with end node names: ['/model.22/Concat_3'].
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.02)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.10)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/Concat_3'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.62)
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.02)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.12)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.63)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
Traceback (most recent call last):
  File "/home/jiahao/.hailo_convert/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 203, in parse
    parse_model(runner, network_info, ckpt_path=args.ckpt_path, results_dir=args.results_dir, logger=logger)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 146, in parse_model
    runner.save_har(results_dir / f"{network_info.network.network_name}.har")
AttributeError: 'NoneType' object has no attribute 'save_har'

I made a mistake and installed the wrong version of the DFC.

Now I have installed it as below:

(.hailo_convert) jiahao@jiahao-pc:~/hat_detection$ hailomz --version
Hailo Model Zoo v2.14.0
(.hailo_convert) jiahao@jiahao-pc:~/hat_detection$ hailo --version
[info] Current Time: 17:07:12, 10/21/25
[info] CPU: Architecture: x86_64, Model: 11th Gen Intel(R) Core™ i7-11700F @ 2.50GHz, Number Of Cores: 16, Utilization: 1.5%
[info] Memory: Total: 31GB, Available: 20GB
[info] System info: OS: Linux, Kernel: 6.8.0-85-generic
[info] Hailo DFC Version: 3.30.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running hailo --version
Hailo Dataflow Compiler v3.30.0
But it still does not work:

(.hailo_convert) jiahao@jiahao-pc:~/hat_detection$ hailomz compile yolov8n --hw-arch hailo8 --har ./yolov8n.har
<Hailo Model Zoo INFO> Start run for network yolov8n ...
<Hailo Model Zoo INFO> Initializing the hailo8 runner...
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to yolov8n from /home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
Traceback (most recent call last):
  File "/home/jiahao/.hailo_convert/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 250, in compile
    runner = ensure_optimized(runner, logger, args, network_info)
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in ensure_optimized
    optimize_model(
  File "/home/jiahao/hat_detection/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 353, in optimize_model
    runner.optimize(calib_feed_callback)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2128, in optimize
    self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1970, in _optimize
    self.sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1125, in full_quantization
    self.full_acceleras_run(self.calibration_data, data_type)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1319, in full_acceleras_run
    optimization_flow.run()
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 306, in wrapper
    return func(self, *args, **kwargs)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 335, in run
    step_func()
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 123, in parent_wrapper
    self.build_model()
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 249, in build_model
    model.compute_output_shape(shapes)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1091, in compute_output_shape
    return self.compute_and_verify_output_shape(input_shape, verify_layer_inputs_shape=False)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1125, in compute_and_verify_output_shape
    layer_output_shape = layer.compute_output_shape(layer_input_shapes)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/keras/engine/base_layer.py", line 917, in compute_output_shape
    outputs = self(inputs, training=False)
  File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_fileurl437nm.py", line 41, in tf__call
    outputs = ag__.converted_call(ag__.ld(self).call_core, (ag__.ld(inputs), ag__.ld(training)), dict(**ag__.ld(kwargs)), fscope)
  File "/tmp/__autograph_generated_filefv4s66c7.py", line 90, in tf__call_core
    ag__.if_stmt(ag__.ld(self).postprocess_type in [ag__.ld(PostprocessType).NMS, ag__.ld(PostprocessType).BBOX_DECODER], if_body_3, else_body_3, get_state_3, set_state_3, ('do_return', 'retval_'), 2)
  File "/tmp/__autograph_generated_filefv4s66c7.py", line 22, in if_body_3
    retval_ = ag__.converted_call(ag__.ld(self).bbox_decoding_and_nms_call, (ag__.ld(inputs),), dict(is_bbox_decoding_only=ag__.ld(self).postprocess_type == ag__.ld(PostprocessType).BBOX_DECODER), fscope)
  File "/tmp/__autograph_generated_fileuxsqbv40.py", line 116, in tf__bbox_decoding_and_nms_call
    ag__.if_stmt(ag__.ld(self).meta_arch in [ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5, ag__.ld(NMSOnCpuMetaArchitectures).YOLOX], if_body_5, else_body_5, get_state_5, set_state_5, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_', 'inputs'), 4)
  File "/tmp/__autograph_generated_fileuxsqbv40.py", line 113, in else_body_5
    ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5_SEG, if_body_4, else_body_4, get_state_4, set_state_4, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_'), 4)
  File "/tmp/__autograph_generated_fileuxsqbv40.py", line 110, in else_body_4
    ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV8, if_body_3, else_body_3, get_state_3, set_state_3, ('decoded_bboxes', 'detection_score'), 2)
  File "/tmp/__autograph_generated_fileuxsqbv40.py", line 69, in if_body_3
    (decoded_bboxes, detection_score) = ag__.converted_call(ag__.ld(self).yolov8_decoding_call, (ag__.ld(inputs),), dict(offsets=[0.5, 0.5]), fscope)
  File "/tmp/__autograph_generated_file1n8jf9z1.py", line 87, in tf__yolov8_decoding_call
    decoded_bboxes = ag__.converted_call(ag__.ld(tf).expand_dims, (ag__.ld(decoded_bboxes),), dict(axis=2), fscope)
ValueError: Exception encountered when calling layer "yolov8_nms_postprocess" (type HailoPostprocess).

in user code:

File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/base_hailo_none_nn_core_layer.py", line 45, in call  *
    outputs = self.call_core(inputs, training, **kwargs)
File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 123, in call_core  *
    is_bbox_decoding_only=self.postprocess_type == PostprocessType.BBOX_DECODER,
File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 157, in bbox_decoding_and_nms_call  *
    decoded_bboxes, detection_score = self.yolov8_decoding_call(inputs, offsets=[0.5, 0.5])
File "/home/jiahao/.hailo_convert/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 375, in yolov8_decoding_call  *
    decoded_bboxes = tf.expand_dims(decoded_bboxes, axis=2)

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
  • inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 1), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 1), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 1), dtype=float32)']
  • training=False
  • kwargs=<class 'inspect._empty'>

I forgot to add the --classes flag. After adding it, everything works well.
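For anyone landing here with the same ValueError: the stock yolov8n configuration assumes the 80 COCO classes, so the NMS postprocess shapes do not match a custom model's heads unless the class count is passed explicitly. The tensor shapes in the traceback above (score branches of shape (None, H, W, 1)) indicate a single-class model here, so a sketch of the corrected command might look like this (adjust the count for your own model):

```shell
# Pass the custom model's class count so the generated NMS postprocess
# config matches the network's output heads (here 1, per the traceback).
hailomz compile yolov8n --hw-arch hailo8 --har ./yolov8n.har --classes 1
```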
