Optimization error in the Docker

I am trying to convert my custom ONNX model to HEF format. I installed the Docker image from the official website and am working inside it. I created a HAR file at the parsing stage, and now I have a problem at the optimization step.
(hailo_virtualenv) hailo@sensorama-ubuntu-pc:/local/shared_with_docker$ hailomz optimize --hw-arch hailo8l --har ./yolov8n.har yolov8n
Start run for network yolov8n …
Initializing the hailo8l runner…
Preparing calibration data…
Traceback (most recent call last):
File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 224, in optimize
calib_feed_callback = prepare_calibration_data(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 283, in prepare_calibration_data
calib_feed_callback = make_calibset_callback(network_info, preproc_callback, calib_path)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 244, in make_calibset_callback
data_feed_cb = _make_dataset_callback(network_info, preproc_callback, calib_path, dataset_name)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 221, in _make_dataset_callback
return _make_data_feed_callback(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 200, in _make_data_feed_callback
raise FileNotFoundError(f"Couldn't find dataset in {data_path}. Please refer to docs/DATA.rst.")
FileNotFoundError: Couldn't find dataset in /local/shared_with_docker/.hailomz/models_files/coco/2021-06-18/coco_calib2017.tfrecord. Please refer to docs/DATA.rst.

Hi @vlad.purhin,

A calibration dataset is needed for the optimization step; it is a small subset that represents the data that will be used for inference. For more information on calibration sets, please refer to our guide.

You can pass the path to the calibration dataset with the --calib-path flag. You can see all available arguments with hailomz optimize -h.
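If your data is a folder of images, a calibration folder can be as simple as a random, representative sample of it. A minimal sketch (the paths, accepted extensions, and sample size of 64 are placeholders, not Hailo requirements):

```python
# Sketch: build a small calibration folder by sampling images from a larger
# dataset directory. Adjust paths, extensions, and sample size to your data.
import random
import shutil
from pathlib import Path

def make_calib_set(dataset_dir, calib_dir, n_images=64, seed=0):
    """Copy a random subset of images from dataset_dir into calib_dir."""
    src = Path(dataset_dir)
    dst = Path(calib_dir)
    dst.mkdir(parents=True, exist_ok=True)
    images = sorted(p for p in src.iterdir()
                    if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
    random.Random(seed).shuffle(images)  # fixed seed for reproducibility
    for img in images[:n_images]:
        shutil.copy(img, dst / img.name)
    return min(n_images, len(images))
```

The resulting folder can then be passed to `hailomz optimize` via --calib-path.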

Hello @nina-vilela,
Thank you for your reply and for pointing me to the documentation. I changed the optimization command and added the path to my dataset with --calib-path. Now I get a different error:
(hailo_virtualenv) hailo@sensorama-ubuntu-pc:/local/shared_with_docker$ hailomz optimize --hw-arch hailo8l --har ./yolov8n.har --calib-path YOLO_syn_data_06_14_24_v2 --yaml hef_config.yaml
Start run for network yolov8n …
Initializing the hailo8l runner…
Preparing calibration data…
[info] Loading model script commands to yolov8n from /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
Traceback (most recent call last):
File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 227, in optimize
optimize_model(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 326, in optimize_model
runner.optimize(calib_feed_callback)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2093, in optimize
self.optimize(calib_data, data_type=data_type, work_dir=work_dir)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1935, in optimize
self.sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1045, in full_quantization
self.full_acceleras_run(self.calibration_data, data_type)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1229, in full_acceleras_run
optimization_flow.run()
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 306, in wrapper
return func(self, *args, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 326, in run
step_func()
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
result = method(*args, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 123, in parent_wrapper
self.build_model()
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
result = method(*args, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 240, in build_model
model.compute_output_shape(shapes)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1039, in compute_output_shape
return self.compute_and_verify_output_shape(input_shape, verify_layer_inputs_shape=False)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1073, in compute_and_verify_output_shape
layer_output_shape = layer.compute_output_shape(layer_input_shapes)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/keras/engine/base_layer.py", line 917, in compute_output_shape
outputs = self(inputs, training=False)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_fileumm8no94.py", line 41, in tf__call
outputs = ag__.converted_call(ag__.ld(self).call_core, (ag__.ld(inputs), ag__.ld(training)), dict(**ag__.ld(kwargs)), fscope)
File "/tmp/__autograph_generated_file8do5el1r.py", line 90, in tf__call_core
ag__.if_stmt(ag__.ld(self).postprocess_type in [ag__.ld(PostprocessType).NMS, ag__.ld(PostprocessType).BBOX_DECODER], if_body_3, else_body_3, get_state_3, set_state_3, ('do_return', 'retval_'), 2)
File "/tmp/__autograph_generated_file8do5el1r.py", line 22, in if_body_3
retval_ = ag__.converted_call(ag__.ld(self).bbox_decoding_and_nms_call, (ag__.ld(inputs),), dict(is_bbox_decoding_only=ag__.ld(self).postprocess_type == ag__.ld(PostprocessType).BBOX_DECODER), fscope)
File "/tmp/__autograph_generated_filegmrox_sd.py", line 99, in tf__bbox_decoding_and_nms_call
ag__.if_stmt(ag__.ld(self).meta_arch in [ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5, ag__.ld(NMSOnCpuMetaArchitectures).YOLOX], if_body_4, else_body_4, get_state_4, set_state_4, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_', 'inputs'), 4)
File "/tmp/__autograph_generated_filegmrox_sd.py", line 96, in else_body_4
ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5_SEG, if_body_3, else_body_3, get_state_3, set_state_3, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_'), 4)
File "/tmp/__autograph_generated_filegmrox_sd.py", line 93, in else_body_3
ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV8, if_body_2, else_body_2, get_state_2, set_state_2, ('decoded_bboxes', 'detection_score'), 2)
File "/tmp/__autograph_generated_filegmrox_sd.py", line 69, in if_body_2
(decoded_bboxes, detection_score) = ag__.converted_call(ag__.ld(self).yolov8_decoding_call, (ag__.ld(inputs),), None, fscope)
File "/tmp/__autograph_generated_fileh6fbpps8.py", line 82, in tf__yolov8_decoding_call
decoded_bboxes = ag__.converted_call(ag__.ld(tf).expand_dims, (ag__.ld(decoded_bboxes),), dict(axis=2), fscope)
ValueError: Exception encountered when calling layer "yolov8_nms_postprocess" (type HailoPostprocess).

in user code:

File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/base_hailo_none_nn_core_layer.py", line 45, in call  *
    outputs = self.call_core(inputs, training, **kwargs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 123, in call_core  *
    is_bbox_decoding_only=self.postprocess_type == PostprocessType.BBOX_DECODER,
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 157, in bbox_decoding_and_nms_call  *
    decoded_bboxes, detection_score = self.yolov8_decoding_call(inputs)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 367, in yolov8_decoding_call  *
    decoded_bboxes = tf.expand_dims(decoded_bboxes, axis=2)

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
• inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 2), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 2), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 2), dtype=float32)']
• training=False
• kwargs=<class 'inspect._empty'>

What’s in the yaml file that you are using?

hef_config.yaml:
base:

It looks good. The alls_script points to a yolov8n.alls; could you please also share the commands in that file?

I found only a single yolov8n.alls file:
(hailo_virtualenv) hailo@sensorama-ubuntu-pc:/local/shared_with_docker$ find /local -name "yolov8n.alls"
/local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls

@vlad.purhin The error message means that there is an issue with the NMS postprocess, which is configured in the yolov8n_nms_config.json mentioned in the yolov8n.alls.

This can happen when the model being used has different layer names than the original pre-trained one. You may have to change the bbox decoding layer names (after parsing):
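For illustration only, a small script along these lines could patch the layer names in the config. The `bbox_decoders`/`reg_layer`/`cls_layer` keys here are assumptions based on typical ModelZoo NMS configs, so check the actual schema of your yolov8n_nms_config.json before using anything like this:

```python
# Sketch: rewrite bbox-decoding layer names in an NMS config JSON so they
# match the layer names of your parsed model. The key names used below are
# assumptions -- verify them against your own yolov8n_nms_config.json.
import json

def rename_decoder_layers(config_path, name_map, out_path):
    """Replace layer names in the NMS config according to name_map."""
    with open(config_path) as f:
        cfg = json.load(f)
    for decoder in cfg.get("bbox_decoders", []):
        for key in ("reg_layer", "cls_layer"):
            if decoder.get(key) in name_map:
                decoder[key] = name_map[decoder[key]]
    with open(out_path, "w") as f:
        json.dump(cfg, f, indent=4)
```

The new layer names can be read off the parsed HAR in Netron (see below).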

You can visualize parsed models with netron.app.

Another option is editing the model script so that it does not use the ModelZoo NMS config JSON. This can be done by removing the NMS JSON path from the nms_postprocess command in the model script. We have a preview feature that automatically finds the correct bbox decoding layers during parsing, which is then used during the optimization.
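For example, the edit would look roughly like this (the exact path and arguments in your yolov8n.alls may differ; this is an illustrative fragment, not the verbatim file contents):

```
# Original model-script command, pointing at the ModelZoo NMS config:
nms_postprocess("../../postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)

# Edited command, letting the tool auto-detect the bbox decoding layers:
nms_postprocess(meta_arch=yolov8, engine=cpu)
```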