How to compile a grayscale 1x1280x1280 YoloV8 model with DFC?

We are using the Hailo DFC to compile a grayscale 1x1280x1280 YoloV8n model and end up with the following error: “yolov8n/input_layer1 has 1 features and given 3 std values. They must be equal.”
What can we do to compile a grayscale model using Hailo DFC?
Thanks for your help,
Klaus
Output
Start run for network yolov8n …
Initializing the hailo8l runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.06)
WARNING: failed to run “Gather” op (name is “/model.22/Gather”), skip…
WARNING: failed to run “Add” op (name is “/model.22/Add”), skip…
WARNING: failed to run “Div” op (name is “/model.22/Div”), skip…
WARNING: failed to run “Mul” op (name is “/model.22/Mul_1”), skip…
WARNING: failed to run “Gather” op (name is “/model.22/Gather”), skip…
WARNING: failed to run “Add” op (name is “/model.22/Add”), skip…
WARNING: failed to run “Div” op (name is “/model.22/Div”), skip…
WARNING: failed to run “Mul” op (name is “/model.22/Mul_1”), skip…
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.38)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.0/cv2.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv3.2/cv3.2.2/Conv /model.22/cv2.2/cv2.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/cv2.0/cv2.0.2/Conv’, ‘/model.22/cv3.0/cv3.0.2/Conv’, ‘/model.22/cv2.1/cv2.1.2/Conv’, ‘/model.22/cv3.1/cv3.1.2/Conv’, ‘/model.22/cv2.2/cv2.2.2/Conv’, ‘/model.22/cv3.2/cv3.2.2/Conv’.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.07)
[info] Saved HAR to: /home/jupyter/hailoTutorial/yolov8n.har
Using generic alls script found in /home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls because there is no specific hardware alls
Preparing calibration data…
[info] Loading model script commands to yolov8n from /home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
Traceback (most recent call last):
File “/opt/conda/bin/hailomz”, line 33, in
sys.exit(load_entry_point(‘hailo-model-zoo’, ‘console_scripts’, ‘hailomz’)())
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main.py”, line 122, in main
run(args)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main.py”, line 111, in run
return handlers[args.command](args)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main_driver.py”, line 250, in compile
_ensure_optimized(runner, logger, args, network_info)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main_driver.py”, line 91, in _ensure_optimized
optimize_model(
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/core/main_utils.py”, line 324, in optimize_model
optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/core/main_utils.py”, line 310, in optimize_full_precision_model
runner.optimize_full_precision(calib_data=calib_feed_callback)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_common/states/states.py”, line 16, in wrapped_func
return func(self, *args, **kwargs)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py”, line 1996, in optimize_full_precision
self._optimize_full_precision(calib_data=calib_data, data_type=data_type)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py”, line 1999, in _optimize_full_precision
self._sdk_backend.optimize_full_precision(calib_data=calib_data, data_type=data_type)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py”, line 1497, in optimize_full_precision
model, params = self._apply_model_modification_commands(model, params, update_model_and_params)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py”, line 1388, in _apply_model_modification_commands
model, params = command.apply(model, params, hw_consts=self.hw_arch.consts)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_modifications_commands.py”, line 324, in apply
return norm_adder.add_normalization_layers()
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/tools/normalization_layers_addition.py”, line 74, in add_normalization_layers
raise NormalizationLayersAdditionException(
hailo_sdk_client.tools.normalization_layers_addition.NormalizationLayersAdditionException: yolov8n/input_layer1 has 1 features and given 3 std values. They must be equal

Hi @Klaus,

According to the log, you are using the generic yolov8n.alls model script from the Hailo Model Zoo.
That model script includes on-chip normalization on 3 channels:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])

Please make a copy of the model script, and modify the normalization command for a single input channel (i.e., single values of mean and standard deviation):

normalization1 = normalization([0.0], [255.0])

Finally, use the new model script for the model conversion.
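For example, a minimal sketch of that workflow (the paths and file names here are only placeholders, hailo8l is taken from your log, and --model-script is the hailomz option that points the conversion at your copied alls):

cp hailo_model_zoo/cfg/alls/generic/yolov8n.alls ./yolov8n_gray.alls
# edit ./yolov8n_gray.alls and change the normalization line to:
#   normalization1 = normalization([0.0], [255.0])
hailomz compile yolov8n --ckpt <YOUR-ONNX> --hw-arch hailo8l --calib-path <YOUR-CALIB-DIR> --classes <NUM-CLASSES> --model-script ./yolov8n_gray.alls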
Let me know if this works.


Thanks for your hint. We changed the normalization line to
normalization1 = normalization([0.0], [255.0])
and this avoided the old error message. But now we get a new error message:
Start run for network yolov8n …
Initializing the hailo8l runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.30)
WARNING: failed to run “Gather” op (name is “/model.22/Gather”), skip…
WARNING: failed to run “Add” op (name is “/model.22/Add”), skip…
WARNING: failed to run “Div” op (name is “/model.22/Div”), skip…
WARNING: failed to run “Mul” op (name is “/model.22/Mul_1”), skip…
WARNING: failed to run “Gather” op (name is “/model.22/Gather”), skip…
WARNING: failed to run “Add” op (name is “/model.22/Add”), skip…
WARNING: failed to run “Div” op (name is “/model.22/Div”), skip…
WARNING: failed to run “Mul” op (name is “/model.22/Mul_1”), skip…
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.99)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.0/cv2.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv3.2/cv3.2.2/Conv /model.22/cv2.2/cv2.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/cv2.0/cv2.0.2/Conv’, ‘/model.22/cv3.0/cv3.0.2/Conv’, ‘/model.22/cv2.1/cv2.1.2/Conv’, ‘/model.22/cv3.1/cv3.1.2/Conv’, ‘/model.22/cv2.2/cv2.2.2/Conv’, ‘/model.22/cv3.2/cv3.2.2/Conv’.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.76)
[info] Saved HAR to: /home/jupyter/hailoTutorial/yolov8n.har
Using generic alls script found in /home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls because there is no specific hardware alls
Preparing calibration data…
[info] Loading model script commands to yolov8n from /home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
Traceback (most recent call last):
File “/opt/conda/bin/hailomz”, line 33, in
sys.exit(load_entry_point(‘hailo-model-zoo’, ‘console_scripts’, ‘hailomz’)())
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main.py”, line 122, in main
run(args)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main.py”, line 111, in run
return handlers[args.command](args)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main_driver.py”, line 250, in compile
_ensure_optimized(runner, logger, args, network_info)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main_driver.py”, line 91, in _ensure_optimized
optimize_model(
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/core/main_utils.py”, line 324, in optimize_model
optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/core/main_utils.py”, line 310, in optimize_full_precision_model
runner.optimize_full_precision(calib_data=calib_feed_callback)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_common/states/states.py”, line 16, in wrapped_func
return func(self, *args, **kwargs)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py”, line 1996, in optimize_full_precision
self._optimize_full_precision(calib_data=calib_data, data_type=data_type)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py”, line 1999, in _optimize_full_precision
self._sdk_backend.optimize_full_precision(calib_data=calib_data, data_type=data_type)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py”, line 1497, in optimize_full_precision
model, params = self._apply_model_modification_commands(model, params, update_model_and_params)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py”, line 1388, in _apply_model_modification_commands
model, params = command.apply(model, params, hw_consts=self.hw_arch.consts)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py”, line 387, in apply
pp_creator = create_nms_postprocess(
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py”, line 1767, in create_nms_postprocess
pp_creator.prepare_hn_and_weights(hw_consts, engine, dfl_on_nn_core=dfl_on_nn_core)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py”, line 1125, in prepare_hn_and_weights
super().prepare_hn_and_weights(
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py”, line 1089, in prepare_hn_and_weights
self.add_postprocess_layer_to_hn()
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py”, line 1040, in add_postprocess_layer_to_hn
raise NMSConfigPostprocessException(f"The layer {encoded_layer.name} doesn’t have one output layer")
hailo_sdk_client.tools.core_postprocess.nms_postprocess.NMSConfigPostprocessException: The layer yolov8n/conv41 doesn’t have one output layer

We tried to solve the error “The layer yolov8n/conv41 doesn’t have one output layer” by changing yolov8n.alls to the following content:
normalization1 = normalization([0.0], [255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess(meta_arch=yolov8, engine=cpu)
allocator_param(width_splitter_defuse=disabled)
quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])

and received the following error message:

Start run for network yolov8n …
Initializing the hailo8l runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.07)
WARNING: failed to run “Gather” op (name is “/model.22/Gather”), skip…
WARNING: failed to run “Add” op (name is “/model.22/Add”), skip…
WARNING: failed to run “Div” op (name is “/model.22/Div”), skip…
WARNING: failed to run “Mul” op (name is “/model.22/Mul_1”), skip…
WARNING: failed to run “Gather” op (name is “/model.22/Gather”), skip…
WARNING: failed to run “Add” op (name is “/model.22/Add”), skip…
WARNING: failed to run “Div” op (name is “/model.22/Div”), skip…
WARNING: failed to run “Mul” op (name is “/model.22/Mul_1”), skip…
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.43)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.0/cv2.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv3.2/cv3.2.2/Conv /model.22/cv2.2/cv2.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/cv2.0/cv2.0.2/Conv’, ‘/model.22/cv3.0/cv3.0.2/Conv’, ‘/model.22/cv2.1/cv2.1.2/Conv’, ‘/model.22/cv3.1/cv3.1.2/Conv’, ‘/model.22/cv2.2/cv2.2.2/Conv’, ‘/model.22/cv3.2/cv3.2.2/Conv’.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.19)
[info] Saved HAR to: /home/jupyter/hailoTutorial/yolov8n.har
Using generic alls script found in /home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls because there is no specific hardware alls
Preparing calibration data…
[info] Loading model script commands to yolov8n from /home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
[info] The layer yolov8n/conv39 was detected as reg_layer.
Traceback (most recent call last):
File “/opt/conda/bin/hailomz”, line 33, in
sys.exit(load_entry_point(‘hailo-model-zoo’, ‘console_scripts’, ‘hailomz’)())
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main.py”, line 122, in main
run(args)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main.py”, line 111, in run
return handlers[args.command](args)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main_driver.py”, line 250, in compile
_ensure_optimized(runner, logger, args, network_info)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/main_driver.py”, line 91, in _ensure_optimized
optimize_model(
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/core/main_utils.py”, line 324, in optimize_model
optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
File “/home/jupyter/hailoTutorial/calbdata/hailo_model_zoo/hailo_model_zoo/core/main_utils.py”, line 310, in optimize_full_precision_model
runner.optimize_full_precision(calib_data=calib_feed_callback)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_common/states/states.py”, line 16, in wrapped_func
return func(self, *args, **kwargs)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py”, line 1996, in optimize_full_precision
self._optimize_full_precision(calib_data=calib_data, data_type=data_type)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py”, line 1999, in _optimize_full_precision
self._sdk_backend.optimize_full_precision(calib_data=calib_data, data_type=data_type)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py”, line 1497, in optimize_full_precision
model, params = self._apply_model_modification_commands(model, params, update_model_and_params)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py”, line 1388, in _apply_model_modification_commands
model, params = command.apply(model, params, hw_consts=self.hw_arch.consts)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py”, line 385, in apply
self._update_config_file(hailo_nn)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py”, line 546, in _update_config_file
self._update_config_layers(hailo_nn)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py”, line 596, in _update_config_layers
self._set_yolo_config_layers(hailo_nn)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py”, line 652, in _set_yolo_config_layers
self._set_anchorless_yolo_config_layer(branch_layers, decoder, f_out)
File “/opt/conda/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py”, line 698, in _set_anchorless_yolo_config_layer
raise AllocatorScriptParserException(msg)
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.AllocatorScriptParserException: Cannot infer bbox conv layers automatically. Please specify the bbox layer in the json configuration file

Your help is very much appreciated!

Hi @Klaus,
Could you please paste the command you are using for the model conversion?

hailomz compile yolov8n --ckpt "./best.onnx" --hw-arch hailo8l --calib-path /home/jupyter/roboflow/grayscale/valid/images --classes 1 --performance
Hope that helps, @pierrem. Thanks, Klaus

@Klaus, the model script you are using tries to add the NMS to the model, and it seems something is going wrong there.
The NMS is not really part of the neural network, but it can be added at optimization time to be performed by HailoRT (engine=cpu, in this case).
I see that you changed the nms_postprocess command to use the automatic approach:

nms_postprocess(meta_arch=yolov8, engine=cpu)

This is generally fine, but to rule out that the automatic detection is causing the issue in this specific case, I would suggest passing the NMS config JSON file explicitly:

nms_postprocess("../../postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)

Please try the following:

  • Create a copy of the NMS config JSON

    • Modify the image_dims to match your input resolution (1280x1280)
    • Set classes to 1
    • Check whether the output layer names changed (it looks like they did, judging from your log): open the HAR with Netron and adjust the reg_layer and cls_layer names in your config JSON accordingly (see the JSON sketch after this list)
  • Create a copy of the model script and modify it like this:

    normalization1 = normalization([0.0], [255.0])
    change_output_activation(conv42, sigmoid) # change according to your model
    change_output_activation(conv53, sigmoid) # change according to your model
    change_output_activation(conv63, sigmoid) # change according to your model
    nms_postprocess("<PATH-TO-YOUR-NMS-CONFIG-JSON>", meta_arch=yolov8, engine=cpu)
    
    allocator_param(width_splitter_defuse=disabled)
    
  • Run the command

    hailomz compile yolov8n --ckpt "./best.onnx" --hw-arch hailo8l --calib-path /home/jupyter/roboflow/grayscale/valid/images --classes 1 --model-script <PATH-TO-YOUR-MODEL-SCRIPT>
    

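For reference, here is a rough sketch of how that NMS config JSON is typically structured. It is illustrative only, following the Model Zoo's yolov8 config format: take the actual thresholds, strides, and the reg_layer/cls_layer names from your copy of yolov8n_nms_config.json and from the HAR opened in Netron, since they can differ for a 1280x1280 model.

{
    "nms_scores_th": 0.2,
    "nms_iou_th": 0.7,
    "image_dims": [1280, 1280],
    "max_proposals_per_class": 100,
    "classes": 1,
    "regression_length": 15,
    "background_removal": false,
    "background_removal_index": 0,
    "bbox_decoders": [
        { "name": "bbox_decoder41", "stride": 8, "reg_layer": "conv41", "cls_layer": "conv42" },
        { "name": "bbox_decoder52", "stride": 16, "reg_layer": "conv52", "cls_layer": "conv53" },
        { "name": "bbox_decoder62", "stride": 32, "reg_layer": "conv62", "cls_layer": "conv63" }
    ]
}

Based on the steps above, image_dims, classes, and the reg_layer/cls_layer names are the fields you would edit; the rest can stay as in the original file.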
Let me know if this works.


@pierrem your hint solved our problem. Thanks a lot, Klaus