Questions about the calibration dataset

For my custom semantic segmentation model, I am following the conversion pipeline from PyTorch to ONNX, then HAR, optimized HAR, and finally HEF.

I checked that my PyTorch model and my ONNX model produce the same outputs.

My PyTorch model takes a normalized image as input (normalized in the dataloader with mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) and produces an output.

In this case, when I optimize the HAR with calibration data using the command:
hailo optimize --hw-arch hailo8 --calib-set-path …/data_npy/calib_512_1500.npy PIDNet.har

should I use a normalized numpy array (created with the same normalization used in PyTorch) for the calibration .npy file, rather than the raw image values in the [0, 255] range?
Note that my custom model doesn’t have an input normalization layer.

I proceeded this way, but the segmentation results from my custom HEF are very poor.

Welcome to the Hailo Community @jhyuk!

The best approach is to add the normalization to the model so it is performed on the Hailo device. This can be done by adding a normalization command to the model script (.alls file). Please note that the mean and std values for this command should be given in the [0, 255] range (i.e., the 0-1 floating-point values multiplied by 255).
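For example, with the ImageNet statistics mentioned above (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), the model script entry would look something like the following sketch; the values are just those floats multiplied by 255, and normalization1 is only a placeholder layer name:

normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])

With normalization handled inside the model, the calibration .npy should then contain the raw [0, 255] image values rather than the PyTorch-normalized ones.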

I have created a calib_dataset.npy, but now I don't know how to create the model.alls file or what to write in it. Could you give one step-by-step example so that I can write it for any model available in ONNX format?
Thanks

Maybe our optimization tutorial can help.

Let us know if you still have any questions after going over it.
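As a rough illustration (a sketch only, not taken from the tutorial), a calibration set is typically just a numpy array of images stacked along the first axis and saved as a .npy file, assuming an (N, H, W, C) layout; the folder name, file name, and 512x512 input size below are placeholders:

import glob
import numpy as np
from PIL import Image

# Placeholder folder and input size; adjust to your own model.
image_paths = sorted(glob.glob("calib_images/*.jpg"))
images = []
for path in image_paths:
    img = Image.open(path).convert("RGB").resize((512, 512))
    images.append(np.asarray(img, dtype=np.float32))  # raw [0, 255] values

calib_set = np.stack(images)  # shape: (N, H, W, C)
np.save("calib_set.npy", calib_set)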

How do I get these values for any model?
alls = "normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])\n"

The mean and std will be whatever was used for training. If they are floating-point values between 0 and 1, multiply them by 255.
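For example (an illustrative calculation only; substitute the statistics your model was trained with):

mean = [0.485, 0.456, 0.406]  # training mean, 0-1 range
std = [0.229, 0.224, 0.225]   # training std, 0-1 range
print([m * 255 for m in mean])  # ~[123.675, 116.28, 103.53]
print([s * 255 for s in std])   # ~[58.395, 57.12, 57.375]

These are the numbers that go into the normalization(...) command in the model script.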

Thanks. I just wanted to know: is there any guide that explains how to create this file (e.g. yolov8s.alls) for any model?

hailo optimize yolov8s.har --calib-set-path calib_set.npy --model-script yolov8s.alls

The content of this file is:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])

change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("./yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)

#allocator_param(merge_min_layer_utilization=0.01)
#allocator_param(automatic_ddr=False)

context_switch_param(mode=disabled)
performance_param(fps=250)

I got this from the Model Zoo, but my use case is to convert any open-source ONNX model to HEF, so for that I use this optimization command:

hailo optimize yolov8s.har --calib-set-path calib_set.npy --model-script yolov8s.alls

So how do I create this for any open-source model, and what other detail files, if any, are needed at this step to create the quantized HAR file?

Regards
Avinash

Normalization is the only command we recommend always including in the model script.

The rest of the commands you see in the model script are optional and will vary from model to model.

The ones used for YOLOv8 are there to improve accuracy and FPS, and to add the postprocessing that is performed on the host.
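So for a generic open-source ONNX model, a minimal model script could contain nothing more than the normalization line for that model's own training statistics (a sketch; my_model.alls, the values, and the layer name are placeholders):

normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])

and the optimization step is the same command shown earlier, pointing at that file:

hailo optimize my_model.har --calib-set-path calib_set.npy --model-script my_model.alls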

You can read about model script commands here.