Questions about calib dataset

For my custom semantic segmentation model, I am following the conversion pipeline from PyTorch to ONNX, HAR, HAR optimized, and finally HEF.

I checked that my torch model and ONNX model produce the same outputs.

My torch model takes a normalized image as input; the normalization is applied in the dataloader with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
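
For reference, the dataloader normalization is the usual torchvision-style preprocessing (this is just a sketch of that step, not my full pipeline):

```python
import torchvision.transforms as T

# Standard ImageNet normalization applied in the dataloader
preprocess = T.Compose([
    T.ToTensor(),  # uint8 HWC [0, 255] -> float CHW [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```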

In this case, when I optimize the HAR with calibration data using the command:
hailo optimize --hw-arch hailo8 --calib-set-path …/data_npy/calib_512_1500.npy PIDNet.har

should I use a normalized numpy array (created with the same normalization used in PyTorch) for the calibration data .npy file, rather than the raw image values in the [0, 255] range?
Note that my custom model doesn’t have an input normalization layer.
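
To make the question concrete, the "normalized" option would be built roughly like this (the image folder is a placeholder; 512 and 1500 match the file name above, and the NHWC layout is an assumption):

```python
import glob
import numpy as np
from PIL import Image

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

paths = sorted(glob.glob("calib_images/*.png"))[:1500]  # placeholder folder
imgs = []
for p in paths:
    img = np.array(Image.open(p).convert("RGB").resize((512, 512)), dtype=np.float32)
    imgs.append((img / 255.0 - mean) / std)  # same normalization as the PyTorch dataloader

calib = np.stack(imgs)  # (1500, 512, 512, 3), NHWC assumed
np.save("calib_512_1500.npy", calib)
```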

I proceeded this way, but the segmentation results from my custom HEF are very poor.

Welcome to the Hailo Community @jhyuk!

The best approach is to add the normalization to the model so that it is performed on the Hailo device. This can be done by adding a normalization command to the model script (.alls file). Please note that the mean and std values for this command must be given in the [0, 255] range, i.e., the PyTorch values multiplied by 255.
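
For the ImageNet values you listed, the model script entry would look something like this (the name `normalization1` is just a conventional label; adjust it and the layer placement to your model):

```
normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])
```

With the normalization handled on-device this way, the calibration .npy (and the images you feed at inference time) should stay in the raw [0, 255] range.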