How does Hailo on-device normalization differ from PyTorch normalization?

When training, I use the following normalization:
self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
But the documentation says that the input is expected to be in the range [0, 255], and it uses a model script command like this for on-device normalization:

normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])

I also got a warning when trying to optimize on images scaled to the range [0, 1]:


[warning] The expected calibration set should be [0, 255] when using an in-net normalization layer, but the range received is [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)].
[info] The calibration set seems to not be normalized, because the values range is [(-1.0, 1.0), (-1.0, 1.0)].
Since the neural core works in 8-bit (between 0 to 255), a quantization will occur on the CPU of the runtime platform.
Add a normalization layer to the model to offload the normalization to the neural core.
Refer to the user guide Hailo Dataflow Compiler user guide / Model Optimization / Optimization Related Model Script Commands / model_modification_commands / normalization for details.

My question is: since my model was trained on images in the range [0, 1], should I keep my original normalization, follow the documentation, or apply the normalization externally?

Hi @ayush,

When converting a model that was trained with a specific normalization, you have two options:

  • Perform the normalization on the host, and provide normalized (float) data to the Hailo device. It is important to apply the same normalization that you used during training, both for calibration and for inference.
  • Add a normalization layer on-chip, and provide uint8 data (not normalized) to the Hailo device. The warning from the Dataflow Compiler refers to this case.
    You can add the command below to your model script to perform the same normalization that you used during training on-chip:
    normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])
    These values correspond to the ones you used in PyTorch, multiplied by 255.
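To sanity-check that correspondence, here is a small NumPy sketch (illustrative only, not part of the Hailo API) showing that normalizing x/255 with the [0, 1] stats gives the same result as normalizing the raw [0, 255] values with the ×255 stats:

```python
import numpy as np

# PyTorch/ImageNet normalization stats for inputs scaled to [0, 1]
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# The same stats expressed for raw [0, 255] inputs, as used in the
# model script command: simply multiply by 255.
mean_255 = mean * 255   # ~[123.675, 116.28, 103.53]
std_255 = std * 255     # ~[58.395, 57.12, 57.375]

# For any image x in [0, 255]:
#   (x / 255 - mean) / std  ==  (x - 255 * mean) / (255 * std)
x = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3)).astype(float)
host_normalized = (x / 255.0 - mean) / std
on_chip_style = (x - mean_255) / std_255
```

Because dividing by 255 and then normalizing is algebraically identical to normalizing the raw values with the scaled stats, both options produce the same inputs to the network; the choice is only about where the work happens (host CPU vs. the device).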