How to include Normalize (mean/std) preprocessing during ONNX to HEF conversion?

Hi,
I’m working with a custom PyTorch model (MobileNetV2_UNet) that requires input normalization at inference time using the standard ImageNet statistics, mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].

I’m converting the model to .onnx and then to .hef for deployment on Hailo-8. Currently, I’m using the standard Hailo CLI tools (hailo parser, hailo optimize, and hailo compiler) in a shell script.

However, I would like to embed the normalization operation directly into the .hef, so I don’t have to manually normalize the input in my inference pipeline.

#!/bin/bash
# $ ./run_onnx2hef.sh --onnx weights/20250421/best_simplified.onnx calib_data/calib_data_300.npy
if [ "$#" -lt 3 ]; then
    echo "Usage: $0 --onnx filename.onnx calib_data.npy"
    exit 1
fi

if [ "$1" != "--onnx" ]; then
    echo "The first argument must be --onnx"
    exit 1
fi

file="$2"

if [[ "$file" == *.onnx ]]; then
    echo "Input file format is correct"
else
    echo "The provided file is not in .onnx format"
    exit 1
fi

# If you are using Miniconda installed in ~/miniconda3
if [ -f "$HOME/miniconda3/etc/profile.d/conda.sh" ]; then
    source "$HOME/miniconda3/etc/profile.d/conda.sh"
else
    echo "Cannot find conda initialization script"
    exit 1
fi

hailo_model_name="hailo8"

# Use parameter expansion to get filename and extension
filename=$(basename "$file")
name="${filename%%.*}"
extension="${filename##*.}"
output_folder=$(dirname "$file")
msg_folder="$output_folder"/compile_msg_"$hailo_model_name"

echo "Filename: $name"
echo "Extension: $extension"
echo "Folder path: $output_folder"

mkdir -p "$msg_folder"

parse_file="$output_folder"/"$name"_"$hailo_model_name"_parse.har
optimize_file="$output_folder"/"$name"_"$hailo_model_name"_optimized.har
compile_file="$output_folder"/"$name".hef
compile_file_har="$output_folder"/"$name"_"$hailo_model_name"_compiled.har

conda activate hailo

echo "---- Convert (parse) onnx to har ----"
hailo parser onnx "$file" --hw-arch "$hailo_model_name" --har-path "$parse_file" -y

echo "---- Optimize har ----"
hailo optimize "$parse_file" --hw-arch "$hailo_model_name" --calib-set-path "$3" --output-har-path "$optimize_file" --work-dir "$msg_folder"
# hailo optimize "$parse_file" --hw-arch "$hailo_model_name" --calib-set-path "$3" --model-script unet_mobilenet_v2.alls --output-har-path "$optimize_file"
# hailo optimize "$parse_file" --hw-arch "$hailo_model_name" --use-random-calib-set --output-har-path "$optimize_file"

echo "---- Compile har to hef ----"
hailo compiler "$optimize_file" --hw-arch "$hailo_model_name" --output-dir "$output_folder"
mv "$compile_file" "$output_folder"/"$name"_"$hailo_model_name".hef
echo "Compilation complete. Output file: ${output_folder}/${name}_${hailo_model_name}.hef"


What is the recommended way to do this?

  • Should I include the normalization inside the PyTorch model before exporting to ONNX?
  • Or is there a way to define this preprocessing step using hailomz and a YAML config?
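For the first option, one common approach is to wrap the trained model in a thin module that applies the normalization, and export the wrapper to ONNX so the operation becomes part of the graph. A minimal sketch (class and input shape are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    """Wraps a model so ImageNet normalization is baked into the ONNX graph."""
    def __init__(self, model):
        super().__init__()
        self.model = model
        # Registered as buffers so they are exported as constants in ONNX.
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, x):
        # Expects x already scaled to [0, 1]; normalizes, then runs the model.
        return self.model((x - self.mean) / self.std)

# Usage (MobileNetV2_UNet and the input size are placeholders for your own):
# wrapped = NormalizedModel(MobileNetV2_UNet()).eval()
# torch.onnx.export(wrapped, torch.randn(1, 3, 512, 512), "best_with_norm.onnx")
```

Note that if you bake normalization into the ONNX this way, you should not also add a normalization command on the Hailo side, and your calibration data should match whatever the exported graph now expects at its input.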

If you could provide a working example or point me to relevant documentation for embedding normalization via hailomz, that would be great.

Thanks in advance!

Your YAML file should point to an alls file that contains the following line:

normalization_in = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])

The values listed are the ImageNet mean/std scaled to the 0–255 input range (your values multiplied by 255: e.g. 0.485 × 255 = 123.675). Note that this means your calibration data should not be pre-normalized, since the normalization now runs on-chip.
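To tie this back to the script in the question: you can put that line in a standalone model script and pass it to `hailo optimize` via the `--model-script` flag (the script already has a commented-out variant doing exactly this). The file name below is illustrative:

```
# normalization.alls -- model script adding on-chip input normalization
# (ImageNet mean/std scaled to the 0-255 input range)
normalization_in = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])
```

Then the optimize step becomes `hailo optimize "$parse_file" --hw-arch hailo8 --calib-set-path "$3" --model-script normalization.alls --output-har-path "$optimize_file"`.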

Where can I find the YAML file? I didn’t see it.

Hi @Yen,
you can check out all the base YAML configs in the Hailo Model Zoo repository (the `cfg/networks` directory).

That directory contains most of the model-specific configuration files. You can identify one that closely matches your model architecture and modify it accordingly.