Normalization layer in .alls

Hi, I followed the guide "Guide to using the DFC to convert a modified YOLOv11 on Google Colab" from the Hailo Community to compile a custom YOLOv11. After compilation I noticed a significant degradation in performance, so I'm analyzing the process in depth.

I have a question: the .alls file includes a normalization layer, and the calibration dataset was also normalized. However, in Section 4.3.2 of the DFC documentation (In-Depth Optimization Tutorial, page 28), there is a note saying: "The format of the calibration set is the same as was used as inputs for the modified model. For example, if a normalization layer has been added to the model, the calibration set should not be normalized."

So I'm a little confused: if I add a normalization layer in the .alls file, should I still normalize the calibration dataset? And another question: in this case, should the dataset I used for model fine-tuning also be normalized?

Thanks in advance!

Hi @dgarrido,

When using the normalization command in the model script, the calibration dataset must be provided without normalization (i.e., with raw pixel values in the 0-255 range).
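For reference, a normalization command in a model script typically looks like the line below (the layer name `normalization1` and the mean/std values are illustrative; use the values your model was trained with):

```
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
```

With mean 0 and std 255 per channel, this performs the divide-by-255 scaling on-chip, which is exactly why the calibration images must be passed in un-normalized.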
In the first post of the thread you linked, there is an extra step that may be confusing:

As you can see, calib_data is created with already-normalized values, but the data is then immediately de-normalized by multiplying it by 255.0. These two steps cancel each other out and are redundant. You could simply do something like this:

import os
import numpy as np
from PIL import Image

calib_data = []
for img_name in os.listdir(image_dir):
    img_path = os.path.join(image_dir, img_name)
    if img_name.lower().endswith(('.jpg', '.jpeg', '.png')):
        img = Image.open(img_path).convert('RGB').resize((640, 640))  # Resize to the model input size
        img_array = np.array(img, dtype=np.uint8)  # Keep raw 0-255 pixel values; no normalization
        calib_data.append(img_array)

And use the generated calib_data for the optimization step.
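As a sketch of the two points above (the shapes and file name here are assumptions for illustration): the list of raw images can be stacked into a single (N, H, W, C) array for the optimization step, and the normalize-then-multiply-by-255 round trip from the original guide can be verified to give back the original values, confirming the steps are redundant.

```python
import numpy as np

# Hypothetical small calibration list: two raw 640x640 RGB images (values 0-255).
calib_data = [np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8) for _ in range(2)]

# Stack into a single (N, 640, 640, 3) uint8 array -- still un-normalized,
# since the normalization command in the .alls handles scaling on-chip.
calib_dataset = np.stack(calib_data)
print(calib_dataset.shape)  # (2, 640, 640, 3)

# The redundancy mentioned above: normalizing and then multiplying back by 255
# returns the original values, so both steps can simply be dropped.
roundtrip = (calib_dataset.astype(np.float32) / 255.0) * 255.0
print(np.allclose(roundtrip, calib_dataset))  # True

# Save for later use in the optimization step (file name is illustrative).
np.save("calib_set.npy", calib_dataset)
```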
