How to Improve SNR in Specific Layers of YOLO11s model?

Hi Hailo community,

I’m currently optimizing a model on the Hailo platform and noticed that some layers are showing poor Signal-to-Noise Ratio (SNR) during evaluation. I’m looking for guidance or best practices on how to improve SNR in these layers.

Here are the layers with the worst SNR values:

ew_sub_softmax1: -31.38
ne_activation_ew_sub_softmax1: 0.0
reduce_sum_softmax1: -1.72
ew_mult_softmax1: 0.76
dw1: 9.65
conv_feature_splitter7_2: 5.93
conv41: 8.09
conv49: 7.95
conv50: 4.95
conv51: 9.09
conv_feature_splitter9_2: 9.80
conv56: 8.28
conv53: -0.01
conv54: 6.93
conv58: 8.60
conv60: 8.71
conv61: 4.43
conv62: 7.02
conv64: 6.83
conv65: 7.04
conv76: 9.09
conv78: 3.95
conv79: 1.27
conv80: 0.0

My setup
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])

change_output_activation(conv54, sigmoid)

change_output_activation(conv65, sigmoid)

change_output_activation(conv80, sigmoid)

nms_postprocess("/content/drive/MyDrive/orangePI/Hailo/yolov11/yolov11_nms_layer_config.json", meta_arch=yolov8, engine=cpu)

model_optimization_config(calibration, batch_size=16, calibset_size=1024)

pre_quantization_optimization(activation_clipping, layers={*}, mode=percentile, clipping_values=[0.01, 99.99])

pre_quantization_optimization(weights_clipping, layers={*}, mode=percentile, clipping_values=[0.01, 99.99])

pre_quantization_optimization(equalization, policy=enabled)

model_optimization_flavor(optimization_level=3, compression_level=0)

post_quantization_optimization(adaround, policy=enabled, epochs=320, dataset_size=1024, batch_size=128)

I’m particularly concerned about the very low or negative SNR in layers like ew_sub_softmax1, reduce_sum_softmax1, conv53, and conv80.
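
One thing I have been considering for just these layers is forcing them to 16-bit instead of switching the whole model to a16_w16. As far as I understand the DFC model script syntax, a per-layer override would look something like this (layer names taken from my SNR table above; please correct me if the syntax is off):

```
# Sketch: per-layer 16-bit overrides in the .alls model script,
# targeting only the worst-SNR layers instead of the full model.
quantization_param(conv53, precision_mode=a16_w16)
quantization_param(conv80, precision_mode=a16_w16)
quantization_param(ew_sub_softmax1, precision_mode=a16_w16)
```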

Any insights, experiences, or tools from the community would be greatly appreciated!

Thanks in advance!

Did you work through the Optimization tutorials in the Hailo AI Software Suite Docker?

Inside the Docker simply call the following command to start a Jupyter Notebook server with notebooks for all steps of the model conversion flow.

hailo tutorial

Yes, I did. I'm not sure what to do with layers like that.

Any thoughts? I have tried different optimization levels, but I still get a lot of false positives, and using precision_mode=a16_w16 only worsened the metrics. Has anyone had such a problem with a custom YOLO? What helped?

Hi @user99

We ported quite a few custom models and have not seen this issue. You can give our cloud compiler a try and see if number of false positives is reduced: Early Access to DeGirum Cloud Compiler


Thank you, I will try it. Is there a possibility to select the Hailo Dataflow Compiler version? We started the project with version 3.30.0 and created an environment setup for our Raspberry Pis; a model compiled with version 3.31.0 won't work on our setup, and we don't want to create new setups for new compiler versions.

Our cloud compiler is running on 3.30.


I have just tried compiling yolov11s in the cloud compiler and got the following results on the test set:

Overall Metrics:
Total: TP=452, FP=175942, FN=10931
Precision=0.0026
Recall=0.0397
F1 Score=0.0048.

I have a question: is normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0]) added to the alls file?

Hi @user99

Yes, normalization is added to the alls file. On the same test set, what are the metrics for the original checkpoint?
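
One thing worth double-checking on your side: since that normalization line lives inside the compiled model, the host should feed raw 0-255 images. If your eval pipeline also normalizes on the host, the input gets scaled twice and becomes nearly black. A tiny sanity check in plain NumPy (mean/std taken from the alls line above; `on_chip_normalize` is just an illustrative stand-in for what the chip does):

```python
import numpy as np

def on_chip_normalize(img, mean=(0.0, 0.0, 0.0), std=(255.0, 255.0, 255.0)):
    """Mimic the alls normalization line: (x - mean) / std, per channel."""
    return (np.asarray(img, dtype=np.float32) - np.array(mean, np.float32)) / np.array(std, np.float32)

raw = np.array([[[0, 128, 255]]], dtype=np.uint8)  # one RGB pixel, raw 0-255
once = on_chip_normalize(raw)    # what the chip sees with raw input
twice = on_chip_normalize(once)  # what it sees if the host normalized too

print(once.max())   # falls in the expected [0, 1] range
print(twice.max())  # close to zero: double normalization squashes the signal
```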

Metrics for the original checkpoint on the test set:

Total: TP=8123, FP=319, FN=3260
Precision=0.9622
Recall=0.7136
F1 Score=0.8195

Hi @user99

After compiling in our cloud compiler, did you try inference in the browser on a few images from the test folder and see how they look? This is to eliminate any potential error in your eval script. Previously, some users got low performance (not this bad but still lower mAP than expected) when they sent BGR images instead of RGB.
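
For reference, the BGR/RGB mix-up is easy to rule out: OpenCV loads images in BGR order, while most YOLO training pipelines expect RGB, and the fix is just a channel reversal (with OpenCV itself that would be `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`; below is a dependency-free NumPy equivalent):

```python
import numpy as np

def bgr_to_rgb(img):
    """Reverse the channel axis: BGR <-> RGB."""
    return img[..., ::-1]

bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)  # pure blue in BGR order
rgb = bgr_to_rgb(bgr)
print(rgb.tolist())  # blue moves to the last (B) channel in RGB order
```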

Hi @shashi

I have tried inference in the browser, and the results look bad. I am sure my evaluation script is OK, as I used it to test a previously trained YOLOv11s and the metrics were good. We got new data and retrained the model, and now we are struggling to convert it.

Hi @user99

Thanks for checking browser inference and confirming. Not sure what is going on. We have seen a 1-2% drop in mAP due to quantization, but not a model that is totally wrong.

Hi @shashi Do you plan to add the option for users to change/edit the alls file in the cloud compiler, so we could compile with different setups?

Hi @user99

We have no such plans at this time. Even the limited options we support now take a lot of effort. However, we can help you on a case-by-case basis if possible.