Can adaround and finetuning be combined in the same .alls script?

Hi Hailo Community,

I am currently working on quantizing a highly sparse 3D LiDAR Object Detection model (PillarNest, trained on nuScenes) to deploy on the Hailo-8L.

Due to the inherent sparsity of point cloud BEV features (around 90% zeros) and extreme outliers, standard PTQ methods cause a severe drop in accuracy and SQNR.

So far, I have achieved an excellent global SQNR (~16.5 dB) and Cosine Similarity (>0.99) by doing the following in my .alls script:

  1. Percentile Clipping [0.0, 99.99] to protect the structural negative values while filtering extreme positive outliers.

  2. End-to-End Fine-Tuning (bias_only=True, def_loss_type=l2) focused only on the worst-performing output layers, while freezing the backbone and the neck.

However, based on recent literature (such as the LiDAR-PTQ paper), the optimal quantization strategy for these architectures involves combining a local weight rounding optimization (which maps to Hailo’s adaround) with a global task-guided loss (which maps to Hailo’s finetune).

I would like to know whether it is possible to add adaround to the post-training configuration, as in the .alls script below. In my experience, adaround is skipped when finetune is applied.

# 1. BASE OPTIMIZATION

model_optimization_flavor(optimization_level=0, compression_level=0)
model_optimization_config(calibration, calibset_size=64, batch_size=1)
model_optimization_config(checker_cfg, policy=disabled)
pre_quantization_optimization(equalization, policy=disabled)
pre_quantization_optimization(activation_clipping, layers={conv*}, mode=percentile, clipping_values=[0.0, 99.99])

# 2. Local Weight Rounding Optimization
post_quantization_optimization(adaround, 
    policy=enabled, 
    dataset_size=64, 
    epochs=50, 
    train_bias=False
)

# 3. FINE-TUNING on layers with SQNR < 15 dB
post_quantization_optimization(finetune, 
    policy=enabled, 
    dataset_size=64, 
    batch_size=1, 
    epochs=5,
    learning_rate=0.0001,
    optimizer=sgd,
    layers_to_freeze=[...],
    bias_only=True,
    def_loss_type=l2,
    loss_layer_names=[...]
)

Hi @Ines_Perez,

Great progress on your PillarNest quantization.

Regarding combining adaround and finetune - they are mutually exclusive in a single optimization pass, which is why adaround is skipped when finetune is enabled.

Worth trying:

  • Set up a benchmark with a 3-way comparison (FP vs. labels, quantized vs. labels; the difference between the two is your quantization degradation).
  • Try a full finetune first (not just bias_only=True) - tuning all weights with knowledge distillation often recovers more accuracy than bias-only tuning.
  • Experiment with different parameter combinations (learning rate, epochs, loss types, dataset size).
  • Separately, try a full adaround pass and compare results (a minimal .alls sketch is shown after this list).
  • If degradation remains unacceptable after these steps, you can extract the Keras model using runner.get_keras_model() and implement a custom optimization strategy (like the LiDAR-PTQ approach) directly; a Python sketch follows below.
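
For the standalone adaround pass, a minimal .alls sketch could simply reuse your existing base configuration and drop the finetune block. The parameter values below are copied from your script as a starting point, not tuned recommendations:

# 1. BASE OPTIMIZATION
model_optimization_flavor(optimization_level=0, compression_level=0)
model_optimization_config(calibration, calibset_size=64, batch_size=1)
model_optimization_config(checker_cfg, policy=disabled)
pre_quantization_optimization(equalization, policy=disabled)
pre_quantization_optimization(activation_clipping, layers={conv*}, mode=percentile, clipping_values=[0.0, 99.99])

# 2. Local weight rounding only, no finetune block
post_quantization_optimization(adaround, policy=enabled, dataset_size=64, epochs=50, train_bias=False)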

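For the custom route, here is a minimal Python sketch of how extracting the Keras model could look. The HAR path, the calibration batch file, and the downstream optimization loop are placeholders, not Hailo SDK APIs; only get_keras_model() itself is the call mentioned above, and the sketch assumes it returns a callable Keras model.

import numpy as np
from hailo_sdk_client import ClientRunner

# Load the HAR produced by your current quantization flow (path is a placeholder).
runner = ClientRunner(har="pillarnest_quantized.har")

# Extract the Keras representation of the network.
keras_model = runner.get_keras_model()

# Run a calibration batch through the model to inspect the BEV feature outputs
# (the .npy file stands in for however you load your calibration data).
calib_batch = np.load("calib_batch.npy")
outputs = keras_model(calib_batch)

# From here you can implement a custom optimization loop, e.g. a LiDAR-PTQ-style
# combination of local weight rounding and a global task-guided loss, directly
# on the extracted model before continuing with compilation.
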
Thanks,
Michael.