Failed to quantize YOLOv11m model

Hi,

I’m trying to quantize a YOLOv11m model using the model script below. I increased the representative dataset from 64 to 10,000 images, but the quantization accuracy still isn’t improving. I also noticed that the distillation loss for the classification output converges to around 1.0 during finetuning. Could this be contributing to the poor quantization results?
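For reference, this is roughly how I build the calibration set (the function name and random placeholder images are illustrative; in practice I load real dataset frames). Images are kept as raw uint8 in the 0–255 range, sized to match `image_dims=[736, 960]`, since the `normalization([0.0], [255.0])` command in the model script handles scaling on-device:

```python
import numpy as np

# Illustrative sketch: build an (N, H, W, 3) uint8 calibration array
# matching image_dims=[736, 960] from the model script.
# Normalization is applied by the model script, so the calibration
# data stays in the raw 0-255 range.

def build_calib_set(num_images: int, height: int = 736, width: int = 960) -> np.ndarray:
    """Stack images into an (N, H, W, 3) uint8 array for calibration."""
    rng = np.random.default_rng(0)
    # Placeholder: random images stand in for real dataset frames here.
    images = [
        rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)
        for _ in range(num_images)
    ]
    return np.stack(images)

calib_set = build_calib_set(64)
print(calib_set.shape)  # (64, 736, 960, 3)
```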

My model script commands are below:

model_script_cmds = [
    "model_optimization_flavor(optimization_level=4, compression_level=0)\n",
    "normalization_auto = normalization([0.0], [255.0])\n",
    "change_output_activation(conv74, sigmoid)\n",
    "change_output_activation(conv90, sigmoid)\n",
    "change_output_activation(conv105, sigmoid)\n",
    "nms_postprocess(meta_arch=yolov8, engine=cpu, classes=3, nms_scores_th=0.25, nms_iou_th=0.65, image_dims=[736, 960])\n",
    "resources_param(max_apu_utilization=0.6, max_compute_16bit_utilization=0.6, max_compute_utilization=0.6, max_control_utilization=0.6, max_input_aligner_utilization=0.6, max_memory_utilization=0.6, max_utilization=0.6)\n",
    "quantization_param([conv74, conv90, conv105], precision_mode=a16_w16)\n",
    "post_quantization_optimization(adaround, policy=disabled)\n",
    "post_quantization_optimization(bias_correction, policy=enabled)\n",
    "post_quantization_optimization(finetune, policy=enabled, learning_rate=0.00001, batch_size=2)\n",
    "platform_param(hints=[low_pcie_bandwidth])\n"
]