My accuracy slightly improved after quantization

The accuracy of one of my models improved slightly, by about 2%, after quantization. How is that possible? I used different datasets for calibration (64 images) and testing (2000 images). Did I do anything wrong during the conversion process?
Note that I have several models performing the same task, using the same testing infrastructure, and only a few of them show this behaviour after quantization. Could it be related to the way the models are designed or trained?
I noticed that none of the models available in the Hailo Model Zoo exhibit this behaviour: the accuracy of the quantized model is always below or equal to that of the native model.

Hi Victor,
If you are doing Quantization Aware Training (QAT), then surpassing the native accuracy can happen pretty often, since you add more training epochs.
If you are doing Post Training Quantization (PTQ), there is no ‘magic’: reducing the numeric representation cannot increase the accuracy under the same conditions. Usually when you see an increase in accuracy, it’s associated with the test set not being exactly the same or not being fully representative of the task. As you’ve noted, on the large datasets (COCO, ImageNet) we do not see this when doing PTQ.
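One way to sanity-check whether a 2% gap on 2000 test images is meaningful is to look at the statistical noise of the accuracy estimate itself. A minimal sketch, assuming a hypothetical model around 90% accurate (the exact figure isn't stated in the thread), treating each prediction as an independent Bernoulli trial:

```python
import math

def accuracy_std_error(p: float, n: int) -> float:
    """Standard error of an accuracy estimate p measured on n samples,
    modelling each prediction as an independent Bernoulli trial."""
    return math.sqrt(p * (1 - p) / n)

# Assumed numbers for illustration: ~90% accuracy, 2000 test images.
p, n = 0.90, 2000
se = accuracy_std_error(p, n)
ci95 = 1.96 * se  # half-width of the approximate 95% confidence interval

print(f"std error of accuracy: {se:.4f}")   # ~0.0067
print(f"95% CI half-width:     {ci95:.4f}") # ~0.0131, i.e. about +/-1.3%
```

With roughly ±1.3% of noise on each measurement, a 2% swing between the native and quantized model is close to the margin of error, which fits the explanation that the test set may simply not be fully representative.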

Thanks! I was using post-training quantization (PTQ) with fine tuning. I’ll look into my test dataset then.