Is there a way to fine-tune specific layers after quantization?

My model has an encoder and two classification heads. After quantization with optimization, one classification head still reaches sufficient accuracy, but the other does not.

I have identified the calibration data as the cause. The two heads have different roles, and running the optimization with calibration data suited to one head yields sufficient accuracy for that head, but not for the other.

Since the encoder works correctly with either calibration set, I would like to quantize the model together with one classification head first, then freeze the encoder and quantize only the other classification head using its own calibration data.

When I specify layers_to_freeze for fine-tuning in post_quantization_optimization, the whole model is re-optimized from its pre-quantization state. Is there a way to fine-tune only part of the already-quantized model while keeping the other parts frozen?
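
For reference, this is roughly the workflow I have in mind, sketched with plain PyTorch quantization-aware training. The model, layer sizes, and calibration tensors are made up for illustration, and this is not the toolkit's API; it only shows the two-stage "calibrate everything, then freeze the encoder and fine-tune one head on its own data" idea.

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class TwoHeadModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.dequant = tq.DeQuantStub()
        self.encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        self.head_a = nn.Linear(32, 4)   # accurate after quantization
        self.head_b = nn.Linear(32, 4)   # loses accuracy after quantization

    def forward(self, x):
        z = self.encoder(self.quant(x))
        return self.dequant(self.head_a(z)), self.dequant(self.head_b(z))

model = TwoHeadModel()
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

# Step 1: calibrate the whole model with data suited to head A
# (placeholder tensors stand in for the real calibration set).
calib_a = [torch.randn(8, 16) for _ in range(10)]
for batch in calib_a:
    model(batch)

# Step 2: freeze the encoder and head A so their weights and
# quantization parameters no longer change.
for module in (model.encoder, model.head_a):
    for p in module.parameters():
        p.requires_grad = False
    module.apply(tq.disable_observer)

# Step 3: fine-tune only head B with its own calibration data.
calib_b = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(10)]
opt = torch.optim.SGD(model.head_b.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for x, y in calib_b:
    opt.zero_grad()
    _, out_b = model(x)
    loss_fn(out_b, y).backward()
    opt.step()

quantized = tq.convert(model.eval())
```

Is something equivalent to this possible with post_quantization_optimization, or through another mechanism in the toolkit?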