LightGlue self-attention block - parsed model produces different outputs

For the sequential calibration, I am doing:

from hailo_sdk_client import InferenceContext

with runner_infer.infer_context(InferenceContext.SDK_QUANTIZED) as ctx:
    infer_results = runner_infer.infer(ctx, calib_dataset_dict)

But now I realized that this inference is an emulation: if there are non-negligible discrepancies between the emulated outputs and the real hardware, the errors could compound too much when chaining many blocks, since each block's calibration data is derived from the previous block's (emulated) output. So I suspect that in my case I should obtain the sequential calibration data by running inference on the actual Hailo accelerator instead.
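To make the concern concrete, here is a purely illustrative NumPy sketch (not Hailo code; the toy linear+tanh blocks stand in for the attention blocks, and a small random relative perturbation stands in for the emulation-vs-hardware discrepancy). It chains blocks where the "emulated" path feeds its own output into the next block, and tracks how the relative error versus the "hardware" path evolves:

```python
import numpy as np

rng = np.random.default_rng(0)

def block(x, w):
    # Toy stand-in for one self-attention block: linear map + nonlinearity.
    return np.tanh(x @ w)

def emulated(x, w, eps=1e-3):
    # Emulation = hardware output plus a small relative perturbation.
    y = block(x, w)
    return y * (1.0 + eps * rng.standard_normal(y.shape))

n_blocks = 12
weights = [rng.standard_normal((16, 16)) / 4 for _ in range(n_blocks)]

x_hw = rng.standard_normal((8, 16))
x_em = x_hw.copy()
errs = []
for w in weights:
    x_hw = block(x_hw, w)      # "ground truth" hardware chain
    x_em = emulated(x_em, w)   # emulated chain feeding the next block
    errs.append(np.linalg.norm(x_em - x_hw) / np.linalg.norm(x_hw))

print(f"relative error after block 1:  {errs[0]:.2e}")
print(f"relative error after block {n_blocks}: {errs[-1]:.2e}")
```

Whether the growth is actually significant for the real model depends on how contractive/expansive each block is and on the true size of the emulation discrepancy, which is exactly what calibrating from hardware outputs would sidestep.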

What do you think?