Data understanding on Hailo quantization

Hey everyone,

I've got a question. During quantization, the calibration dataset should be in floating point so that the quantization algorithm can learn the range and distribution of the data and map the floating-point values to integers.

  1. If my understanding above is correct, then after I compile the quantized model to a .hef and run inference on the Hailo-8L on a Raspberry Pi, why do I need to convert the input to INT8? Shouldn't that conversion be integrated into the quantization algorithm?

  2. If I'm wrong, then feeding INT8 input will obviously produce incorrect raw data, so should I process the output to get floating-point values? Is that how it works, or should the model's outputs be nearly the same whether it is quantized or not?

I hope to get a step-by-step example of a YOLO model conversion.

Hey @SAN

When you quantize a model to run on the Hailo-8L, it’s important to understand that the quantized model expects the input data to be in INT8 format during inference. This is because the quantization process maps the floating-point values to integers using scale and zero-point values.

So, even though you use floating-point data during the model training and quantization process, you need to convert the input data to INT8 format before running inference on the Hailo-8L. If you don’t preprocess the input data, it won’t align with the model’s quantized values, and you’ll get incorrect results.

Here’s a quick overview of the workflow:

  1. Train your model using floating-point data.
  2. Quantize the model using a representative dataset to get the scale and zero-point values.
  3. When you’re ready to run inference, preprocess your input data by normalizing it and converting it to INT8 using the scale and zero-point values (see the sketch after this list).
  4. Run inference on the Hailo-8L using the quantized model and the preprocessed INT8 input data.
  5. If needed, postprocess the model’s output by converting it back to floating-point.
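
To make steps 3 and 5 concrete, here is a minimal NumPy sketch of the affine quantize/dequantize mapping. The `scale` and `zero_point` values below are placeholders for illustration only; the real ones come from your compiled model's stream parameters, and whether the streams use unsigned or signed 8-bit integers depends on how the model was compiled:

    import numpy as np

    # Placeholder values for illustration; read the real qp_scale and
    # qp_zp from your compiled model's stream info.
    scale, zero_point = 0.0078, 128

    def quantize_input(x_float):
        # Affine mapping float -> integer: q = round(x / scale) + zero_point
        q = np.round(x_float / scale) + zero_point
        return np.clip(q, 0, 255).astype(np.uint8)  # assuming 8-bit unsigned streams

    def dequantize_output(q_out):
        # Inverse mapping integer -> float: x = (q - zero_point) * scale
        return (q_out.astype(np.float32) - zero_point) * scale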

The quantized model should perform very similarly to the original floating-point model, with only minor precision loss.

Let me know if you have any other questions!

Best Regards,
Omria

Noted, but how do you get the scale and zero-point values? Are those the values we get after quantization, i.e. after running runner.optimize(calib_dataset_dict)?

Right now I'm not doing any postprocessing because the end nodes are the 6 conv layers from YOLOv8n. So without any postprocessing, if I follow the preprocessing and quantization steps correctly, the raw results should be the same before and after quantization, right?

  1. How do we obtain the scale and zero-point values? Is it by running runner.optimize(calib_dataset_dict)?

  2. alls_lines = [
        # add normalization layer
        # Batch size is 8 by default
        "normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n",
        "resize_input1 = resize(resize_shapes=[640,640])\n",
        "model_optimization_flavor(optimization_level=2, compression_level=2, batch_size=8)\n",
     ]
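
Roughly, the script is applied like this before quantization (a sketch; the yolov8n.har name is a placeholder for the HAR produced by the earlier translation step):

    from hailo_sdk_client import ClientRunner

    # "yolov8n.har" is a placeholder name for the HAR from the translation step.
    runner = ClientRunner(har="yolov8n.har")

    # Apply the model script, then quantize with the calibration set,
    # as in the runner.optimize(calib_dataset_dict) call above.
    runner.load_model_script("".join(alls_lines))
    runner.optimize(calib_dataset_dict)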

With this model script, if I follow the correct preprocessing and quantization, I should get the same output from the 6 conv end nodes before and after quantization, right?
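
(For reference on question 1: a minimal sketch of reading the scale and zero-point back from a compiled HEF with the HailoRT Python API; yolov8n.hef is a placeholder file name, and the attribute names reflect my understanding of the hailo_platform API:)

    from hailo_platform import HEF

    hef = HEF("yolov8n.hef")  # placeholder name for the compiled model

    # Each input/output vstream carries its quantization parameters,
    # produced during optimization.
    for info in hef.get_input_vstream_infos():
        print(info.name, info.quant_info.qp_scale, info.quant_info.qp_zp)
    for info in hef.get_output_vstream_infos():
        print(info.name, info.quant_info.qp_scale, info.quant_info.qp_zp)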