How to troubleshoot qp_scale and qp_zp becoming 0 in Yolov3/v5 HEF files created with a custom dataset?

Hi,

I am creating Hailo-15 HEF files for Yolov3 and Yolov5 using my own dataset. After running optimization and compilation with hailomz, the qp_scale and qp_zp of the OutputVStream become 0. Could there be an issue with the content of my dataset?

Additionally, it seems that the qp_scale and qp_zp in the following Hailo-8 HEF file from the Hailo Model Explorer are also set to 0:
https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.11.0/hailo8/yolov3.hef

Please provide any advice or suggestions.
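For context on why a zeroed qp_scale is a problem: quantized outputs are dequantized with an affine mapping, float_value = (quantized_value - qp_zp) * qp_scale, so qp_scale = 0 maps every output to 0. A minimal sketch of that mapping in plain NumPy (not the HailoRT API itself):

```python
import numpy as np

def dequantize(q: np.ndarray, qp_zp: float, qp_scale: float) -> np.ndarray:
    """Affine dequantization of a quantized output stream:
    float_value = (quantized_value - zero_point) * scale."""
    return (q.astype(np.float32) - qp_zp) * qp_scale

q = np.array([0, 64, 128, 255], dtype=np.uint8)

# Healthy parameters (like the yolov5 output below): values spread over ~[0, 1].
ok = dequantize(q, qp_zp=0.0, qp_scale=0.0039)

# Broken parameters (qp_scale = 0): every dequantized value collapses to 0.
broken = dequantize(q, qp_zp=0.0, qp_scale=0.0)

print(ok)      # increasing values up to ~0.99
print(broken)  # all zeros
```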


Hello Tsugihiko,

In order to identify the issue, could you kindly share the dataset you're using, the script you executed in the Model Zoo, and the yolov5 and yolov3 HEF files you got?


Hello omria,

Thank you for your response.

I apologize, but I cannot provide the dataset itself as it belongs to our client. What specific information would be helpful regarding the dataset? Would details such as the number of images and their sizes be sufficient?

I will check with our client regarding the HEF files. In the meantime, the quant_info for the OutputVStreams is as follows:

Yolov3: {qp_zp = 0, qp_scale = 0, limvals_min = 0, limvals_max = 0}
Yolov5: {qp_zp = 0, qp_scale = 0.0039, limvals_min = 0, limvals_max = 0.99}
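As a sanity check on these numbers: the quantization parameters are typically derived from the calibrated limit values, roughly qp_scale = (limvals_max - limvals_min) / 255 and qp_zp = -limvals_min / qp_scale for an 8-bit stream. The yolov5 values are consistent with that (0.99 / 255 ≈ 0.0039), while all-zero limvals can only produce qp_scale = 0. A rough sketch of the assumed relationship (the exact formula Hailo uses is my assumption, for illustration):

```python
def qp_from_limvals(limvals_min: float, limvals_max: float, n_levels: int = 256):
    """Derive affine quantization params from calibration limit values.
    Assumed uint8 relationship: scale = range / (n_levels - 1), zp = -min / scale."""
    qp_scale = (limvals_max - limvals_min) / (n_levels - 1)
    if qp_scale == 0.0:
        # Degenerate calibration range -> unusable quantization parameters.
        return 0.0, 0.0
    qp_zp = -limvals_min / qp_scale
    return qp_scale, qp_zp

# yolov5 output above: limvals [0, 0.99] -> scale ~0.0039, zp 0
print(qp_from_limvals(0.0, 0.99))
# yolov3 output above: limvals [0, 0] -> degenerate (0, 0)
print(qp_from_limvals(0.0, 0.0))
```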

Thank you in advance.


Hello Tsugihiko,

The data indicates that normalization is being performed twice: once during the pre-processing stage and again within the model script. I recommend removing the normalization step from the pre-processing pipeline and then recreating the HEFs.
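To illustrate that failure mode with a plain NumPy sketch (not Hailo code): if the calibration images are already scaled to [0, 1] in pre-processing and the model script divides by 255 again, the data the calibration step sees collapses into a tiny range, which drives the collected limvals (and hence qp_scale/qp_zp) toward 0:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)

# Normalization applied once (either in pre-processing OR in the model script):
once = images / 255.0            # dynamic range ~[0, 1]

# Normalization applied twice (pre-processing AND the model script):
twice = once / 255.0             # dynamic range ~[0, 0.0039]

print(once.max())   # close to 1.0
print(twice.max())  # ~0.0039 -> calibration sees an almost-zero range
```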

Hopefully, this fixes the issue, please let me know.


Hello omria,

I have confirmed with the client, and the situation is as follows:

Situation:

I want to retrain yolov3_416 for Hailo-15H using the Hailo Model Zoo.
The generated HEF files have qp_scale and qp_zp values of 0.
The procedure for generating the HEF files is as follows:

  1. python hailo_model_zoo/datasets/create_coco_tfrecord.py val2017
  2. python hailo_model_zoo/datasets/create_coco_tfrecord.py calib2017
  • An error occurred regarding the installation path of the COCO dataset, but it has been resolved.
  3. Modify the cfg/alls/generic/yolov3_416.alls file to set the batch size to 1 as below:
    model_optimization_config(calibration, batch_size=1, calibset_size=64)

  4. hailomz parse yolov3_416

  5. hailomz optimize yolov3_416 --har yolov3_416.har --hw-arch=hailo15h

  6. hailomz compile yolov3_416 --har yolov3_416.har --hw-arch=hailo15h
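After compiling, it can help to verify the quantization parameters programmatically before deploying (the HailoRT Python API exposes the output VStream infos, including quant_info). As a library-agnostic sketch, a small checker over the quant_info fields quoted earlier in this thread (the helper itself is hypothetical):

```python
def check_quant_info(qp_zp, qp_scale, limvals_min, limvals_max):
    """Flag quantization parameters that would make an output stream unusable."""
    problems = []
    if qp_scale == 0.0:
        problems.append("qp_scale is 0: every dequantized value will be 0")
    if limvals_min == 0.0 and limvals_max == 0.0:
        problems.append("limvals are [0, 0]: calibration saw no dynamic range")
    return problems

# The two outputs reported in this thread:
print("yolov3:", check_quant_info(0, 0.0, 0, 0))        # two problems flagged
print("yolov5:", check_quant_info(0, 0.0039, 0, 0.99))  # no problems flagged
```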

GPU in use:
GPU (NVidia Quadro P2200)

After comparing several scenarios, the following results were obtained:

            CPU   GPU
Hailo-8     OK    NG
Hailo-15H   NG    NG

Using the same commands mentioned earlier, when generating the HEF file for Hailo-8 on CPU, the qp_scale and qp_zp values are set correctly, and object detection works fine.

What could be the cause of this issue?

Hi @tsugihiko_haga,
My name is Omer, I’m an Application engineer in Hailo’s CS team.

Can you please specify what the current state is? From what I understand, this is the scenario, but I might be wrong:
When retraining yolov3 with your custom dataset using your GPU, you get qp_scale and qp_zp of 0, with lim_vals of 0 and therefore bad accuracy.
When you perform Hailo optimization for h8 on CPU when using the MZ dataset, the result is OK, but when optimizing using the GPU, the results are not good.
When you perform Hailo optimization for h15 either on CPU or using the GPU with the MZ dataset, the results are not good.

Is what I described correct?

BTW - why does your customer use yolov3 for detection rather than one of the more advanced models like yolov8? It has much better accuracy and fewer parameters.

Regards,

Hello Omer,

Thank you for your response.

The states of “OK” and “not good” are as you described.
We are using the same COCO dataset for all scenarios.

The reason why the customer is using yolov3-v5 is unknown. It could be that they want to compare it with a device they used in the past.

Thank you.

Hi @tsugihiko_haga,
Can you please open a ticket via the Hailo website in our ticketing system, with the relevant information, and attach the relevant files? That way I can take it to our R&D, as it might be a problem with how our SW works with the GPU, or in general with the H15 SW for this specific model.
Also, what is the GPU that is used?

Regards,

Hello Omer,

What files are required for the attachment?

The customer is using an NVidia Quadro P2200 GPU.

I have also tried the HEF conversion on my PC, and like the customer, I am getting qp_scale and qp_zp values of 0. The environment I used for testing is as follows:

  • Hailo AI SW suite 2024-01 (Docker)
  • Hailo Model Zoo included in the SW suite

Here are the steps we followed:

Data Preparation:

python hailo_model_zoo/datasets/create_coco_single_person_tfrecord.py val2017

python hailo_model_zoo/datasets/create_coco_single_person_tfrecord.py calib2017

  • Modify the “calib_set” in hailo_model_zoo/cfg/base/coco.yaml to the actual download path ($HMZ_DATA/models/files/coco/2023-08-03/)

  • Modify cfg/alls/yolov3_416.alls to change batch_size to 1:

model_optimization_config(calibration, batch_size=1, calibset_size=64)

HEF Data Generation:

hailomz parse yolov3_416

hailomz optimize yolov3_416 --har yolov3_416.har --hw-arch=hailo8 or hailo15h

hailomz compile yolov3_416 --har yolov3_416.har --hw-arch=hailo8 or hailo15h

Results:

Using the same steps, we were able to set correct values for qp_scale and qp_zp for yolov4_leaky and yolov5m_wo_spp_60p.

      Hailo-8   Hailo-15H
CPU   OK        NG
GPU   NG        NG

I apologize for any confusion caused. It seems that the qp values in the HEF file from the Model Explorer are also 0. Could this be related to the issue?
https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.11.0/hailo8/yolov3.hef

Regards,