I am creating Hailo-15 HEF files for Yolov3 and Yolov5 using my own dataset. After performing optimization and compilation with hailomz, the qp_scale and qp_zp of the OutputVstream became 0. Could there be an issue with the content of my dataset?
In order to identify the issue, could you kindly share the dataset you're using, the script you executed in the Model Zoo, and the yolov5 and yolov3 HEFs you got?
I apologize, but I cannot provide the dataset itself as it belongs to our client. What specific information would be helpful regarding the dataset? Would details such as the number of images and their sizes be sufficient?
I will check with our client regarding the HEF files. In the meantime, the quant_info for the OutputVStream is as follows:
For Yolov3: {qp_zp = 0, qp_scale = 0, limvals_min = 0, limvals_max = 0}
For Yolov5: {qp_zp = 0, qp_scale = 0.0039, limvals_min = 0, limvals_max = 0.99}
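For reference, one way to read the same fields from the HEF is with the HailoRT Python API, as in the sketch below. The hailo_platform attribute names follow the HailoRT version I have installed, so please check them against yours; the HEF path is a placeholder.

```python
# Minimal sketch: print quant_info for every output vstream of a compiled HEF.
# Assumes the HailoRT Python package (hailo_platform); attribute names may
# differ slightly between HailoRT versions.
from hailo_platform import HEF

hef = HEF("yolov3_416.hef")  # placeholder path
for info in hef.get_output_vstream_infos():
    q = info.quant_info
    print(info.name,
          "qp_zp =", q.qp_zp,
          "qp_scale =", q.qp_scale,
          "limvals_min =", q.limvals_min,
          "limvals_max =", q.limvals_max)
```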
The data indicates that normalization is being performed twice: once during the pre-processing stage and again within the model script. I recommend removing the normalization step from the pre-processing and then recreating the HEFs; a sketch of what un-normalized calibration pre-processing could look like follows below.
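As a minimal sketch (assuming the model script already contains a normalization command; the paths, glob pattern, and input size are placeholders), the calibration set can be built from raw 0-255 pixel values, leaving the normalization to the model script:

```python
# Minimal sketch: build a calibration set WITHOUT normalizing, on the
# assumption that the model script already contains a normalization command.
# Paths, glob pattern, and input size are placeholders.
from pathlib import Path

import numpy as np
from PIL import Image

def build_calib_set(image_dir, size=(416, 416)):
    images = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB").resize(size)
        # Keep values in [0, 255]; do NOT divide by 255 here.
        images.append(np.asarray(img, dtype=np.float32))
    return np.stack(images)

calib_set = build_calib_set("calib_images")
np.save("calib_set_416x416.npy", calib_set)
```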
Hopefully this fixes the issue; please let me know.
I have confirmed with the client, and the situation is as follows:
Situation:
I want to retrain yolov3_416 for Hailo-15H using the Hailo Model Zoo.
The generated HEF files have qp_scale and qp_zp values of 0.
The procedure for generating the HEF files is as follows:
An error occurred regarding the installation path of the COCO dataset, but it has been resolved.
I have modified the cfg/alls/generic/yolov3_416.yaml file to set the batch size to 1 as below:
model_optimization_config(calibration, batch_size=1, calibset_size=64)
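For reference, I believe the flow hailomz drives corresponds roughly to the Dataflow Compiler sketch below; the file names are placeholders and the hailo_sdk_client calls follow the DFC version I have installed, so treat it as an illustration of where the model script and calibration set plug in rather than the exact commands.

```python
# Rough sketch of the optimize/compile flow behind hailomz, showing where the
# model script (with the batch-size line above) and the calibration set enter.
# Placeholder paths; API names per my installed DFC version.
import numpy as np
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo15h")
# Real flows typically also pass start/end node names for YOLO models.
runner.translate_onnx_model("yolov3_416.onnx", "yolov3_416")

# Model script containing, among other commands, the batch-size line above.
with open("yolov3_416.alls") as f:
    runner.load_model_script(f.read())

# Calibration images as an NHWC numpy array with raw pixel values.
calib_set = np.load("calib_set_416x416.npy")
runner.optimize(calib_set)

hef = runner.compile()
with open("yolov3_416.hef", "wb") as f:
    f.write(hef)
```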
After comparing several scenarios, the results were as follows (OK = qp_scale/qp_zp set correctly, NG = qp_scale/qp_zp of 0):

            CPU   GPU
Hailo-8     OK    NG
Hailo-15H   NG    NG
Using the same commands mentioned earlier, when generating the HEF file for Hailo-8 on CPU, the qp_scale and qp_zp values are set correctly, and object detection works fine.
Hi @tsugihiko_haga,
My name is Omer; I'm an application engineer on Hailo's CS team.
Can you please clarify the current state? From what I understand, this is the scenario, but I might be wrong:
When retraining yolov3 with your custom dataset using your GPU, you get qp_scale and qp_zp of 0, with lim_vals of 0 and therefore bad accuracy.
When you perform Hailo optimization for h8 on CPU using the MZ dataset, the result is OK, but when optimizing using the GPU, the results are not good.
When you perform Hailo optimization for h15 using the MZ dataset, either on CPU or on the GPU, the results are not good.
Is what I described correct?
BTW - why does your customer use yolov3 for detection and not one of the more advanced models like yolov8? It has much better accuracy and is built from fewer parameters.
Hi @tsugihiko_haga,
Can you please open a ticket via the Hailo website in our ticketing system, with the relevant information, and attach the relevant files? That way I can take it up with our R&D, as it might be a problem with how our SW works with the GPU and, more generally, with H15's SW for this specific model.
Also, what is the GPU that is used?
I have also tried the HEF conversion on my PC, and like the customer, I am getting qp_scale and qp_zp values of 0. The environment I used for testing is as follows: