Error encountered during ONNX to HEF conversion

Hello Everyone,

I’ve installed the Hailo Software Suite 2024-07 on my Ubuntu box.

I encountered an issue during the ONNX to HEF conversion.

The command I executed inside the Hailo Software Suite: hailomz compile --ckpt /local/workspace/chip_v2.onnx --calib-path /local/workspace/images/ --yaml /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov8n.yaml --start-node-names images --end-node-names /model.22/Sigmoid /model.22/dfl/Reshape_1 --classes 80

The error I faced:

Please help me here. Thanks in advance!

Hi @bhaskarreddy,

Are the layer names in your model different from the ones in the pre-trained version of the model? If not, passing --start-node-names and --end-node-names is unnecessary, as they’re already part of the YAML file.

Hi @nina-vilela,

I tried removing --start-node-names and --end-node-names from the command.

Now I’ve encountered another issue.

@bhaskarreddy just to get a bit more info, chip_v2 is a re-trained yolov8n, right? How was it re-trained? Was it re-trained with 80 classes?

Sorry @nina-vilela, that was my bad. The number of classes for my retrained model is 4.

Now, when I set the number of classes to 4, I got the error below.

Q: How was it re-trained?
A: chip_v2 was re-trained from yolov8n using the CLI: yolo train data=data.yaml model=yolov8n.pt epochs=100 imgsz=640

Q: Was it re-trained with 80 classes?
A: No, with 4 classes.

Could you try the solution below?

It means adding quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0]) to this file:

Thanks a lot! It worked after adding quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0]) to hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls.
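For intuition on why this helps: those layers are the sigmoid outputs of the detection head, whose values are mathematically bounded to (0, 1), and force_range_out pins the quantization range to those true bounds instead of a range estimated from the calibration data. A toy sketch of the effect on 8-bit quantization error (illustrative only, not Hailo code):

```python
import math

def quantize_dequantize(x, rmin, rmax, n_bits=8):
    """Uniformly quantize x to n_bits levels over [rmin, rmax], then dequantize."""
    levels = 2 ** n_bits - 1
    scale = (rmax - rmin) / levels
    q = min(max(round((x - rmin) / scale), 0), levels)
    return q * scale + rmin

# Sigmoid outputs are bounded to (0, 1) by construction.
xs = [1.0 / (1.0 + math.exp(-t / 1000.0)) for t in range(-6000, 6001)]

# A range mis-estimated from calibration data wastes quantization levels...
err_estimated = max(abs(quantize_dequantize(x, -0.2, 1.4) - x) for x in xs)
# ...while forcing the true [0, 1] range spends all 256 levels where they matter.
err_forced = max(abs(quantize_dequantize(x, 0.0, 1.0) - x) for x in xs)

print(err_forced < err_estimated)  # prints True
```

The same idea is what the force_range_out parameter expresses to the Hailo quantizer.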


@nina-vilela, I’m able to build the HEF file for the 8L architecture.

I’m using an RPi5 with a Hailo-8L device attached.

When I use the same HEF with hailo-rpi5-examples/basic_pipelines/detection.py from the hailo-ai/hailo-rpi5-examples repository on GitHub,

I encounter this issue.

Can you please look into it?

This is due to a version mismatch between the DFC and HailoRT; please check out the topic below:

You can either upgrade HailoRT, or downgrade the DFC/Model Zoo and re-compile the model.

@nina-vilela, after downgrading the Hailo SW Suite to hailo_ai_sw_suite_2024-04_docker, I’m able to compile a HEF that runs with the rpi5-examples.

But during detection I have a problem: my dataset has only four labels, i.e. (1, 5, 10 and 25).

But the model is detecting them as motorcycle, bicycle, etc.

Expected: it should detect 1, 5, 10, 25.

Using the same model as a .pt file locally, it detects properly.


But when I use the same model as a HEF, it detects motorcycle and bicycle.


Am I doing anything wrong?

Nice!
It seems like everything is working well; the only issue seems to be with the labels file. In the detection example from hailo-rpi5-examples, there’s a --labels-json flag which you can use to configure the labels that will be drawn on the image.
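If it helps, the labels files shipped with hailo-rpi5-examples (e.g. barcode-labels.json in its resources folder) follow roughly the shape below; the exact keys may differ between versions of the examples, and the label strings here are just your four class names used as an illustration:

```json
{
    "detection_threshold": 0.5,
    "max_boxes": 200,
    "labels": ["unlabeled", "1", "5", "10", "25"]
}
```

You would then pass it with something like --labels-json resources/chip-labels.json (chip-labels.json being a hypothetical filename).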

I see now that there’s also a mix-up where 1, 10 and 25 are detected as the same class. This could be due to quantization noise. You can try using a larger calibration set and more advanced post-quantization algorithms. You can read about it here.


It worked @nina-vilela, Thanks.

Why is a HEF file created with DFC v3.28 not compatible with the RPi5 runtime?

When will we receive the latest runtime that supports 3.28?

Hi @bhaskarreddy,

The issue concerns compatibility between the DFC and the HailoRT (firmware and drivers).

If you would like to use the latest DFC, the HailoRT upgrade is already available; you just need to follow its installation instructions. From previous topics, the easiest way on the RPi5 is to upgrade using DKMS.

I’m using a dataset with 5,176 images, and I’m using the Hailo SW Suite for the HEF conversion.

I use the CLI to convert to a HEF file. How can I use the advanced post-quantization algorithms?

The CLI command I used for the HEF conversion: hailomz compile --ckpt /local/shared_with_docker/5k_chips.onnx --calib-path /local/shared_with_docker/5k_chips/images/ --yaml /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov8n.yaml --classes 4 --hw-arch hailo8l

You can choose the post-quant algorithms with commands passed in the model script.

For yolov8n, the model script is located here.

You can try finetuning or adaround for example. You can read more about post-quant optimization commands here.
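As a sketch, enabling one of these in hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls could look like the lines below; the parameter names shown are illustrative, so check the DFC user guide for the exact options supported by your version:

```
# Fine-tune the quantized weights against the float model (needs a GPU):
post_quantization_optimization(finetune, policy=enabled)
# Or adaptive rounding of the weights:
post_quantization_optimization(adaround, policy=enabled)
```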

Okay, let me try.

Also, one observation, @nina-vilela.

During the HEF conversion, I’m seeing this warning message.

I have an NVIDIA GeForce GTX 1650 GPU.

Does this cause the wrong detections (I mean the accuracy of the model, because object detection is happening, but the detections are not accurate)?

When the GPU is not detected and you don’t explicitly set the post-quant algorithms in the model script, the only algorithm used is equalization, which doesn’t seem to be enough for your model.

If you manually set the algorithms, they will run very slowly without the GPU.

I see that you’ve also posted on this topic. Have you ensured that your CUDA version is 11.8?
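As a quick sanity check before re-running the optimization, you can verify that the NVIDIA driver is reachable from inside the suite container. This is a generic sketch (not a Hailo tool), and the TensorFlow one-liner in the comment assumes TF is installed, as it is in the suite:

```python
import shutil
import subprocess

def nvidia_driver_visible() -> bool:
    """Return True if nvidia-smi is on PATH and runs successfully,
    i.e. the NVIDIA driver is reachable from this environment."""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    return subprocess.run([exe], capture_output=True).returncode == 0

# If this returns False inside the container, the DFC falls back to
# CPU-only quantization (equalization only, unless set explicitly).
# The framework-level check, since the DFC stack sits on TensorFlow:
#   python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
print(nvidia_driver_visible())
```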