Are the layer names in your model different from the ones in the pre-trained version of the model? If they are the same, passing --start-node-names and --end-node-names is unnecessary, as they are already part of the YAML file.
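For context, an override would look roughly like the following Model Zoo invocation. This is a sketch: the node names and file paths here are placeholders, not values from your model, so verify the flags against your Model Zoo version's help output.

```shell
# Hypothetical example: override the boundary nodes only if your model's
# layer names differ from the pre-trained ones in the YAML file.
hailomz compile yolov8n \
  --ckpt ./my_yolov8n.onnx \
  --calib-path ./calib_images/ \
  --start-node-names images \
  --end-node-names output0
```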
Thanks a lot! It worked after adding quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0]) in hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls.
Nice!
It seems like everything is working well; the only remaining issue seems to be with the labels file. In the detection example from hailo-rpi5-examples, there's a --labels-json flag which you can use to configure the labels that will be drawn on the image.
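As a starting point, a minimal labels file could look like the sketch below. I'm not certain of the exact schema --labels-json expects, so treat the keys here as assumptions and check them against the example's documentation; the class names are placeholders.

```json
{
  "detection_threshold": 0.5,
  "max_boxes": 200,
  "labels": ["unlabeled", "person", "car", "dog"]
}
```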
I see now that there's also a mix-up where classes 1, 10, and 25 are detected as the same class. This could be due to quantization noise. You can try using a larger calibration set and more advanced post-quantization algorithms. You can read about it here.
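As a rough sketch, enabling a stronger post-quantization pass is done through the model script (.alls). The command names and parameters below are assumptions based on recent DFC releases, and the values are illustrative; verify both against the Dataflow Compiler documentation for your version.

```
# Illustrative model-script lines -- confirm exact syntax in your DFC docs.
model_optimization_flavor(optimization_level=2)
post_quantization_optimization(finetune, policy=enabled, dataset_size=1024)
```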
The issue concerns compatibility between the DFC and HailoRT (firmware and drivers).
If you would like to use the latest DFC, the matching HailoRT upgrade is already available; you just need to follow its installation instructions. From previous topics, the easiest way on the RPi5 is to upgrade using DKMS.
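For reference, the DKMS route is roughly the following. The exact package names and the .deb filename are assumptions (they depend on the HailoRT release you download), so adapt them to the files from the Developer Zone.

```shell
# Sketch of a DKMS-based driver upgrade on Raspberry Pi 5 -- package and
# file names below are placeholders for the release you actually download.
sudo apt update
sudo apt install dkms
sudo dpkg -i hailort-pcie-driver_<version>_all.deb
sudo reboot
```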
During HEF conversion, I'm seeing this warning message.
I have an NVIDIA GeForce GTX 1650 GPU.
Could this cause the wrong detections? (I mean the accuracy of the model: object detection is happening, but the detections are not accurate.)
When the GPU is not detected and you don't explicitly set the post-quantization algorithms in the model script, the only algorithm used is equalization, which doesn't seem to be enough for your model.
If you set the algorithms manually, they will run very slowly without a GPU.
I see that you've also posted on this topic. Have you made sure your CUDA version is 11.8?
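A quick way to check is a small helper that parses the version reported by nvcc. This is just a convenience sketch; it returns None when nvcc isn't on the PATH, in which case the CUDA toolkit is likely not installed (or not visible to the DFC's environment).

```python
import re
import shutil
import subprocess

def cuda_version():
    """Return the CUDA toolkit version reported by nvcc, or None if nvcc is absent."""
    if shutil.which("nvcc") is None:
        return None
    out = subprocess.run(["nvcc", "--version"],
                         capture_output=True, text=True).stdout
    # nvcc prints a line like: "Cuda compilation tools, release 11.8, V11.8.89"
    match = re.search(r"release (\d+\.\d+)", out)
    return match.group(1) if match else None

# The DFC's GPU-accelerated quantization expects CUDA 11.8.
print(cuda_version())
```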