Unstable output by detection.py

Hi,
I am new to this task and am asking for the right direction to dig into. I am using a Raspberry Pi 5 + USB camera + Hailo-8 HAT for object detection. I am using the Object Detection example with yolov11n_h8l, but the other HEFs show similar behavior. My command is
python basic_pipelines/detection.py --input usb --hef-path ~/hailo-rpi5-examples/resources/yolov11n_h8l.hef
When I run it, I see a window with the webcam feed and the detected objects. The problem is that even when the camera and the objects stand completely still, the detections flicker from frame to frame. For example, with a cellphone and a can in front of the camera at 30 fps, many frames show both objects detected, while others show only the cellphone, only the can, or neither. I understand this might be related to the confidence threshold, but it happens even when the reported confidence is relatively high. I am wondering what can be done to fix or improve it. Is there a parameter I can pass to detection.py to make the output more stable at the expense of processing time? Or is the only way to retrain the network for my specific purpose, which I would like to avoid?
Alex

Welcome to the Hailo Community!

I recommend checking out the Model Explorer in the Developer Zone. It showcases all available models from the Model Zoo, along with their accuracy metrics. You’ll notice that several models outperform the YOLOv11n model in terms of accuracy.

Hailo.ai - Model Explorer

Additionally, training models using images captured from your actual hardware under real lighting conditions can significantly improve accuracy.

Keep in mind that some datasets - such as COCO - include many contextual images of older phones, often being held by people. As a result, newer phone models or phones placed on surfaces like tables may not be detected effectively. Using relevant and representative training data for your use case can make a difference.
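One more option that doesn't require retraining: since the objects are stationary, you can smooth the per-frame results yourself. The following is a minimal, hypothetical sketch (not part of the Hailo examples; the `DetectionSmoother` class and its parameters are my own invention) that only reports a class as present once it has appeared in a minimum number of recent frames, which damps the flicker at the cost of a short reaction delay:

```python
from collections import deque


class DetectionSmoother:
    """Report a class as present only if it was detected in at least
    `min_hits` of the last `window` frames. This damps per-frame
    flicker without touching the model or its confidence threshold."""

    def __init__(self, window=10, min_hits=6):
        self.min_hits = min_hits
        # deque with maxlen automatically drops the oldest frame
        self.history = deque(maxlen=window)

    def update(self, labels):
        # labels: iterable of class names detected in the current frame
        self.history.append(set(labels))
        counts = {}
        for frame_labels in self.history:
            for label in frame_labels:
                counts[label] = counts.get(label, 0) + 1
        # keep only classes seen often enough in the recent window
        return {label for label, c in counts.items() if c >= self.min_hits}


# Example: "phone" flickers in and out across 5 frames
smoother = DetectionSmoother(window=5, min_hits=3)
for frame_labels in [{"phone"}, {"phone", "can"}, {"phone"}, {"can"}, {"phone"}]:
    stable = smoother.update(frame_labels)
```

You would call `update()` from the detection callback in detection.py with the labels of that frame, and display `stable` instead of the raw per-frame output. Tune `window` and `min_hits` to trade responsiveness for stability.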