Hailo-8: minimum object size for detection

Hi All

I am using the Hailo-8 with a Raspberry Pi, following the examples in hailo-ai/hailo-rpi5-examples.

What is the minimum area an object needs to cover in the frame to be detected? When I place an object further away from my table, it is not detected.

This is important because I would like the camera to detect an object while it is still far away and approaching.

Hey @user254 ,

There isn’t a magic “percentage of screen” that guarantees your object will be detected. It really comes down to which model you’re using, what input size it expects, and how your images get processed beforehand.

Since you’re working with our examples, you’re probably running either YOLOv6n or YOLOv8s on 640×640 input images. Here’s the reality: objects smaller than about 20-24 pixels on their shortest side are basically a coin flip for detection. Things get much more reliable once they hit around 32+ pixels in height or width. Move your object too far away so it shrinks below that threshold, and you’ll start missing detections.
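To get a feel for what that threshold means with a 1080p camera feeding a 640×640 network, you can work out the scale factor yourself. This is just a sketch assuming a simple letterbox resize; the exact preprocessing depends on how your particular pipeline scales the frame:

```python
def letterbox_scale(src_w: int, src_h: int, dst_w: int, dst_h: int) -> float:
    """Uniform scale factor applied when letterboxing a src frame into dst."""
    return min(dst_w / src_w, dst_h / src_h)

# Letterboxing 1920x1080 into 640x640 scales everything by 1/3.
scale = letterbox_scale(1920, 1080, 640, 640)

obj_px_1080p = 100                 # object height in the raw camera frame
print(obj_px_1080p * scale)        # ~33 px at the network input: just above 32
```

So an object that looks a comfortable 100 px tall in your raw 1080p frame is already hovering right at the reliability threshold by the time the network sees it.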

Quick fixes you can try right now:

Adjust your camera optics - This is your best bet. Use a lens with a longer focal length (narrower field of view) to effectively “zoom in” on distant objects. This makes them appear larger in pixels without hurting your frame rate.

Lower your confidence threshold - If your script lets you adjust the confidence setting (usually with something like --conf), try dropping it from 0.5 to 0.3. Just be ready for more false positives, and consider using the tracking features to smooth out results between frames.

Rule of thumb: Aim for your target object to be at least 32 pixels on its shortest side in the final image that goes to the network. If it’s smaller, you need to either increase your input resolution, get a narrower lens, or move your camera closer.
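To turn that rule of thumb into a rough working distance, you can use a pinhole camera model. All the numbers below (object size, field of view) are example assumptions, not your actual camera specs:

```python
import math

def max_distance_m(object_size_m: float, min_pixels: float,
                   image_px: float, fov_deg: float) -> float:
    """Farthest distance at which an object of the given real-world size
    still spans min_pixels in the image, under a pinhole camera model."""
    # Focal length in pixels from the field of view and image width.
    focal_px = image_px / (2 * math.tan(math.radians(fov_deg) / 2))
    return focal_px * object_size_m / min_pixels

# e.g. a 0.5 m tall animal, 66 degree horizontal FOV, 640 px network input
print(round(max_distance_m(0.5, 32, 640, 66), 1))  # ~7.7 (metres)
```

Narrowing the field of view raises `focal_px` and pushes that maximum distance out proportionally, which is why a longer lens is the most effective fix.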

@omria I am using the Hailo module out of the box with my RP5. How can I check whether I am using YOLOv6n or YOLOv8s?

Also, my camera is 1920×1080. Does the 640×640 mean Hailo downscales the entire frame to 640×640, or does it segment the original frame into smaller 640×640 tiles?

At the moment I am using a Raspberry Pi 5 with your Hailo module, and I am also running your detection example. Regarding dropping the confidence from 0.5 to 0.3, can this be done in that script?

Yes, I will try different lenses, but I wanted to keep the field of view wide so the camera can detect a person or animal approaching from the sides. So I guess a higher-resolution camera covering the same field of view is the best way.