Hi everyone,
I’m running a Hailo-8 (PCIe) setup for automated photo processing at high-contrast venues (resorts, pools, dark aquariums). The goal is highly accurate instance segmentation of humans and animals on 1920x1440 photos.
I am currently using the official yolov8m_seg.hef (Medium) from the Model Zoo. I’ve heavily optimized my Python post-processing (custom NMS, 15% relative size filters to drop background people, and strict “Anchor Core” logic to kill disconnected background islands).
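For context, stripped-down versions of those two post-processing filters look roughly like this (a simplified NumPy-only sketch with my own helper names, not the exact production code — the real pipeline also handles per-class logic):

```python
import numpy as np
from collections import deque

def keep_anchor_core(mask, anchor_xy):
    """Keep only the connected mask component containing the anchor point
    (e.g. the detection's box center); drop disconnected background islands.
    Simple BFS flood fill over a boolean mask."""
    h, w = mask.shape
    ax, ay = anchor_xy
    if not mask[ay, ax]:
        return np.zeros_like(mask)
    keep = np.zeros_like(mask)
    keep[ay, ax] = True
    q = deque([(ay, ax)])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not keep[ny, nx]:
                keep[ny, nx] = True
                q.append((ny, nx))
    return keep

def relative_size_filter(masks, min_frac=0.15):
    """Drop detections whose mask area is below min_frac of the largest
    mask's area (one way to read "15% relative size" — background people
    are much smaller than foreground subjects)."""
    areas = [int(m.sum()) for m in masks]
    if not areas:
        return masks
    cutoff = max(areas) * min_frac
    return [m for m, a in zip(masks, areas) if a >= cutoff]
```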
However, I’ve hit a hard physical limit with the ‘M’ model’s native mask precision due to the 640x640 inference scale and 8-bit quantization.
The Problem: I am stuck in a thresholding trap. Because the AI generates a blurry probability gradient (a “fat marker” outline) at 640x640, upscaling that mask to my 12MP original creates a massive margin of error at the edges.
- If I set the threshold too low, the mask bleeds into the background, catching pool chairs and hotel balconies that sit inside that blurry edge.
- If I set the threshold too high, it clips the fine details of the subjects (fingers, edges of faces, or macaw tail feathers).
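Here is a toy 1-D illustration of that band (hypothetical numbers, just to show the scale of the effect): a soft ~4 px edge ramp in the 640-scale probability mask becomes an ambiguity band roughly 3x wider once upscaled to 1920, so the 0.3 vs 0.7 threshold choice moves the boundary by on the order of ten pixels:

```python
import numpy as np

# Toy 1-D "probability mask" edge: a soft sigmoid ramp around x=320
# at 640 scale, mimicking the blurry proto-mask gradient at a boundary.
x640 = np.arange(640, dtype=np.float64)
prob = 1.0 / (1.0 + np.exp(-(320.0 - x640) / 2.0))  # subject on the left

# Linear upscale to 1920 (3x): the same ramp now spans ~3x as many pixels.
x1920 = np.linspace(0, 639, 1920)
prob_up = np.interp(x1920, x640, prob)

def edge_at(p, thr):
    """Index of the first pixel that falls below the threshold."""
    return int(np.argmax(p < thr))

low, high = edge_at(prob_up, 0.3), edge_at(prob_up, 0.7)
band = low - high  # pixels of ambiguity between the two thresholds
```

A sharper native mask shrinks the ramp itself, which is why no threshold tuning on the blurry one can fully fix this.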
I need a heavier model with deeper contextual awareness to generate a sharper, tighter native mask before the post-processing threshold is even applied.
Does anyone have a pre-compiled .hef file for YOLOv8x-seg or YOLOv9x-seg configured for the Hailo-8?
A few notes on my setup:
- Hardware: Hailo-8 (PCIe).
- Input size: Standard 640x640 is fine (I handle letterboxing and coordinate mapping in OpenCV).
- FPS is not an issue: This is for batch photo processing, not real-time video. I have a generous 10 to 15-second processing window per image, so the slower inference time of the massive 'X' models (and the SRAM strain) is completely acceptable for my use case.
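For reference, the coordinate mapping I mentioned is just the inverse of the letterbox transform — a minimal sketch with my own helper names (not from the HailoRT API), assuming the image is scaled to fit and centered with symmetric padding:

```python
def letterbox_params(src_w, src_h, dst=640):
    """Scale factor and padding used when letterboxing src into a dst x dst square."""
    scale = dst / max(src_w, src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst - new_w) // 2, (dst - new_h) // 2
    return scale, pad_x, pad_y

def to_original(x, y, src_w, src_h, dst=640):
    """Map a point in the 640x640 letterboxed frame back to the original image:
    subtract the padding, then divide out the scale."""
    scale, pad_x, pad_y = letterbox_params(src_w, src_h, dst)
    return (x - pad_x) / scale, (y - pad_y) / scale
```

For 1920x1440 this gives a 1/3 scale with 80 px of vertical padding, so the model's 640x480 active area maps cleanly back onto the full frame.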
If anyone has an exported .hef they’d be willing to share, or knows where I can find one, I would massively appreciate it!
Thanks in advance.