I’m seeing hailort-postprocess mentioned in a few places, but I don’t know what it means. Looking in the HailoRT docs, there is no mention of “postprocess”; at least, searching for it returns nothing.
```python
import numpy as np
from typing import Dict, List

def extract_detections(
    hailo_output: List[np.ndarray], h: int, w: int, threshold: float = 0.5
) -> Dict[str, np.ndarray]:
    """Extract detections from the HailoRT-postprocess output."""
    xyxy: List[np.ndarray] = []
    confidence: List[float] = []
    class_id: List[int] = []
    num_detections: int = 0
    # One output array per class; rows are [y_min, x_min, y_max, x_max, score], normalized to [0, 1].
    for i, detections in enumerate(hailo_output):
        if len(detections) == 0:
            continue
        for det in detections:
            if det[4] < threshold:
                continue
            # Scale the normalized box to pixel xyxy coordinates.
            xyxy.append(np.array([det[1] * w, det[0] * h, det[3] * w, det[2] * h]))
            confidence.append(float(det[4]))
            class_id.append(i)
            num_detections += 1
    return {"xyxy": np.array(xyxy), "confidence": np.array(confidence),
            "class_id": np.array(class_id), "num_detections": num_detections}
```
The hailo_output has clearly already been through post-processing, but I can’t see where this is defined. I’ve checked the HailoAsyncInference class and can’t see where the post-processing takes place.
The Hailo architecture was designed to accelerate the compute-intensive parts of neural network inference.
At the beginning and end of these models there are sometimes operations that are not supported on the Hailo hardware, for various reasons. We call these pre- and post-processing, and they need to be executed on the host CPU or GPU.
Some of these post-processing steps are common, like the NMS in YOLO models, and we provide them as part of the model conversion flow: you can add NMS post-processing with a model script command, and it will become part of the HEF file and run on the host CPU.
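As a rough sketch of what that can look like with the Dataflow Compiler’s Python API (the HAR/JSON file names and the meta_arch/engine values below are placeholders for illustration, not a drop-in recipe; check the Dataflow Compiler user guide for the exact options for your model):

```python
from hailo_sdk_client import ClientRunner

# Load an already-parsed/optimized model (placeholder path).
runner = ClientRunner(har="yolov8n_quantized.har")

# Model script command that appends the NMS post-process to the model;
# engine=cpu keeps the NMS on the host CPU, as described above.
runner.load_model_script(
    'nms_postprocess("yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)\n'
)

hef = runner.compile()  # the NMS is now part of the compiled HEF
with open("yolov8n_with_nms.hef", "wb") as f:
    f.write(hef)
```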
If you develop your own model with layers that are not supported on the Hailo device, you will need to implement the post-processing in the programming language of your choice and run it as part of your application.
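As a purely illustrative sketch of that kind of host-side post-processing (plain NumPy, not a Hailo API), a greedy NMS over already-decoded boxes could look like this:

```python
import numpy as np

def simple_nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.45) -> list:
    """Greedy NMS over boxes in (x1, y1, x2, y2) format; returns kept indices."""
    order = scores.argsort()[::-1]   # process boxes from highest to lowest score
    keep = []
    while order.size > 0:
        i = int(order[0])
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection-over-union between the kept box and the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thr]  # drop boxes that overlap too much
    return keep
```

The same idea applies to segmentation or pose heads: you read the raw output tensors from the device and decode them in your application code.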
Understood. I’m not sure; we need to assess where we allocate development resources.
We do provide post-processing code for these models as part of TAPPAS.
Can post-processing be added to the HEF for YOLOv8+ segmentation and pose models? In the C++ examples provided, I see that post-processing still needs to be done afterwards, but I am not sure. Thanks!
Are there any plans to implement something like this in the future, similar to how it’s done for YOLOv5 segmentation models? It would be really useful, so I was wondering whether it’s possible and whether it’s on the roadmap.