Hailort-postprocess explanation.

I’m seeing hailort-postprocess mentioned in a few places but I don’t know what it means. Looking in the hailort docs, there is no mention of “postprocess” — or at least searching for it returns nothing.

What does this mean?

In this section of code

from typing import Dict, List

import numpy as np


def extract_detections(
    hailo_output: List[np.ndarray], h: int, w: int, threshold: float = 0.5
) -> Dict[str, np.ndarray]:
    """Extract detections from the HailoRT-postprocess output."""
    xyxy: List[np.ndarray] = []
    confidence: List[float] = []
    class_id: List[int] = []
    num_detections: int = 0

    for i, detections in enumerate(hailo_output):
        if len(detections) == 0:
            continue
The hailo_output has clearly already been through postprocessing, but I can’t see where this is defined. I’ve checked the HailoAsyncInference class and can’t see where the post-processing takes place.

Can someone please explain this to me?

The Hailo architecture was designed to accelerate the compute-intensive parts of neural network inference.

At the beginning and end of these models you sometimes have operations that are not supported on the Hailo hardware for various reasons. We call these pre- and post-processing. These operations need to be executed on the host CPU or GPU.
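As a concrete illustration of host-side pre-processing: before a frame reaches the device, it is typically resized to the model’s input resolution on the CPU. The sketch below is a hypothetical, dependency-free example (real pipelines often use OpenCV and letterboxing instead of a plain stretch; the 640×640 input size is just an assumption):

```python
import numpy as np


def preprocess(frame: np.ndarray, input_h: int = 640, input_w: int = 640) -> np.ndarray:
    """Resize an HWC uint8 frame to the model input size (illustrative sketch).

    Uses nearest-neighbour index mapping so it needs only NumPy; a real
    pipeline would usually letterbox to preserve aspect ratio.
    """
    h, w = frame.shape[:2]
    row_idx = np.arange(input_h) * h // input_h  # map output rows to source rows
    col_idx = np.arange(input_w) * w // input_w  # map output cols to source cols
    return frame[row_idx][:, col_idx]


frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(preprocess(frame).shape)  # (640, 640, 3)
```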

Some of these post-processes are common, like NMS in YOLO models, and we do provide them as part of the model conversion flow. You can add NMS post-processing using a model script command; it then becomes part of the HEF file and runs on the host CPU.
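For reference, adding NMS through a model script looks roughly like this (the config path and argument values below are placeholders — check the Dataflow Compiler documentation for the exact syntax for your model):

```
# Example model script line (illustrative, not exact syntax):
nms_postprocess("path/to/yolov8_nms_config.json", meta_arch=yolov8, engine=cpu)
```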

If you develop your own model with layers not supported by the Hailo device, you will need to implement the post-processing using the programming language of your choice and run it as part of your application.
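If you do end up writing your own post-processing, a host-side NMS can be quite compact. The following is a minimal, class-agnostic sketch in plain NumPy (boxes as `[x1, y1, x2, y2]` rows — not any specific Hailo output format):

```python
import numpy as np


def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5) -> list:
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]  # drop boxes overlapping box i too much
    return keep


boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```

The second box overlaps the first heavily (IoU ≈ 0.68), so it is suppressed, while the non-overlapping third box survives.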

@rosslote
We wrote a guide on object detection model integration and in that guide we explain the output format in detail. Please see if that helps in understanding: User Guide 2: Running Your First Object Detection Model on a Hailo Device Using DeGirum PySDK. However, the guide does not explain how it is used in HailoAsyncInference.

Do you know if yolo_pose models will ever be supported?

We do have yolov8 pose estimation models in the Model Zoo.

GitHub - Hailo Model Zoo - HAILO8 pose estimation

I mean adding the postprocessing to the hef through the model script. I can see that the pose models are not supported.

Understood. I’m not sure; we need to assess where to allocate development resources.
We do provide post-processing code for these models as part of TAPPAS.

GitHub - Tappas - Postprocesses - Pose estimation

Can post-processing be added to the HEF for YOLOv8+ segmentation and pose models? In the C++ examples provided, I see that post-processing is still needed afterwards, but I am still not sure. Thanks!

Are there any plans to implement something like this in the future, similar to how it’s done for YOLOv5 segmentation models? It would be really useful, so I was wondering if it’s possible and if there are any future plans for it.