We attempted to follow the Apps-Infra developer guide and use the HailoRT Python API to build a pipeline manually.
However:
- Example code from the docs references attributes like `InferVStreams.input_vstreams`, which are not present in the 4.23 bindings.
- The tensor objects returned by `.infer()` do not match the example shapes.
- Without an official detection demo showing the expected vstream names, tensor layout, and postprocess path, it is unclear how to correctly decode YOLOv5m output for this HEF.
Put simply: we lack a working YOLOv5m Apps-Infra example or reference implementation to follow.
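To make the decoding question concrete, here is the kind of postprocess we have been trying to reproduce by hand: a minimal NumPy sketch of standard anchor-based YOLOv5 decoding for one output scale. The `(H, W, A*(5+C))` layout, the anchor values, and the decode formulas are all assumptions taken from the generic YOLOv5 recipe; we do not know whether this HEF's output tensors actually follow them, which is exactly why we are asking for a reference.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolov5_scale(raw, anchors, stride, num_classes=80):
    """Decode one YOLOv5 output scale (H, W, A*(5+C)) into candidate boxes.

    Generic YOLOv5 decode (assumed; the HEF's real layout is unknown):
      xy = (2*sigmoid(t_xy) - 0.5 + grid_cell) * stride
      wh = (2*sigmoid(t_wh))**2 * anchor
    Returns an (N, 5+C) array: x, y, w, h, objectness, class scores.
    """
    h, w, _ = raw.shape
    a = len(anchors)
    raw = raw.reshape(h, w, a, 5 + num_classes)
    p = sigmoid(raw)

    # Grid of cell indices, broadcast over the anchor axis.
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack([gx, gy], axis=-1)[:, :, None, :]       # (h, w, 1, 2)

    xy = (p[..., 0:2] * 2.0 - 0.5 + grid) * stride          # box center, pixels
    wh = (p[..., 2:4] * 2.0) ** 2 * np.asarray(anchors)     # box size, pixels
    out = np.concatenate([xy, wh, p[..., 4:]], axis=-1)
    return out.reshape(-1, 5 + num_classes)

# Example: 80x80 grid at stride 8 with YOLOv5's default small-object anchors.
raw = np.zeros((80, 80, 3 * 85), dtype=np.float32)
dets = decode_yolov5_scale(raw, anchors=[(10, 13), (16, 30), (33, 23)], stride=8)
print(dets.shape)  # (19200, 85)
```

Even if this decode were correct, we would still need to know which vstream corresponds to which scale and whether the HEF already applies the sigmoid on-chip.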
What We Need…
Could Hailo please clarify one of the following?
A. TAPPAS / GStreamer Path
Is there an official runtime JSON for yolov5m_wo_spp.hef (Hailo-8L) that is compatible with /usr/lib/.../libyolo_hailortpp_post.so, i.e. something like yolov5m_wo_spp_tappas.json?
If so:
- Where should it be located in the SW Suite or Apps-Infra repository?
- If not included in this release, can Hailo provide one (even a minimal version) to enable the standard detection pipeline?
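For illustration only, the shape we are imagining is a small JSON fragment carrying thresholds and per-scale anchors, along the lines of the Model Zoo detection configs. Every field name below is a guess at the schema, not a known-valid config for libyolo_hailortpp_post.so; we simply want to show the level of detail we are missing.

```json
{
  "iou_threshold": 0.45,
  "detection_threshold": 0.3,
  "label_offset": 1,
  "max_boxes": 100,
  "anchors": [
    [116, 90, 156, 198, 373, 326],
    [30, 61, 62, 45, 59, 119],
    [10, 13, 16, 30, 33, 23]
  ]
}
```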
B. Apps-Infra / SDK Python Path
If the recommended RPi workflow for 4.23/Trixie is now Apps-Infra (instead of TAPPAS):
- Is there an official YOLOv5m detection example using Apps-Infra 25.10?
- What is the correct way to:
  - Access YOLOv5m vstreams on Hailo-8L
  - Run `infer()` in Python
  - Apply the proper postprocess (NMS/decoding) using Model Zoo configs
- Is there a sample similar to the `hailo-detection` CLI for YOLOv8, but targeting YOLOv5m?
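On the NMS side, we can write a generic implementation ourselves; what we cannot guess is the score layout and thresholds Hailo expects. For completeness, this is the greedy NMS we would apply after decoding, over `(x1, y1, x2, y2)` boxes with per-box scores. It is our own sketch for illustration, not Hailo's postprocess.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression.

    boxes: (N, 4) as x1, y1, x2, y2; scores: (N,).
    Returns indices of kept boxes, highest score first.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the current top box with the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping the kept one
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the overlapping pair collapses to one box
```

What remains unclear is everything upstream of this step for our HEF: raw vs. on-chip-activated outputs, tensor ordering, and the correct confidence computation.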
Why This Blocks Our Project
We have a full traffic-monitoring application running (camera, radar, low-power mode, session logging, UI, calibration, etc.). The only missing piece is a working inference layer.
Right now:
- The HEF loads correctly.
- All other subsystems are functional.
- But detections = 0, because we do not have the correct postprocess configuration or a runtime example for YOLOv5m_wo_spp.