Apps-Infra / HailoRT Python example for YOLOv5m_wo_spp appears to be missing

We attempted to follow the Apps-Infra developer guide and use HailoRT Python to build a pipeline manually.

However:

  • Example code from the docs references attributes such as InferVStreams.input_vstreams, which are not present in the 4.23 bindings.

  • The tensor objects returned by .infer() do not match the shapes shown in the examples.

  • Without an official detection demo showing expected vstream names, tensor layout, and postprocess path, it’s unclear how to correctly decode YOLOv5m output for this HEF.

Put simply: we lack a working YOLOv5m Apps-Infra example or reference implementation to follow.


What We Need

Could Hailo please clarify one of the following?

A. TAPPAS / GStreamer Path

Is there an official runtime JSON for yolov5m_wo_spp.hef (Hailo-8L) that is compatible with /usr/lib/.../libyolo_hailortpp_post.so, i.e. something like yolov5m_wo_spp_tappas.json?

If so:

  • Where should it be located in the SW Suite or Apps-Infra repository?

  • If not included in this release, can Hailo provide one (even minimal) to enable the standard detection pipeline?


B. Apps-Infra / SDK Python Path

If the recommended RPi workflow for 4.23/Trixie is now Apps-Infra (instead of TAPPAS):

  1. Is there an official YOLOv5m detection example using Apps-Infra 25.10?

  2. What is the correct way to:

    • Access YOLOv5m vstreams on Hailo-8L

    • Run inference in Python

    • Apply the proper postprocess (NMS/decoding) using Model Zoo configs

  3. Is there a sample similar to the hailo-detection CLI for YOLOv8, but targeting YOLOv5m?
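For context on point 2, this is the kind of postprocess we are trying to reproduce. The sketch below is NOT Hailo's implementation and does not use Model Zoo configs; it is a minimal, generic numpy version of YOLOv5-style NMS, assuming boxes have already been decoded to (cx, cy, w, h), and the 0.45 IoU threshold is a common default rather than a confirmed Model Zoo value:

```python
import numpy as np

def xywh_to_xyxy(boxes):
    """Convert (cx, cy, w, h) boxes to (x1, y1, x2, y2) corner form."""
    out = np.empty_like(boxes)
    out[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    out[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    out[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    out[:, 3] = boxes[:, 1] + boxes[:, 3] / 2
    return out

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes.

    Returns the indices of kept boxes, highest score first.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the current top box with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Keep only boxes that do not overlap the winner too much.
        order = rest[iou <= iou_thresh]
    return keep
```

For example, two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives. What we cannot reconstruct ourselves is how Hailo's on-device/runtime NMS and the Model Zoo config map onto the yolov5m_wo_spp output tensors, which is why an official example matters.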


Why This Blocks Our Project

We have a full traffic-monitoring application running (camera, radar, low-power mode, session logging, UI, calibration, etc.). The only missing piece is a working inference layer.

Right now:

  • The HEF loads correctly.

  • All other subsystems are functional.

  • But detections = 0 because we do not have the correct postprocess configuration or runtime example for YOLOv5m_wo_spp.

Hey @user337,

You can try running (inside the virtual environment):

hailo-download-resources --arch hailo8 --group detection
hailo-detect --hef-path resources/models/hailo8/yolov5m_wo_spp.hef --labels-json yolov5m_wo_spp.json

This should work out of the box since yolov5m_wo_spp uses the same NMS/post-processing as the other models.

Give it a shot and let me know what you get.
If it fails, please share the logs or any errors you see — the hailort.log would be super helpful too!

I think I’m going to admit defeat for now. Until Hailo has a more streamlined workflow, I think I have to abandon the Hailo hardware on the Raspberry Pi.

I’ve sunk too many hours into this with really nothing to show for it.

I’m going to attempt my project with the Raspberry Pi AI Camera. I think it may actually be better suited for my purpose anyway. I’ll probably revisit the Hailo hardware in several months. It just seems like the ecosystem isn’t fully developed with advanced hobbyists in mind. After attempting my project with every possible combination of Bookworm and older Hailo versions, Trixie and the newest releases, different versions of Python, various YOLO versions, GStreamer, Apps-Infra… I have been unable to find a combination that didn’t have a roadblock of some kind. I’m sure there is a magic combination, but I can’t find it.

I need more complete, documented examples with clear implementation guidance, and a properly documented Python library, similar to the type of documentation that Adafruit provides with their various HATs and modules.

I didn’t realize when I got the Hailo-based hardware that it wasn’t really for advanced hobbyists, but rather for professional engineers.