Personal Opinion about Hailo Zoo

I think it’s great that Hailo provides many pre-trained .hef models in the Zoo for people to use. However, the technical problem I’ve encountered is that there is no clear documentation on how to actually use these .hef files.

For example, I bought an RPi 5 and a Hailo-8 device, downloaded segformer_b0_bn.hef and other .hef files, but I still don’t know how to make them work.

I even tried asking ChatGPT, Grok, and Gemini to generate Python scripts that could run inference on the RPi 5 using .hef models, but after many attempts, I still couldn’t get it working.

Therefore, I would like to suggest that Hailo provide some tutorials—such as sample Python code—that clearly show users how to use .hef files for inference.

P.S. I’ve seen others raise the same question, and Hailo’s response was basically “please check the examples.” Honestly, I find that kind of reply unhelpful—it feels like no real answer at all.

Hi @Chin_Hsu,

Glad to hear that you’re making your first steps with Hailo and Raspberry Pi!

You have a few ways of using the HEF:

  1. From within the Model-Zoo itself, check the evaluation and visualization options.

  2. Using the hailo-apps GitHub repository for some pre-baked pipelines using the HEFs from the MZ.

    Note that the hailo-apps examples do not cover all of the networks that are in the MZ.
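For a network that has no ready-made pipeline, a third option is to call the HailoRT Python API (`hailo_platform`) directly. Below is a minimal sketch of that flow: load the HEF, configure a virtual device, and push one frame through inference vstreams. It assumes HailoRT is installed with its Python bindings, a PCIe-attached Hailo-8 (hence `HailoStreamInterface.PCIe`), and `FormatType.FLOAT32` host-side buffers; the `preprocess` helper here is only a placeholder (real resizing/normalization is model-specific), and the HEF filename is just the one mentioned in the thread.

```python
# Sketch: single-image inference with a .hef via the HailoRT Python API.
# Assumptions: hailo_platform installed, Hailo-8 on PCIe, FLOAT32 host buffers.
import numpy as np


def preprocess(image: np.ndarray, height: int, width: int) -> np.ndarray:
    """Placeholder preprocessing: crop/zero-pad to the model's input size
    and add a batch dimension. Real models need proper resize/normalize."""
    out = np.zeros((height, width, image.shape[2]), dtype=np.float32)
    h = min(height, image.shape[0])
    w = min(width, image.shape[1])
    out[:h, :w] = image[:h, :w]
    return out[np.newaxis, ...]  # shape: (1, H, W, C)


def run_hef(hef_path: str, image: np.ndarray) -> dict:
    """Run one frame through the HEF; returns {output_name: np.ndarray}."""
    # Imported lazily so this module loads even without a Hailo device.
    from hailo_platform import (HEF, VDevice, ConfigureParams,
                                HailoStreamInterface, InferVStreams,
                                InputVStreamParams, OutputVStreamParams,
                                FormatType)

    hef = HEF(hef_path)
    with VDevice() as device:
        params = ConfigureParams.create_from_hef(
            hef, interface=HailoStreamInterface.PCIe)
        network_group = device.configure(hef, params)[0]
        in_params = InputVStreamParams.make(
            network_group, format_type=FormatType.FLOAT32)
        out_params = OutputVStreamParams.make(
            network_group, format_type=FormatType.FLOAT32)
        in_info = hef.get_input_vstream_infos()[0]
        h, w, c = in_info.shape
        batch = preprocess(image, h, w)
        with network_group.activate(network_group.create_params()):
            with InferVStreams(network_group, in_params, out_params) as infer:
                return infer.infer({in_info.name: batch})


# Usage (on the Pi, with the HEF in the working directory):
#   frame = np.random.rand(480, 640, 3).astype(np.float32)
#   outputs = run_hef("segformer_b0_bn.hef", frame)
#   for name, tensor in outputs.items():
#       print(name, tensor.shape)
```

For a segmentation model like segformer, the returned tensor would then be argmax-ed over the class channel to get a per-pixel label map; that post-processing step is outside the scope of this sketch.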

Thank you for sharing. Currently, there are tutorials on depth, detection, detection_simple, face_recognition, instance_segmentation, and pose_estimation. It would be perfect if you could also add semantic segmentation.