Looking for the right pretrained model

Hi!

I wrote an application using hailo-rpi5-examples/doc/basic-pipelines.md at main · hailo-ai/hailo-rpi5-examples · GitHub as an example, and the tutorial seems to imply that running this script will detect people, cars, buses, etc. (the standard COCO objects).

Looking at the basic pipeline code, I see that this script defaults to …/resources/yolov8s_h8l.hef, which should currently link to https://hailo-csdata.s3.eu-west-2.amazonaws.com/resources/hefs/h8l_rpi/yolov8s_h8l.hef. When running the code, it doesn't seem to detect anything other than people. Is the example misleading and the default model only detects people, or am I missing something? I also saw that the model zoo has a separate vehicle detection model (hailo_model_zoo/hailo_models/vehicle_detection/README.rst at master · hailo-ai/hailo_model_zoo · GitHub) and wasn't sure whether that split was intentional.

I am currently looking for a lightweight pretrained YOLOX model trained on the standard COCO objects that I can use without having to build a custom model. Can you point me to an appropriate model file, if one exists?

Hey @tinouye,

The default model is trained on the COCO dataset. As you can see in the code snippet below, the tutorial only demonstrates person detection, but you can easily add other classes and modify the code to suit your needs.

# Parse the detections (inside the callback; `detections` was extracted from
# the GStreamer buffer earlier in the callback, e.g. via
# hailo.get_roi_from_buffer(buffer).get_objects_typed(hailo.HAILO_DETECTION))
detection_count = 0
for detection in detections:
    label = detection.get_label()
    bbox = detection.get_bbox()
    confidence = detection.get_confidence()
    if label == "person":  # the demo only reports "person" detections
        string_to_print += f"Detection: {label} {confidence:.2f}\n"
        detection_count += 1

if user_data.use_frame:
    # Note: using imshow will not work here, as the callback function is not running in the main thread
    # Let's print the detection count to the frame
    cv2.putText(frame, f"Detections: {detection_count}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    
    # Example of how to use the new_variable and new_function from the user_data
    # Let's print the new_variable and the result of the new_function to the frame
    cv2.putText(frame, f"{user_data.new_function()} {user_data.new_variable}", (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    
    # Convert the frame to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    user_data.set_frame(frame)

print(string_to_print)
return Gst.PadProbeReturn.OK

This code is located in the detection.py file, within the app_callback function.
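
For example, here is a minimal sketch of how the filter above could be extended to count additional classes (the label strings are assumed to match the standard COCO class names emitted by the model's post-processing):

# Sketch: count several COCO classes instead of only "person"
TRACKED_LABELS = {"person", "car", "bus", "truck", "bicycle"}

detection_count = 0
for detection in detections:
    label = detection.get_label()
    confidence = detection.get_confidence()
    if label in TRACKED_LABELS:
        string_to_print += f"Detection: {label} {confidence:.2f}\n"
        detection_count += 1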

For more models, please check out the Hailo Model Zoo repository: https://github.com/hailo-ai/hailo_model_zoo. We have a wide range of models available for various detection tasks.

Let me know if you have any further questions or if there’s anything else I can assist you with.

Best regards,
Omria

Ah, sorry for the trouble. I took a closer look at my code and found the issue: I was not properly parsing the detection results from hailo.run. It seems the output of this method was updated in one of the latest hailo-all versions. Can you point me to any documentation that specifies how the results are structured so I can keep an eye on it across releases? Release notes would be great too, if you have them.

For our latest documentation and release notes, please visit: https://hailo.ai/developer-zone/documentation/

Be sure to check “What’s New in Hailo AI Software Suite (2025-01)” for information about our newest features and updates.
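
In the meantime, a quick way to see how the detection objects are structured on your installed version is to introspect one of them inside the callback. This is a hypothetical snippet, not part of the example code:

# Dump the public attributes of one detection object so changes to the
# result structure between hailo-all releases are easy to spot.
for detection in detections:
    print(type(detection).__name__)
    print([name for name in dir(detection) if not name.startswith("_")])
    break  # inspecting one object is enough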
