Usability of RPi 5 + Hailo and extensibility questions

Hello, we bought some Hailo accelerators and I have a few general questions, just to map out our roadmap:

  • We have already deployed Jetson devices / CUDA (NVIDIA RTX cards) as edge devices.
  • We have a pipeline that collects data and trains models using YOLO.

We use PyTorch, CUDA, and JetPack.

Q1) I'm sure we have to convert our PyTorch- or ONNX-exported model weights to HEF. Could you provide us with the DFC so we can automate this? In case a user has Hailo devices, our pipeline needs to generate precompiled weights that can be pulled from our model registry (something like the sketch below).
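
To make the automation concrete, this is roughly what I have in mind, assuming we get DFC access: a minimal sketch using the `hailo_sdk_client` package that ships with the Dataflow Compiler. The model/node names, input shape, calibration data, and `hw_arch` ("hailo8l" for the RPi 5 AI Kit, "hailo8" for other modules) are placeholders, not tested code.

```python
# Sketch only: assumes the Hailo Dataflow Compiler is installed, which provides
# the hailo_sdk_client package. Model/node names, input shape, calibration data
# and hw_arch are placeholders.
import numpy as np
from hailo_sdk_client import ClientRunner

ONNX_PATH = "yolov8n.onnx"            # exported from our PyTorch training pipeline
MODEL_NAME = "yolov8n"
calib_set = np.load("calib.npy")      # e.g. (N, 640, 640, 3) float32 calibration images

runner = ClientRunner(hw_arch="hailo8l")

# 1) Parse: translate the ONNX graph into Hailo's internal representation.
runner.translate_onnx_model(
    ONNX_PATH,
    MODEL_NAME,
    start_node_names=["images"],                    # placeholder node name
    net_input_shapes={"images": [1, 3, 640, 640]},  # placeholder shape
)

# 2) Optimize: post-training quantization using the calibration set.
runner.optimize(calib_set)

# 3) Compile: produce the HEF binary, ready to push to our model registry.
hef_bytes = runner.compile()
with open(f"{MODEL_NAME}.hef", "wb") as f:
    f.write(hef_bytes)
```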

Q2) Is it possible to contribute to YOLO so that when passing device=="HAILO", Ultralytics itself will handle Hailo execution? (See the sketch below for the kind of call such a backend would need to wrap.)
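
For context on what such a device=="HAILO" backend would need to do under the hood, here is a minimal inference sketch following the pattern in Hailo's HailoRT Python examples; the HEF path and dummy input are placeholders, and exact class names may vary between HailoRT versions.

```python
# Sketch only: minimal inference of a compiled HEF with the HailoRT Python API,
# following the pattern in Hailo's Python examples. The HEF path and the dummy
# input frame are placeholders.
import numpy as np
from hailo_platform import (HEF, VDevice, HailoStreamInterface, ConfigureParams,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

hef = HEF("yolov8n.hef")

with VDevice() as device:
    configure_params = ConfigureParams.create_from_hef(hef=hef,
                                                       interface=HailoStreamInterface.PCIe)
    network_group = device.configure(hef, configure_params)[0]
    ng_params = network_group.create_params()

    in_params = InputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)
    out_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    in_info = hef.get_input_vstream_infos()[0]
    h, w, c = in_info.shape
    frame = np.zeros((1, h, w, c), dtype=np.float32)  # dummy frame; real code feeds images

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(ng_params):
            outputs = pipeline.infer({in_info.name: frame})
            # outputs maps each output vstream name to a numpy array
            # (raw YOLO head outputs, still needing decoding/NMS on the host)
```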

Welcome to the Hailo Community!

The Hailo Dataflow Compiler is available in the Developer Zone to registered users.

If you want to use the Hailo Dataflow Compiler on your platform to allow users to generate models in HEF format, I will need to put you in contact with our sales and legal team to set up the terms under which you can do that. Please let me know if you are interested in this option.

The conversion of a model to HEF format requires the use of our Hailo Dataflow Compiler.
We have extensive support for the YOLO models, e.g., pre-built HEF files, support in the Hailo Model Zoo, a retraining Docker image, and use in many of our examples.
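
As an illustration of the Model Zoo route, here is a sketch of how the compile step could be scripted from your pipeline after retraining; the model name, paths, and --hw-arch value are placeholders, and the exact flags should be checked with `hailomz compile --help` for your Model Zoo version.

```python
# Sketch only: driving the Hailo Model Zoo CLI from a pipeline after retraining.
# Model name, paths and --hw-arch are placeholders; verify the flags against
# `hailomz compile --help` for the installed Model Zoo version.
import subprocess

subprocess.run(
    [
        "hailomz", "compile", "yolov8n",
        "--ckpt", "runs/train/weights/best.onnx",  # retrained weights exported to ONNX
        "--calib-path", "datasets/calib_images/",  # folder of calibration images
        "--hw-arch", "hailo8l",                    # RPi 5 AI Kit; "hailo8" for other modules
    ],
    check=True,
)
```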

GitHub - Hailo Model Zoo - Public Models

GitHub - Hailo Model Zoo - Retrain on custom dataset

GitHub - Tappas - Detection