Over the last few months, we noticed a number of developers reaching out for help compiling YOLO models for Hailo. We provided one-off support, but it wasn’t scalable. So, we built something better.
It’s called the DeGirum Cloud Compiler, and it’s now in early access. It is a browser-based tool that lets you compile YOLO models for Hailo in just a few clicks. No setup, no toolchains, no guesswork. Just upload your PyTorch checkpoint and get back a Hailo-optimized model (.hef), often in under 10 minutes.
Note: You’ll need a free DeGirum AI Hub account to use it and test your models in the browser.
What it supports
YOLOv8 and YOLO11
All model sizes: n, s, m, l, x
Tasks like:
Object detection
Classification
Semantic segmentation
Oriented bounding box detection
Keypoint estimation
Custom input resolutions during compilation
Optimized C++ postprocessing integrated via DeGirum PySDK (see the PySDK sketch after this list)
Instant testing in the browser, no download or setup needed
Fast compiles (usually under 10 mins)
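Once compiled, the resulting model can be loaded through DeGirum PySDK, with the optimized C++ postprocessor bundled in, so detections come back already decoded. Here is a minimal sketch of what that looks like; the zoo URL, model name, and token are placeholders you would replace with your own values:

```python
# Rough sketch: running a Cloud Compiler output through DeGirum PySDK.
# The zoo URL, model name, and token below are placeholders.
import degirum as dg

# Connect to the AI Hub cloud (or a local Hailo host) and your private model zoo
zoo = dg.connect(
    dg.CLOUD,
    "<your cloud zoo URL>",          # placeholder: your workspace's model zoo
    token="<your AI Hub token>",
)

# Load the compiled model; the optimized C++ postprocessing ships with it
model = zoo.load_model("<your_compiled_yolo_model>")

# Run inference on a test image and inspect the decoded detections
result = model("test.jpg")
print(result.results)  # bounding boxes, labels, and confidence scores
```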
Who this is for
This might be for you if you’re working with YOLO models and want to:
Skip the toolchain wrangling
Go straight from a PyTorch checkpoint to a Hailo HEF file (see the training sketch after this list)
Try out different model sizes and resolutions in minutes
See if your model works before deploying, right in the browser
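For context, the “PyTorch checkpoint” here is just the .pt weights file you already get from training, for example with the ultralytics package. A minimal sketch, where the model name, dataset, and training settings are only examples:

```python
# Sketch: producing a PyTorch checkpoint to upload to the Cloud Compiler.
# Assumes the ultralytics package; model and dataset names are examples only.
from ultralytics import YOLO

# Start from a pretrained checkpoint (yolov8s.pt shown; yolo11s.pt works too)
model = YOLO("yolov8s.pt")

# Fine-tune on your own dataset; adjust data/epochs/imgsz for your project
model.train(data="my_dataset.yaml", epochs=50, imgsz=640)

# The trained weights (e.g., runs/detect/train/weights/best.pt) are what you
# upload; the compiler returns a Hailo .hef for that checkpoint.
```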
Why early access?
The tool is new, and we want your feedback. If something is confusing or broken, we want to hear about it. If something works well, we want to build on it.
Note: The early access period is free of charge, but may conclude at any time. After this period, access to the Cloud Compiler may transition to a paid subscription model. Participants will be notified in advance of any changes to access terms.
That sounds fantastic. I’ll use it if I get the chance. Just one question: will the servers retain the files, or do they just process the conversion and return the result to the user? Will the Cloud Compiler work like DeGirum’s AI Hub, where the converted files are kept available or made public? I compare it to Roboflow, where the free account makes datasets available to the public. In some cases, I prefer that my files or models not be made public.
@Luiz_Mageste
The compiled models are saved to a model zoo that is private to the user’s workspace. Users can delete a model from the zoo after they download it. Only models maintained by DeGirum in the degirum workspace are made public. Hope this clarifies your question.
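If you want to double-check what is stored in your private zoo, you can also list it from DeGirum PySDK. A small sketch, with the zoo URL and token as placeholders:

```python
# Sketch: listing the compiled models in your private workspace zoo.
# The zoo URL and token are placeholders; replace them with your own.
import degirum as dg

zoo = dg.connect(
    dg.CLOUD,
    "<your private zoo URL>",
    token="<your AI Hub token>",
)

# Print the names of the models currently stored in this zoo
for name in zoo.list_models():
    print(name)
```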
Hello, can you also run local LLMs with Hailo, and would it be possible to use LLMs with ComfyUI or other AI image generators? What simple online or local image-training programs are available for training on your own image series and usable with Hailo? I am stuck on the car project, which should autonomously distinguish between good and bad.
If anyone has any ideas or suggestions, please let me know.
Hi @TM_TMK
Welcome to the Hailo community. The current compiler is for Hailo8/Hailo8L, which are focused on computer vision models. LLMs will be supported on Hailo10, which will be available in the future.
Hey! I tried it with a YOLOv8s model. It feels like magic; you guys have done a great job!! I think I got the results in less than 5 minutes, with quantization performed on 100 images. I’ll have a look at the other functionality of the cloud platform, but this conversion tool alone would justify a subscription on my side, or a pay-per-conversion token.