2. Why is it so complicated to take a Hailo model and use it on Hailo-8L instead of Hailo-8? (yolov4_license_plates.hef is built for Hailo-8 and I am now trying to convert it.)
How can I be sure that I'm providing the right end nodes to the script? How can I be absolutely sure where to find the nodes in netron.app?
I can upload the ONNX file if needed.
And if this model is not supported in some way, how is it possible that it can be downloaded from the ALPR page of Hailo?
The nms_postprocess() script doesn’t support meta_arch=yolo_v4. Valid values are:
yolov5, yolov5_seg, yolox, yolo8, ssd, centernet
So even though the YAML might use yolo_v4, you’ll need to handle postprocessing differently — outside of the model script.
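As a minimal host-side sketch of what that postprocessing step involves (greedy NMS over already-decoded boxes; the box format and threshold here are illustrative assumptions, not Hailo API calls):

```python
# Sketch of host-side postprocessing for an unsupported meta_arch:
# after decoding the raw YOLOv4 head outputs into boxes + scores yourself,
# run non-maximum suppression to drop overlapping detections.

def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thr=0.45):
    # Greedy NMS: keep highest-scoring boxes, discard heavy overlaps.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Two overlapping plate detections plus one separate one:
boxes = [(10, 10, 50, 30), (12, 11, 52, 31), (100, 100, 140, 120)]
scores = [0.9, 0.6, 0.8]
print(nms(boxes, scores))  # → [0, 2]
```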
2. Why is it complicated to use this model on Hailo-8L vs Hailo-8?
A model compiled for Hailo-8L can run on a Hailo-8, but not the opposite: the Hailo-8 has more resources, so it can handle a model built for fewer contexts, while a model compiled for Hailo-8 may not fit on the Hailo-8L. So if you are currently compiling for hailo8, just add:
hw_arch='hailo8l'
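For example, if you compile through the Model Zoo CLI, the target device is selected with the --hw-arch flag (a sketch; check `hailomz compile --help` in your installation for the exact syntax):

```shell
hailomz compile yolov4_license_plates --hw-arch hailo8l
```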
3. How do I find end_node_names with Netron?
Open your .onnx in Netron and:
Look for the last Conv or Concat layers before detection heads.
Confirm they output feature maps shaped like [batch, anchors * (classes + 5), h, w].
Use those as your end_node_names.
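As a quick sanity check on that shape: a single-class license-plate detector with 3 anchors per scale should show 3 * (1 + 5) = 18 output channels on each head (the anchor count is the usual YOLO default; confirm it against your model's config):

```python
# Expected channel count of a YOLO-family detection head.
def head_channels(num_anchors, num_classes):
    # 5 = 4 box coordinates + 1 objectness score
    return num_anchors * (num_classes + 5)

print(head_channels(3, 1))   # → 18  (single-class license-plate head)
print(head_channels(3, 80))  # → 255 (COCO-style 80-class head)
```

If the channel dimension of a candidate end node in Netron doesn't match this formula, it is probably not a detection head.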
Feel free to share your ONNX if you want me to verify them for you.
4. Is this model supported via ALPR?
Yes — but the postprocessing is done via a custom .so (libyolo_post.so) in TAPPAS, not nms_postprocess().
If you’re building the flow yourself, you’ll need to either:
Use that .so in your pipeline (e.g. with hailofilter), or
Restructure the model to fit a supported meta_arch (like yolov5).
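As a rough sketch of the first option, a TAPPAS-style pipeline loads the postprocessing library through hailofilter's so-path property (the path and function name below are placeholders; take the exact values from the TAPPAS LPR example):

```shell
gst-launch-1.0 <source and preprocessing> ! \
    hailonet hef-path=yolov4_license_plates.hef ! \
    hailofilter so-path=<path-to>/libyolo_post.so function-name=<postprocess-function> qos=false ! \
    <overlay and sink>
```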
We’re also working on a new LPR model that runs cleanly on both Hailo-8 and Hailo-8L — and will be included in the Raspberry Pi examples soon.