Converting tiny_yolov4_license_plates model for Hailo-8L

I used this guide: hailo_model_zoo/hailo_models/license_plate_detection/docs/TRAINING_GUIDE.rst at master · hailo-ai/hailo_model_zoo · GitHub
to convert the tiny_yolov4_license_plates model from Hailo.
I did it with Docker on a Linux machine.

Now I need to add the NMS layer.
I used this guide:

But I get an error:
ValueError: 'yolo_v4' is not a valid NMSMetaArchitectures

What I would like to know is:

  1. In this file: hailo_model_zoo/hailo_model_zoo/cfg/networks/tiny_yolov4_license_plates.yaml at master · hailo-ai/hailo_model_zoo · GitHub
     the meta_arch is yolo_v4, so why am I even getting this error?
     I used this:

     model_script_commands = [
         "normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n",
         "resize_input1 = resize(resize_shapes=[461,461])\n",
         "nms_postprocess(meta_arch=yolo_v4, engine=cpu, nms_scores_th=0.2, nms_iou_th=0.4)\n",
     ]

  2. Why is it so complicated to take a Hailo model and use it on the Hailo-8L instead of the Hailo-8? (The yolov4_license_plates.hef file is for the Hailo-8, and I am now trying to convert it.)

  3. How can I be sure that I'm providing the right end nodes to the script? How can I be absolutely sure where to find the nodes in netron.app?

I can upload the ONNX file if needed.
And if this model is not supported in some way, how come it is possible to get it from the ALPR page of Hailo?

Hey @Andrey,

1. Why does meta_arch=yolo_v4 cause a ValueError?

The nms_postprocess() script doesn’t support meta_arch=yolo_v4.
Valid values are:

  • yolov5, yolov5_seg, yolox, yolo8, ssd, centernet

So even though the YAML might use yolo_v4, you’ll need to handle postprocessing differently — outside of the model script.
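Since meta_arch=yolo_v4 isn't accepted, one workaround is to leave NMS out of the model script entirely and run it on the host after decoding the raw head outputs. A minimal pure-Python sketch (greedy, class-agnostic NMS; the thresholds mirror the ones from your model script, and boxes are assumed to already be decoded to [x1, y1, x2, y2]):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, score_th=0.2, iou_th=0.4):
    """Greedy class-agnostic NMS; returns surviving (box, score) pairs."""
    # Drop low-confidence boxes, then visit the rest from best to worst.
    dets = sorted(
        (d for d in zip(boxes, scores) if d[1] >= score_th),
        key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        # Keep a box only if it doesn't overlap too much with anything kept.
        if all(iou(box, kb) <= iou_th for kb, _ in kept):
            kept.append((box, score))
    return kept
```

This is just a sketch of the idea; for real throughput you'd vectorize it (NumPy/OpenCV both ship NMS implementations), but it shows that nothing forces the NMS to live inside the HEF.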


2. Why is it complicated to use this model on Hailo-8L vs Hailo-8?

A model compiled for the Hailo-8L can run on the Hailo-8, but not the other way around: the Hailo-8 has more resources, so it can handle a model built into fewer contexts, while a Hailo-8 HEF may not fit on the Hailo-8L. So if you want to target the Hailo-8L, just add
hw_arch='hailo8l'
when compiling.
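Concretely, if you are compiling through the Model Zoo CLI, the target architecture is passed as a flag (a command sketch; double-check the flag spelling against your hailo_model_zoo version):

```shell
# Compile the model for the Hailo-8L instead of the default Hailo-8.
hailomz compile tiny_yolov4_license_plates --hw-arch hailo8l
```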


3. How do I find end_node_names with Netron?

Open your .onnx in Netron and:

  1. Look for the last Conv or Concat layers before detection heads.
  2. Confirm they output feature maps shaped like [batch, anchors * (classes + 5), h, w].
  3. Use those as your end_node_names.
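One way to double-check step 2: for a single-class plate detector with the usual 3 anchors per head (an assumption; confirm the anchor count in the network YAML), each end node should carry 3 * (1 + 5) = 18 channels:

```python
def head_channels(num_classes: int, num_anchors: int = 3) -> int:
    # Each anchor predicts 4 box coordinates + 1 objectness + num_classes scores.
    return num_anchors * (num_classes + 5)

# A single-class license-plate detector should expose 18-channel end nodes:
print(head_channels(1))  # -> 18
```

If the channel count of a candidate node in Netron doesn't match this formula, you've likely picked a layer that is either too early or already part of some decoding logic.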

Feel free to share your ONNX if you want me to verify them for you.


4. Is this model supported via ALPR?

Yes — but the postprocessing is done via a custom .so (libyolo_post.so) in TAPPAS, not nms_postprocess().

If you’re building the flow yourself, you’ll need to either:

  • Use that .so in your pipeline (e.g. with hailofilter), or
  • Restructure the model to fit a supported meta_arch (like yolov5).
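For the first option, the TAPPAS pipelines attach the .so roughly like this (a trimmed gst-launch sketch; treat the path and surrounding elements as placeholders and take the exact values from the TAPPAS LPR pipeline scripts):

```shell
gst-launch-1.0 ... \
    ! hailonet hef-path=tiny_yolov4_license_plates.hef \
    ! hailofilter so-path=/path/to/libyolo_post.so qos=false \
    ! ...
```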

We’re also working on a new LPR model that runs cleanly on both Hailo-8 and Hailo-8L — and will be included in the Raspberry Pi examples soon.
