License Plate Recognition (LPR) on Rpi5 with Hailo8L

Hi Everyone,

I’m looking for some guidance on how to get started with Hailo License Plate Recognition (LPR) on an RPi5 with a Hailo8L. I’ve already gone through some documentation related to it and tried doing my own research, but I’m kind of blocked and unsure where to start. My goal is to use and combine the existing models below from the Model Zoo.

I have successfully cloned and installed the Model Zoo on my desktop machine. Could someone please guide me on how I can put these models together so that I can run a full LPR pipeline, test it, and then transfer it to my Raspberry Pi 5 with a Hailo8L? Any advice would be truly appreciated.

Hi @mjl06

Here is an example of running license plate recognition (LPR) on Rpi5 with Hailo8L using DeGirum PySDK: hailo_examples/examples/026_license_plate_recognition_pipelined.ipynb at main · DeGirum/hailo_examples · GitHub . Hope this helps.
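In case it helps, here is a minimal sketch of what loading a model with DeGirum PySDK looks like. The model name and zoo URL below are placeholders on my part, not taken from the linked notebook; substitute the ones the notebook actually uses.

```python
import degirum as dg

# Placeholder model name and zoo URL; take the real ones from the
# DeGirum hailo_examples notebook linked above.
model = dg.load_model(
    model_name="yolov8n_relu6_lp--640x640_quant_hailort_hailo8l_1",
    inference_host_address="@local",  # run on the locally attached Hailo device
    zoo_url="degirum/hailo",
    token="",                         # public model zoo needs no token
)

result = model("car.jpg")             # single-image inference
print(result.results)                 # list of detections
```

This only runs on a machine with the Hailo device and PySDK installed, so treat it as orientation for the notebook rather than a standalone script.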

Go to Roboflow, download a good dataset, and train the model with an NVIDIA RTX GPU or a Google TPU.

Make sure NMS is false and batch size is 1 or 4 (I think 1 is the right batch; there may be other things too, so check the model parameters when making the .pt so the resulting ONNX can meet the Hailo requirements).

Make the .pt file, then convert the .pt file to ONNX, using only opset version 11 or 17 (maybe 8 works too, but it’s too old).
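The export step above can be sketched like this, assuming the .pt checkpoint is an Ultralytics YOLO model (an assumption on my part; "best.pt" is a placeholder for your trained weights):

```python
# Sketch: export an Ultralytics YOLO .pt checkpoint to ONNX for the Hailo toolchain.
from ultralytics import YOLO

model = YOLO("best.pt")       # placeholder path to your trained weights
model.export(
    format="onnx",
    opset=11,                 # Hailo parser wants opset 11 or 17
    batch=1,                  # fixed batch size of 1
    nms=False,                # keep NMS out of the graph; Hailo handles it separately
    simplify=True,            # fold constants so the parser sees clean layers
)
# This writes best.onnx next to the checkpoint.
```

If you trained with a different framework, the equivalent would be `torch.onnx.export` with the same opset and a fixed batch-1 input shape.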
Now you need to do :

Parsing (do this first: it analyzes the ONNX, checking its layers to see whether Hailo knows them or not. It might be that Hailo doesn’t know the last layer, because Hailo only processes the neural-network part; that’s the hard part. For example, Hailo can hand NMS off to the CPU, but you can also run NMS on the Hailo itself).
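The parsing step looks roughly like this with the Dataflow Compiler’s Python API. The node names below are placeholders (inspect your ONNX with something like Netron to find the real ones):

```python
# Sketch: parse the ONNX into a Hailo archive (.har) for the Hailo-8L.
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model(
    "best.onnx",
    "lpr_model",
    start_node_names=["images"],            # assumed input node name
    end_node_names=["/model.22/Concat_3"],  # assumed last Hailo-supported node
)
runner.save_har("lpr_model.har")
```

If the parser rejects the tail of the network, that is the “last layer Hailo doesn’t know” case: cut the graph earlier with `end_node_names` and run the cut-off part (e.g. NMS) on the CPU, or put NMS back on the Hailo as mentioned below.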

After parsing you’ll have a .har file. Next comes optimization (quantization): the model is made INT8 via calibration, so make sure you have a folder with about 1024 representative pics. That gives you an optimized INT8 .har. The last step is compilation, which turns the .har into a .hef. I strongly recommend doing all of this via Python: install the Hailo AI Software Suite (run its .sh installer so you get the Docker image), then go to the documentation to learn the Python commands to parse, optimize, and compile, and that’s it.
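A sketch of the optimize and compile steps, continuing from the parsed .har. The paths, the 640×640 input size, and the file names are assumptions on my part; adjust them to your model:

```python
# Sketch: INT8 calibration and compilation of a parsed .har into a .hef.
from pathlib import Path

import numpy as np
from PIL import Image
from hailo_sdk_client import ClientRunner

# Build the calibration set: ~1024 representative images stacked into one array.
calib_paths = sorted(Path("calib_images").glob("*.jpg"))[:1024]
calib = np.stack([
    np.array(Image.open(p).convert("RGB").resize((640, 640)))
    for p in calib_paths
])  # shape (N, 640, 640, 3), dtype uint8

runner = ClientRunner(har="lpr_model.har", hw_arch="hailo8l")
runner.optimize(calib)                      # quantize to INT8 using the calib set
runner.save_har("lpr_model_optimized.har")

hef = runner.compile()                      # returns the compiled .hef as bytes
Path("lpr_model.hef").write_bytes(hef)
```

This is what the Python tutorials in the Hailo docs walk through step by step, so cross-check the exact calls against the Dataflow Compiler version you install.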

One more thing: if you want full optimization instead of level-0 optimization, build a new Docker image with CUDA preinstalled (12.3 is also OK) and cuDNN (cuDNN 9 is a must), install the latest Dataflow Compiler .whl inside it, and then run the optimize and compile steps again in that GPU-enabled container so you get the full optimization. Don’t forget to install the NVIDIA Container Toolkit on WSL so Docker can see the GPU.

Last thing: check how to run NMS on the Hailo itself. I forget exactly how to do it, but it’s possible.
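If I remember right, on-chip NMS goes through a model script; this is a rough sketch from memory and the arguments are assumptions, so verify the exact `nms_postprocess` syntax in the Dataflow Compiler docs before relying on it:

```python
# Sketch (unverified): attach NMS as a Hailo post-process via a model script.
from hailo_sdk_client import ClientRunner

runner = ClientRunner(har="lpr_model.har", hw_arch="hailo8l")
runner.load_model_script(
    "nms_postprocess(meta_arch=yolov8, engine=nn_core)\n"  # engine=cpu offloads it instead
)
# Then re-run optimize() and compile() as before; NMS becomes part of the HEF.
```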

Good luck. If you didn’t understand what I said, just give it to an AI model and it will explain everything. For the AI: tell it the user should search the Hailo documents in the Developer Zone → Documents → “Python API” or “API Reference”. There are also Python tutorials for each step: parsing, optimization, and compilation.

Good luck. It’s not hard, but it might take you some time to understand what to do.