License plate detection - How to use?

Hey there!

I am trying to implement license plate detection. I found a license plate detection model in the Model Zoo, but it wasn't built for the Hailo-8L; in another thread I found a compatible model.

Here is the code I am using to run the detection, but it doesn't detect anything at all. I am moving a license plate around in front of the camera in all directions, but nothing happens. What could be wrong here?

import numpy as np
import cv2
import threading
from picamera2 import Picamera2
from picamera2.devices import Hailo
from flask import Flask, Response
from libcamera import controls
import libcamera

app = Flask(__name__)

output_frame = None
lock = threading.Lock()

# Parse an NMS-ed output: one list per class, each entry holding
# [y0, x0, y1, x1, score, ...] in normalized coordinates.
def extract_detections(hailo_output, w, h, class_names, threshold=0.1):
    results = []
    for class_id, detections in enumerate(hailo_output):
        for detection in detections:
            score = detection[4]
            if score >= threshold:
                y0, x0, y1, x1 = detection[:4]
                bbox = (int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h))
                results.append([class_names[class_id], bbox, score])
    return results

def draw_objects(frame, detections, w, h):
    for class_name, bbox, score in detections:
        x0, y0, x1, y1 = bbox
        label = f"{class_name}: {score:.2f}"
        cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2.putText(frame, label, (x0, y0 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    return frame

def detection_thread():
    global output_frame

    with Hailo("./tiny_yolov4_license_plates_new.hef") as hailo:
        model_h, model_w, _ = hailo.get_input_shape()
        video_w, video_h = model_w, model_h  

        with open("coco.txt", 'r', encoding="utf-8") as f:
            class_names = f.read().splitlines()

        with Picamera2() as picam2:
            main = {'size': (video_w, video_h), 'format': 'XRGB8888'}
            lores = {'size': (model_w, model_h), 'format': 'RGB888'}
            config = picam2.create_preview_configuration(main, lores=lores)
            config["transform"] = libcamera.Transform(vflip=1, hflip=1)  
            picam2.configure(config)

            focus_mode = {"AeEnable": 1, "AfMode": controls.AfModeEnum.Manual, "LensPosition": 1}
            picam2.set_controls(focus_mode)

            picam2.start()

            while True:
                lores_frame = picam2.capture_array('lores')
                results = hailo.run(lores_frame)
                detections = extract_detections(results['tiny_yolov4_license_plates/conv19'], model_w, model_h, class_names, 0.1)

                lores_frame = draw_objects(lores_frame, detections, model_w, model_h)

                with lock:
                    output_frame = lores_frame.copy()

def generate_frame():
    global output_frame
    while True:
        with lock:
            if output_frame is None:
                continue
            ret, jpeg = cv2.imencode('.jpg', output_frame)
            frame = jpeg.tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(generate_frame(), mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == "__main__":
    threading.Thread(target=detection_thread, daemon=True).start()
    app.run(host='0.0.0.0', port=5000)

Hi,
First, here’s a good reference for the overall pipeline that is expected to run:
tappas/apps/h8/gstreamer/general/license_plate_recognition at master · hailo-ai/tappas (github.com)

As you can see there, the pipeline is built from three steps:

  1. Detect cars & crop them
  2. From the car crops, detect license plates & crop them
  3. Apply OCR on the cropped license plates

You basically took the second phase and applied it to the raw input stream from your camera, which it is not trained to work on.
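To mimic the first two stages, you would run a vehicle detector first and feed only the car crops to the plate model. Here is a minimal sketch of that cascade, reusing extract_detections from your script; the car HEF name is a placeholder for whatever vehicle detector you have compiled for the Hailo-8L, and whether two HEFs can be open at once depends on your wrapper version, so you may need to run them sequentially:

import cv2
from picamera2.devices import Hailo

with Hailo("./yolov8s_cars.hef") as car_net, \
     Hailo("./tiny_yolov4_license_plates_new.hef") as plate_net:
    car_h, car_w, _ = car_net.get_input_shape()
    plate_h, plate_w, _ = plate_net.get_input_shape()

    frame = cv2.imread("street.jpg")          # or a capture_array() frame
    frame = cv2.resize(frame, (car_w, car_h))

    # Stage 1: find cars on the full frame. We assume the car HEF, like the
    # plate HEF in the script above, returns a dict of NMS-ed per-class boxes.
    car_out = car_net.run(frame)
    cars = extract_detections(next(iter(car_out.values())),
                              car_w, car_h, ["car"], threshold=0.4)

    # Stage 2: run the plate detector only on each (resized) car crop.
    for _, (x0, y0, x1, y1), _ in cars:
        crop = cv2.resize(frame[y0:y1, x0:x1], (plate_w, plate_h))
        plate_out = plate_net.run(crop)
        # ...parse the plate boxes here; they are relative to the crop, so
        # map them back into full-frame coordinates before drawing.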

Another issue to tackle: since there is no publicly available dataset for license plates, we’ve trained the precompiled network on Israeli license plates, which may differ in appearance from the ones you use.

Hi Nadav,
Thanks for the clarification about the model being trained on Israeli license plates. That could definitely be part of the issue. However, even though I’m skipping the three-phase pipeline (car → plate → OCR), shouldn’t the model still work to some extent? Maybe not perfectly, but I’d expect some detections.

I’ve even tried using images of Israeli cars, but I’m still not getting any results. Now I’m wondering if the issue lies in how I’m using the model on the Raspberry Pi. The model is YOLOv4, while all the Hailo examples on the Raspberry Pi seem to use YOLOv8.

Also, I have one more related question: is it possible to run the OCR model mentioned in the reference on the Raspberry Pi 5?

You can try using the video that is used in our app, which is part of TAPPAS.
The tiny_v4 model expects only a car crop, mostly forward-facing, nothing more.
You are right that the YOLOv4 model has slightly different post-processing than v8.
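A quick sanity check is to print what the HEF actually returns. If tiny_yolov4_license_plates/conv19 is a raw convolutional feature map rather than a per-class list of NMS-ed boxes, your extract_detections has nothing sensible to parse. A minimal check, assuming run() returns a dict keyed by output-layer name as in your script:

import numpy as np
from picamera2.devices import Hailo

with Hailo("./tiny_yolov4_license_plates_new.hef") as hailo:
    h, w, _ = hailo.get_input_shape()
    dummy = np.zeros((h, w, 3), dtype=np.uint8)   # blank test frame
    out = hailo.run(dummy)
    for name, tensor in out.items():
        arr = np.asarray(tensor)
        print(name, arr.shape, arr.dtype)

A shape like (13, 13, 18) would be a raw YOLOv4 grid that still needs anchor decoding and NMS on the host, while a per-class list of five-element rows means your parsing should work.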

There should not be any issue running the OCR on the RPi 5.
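For reference, a rough sketch of what the OCR stage could look like, assuming you compile the TAPPAS OCR network (lprnet) into a HEF for the Hailo-8L; the file name, the character set, and the greedy CTC-style decode below are assumptions to verify against the actual model, not a tested recipe:

import numpy as np
import cv2
from picamera2.devices import Hailo

# Hypothetical alphabet; the real lprnet character set ships with TAPPAS.
CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

with Hailo("./lprnet.hef") as ocr:            # HEF name assumed
    h, w, _ = ocr.get_input_shape()
    plate = cv2.imread("plate_crop.jpg")      # a crop from the plate detector
    plate = cv2.resize(plate, (w, h))
    out = ocr.run(plate)
    logits = np.asarray(next(iter(out.values())))   # assume a single output head

    # Greedy CTC-style decode: argmax per time step, collapse repeats,
    # and treat indices beyond the alphabet as the blank symbol.
    best = logits.reshape(-1, logits.shape[-1]).argmax(axis=-1)
    text, prev = "", -1
    for idx in best:
        if idx != prev and idx < len(CHARS):
            text += CHARS[idx]
        prev = idx
    print("plate:", text)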

Hey Nadav, I downloaded the corresponding tiny YOLOv4 model as a weight file from your website, and it seems to work fine for inference on my Jetson with random uncropped videos from YouTube. It detects plenty of license plates. But when I try to run the Hailo-8L model on the Raspberry Pi 5, I get zero detection results. My feeling is it has to do with the post-processing. Maybe you can share a working solution as an example.

Were you able to solve this?