Bad Image Quality When Cropping Car Plate

Hello Hailo Developer Community,

This is my first post here, and I am excited to join this amazing community! I am currently working on a car license plate detection project and recently encountered an issue I need help understanding.

Here’s a bit about me: I have over 18 years of experience in the IT field, but I am completely new to AI on edge devices. This is my first project using the Hailo platform, and I am learning as I go. I would greatly appreciate your expertise and guidance!


Project Overview:

  1. Setup:
    I have successfully trained a YOLOv8n model to detect license plates and exported it to ONNX format. Afterward, I compiled it into an HEF file using the Hailo Dataflow Compiler.
  2. Pipeline:
    I am running the compiled HEF model on a Raspberry Pi 5 with the Hailo HAT+ accelerator. My pipeline processes video input, preprocesses the frames, runs inference, and annotates the detected objects.
  3. Note:
    I have slightly modified the pipeline from the Hailo-rpi5-examples repository on GitHub.

The Issue:

When I compare the original frames to the annotated frames (where bounding boxes and labels are added), I notice significant degradation in image quality. This degradation also affects cropped license plate images extracted using the bounding box coordinates.

Specifically:

  • Original Image (IM01_O): Proper resolution and color representation.
  • Annotated Image (IM01_withAnno): Colors appear washed out, and the resolution is visibly reduced.
  • Cropped License Plate Image: The cropped regions are blurry and unclear, making them unsuitable for the next steps (e.g., OCR).

Note: I had to blur the license plate in the original image for obvious privacy reasons. :blush:


Steps Taken to Debug:

  1. Preprocessed Input:
    I inspected the preprocessed frames (before inference) and saved them. The preprocessed images already exhibit degraded quality, indicating that the issue might occur during preprocessing or resizing.
  2. Raw Model Output:
    I verified that the bounding box predictions are accurate. The issue does not seem to originate from the model’s output.
  3. Annotated Output:
    The annotated frames confirm that the degradation happens early in the pipeline, potentially during preprocessing.

What I Need Help With:

  • Understanding the Problem: Why does the image quality degrade during preprocessing and annotation?
  • Suggestions: How can I retain the original image quality while cropping the detected license plates?
  • Optimization: Are there ways to improve the GStreamer pipeline or the preprocessing steps to resolve this issue?

I am attaching examples of the images for reference:

  • IM01_O: Original image (with blurred license plate for privacy).
  • IM01_withAnno: Annotated image with bounding boxes (don’t mind the wrong label :sweat_smile:).

Best regards,
Jalal
An IT veteran stepping into the world of AI on edge

@jalal.belghazi
From the information you provided, it seems like there are two issues:

  1. The annotated image size is 640x640, which is the input size of the model. Your original image has a much higher resolution, 900x1600 (width x height). Typically, the input image is resized to fit the model input size (you can see black bands on the sides, where the image was padded to a square without changing the aspect ratio). If you scale your bbox coordinates back to the original image size, you can extract the license plate from the original, full-resolution image.
  2. The annotated image probably has its colors inverted: BGR instead of RGB (the grass appears blue instead of green). OpenCV provides arrays in BGR format, and proper care must be taken to visualize them in the correct colorspace.

Please let me know if these suggestions help.
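Both fixes together can be sketched roughly as below. This assumes the detections come back as pixel coordinates in the 640x640 letterboxed space and that the letterbox centers the image between equal black bands; the function name and box format are illustrative, not from the examples repo:

```python
import numpy as np

def unletterbox_and_crop(original_bgr, box_640, model_size=640):
    """Map a bbox from the 640x640 letterboxed model space back to the
    original image, then crop the full-resolution region."""
    h, w = original_bgr.shape[:2]            # e.g. 1600 x 900
    scale = model_size / max(h, w)           # scale used when letterboxing
    pad_x = (model_size - w * scale) / 2     # black band width (left/right)
    pad_y = (model_size - h * scale) / 2     # black band height (top/bottom)

    x1, y1, x2, y2 = box_640
    # Undo the padding, then undo the scaling
    x1 = round((x1 - pad_x) / scale)
    x2 = round((x2 - pad_x) / scale)
    y1 = round((y1 - pad_y) / scale)
    y2 = round((y2 - pad_y) / scale)
    # Clamp to the original image bounds
    x1, x2 = max(0, x1), min(w, x2)
    y1, y2 = max(0, y1), min(h, y2)
    crop = original_bgr[y1:y2, x1:x2]
    # OpenCV arrays are BGR; reverse the channel order for RGB display
    # (equivalent to cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
    return crop[..., ::-1]
```

Because the crop is taken from the original frame rather than the 640x640 model input, it keeps the full resolution for OCR.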

Hi @jalal.belghazi, and welcome to the community!

The image quality issues you’re encountering (washed-out colors and resolution changes) are commonly caused by mismatches in the image processing pipeline. Since you’re working with YOLOv8n, here are areas you should check:

  1. Model Input Requirements:
  • Ensure your input image matches the model’s specifications, including:
    • The color space format (RGB vs BGR)
    • Any normalization requirements (e.g., 0-255 vs 0-1 range)
  2. Input Dimensions:
  • Verify if the original image is resized to match the model’s expected input size.
  • Use the hailortcli parse-hef command to check your model’s exact input/output dimensions. For more details on this command, refer to the following wiki: Hailo RidgeRun Wiki - Parse-HEF command
  3. Post-processing:
  • If normalization is applied during preprocessing, make sure it’s properly reversed before displaying/saving the image.
  • Verify that the color space is correctly converted back for display or saving.
  • Resize the image back to its original size if it was altered during preprocessing.
  • Ensure annotations are applied at the appropriate stage in the pipeline.
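A preprocessing step consistent with the points above might look like this sketch. The 640x640 RGB input is an assumption (confirm the real dimensions with hailortcli parse-hef), and the pure-NumPy nearest-neighbour resize stands in for cv2.resize only to keep the example dependency-free:

```python
import numpy as np

def letterbox_rgb(frame_bgr, size=640):
    """Resize with unchanged aspect ratio, pad to size x size with black
    bands, and convert OpenCV's BGR order to RGB."""
    h, w = frame_bgr.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(h * scale), int(w * scale)
    # Nearest-neighbour resize in pure NumPy (in a real pipeline,
    # cv2.resize with INTER_LINEAR would normally be used here)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = frame_bgr[ys][:, xs]
    canvas = np.zeros((size, size, 3), dtype=frame_bgr.dtype)  # black bands
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas[..., ::-1]  # BGR -> RGB
```

If the model also expects normalized 0-1 floats, that scaling must be undone before saving or displaying the frame, or the image will look dark or washed out.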

You can also debug your application by saving the following intermediate images:

  1. The original image
  2. The image after preprocessing (before inference)
  3. The final annotated image (post-inference)
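A minimal helper for those three save points (the names and paths here are illustrative, not from the examples repo):

```python
import os
import numpy as np

def debug_dump(stages, idx, out_dir="debug", writer=None):
    """Save each pipeline stage (name -> image array) so the stages can
    be compared side by side.  By default, arrays are saved as raw .npy
    files (lossless, no extra dependency); with OpenCV installed, pass
    writer=cv2.imwrite to get PNGs instead.  Use a lossless format so
    the dump itself adds no degradation."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for order, (name, img) in enumerate(stages.items()):
        base = os.path.join(out_dir, f"{idx:04d}_{order}_{name}")
        if writer is not None:
            writer(base + ".png", img)   # e.g. cv2.imwrite
            paths.append(base + ".png")
        else:
            np.save(base + ".npy", img)
            paths.append(base + ".npy")
    return paths

# Called once per frame inside the callback (illustrative):
# debug_dump({"original": frame, "preprocessed": pre, "annotated": out},
#            idx=frame_count)
```

Comparing the three dumps for the same frame pinpoints exactly which stage introduces the color shift or the resolution loss.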

Let me know how it goes, and feel free to reach out if you need further assistance!

Best regards,

Oscar Mendez
Embedded SW Engineer at RidgeRun
Contact us: [email protected]
Developers wiki: Hailo AI Platform Guide
Website: www.ridgerun.ai