Post Title:
Bad Image Quality When Cropping Car Plate
Post Content:
Hello Hailo Developer Community,
This is my first post here, and I am excited to join this amazing community! I am currently working on a car license plate detection project and recently encountered an issue I need help understanding.
Here’s a bit about me: I have over 18 years of experience in the IT field, but I am completely new to AI on edge devices. This is my first project using the Hailo platform, and I am learning as I go. I would greatly appreciate your expertise and guidance!
Project Overview:
- Setup:
I have successfully trained a YOLOv8n model to detect license plates and exported it to ONNX format. Afterward, I compiled it into an HEF file using the Hailo Dataflow Compiler.
- Pipeline:
I am running the compiled HEF model on a Raspberry Pi 5 with the Hailo HAT+ accelerator. My pipeline processes video input, preprocesses the frames, runs inference, and annotates the detected objects.
- Note: I have slightly modified the pipeline from the Hailo-rpi5-examples repository on GitHub.
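For context, the preprocessing stage roughly does the following (a simplified sketch, not my actual code: the 640x640 input size and the BGR-to-RGB swap are assumptions based on typical YOLOv8 export settings, and the nearest-neighbor resize stands in for the real scaler):

```python
import numpy as np

MODEL_W, MODEL_H = 640, 640  # assumed YOLOv8n input size

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize the frame to the network input size and swap channels.

    Nearest-neighbor index-based resize, for illustration only; the
    real pipeline uses a GStreamer scaler element.
    """
    h, w = frame.shape[:2]
    ys = np.arange(MODEL_H) * h // MODEL_H   # row indices to sample
    xs = np.arange(MODEL_W) * w // MODEL_W   # column indices to sample
    resized = frame[ys][:, xs]               # downscale -> resolution loss
    return resized[..., ::-1]                # BGR -> RGB swap (assumed)
```

If annotations and crops are taken from this resized buffer instead of the original frame, that alone would explain the resolution loss I am seeing, and a missed channel swap somewhere would explain the washed-out colors.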
The Issue:
When I compare the original frames to the annotated frames (where bounding boxes and labels are added), I notice significant degradation in image quality. This degradation also affects cropped license plate images extracted using the bounding box coordinates.
Specifically:
- Original Image (IM01_O): Proper resolution and color representation.
- Annotated Image (IM01_withAnno): Colors appear washed out, and the resolution is visibly reduced.
- Cropped License Plate Image: The cropped regions are blurry and unclear, making them unsuitable for the next steps (e.g., OCR).
Note: I had to blur the license plate in the original image for obvious privacy reasons.
Steps Taken to Debug:
- Preprocessed Input:
I inspected the preprocessed frames (before inference) and saved them. The preprocessed images already exhibit degraded quality, indicating that the issue might occur during preprocessing or resizing.
- Raw Model Output:
I verified that the bounding box predictions are accurate. The issue does not seem to originate from the model’s output.
- Annotated Output:
The annotated frames confirm that the degradation happens early in the pipeline, potentially during preprocessing.
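To narrow down where the quality drops, I dumped the frame at each stage and compared the arrays offline. A minimal sketch of that check (np.save stands in for the image writer I actually used, and the frames here are synthetic):

```python
import os
import tempfile
import numpy as np

def dump_stage(frame: np.ndarray, stage: str, out_dir: str) -> str:
    """Save a raw copy of the frame at a given pipeline stage so the
    stages can be diffed offline."""
    path = os.path.join(out_dir, f"{stage}.npy")
    np.save(path, frame)
    return path

# usage sketch: compare the original frame against the preprocessed one
out_dir = tempfile.mkdtemp()
original = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
preprocessed = original[::2, ::2]            # stand-in for the real resize
dump_stage(original, "original", out_dir)
dump_stage(preprocessed, "preprocessed", out_dir)

a = np.load(os.path.join(out_dir, "original.npy"))
b = np.load(os.path.join(out_dir, "preprocessed.npy"))
print(a.shape, b.shape)  # shapes differ -> degradation enters at the resize
```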
What I Need Help With:
- Understanding the Problem: Why does the image quality degrade during preprocessing and annotation?
- Suggestions: How can I retain the original image quality while cropping the detected license plates?
- Optimization: Are there ways to improve the GStreamer pipeline or the preprocessing steps to resolve this issue?
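One direction I have been considering for the cropping question: keep a reference to the original full-resolution frame and map each bounding box from model-input coordinates back to original coordinates before cropping. A sketch under the assumption of a plain resize with no letterboxing (the function name and box values are illustrative, not from my pipeline):

```python
import numpy as np

def crop_from_original(original: np.ndarray,
                       box_xyxy: tuple,
                       model_size: tuple = (640, 640)) -> np.ndarray:
    """Scale a box given in model-input pixels back to the original
    frame and crop there, so the crop keeps full resolution."""
    mh, mw = model_size
    oh, ow = original.shape[:2]
    x1, y1, x2, y2 = box_xyxy
    sx, sy = ow / mw, oh / mh                # per-axis scale factors
    x1, x2 = int(x1 * sx), int(x2 * sx)
    y1, y2 = int(y1 * sy), int(y2 * sy)
    return original[max(y1, 0):min(y2, oh), max(x1, 0):min(x2, ow)]

# usage sketch: a 1080p frame and a box detected on the 640x640 input
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
plate = crop_from_original(frame, (100, 200, 300, 260))
print(plate.shape)  # (101, 600, 3) -- full-resolution crop
```

I would appreciate confirmation whether this is the right approach with the GStreamer pipeline, or whether letterbox padding needs to be accounted for in the mapping.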
I am attaching examples of the images for reference:
- IM01_O: Original image (with blurred license plate for privacy).
- IM01_withAnno: Annotated image with bounding boxes (please ignore the incorrect label).
Best regards,
Jalal
An IT veteran stepping into the world of AI on edge