Hi Hailo Community,
I’ve been training a YOLOv11 object detection model on a custom dataset and have successfully converted the trained .pt model into a .hef using the Hailo Dataflow Compiler. However, I’m encountering issues where objects are misclassified (for example, a human is sometimes detected as a vehicle) and the overall accuracy is significantly lower than expected. I need guidance on properly configuring the .alls script, the .json config, and the calibration dataset to improve both label mapping and detection accuracy.
The link to the .pt file is here.
My custom dataset consists of five classes: Building, Vehicle, Human, Trees, and UAV (nc: 5). When defining the .json config file for the Dataflow Compiler, I want to ensure that the labels are mapped correctly and that the input shapes are properly set. I referenced a YouTube video (Video) on Hailo model conversion and followed the steps outlined there, but I still ran into these issues. Could you share best practices for configuring these settings so that the accuracy of my original .pt model is preserved? Additionally, are there specific parameters in the .json file that influence how objects are classified?
This is the labels.json I have created for the config:
{
  "detection_threshold": 0.5,
  "max_boxes": 200,
  "labels": [
    "background",
    "Building",
    "Vehicle",
    "Human",
    "Trees",
    "UAV"
  ]
}
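For reference, this is roughly how I’m sanity-checking that the label order in labels.json lines up with the class indices from my training dataset YAML. The file names and the assumption that index 0 is reserved for "background" (so dataset class i maps to labels[i + 1]) are mine, not something I found in the Hailo docs, so please correct me if that offset is wrong:

import json
import yaml  # pip install pyyaml

# My assumption: index 0 in labels.json is "background",
# so dataset class i should map to labels[i + 1].
with open("labels.json") as f:
    labels = json.load(f)["labels"]

with open("data.yaml") as f:  # the Ultralytics dataset YAML used for training
    names = yaml.safe_load(f)["names"]  # either a list or a {index: name} dict

names = list(names.values()) if isinstance(names, dict) else list(names)

for i, name in enumerate(names):
    mapped = labels[i + 1]  # offset by one for the background entry
    status = "OK" if mapped == name else "MISMATCH"
    print(f"class {i}: dataset='{name}' -> labels.json='{mapped}'  [{status}]")

If the offset should not be there (i.e., the post-process expects no background entry), that alone could explain humans being reported as vehicles, since every label would be shifted by one.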
I also need guidance on generating a proper calibration dataset for Hailo’s Dataflow Compiler. Should I be selecting a subset of images that cover different lighting conditions, object sizes, and occlusions? Should the dataset include full images with labels, or should I manually extract object crops for better calibration? Understanding the best approach here could significantly improve the model’s performance after quantization.
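For context, this is roughly how I’m building the calibration set today: I sample a spread of full training images (different lighting, object sizes, occlusions), resize them to the network input, and save them as one numpy array. The 640x640 input size, the 1024-image count, and the choice to keep raw 0-255 pixel values (because my .alls adds a normalization layer) are my own assumptions, so please point out anything the Dataflow Compiler expects differently:

import glob
import random

import cv2
import numpy as np

INPUT_H, INPUT_W = 640, 640   # assumed YOLOv11 input resolution
NUM_CALIB_IMAGES = 1024       # assumed to be enough for quantization statistics

# Sample full training images; as far as I understand, no labels or crops are needed.
paths = sorted(glob.glob("dataset/train/images/*.jpg"))
random.seed(0)
paths = random.sample(paths, min(NUM_CALIB_IMAGES, len(paths)))

calib = np.zeros((len(paths), INPUT_H, INPUT_W, 3), dtype=np.float32)
for i, p in enumerate(paths):
    img = cv2.imread(p)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # match the channel order used in training
    img = cv2.resize(img, (INPUT_W, INPUT_H))
    calib[i] = img.astype(np.float32)            # kept at 0-255; normalization is in my .alls

np.save("calib_set.npy", calib)
print("saved", calib.shape)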
Lastly, I’d appreciate any recommendations for improving the accuracy of object detection on Hailo hardware. Are there specific post-training optimizations I should consider, such as quantization-aware training, dataset augmentation, or fine-tuning methods? Could the misclassification issue be related to improper quantization settings or missing label mappings? If Hailo provides tools or techniques to refine accuracy after compilation, I’d love to learn more about them. Also, are there any best practices to reduce false positives and false negatives during inference?
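In case it helps diagnose the quantization side, this is the rough optimize/compile flow I’m using, adapted from my notes on the Dataflow Compiler tutorial notebooks. The exact API names, the hw_arch value, and the contents of the model script are my assumptions and may not match the current SDK or the YOLOv11 post-processing requirements exactly:

import numpy as np
from hailo_sdk_client import ClientRunner  # Hailo Dataflow Compiler Python API

calib = np.load("calib_set.npy")  # calibration set prepared as above

runner = ClientRunner(hw_arch="hailo8")

# I export the trained .pt to ONNX with Ultralytics first, then parse it here.
runner.translate_onnx_model(
    "yolov11_custom.onnx",
    "yolov11_custom",
)

# Minimal model script: on-chip normalization. I have not yet added an
# nms_postprocess(...) line; the Model Zoo .alls files suggest one is needed,
# but I am unsure of the correct config for YOLOv11 with 5 classes.
alls = """
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
"""
runner.load_model_script(alls)

runner.optimize(calib)   # post-training quantization with the calibration set
hef = runner.compile()   # produce the .hef

with open("yolov11_custom.hef", "wb") as f:
    f.write(hef)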
I’d really appreciate any guidance or documentation references that could help resolve these issues.
Thank you