Converting Yolov11 model trained using Custom Dataset into .HEF

Hi Hailo Community,
I’ve been training a YOLOv11 object detection model on a custom dataset and successfully converted my trained .pt model into .hef using the Hailo Model Compiler. However, I’m encountering issues where objects are being misclassified—for example, a human is sometimes detected as a vehicle—and the overall accuracy is significantly lower than expected. I need guidance on properly configuring the .alls, .json, and calibration datasets to improve both label mapping and detection accuracy.

The link to the .pt file is: here

My custom dataset consists of five classes: Building, Vehicle, Human, Trees, and UAV (nc: 5). When defining the .json config file for the Hailo Model Compiler, I want to ensure that the labels are mapped correctly and that the input shapes are properly set. I referenced a YouTube video (Video) on Hailo model conversion and followed the steps outlined there, but I still encountered these issues. Could you provide best practices for configuring these settings to maintain the accuracy of my original .pt model? Additionally, are there specific parameters in the .json file that influence how objects are classified?

This is the labels.json i have created for the configs:

{
    "detection_threshold": 0.5,
    "max_boxes": 200,
    "labels": [
        "background",
        "Building",
        "Vehicle",
        "Human",
        "Trees",
        "UAV"
    ]
}

I also need guidance on generating a proper calibration dataset for Hailo’s Dataflow Compiler. Should I be selecting a subset of images that cover different lighting conditions, object sizes, and occlusions? Should the dataset include full images with labels, or should I manually extract object crops for better calibration? Understanding the best approach here could significantly improve the model’s performance after quantization.

Lastly, I’d appreciate any recommendations for improving the accuracy of object detection on Hailo hardware. Are there specific post-training optimizations I should consider, such as quantization-aware training, dataset augmentation, or fine-tuning methods? Could the misclassification issue be related to improper quantization settings or missing label mappings? If Hailo provides tools or techniques to refine accuracy after compilation, I’d love to learn more about them. Also, are there any best practices to reduce false positives and false negatives during inference?

I’d really appreciate any guidance or documentation references that could help resolve these issues.

Thank you


Hey @mehtaaryan207,

Welcome to the Hailo Community!

The misclassification issue is caused by the extra "background" entry in your labels.json. Unless you explicitly trained your model with "background" as class 0, that entry shifts every label index by one: your model outputs Human as class index 2, but with "background" prepended, index 2 in the list is "Vehicle", which is exactly the mix-up you're seeing. Remove it so the label list matches your five training classes exactly.
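With "background" removed, your labels.json should contain only the five classes, in the same order as your training config:

```json
{
    "detection_threshold": 0.5,
    "max_boxes": 200,
    "labels": [
        "Building",
        "Vehicle",
        "Human",
        "Trees",
        "UAV"
    ]
}
```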

Here are key optimization steps for YOLOv11 on Hailo:

  1. Calibration Dataset:

    • Use 1024+ diverse images
    • Include varied lighting and scenarios
    • Use full images with bounding boxes (no manual cropping)
  2. Recommended Settings:

model_script_commands = [
    "model_optimization_flavor(optimization_level=2, compression_level=0)",
    "quantization_param(output_layer, precision_mode=a16_w16)",
    "pre_quantization_optimization(activation_clipping, layers={*}, mode=percentile, clipping_values=[0.01, 99.99])"
]
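As a concrete sketch of the calibration-set guidance above: the Dataflow Compiler consumes calibration data as an array of preprocessed images, so one simple approach is to resize a diverse subset of your full training images and stack them into a NumPy array. The snippet below is only an illustration, assuming a 640x640 RGB input (adjust `size` to your network's actual input shape); `build_calib_set` is a made-up helper name, not a Hailo API.

```python
# Minimal sketch: collect preprocessed images into a calibration array
# for the Hailo Dataflow Compiler. Assumes a 640x640 RGB model input;
# adjust `size` to match your network. `build_calib_set` is an
# illustrative helper, not part of the Hailo SDK.
from pathlib import Path

import numpy as np
from PIL import Image

def build_calib_set(image_dir, out_path="calib_set.npy",
                    size=(640, 640), max_images=1024):
    arrays = []
    # Pick a diverse subset: varied lighting, object sizes, occlusions.
    for p in sorted(Path(image_dir).glob("*.jpg"))[:max_images]:
        img = Image.open(p).convert("RGB").resize(size)
        arrays.append(np.asarray(img, dtype=np.uint8))
    calib = np.stack(arrays)   # shape: (N, H, W, 3)
    np.save(out_path, calib)   # pass this array/file to the optimizer
    return calib.shape
```

Note that these should be full images, resized the same way your inference pipeline resizes them, not manually extracted object crops: quantization statistics should reflect what the network actually sees at runtime.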

Let me know if you need help implementing these changes!