Hi @_2312882685, welcome to the Hailo community!
Your issue, poor detection performance after converting a self-trained YOLOv8s model (2 classes) to HEF for the Hailo-8, is a known challenge, especially with custom YOLOv8 models trained on a small number of classes or limited data. It usually stems from quantization during model optimization, which can leave some output nodes "almost nullified" and degrade accuracy on the Hailo device.
Key recommendations:
1. Adjust Quantization Parameters in the Model Script
For YOLOv8 models with few classes, it’s recommended to explicitly set the output range for the final convolution layers before NMS. In your .alls script, replace the change_output_activation lines with a quantization_param command as follows:
Note: your required output layer numbers (`conv42`, `conv53`, `conv63` below) may differ for your model.

```
normalization1 = normalization([127.5, 127.5, 127.5], [128.0, 128.0, 128.0])
quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])
model_optimization_flavor(optimization_level=2)
nms_postprocess("../../postprocess_config/yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)
```
This change helps maintain a proper dynamic range for the outputs, which is especially important for models with a low number of classes. Multiple users have reported that this adjustment significantly improves detection accuracy after conversion to HEF, as confirmed by Hailo support and community members in related forum threads.
2. Update the NMS Config
Make sure your yolov8s_nms_config.json reflects the correct number of classes (in your case, "classes": 2). If it is set to 80 (the COCO default), NMS may not work as expected for your custom model.
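As an illustration, the relevant portion of the config might look like the fragment below. The threshold values shown are placeholders, not recommendations; keep whatever your existing config uses and change only the class count:

```json
{
  "nms_scores_th": 0.3,
  "nms_iou_th": 0.7,
  "classes": 2
}
```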
3. Calibration Dataset
Ensure your calibration dataset is representative of your real data and is properly normalized. Poor calibration data can also lead to suboptimal quantization and poor runtime accuracy.
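As a rough sketch, a calibration set can be prepared as a single `(N, H, W, 3)` NumPy array of images already resized to the model's input resolution (640x640 for YOLOv8s). The function name and `.npy` workflow here are illustrative, not a specific Hailo API; the key point is that when normalization runs on-chip (the `normalization1` line in the model script), the calibration images should stay as raw 0-255 values and not be normalized again on the host:

```python
import numpy as np

def build_calib_set(images, out_path=None, limit=1024):
    """Stack preprocessed images into an (N, H, W, 3) uint8 calibration array.

    `images` is an iterable of HxWx3 arrays already resized to the model's
    input resolution (e.g. 640x640 for YOLOv8s). Pixel values stay in
    [0, 255]: since normalization is done on-chip via the model script,
    the calibration images must not be normalized a second time here.
    """
    calib = np.stack([np.asarray(im, dtype=np.uint8) for im in images][:limit])
    assert calib.ndim == 4 and calib.shape[-1] == 3, "expected (N, H, W, 3)"
    if out_path:
        np.save(out_path, calib)  # persist for reuse across optimization runs
    return calib

# Example with synthetic frames standing in for real calibration images:
frames = [np.full((640, 640, 3), i * 10, dtype=np.uint8) for i in range(8)]
calib = build_calib_set(frames)
print(calib.shape)  # (8, 640, 640, 3)
```

In practice, use a few hundred to ~1024 images drawn from the same distribution as your deployment data, not random or synthetic frames.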
4. Additional Notes
- The rest of your conversion pipeline and YAML configuration appear correct.
- If you continue to see poor results, try increasing the size and diversity of your calibration set, and double-check that your input preprocessing (normalization) matches what the model expects.
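As a quick sanity check on the preprocessing point: the `normalization([127.5, ...], [128.0, ...])` line in the model script above already maps raw 0-255 pixels to roughly [-1, 1] on-chip. If your host pipeline also normalizes, the model sees double-normalized inputs, which badly hurts accuracy. The arithmetic is easy to verify:

```python
import numpy as np

# Parameters from the model script's normalization command:
# normalization([127.5, 127.5, 127.5], [128.0, 128.0, 128.0])
mean, std = 127.5, 128.0

raw = np.array([0.0, 127.5, 255.0])          # raw pixel extremes and midpoint
on_chip = (raw - mean) / std                 # what the chip computes
double = (on_chip - mean) / std              # what happens if the host also normalized

print(on_chip)  # ~[-1, 0, 1]: correct input range for the network
print(double)   # ~[-1.0, -0.996, -0.988]: collapsed range, inputs are ruined
```

So feed the device raw uint8 images and let the model script's normalization do the work.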