ONNX to HEF conversion issues

Hello all,
I converted a custom yolov5m ONNX model (trained on a single class) into a HEF using the DFC. The steps I followed were:

1. hailo parser onnx custom_yolov5m.onnx --start-node-names images --hw-arch hailo8l --end-node-names "output0" "output1" "output2"

This was the first step. My ONNX model has 3 output layers, and this command created custom_yolov5m.har.

2. hailo optimize custom_yolov5m.har --hw-arch hailo8l --calib-set-path calib_data --model-script custom_yolov5m.alls

In this step I used 1000+ images for calib_data and created a .alls file similar to the one at hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov5m.alls at master · hailo-ai/hailo_model_zoo · GitHub. The only change I made was the node names, which I took from my .har file. This created an optimized.har file.

3. hailo compiler optimized.har --hw-arch hailo8l --model-script custom_yolov5m.alls

This created a .hef file, but the HEF is giving wrong detections. Any idea what I did wrong here?

Here is my .alls file:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
nms_postprocess("/models/custom.json", yolov5, engine=cpu)
change_output_activation(sigmoid)
model_optimization_config(calibration, batch_size=4, calibset_size=1025)
quantization_param(conv45, precision_mode=a8_w4)
quantization_param(conv46, precision_mode=a8_w4)
quantization_param(conv51, precision_mode=a8_w4)
quantization_param(conv53, precision_mode=a8_w4)
quantization_param(conv80, precision_mode=a8_w4)
post_quantization_optimization(finetune, policy=enabled, learning_rate=0.0001, epochs=4, dataset_size=1025, loss_factors=[1.0, 1.0, 1.0], loss_types=[l2rel, l2rel, l2rel], loss_layer_names=[conv65,conv74,conv82])
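For clarity, the normalization line above applies (x - mean) / std on-chip, so the runtime should be fed raw 0-255 pixel values rather than pre-normalized floats (feeding already-normalized input is a common cause of wrong detections). A quick sketch of what the layer computes:

```python
import numpy as np

# normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0]) in the .alls
# file computes (x - mean) / std on-chip, scaling uint8 pixels to [0, 1].
mean = np.array([0.0, 0.0, 0.0])
std = np.array([255.0, 255.0, 255.0])

raw = np.array([[0.0, 127.0, 255.0]])  # one RGB pixel, raw 0-255
on_chip = (raw - mean) / std
assert np.allclose(on_chip, [[0.0, 127.0 / 255.0, 1.0]])
```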

Hey,

Check this .alls file template for Yolov5m. Hope that helps.