Problem With Model Optimization

@Nadav I’m using the DFC on Linux with the commands below to convert ONNX to HEF, and I’m getting the same issue with yolov8n. Can you please tell me where I can add those quantization_params?

  1. hailo parser onnx yolov8n.onnx --net-name yolov8n --har-path yolov8n.har --start-node-names images --hw-arch hailo8l

  2. hailo optimize yolov8n.har --hw-arch hailo8l --use-random-calib-set calib_set.npy --output-har-path yolov8n_quantized_model.har

  3. hailo compiler yolov8n_quantized_model.har --hw-arch hailo8l --output-dir .
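Is a model script (.alls) the right place? A minimal sketch of what I mean, assuming my DFC version supports the --model-script and --calib-set-path options, and with the layer names as placeholders that would need to match my parsed model:

# my_script.alls — placeholder layer names; replace with the model's actual output convs
quantization_param([conv120, conv143, conv165], force_range_out=[0.0, 1.0])

hailo optimize yolov8n.har --hw-arch hailo8l --calib-set-path calib_set.npy --model-script my_script.alls --output-har-path yolov8n_quantized_model.har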

Hi @Vivek_Malvi,
You’re using --use-random-calib-set; this will always give you wrong detections. Please use real images and the correct pre-processing code.
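As a rough sketch (assuming a 640x640 input and that a plain resize matches your training pre-processing; adapt the paths, size, and normalization to your own pipeline), building calib_set.npy from real images could look like this:

import glob
import cv2
import numpy as np

INPUT_H, INPUT_W = 640, 640  # assumed yolov8n input size; check your model

images = []
for path in sorted(glob.glob("calib_images/*.jpg"))[:1024]:
    img = cv2.imread(path)                      # BGR, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # YOLO models expect RGB
    img = cv2.resize(img, (INPUT_W, INPUT_H))   # use letterboxing instead if that is what you trained with
    images.append(img.astype(np.float32))       # keep 0-255 here if normalization is done in the .alls

calib = np.stack(images)  # shape (N, H, W, 3), NHWC
np.save("calib_set.npy", calib)

Then pass it to the optimize step with --calib-set-path rather than --use-random-calib-set.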

I tried that too. I gave around 1000 images in the calib set, but that didn’t help much.

Are you using a trained ONNX? Do the native results look good?

Yes, @Nadav. I’m using a trained ONNX, and the native ONNX results are good; there are no issues with it. The issue arises after HEF conversion.


I have encountered a similar problem too. May I ask if it has been solved? Thank you very much.

ValueError: Exception encountered when calling HailoPostprocess.call().

Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Arguments received by HailoPostprocess.call():
• inputs=['tf.Tensor(shape=(1, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(1, 80, 80, 13), dtype=float32)', 'tf.Tensor(shape=(1, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(1, 40, 40, 13), dtype=float32)', 'tf.Tensor(shape=(1, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(1, 20, 20, 13), dtype=float32)']
• training=False
• kwargs={'encoding_tensors': 'None', 'cache_config': 'None'}
Calibration: 0%| | 0/64 [00:16<?, ?entries/s]

I keep getting the same error after putting that line in the .alls file.

I’m trying to optimize a yolov11s-based model; could that be the difference?

Thank you @Nadav

Hi @Simao_Branco,
I don’t think it’s needed in this case. For example, you can check the official alls for YOLOv11n, which doesn’t make use of it:
hailo_model_zoo/cfg/alls/generic/yolov11n.alls

Have you tried that? The command you’re adding was there to tackle a specific issue that might not be relevant to your case.

The problem is that the error occurred before I added anything.

Compiling directly through the Model Zoo with that .alls file worked just fine, but I was trying to validate the possibility of compiling a model that is not present in the Model Zoo by parsing first, then optimizing, then compiling, and that error kept occurring.

I can confirm that this line solved my problem with yolov11l as well:
quantization_param([conv120, conv143, conv165], force_range_out=[0.0, 1.0])
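For what it’s worth, those conv names are specific to my parsed model; as far as I understand, they are the output conv layers whose results are normally bounded by a sigmoid, so forcing the quantized output range to [0.0, 1.0] matches that. Other models will have different layer names; you can look them up in the parsing log, or render the HAR graph (my DFC version includes a visualizer for this):

hailo visualizer yolov11l.har  # placeholder HAR name; use your own parsed model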
