Converting YOLO11n-pose to HEF

Hello, I'm trying to convert yolo11n-pose to HEF, but I'm getting an error:

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
  • inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 51), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 51), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 51), dtype=float32)']
  • training=False
  • kwargs=<class 'inspect._empty'>
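For reference, the channel counts in those tensors decompose as expected for a standard YOLO pose head (an assumption based on the usual Ultralytics layout, not something the error itself confirms): 64 = 4 box sides × 16 DFL bins, and 51 = 17 COCO keypoints × 3 values (x, y, visibility). A quick arithmetic check:

```python
# Decode the channel counts seen in the error message.
# Assumption: standard Ultralytics YOLO pose head layout.
reg_max = 16                 # DFL bins per box side
box_channels = 4 * reg_max   # 4 sides of the box
kpts = 17                    # COCO keypoints
kpt_channels = kpts * 3      # (x, y, visibility) per keypoint

assert box_channels == 64    # matches the (None, H, W, 64) tensors
assert kpt_channels == 51    # matches the (None, H, W, 51) tensors
print(box_channels, kpt_channels)
```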

The end nodes I use:

end_node_names = ['/model.23/cv2.2/cv2.2.2/Conv',
                  '/model.23/cv3.2/cv3.2.2/Conv',
                  '/model.23/cv4.2/cv4.2.2/Conv',
                  '/model.23/cv2.1/cv2.1.2/Conv',
                  '/model.23/cv3.1/cv3.1.2/Conv',
                  '/model.23/cv4.1/cv4.1.2/Conv',
                  '/model.23/cv2.0/cv2.0.2/Conv',
                  '/model.23/cv3.0/cv3.0.2/Conv',
                  '/model.23/cv4.0/cv4.0.2/Conv']

The .alls script I use:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
quantization_param([conv54, conv68, conv86], force_range_out=[0.0, 1.0])
change_output_activation(conv54, sigmoid)
change_output_activation(conv68, sigmoid)
change_output_activation(conv86, sigmoid)
pre_quantization_optimization(equalization, policy=disabled)

quantization_param(output_layer3, precision_mode=a16_w16)
quantization_param(output_layer6, precision_mode=a16_w16)
quantization_param(output_layer9, precision_mode=a16_w16)
model_optimization_config(globals, gpu_policy=auto, multiproc_policy=allowed)
model_optimization_config(calibration, batch_size=64, calibset_size=1024)
model_optimization_flavor(optimization_level=1, compression_level=0, batch_size=5)

performance_param(compiler_optimization_level=2)
nms_postprocess("models/nms_cfg/yolo11n-pose-simpl_nms.json", meta_arch=yolov8, engine=cpu)

Please recommend what I should do. I can send you the simplified ONNX or the unoptimized HAR for more details.

@Nadav @omria @shashi can you help, or at least explain which of my steps are wrong? Thank you!

@ighgul

You can try our cloud compiler to see if it can successfully compile: Early Access to DeGirum Cloud Compiler

@shashi

Thanks, it was really helpful with the standard model, but what if I have custom classes? Is there any way to convert a model with a different class count and mapping, or models based on yolo11n-pose with some architectural changes? What am I supposed to do in these cases?

Hi @ighgul

Models with a different class count and mapping still compile with our cloud compiler; in fact, the majority of our users compile custom models. As for architectural changes: we cannot guarantee 100% success, but if you submit a job and it fails, we will look at the logs and see if some minor tweaks can help.