object of type 'NoneType' has no len() when parsing ONNX model with Hailo parser

Hello,

I’m encountering a persistent issue when trying to parse an ONNX model using the hailo parser command in the Hailo AI Software Suite (DFC 3.31.0, HailoRT 4.21.0). The error is:

TypeError: object of type 'NoneType' has no len()

This occurs during the get_input_layer_shapes function, and I also see a warning: [warning] ONNX shape inference failed: list index out of range.

Model Details
Model: A simple mathematical model (ci_calculator_op11.onnx) implementing the formula:

cri = sqrt(((sqrt((sum_acc * delta_d) / distance))^2) + ((sqrt((sum_gyro * delta_d) / distance))^4))
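As a sanity check on the math (not Hailo-related): since sqrt(x)^2 = x and sqrt(x)^4 = x^2 for x >= 0, the formula reduces to cri = sqrt(a + g^2) with a = (sum_acc * delta_d) / distance and g = (sum_gyro * delta_d) / distance. A quick plain-Python check with the dummy values used in the export below (the function name is just for illustration):

```python
import math

def cri(sum_acc, sum_gyro, delta_d, distance):
    a = (sum_acc * delta_d) / distance   # sqrt(a)^2 == a
    g = (sum_gyro * delta_d) / distance  # sqrt(g)^4 == g^2
    return math.sqrt(a + g ** 2)

print(cri(1.0, 2.0, 0.1, 10.0))  # ~0.10198
```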

Inputs: 4 scalar tensors (sum_acc, sum_gyro, delta_d, distance), shape [1] (I previously also tried rank-0 scalars).

Output: 1 scalar tensor (cri), shape [1].

Operators: Mul, Div, Sqrt, Add (originally included Pow, replaced with Mul to avoid compatibility issues).

ONNX Opset: 11.

Generated: Exported from PyTorch using the following code:

```python
import torch
import torch.nn as nn

class CRICalculator(nn.Module):
    def __init__(self):
        super(CRICalculator, self).__init__()

    def forward(self, sum_acc, sum_gyro, delta_d, distance):
        acc_term = sum_acc * delta_d
        acc_term = acc_term / distance
        acc_term = torch.sqrt(acc_term)
        acc_term = acc_term * acc_term

        gyro_term = sum_gyro * delta_d
        gyro_term = gyro_term / distance
        gyro_term = torch.sqrt(gyro_term)
        gyro_term_sq = gyro_term * gyro_term
        gyro_term = gyro_term_sq * gyro_term_sq

        sum_terms = acc_term + gyro_term
        cri = torch.sqrt(sum_terms)
        return cri

model = CRICalculator()
model.eval()

dummy_input = (
    torch.tensor([1.0], dtype=torch.float32),
    torch.tensor([2.0], dtype=torch.float32),
    torch.tensor([0.1], dtype=torch.float32),
    torch.tensor([10.0], dtype=torch.float32)
)

torch.onnx.export(
    model,
    dummy_input,
    "ci_calculator_op11.onnx",
    opset_version=11,
    input_names=["sum_acc", "sum_gyro", "delta_d", "distance"],
    output_names=["cri"],
    dynamic_axes=None
)
```

Environment
Hailo AI Software Suite: DFC 3.31.0, HailoRT 4.21.0.

OS: Linux, Kernel 5.4.0-200-generic.

Hardware: Intel Xeon E7-4880 v2, 120 cores, 251GB RAM.

Execution: Inside the Hailo Docker container.

Command and Error
Command:
```bash
hailo parser onnx ci_calculator_op11.onnx --har ci_calculator_simplified.har
```

Error (full traceback available if needed):

```
[info] Translation started on ONNX model ci_calculator_op11
[info] Restored ONNX model ci_calculator_op11 (completion time: 00:00:00.02)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.04)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.06)
Traceback (most recent call last):

  File "…/onnx_graph.py", line 6311, in get_input_layer_shapes
    if len(self.output_format) != rank:
TypeError: object of type 'NoneType' has no len()
```

Questions
Is there a known issue with the Hailo parser (DFC 3.31.0) handling scalar or [1]-shaped inputs for mathematical models?

Are there specific configurations or workarounds for parsing models with Sqrt or scalar tensors?

Could this be a bug in the parser, or am I missing a step in the ONNX export or parsing process?

Any insights or suggestions would be greatly appreciated! Thank you for your support.
Best regards,
Giuseppe

@Giuseppe_Meloni

Hailo explicitly does not support scalar shapes. If you look at the function that raises the exception, you will find this:

```python
default_format_by_rank = {
    2: [Dims.BATCH, Dims.CHANNELS],
    3: [Dims.BATCH, Dims.WIDTH, Dims.CHANNELS],
    4: [Dims.BATCH, Dims.CHANNELS, Dims.HEIGHT, Dims.WIDTH],
    5: [Dims.BATCH, Dims.CHANNELS, Dims.GROUPS, Dims.HEIGHT, Dims.WIDTH],
}
self.output_format = self._graph.net_input_format.get(lookup_name, default_format_by_rank.get(rank))
if len(self.output_format) != rank:
```
Rank 1 has no entry in default_format_by_rank, so default_format_by_rank.get(rank) returns None, and len(None) raises the TypeError you see.
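You can reproduce the failure mechanism in isolation (a standalone sketch, not the actual Hailo code):

```python
# Standalone reproduction: .get() on the rank table returns None for
# rank 1, and calling len() on None raises exactly this TypeError.
default_format_by_rank = {
    2: ["BATCH", "CHANNELS"],
    3: ["BATCH", "WIDTH", "CHANNELS"],
    4: ["BATCH", "CHANNELS", "HEIGHT", "WIDTH"],
}
rank = 1  # a [1]-shaped input tensor has rank 1
output_format = default_format_by_rank.get(rank)  # -> None

try:
    len(output_format)
except TypeError as err:
    print(err)  # object of type 'NoneType' has no len()
```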

The easiest solution is to make your inputs rank 2 by adding a batch dimension. All of your operators are element-wise, so the model should even work with an actual batch.

So simply change the dummy_input to this:

```python
dummy_input = (
    torch.tensor([[1.0]], dtype=torch.float32),   # shape [1, 1] instead of [1]
    torch.tensor([[2.0]], dtype=torch.float32),
    torch.tensor([[0.1]], dtype=torch.float32),
    torch.tensor([[10.0]], dtype=torch.float32)
)
```