Is it possible to convert the ONNX model with inputs with int32 datatype to .har file?

Hello there,

I have been working on converting a PointPillars ONNX file to a .har file. However, there seem to be issues with two inputs that have the int32 datatype. Does Hailo support converting models with int32 input datatypes? If so, how should I handle the error raised when running ClientRunner.translate_onnx_model in Python?

The error is as follows:

[info] Translation started on ONNX model pointpillars
[info] Restored ONNX model pointpillars (completion time: 00:00:00.22)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:01.49)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:03.57)
Traceback (most recent call last):
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 235, in translate_onnx_model
parsing_results = self._parse_onnx_model_to_hn(
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 316, in _parse_onnx_model_to_hn
return self.parse_model_to_hn(
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 359, in parse_model_to_hn
fuser = HailoNNFuser(converter.convert_model(), net_name, converter.end_node_names)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/model_translator/translator.py", line 82, in convert_model
self._create_layers()
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/model_translator/edge_nn_translator.py", line 37, in _create_layers
self._add_input_layers()
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/model_translator/edge_nn_translator.py", line 72, in _add_input_layers
input_shapes = vertex.get_input_layer_shapes()
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_graph.py", line 6311, in get_input_layer_shapes
if len(self.output_format) != rank:
TypeError: object of type 'NoneType' has no len()

Thank you,

Hi @ninfueng ,

The error you're encountering reflects a known limitation: the Hailo SDK parser does not natively support int32 input datatypes. The TypeError: object of type 'NoneType' has no len() occurs because the translator cannot resolve the input format for non-float inputs, leaving output_format as None.

PointPillars typically has three inputs: the pillar features (float32), and two integer tensors for voxel indices/coordinates and point counts (int32). The Hailo dataflow compiler expects all network inputs to be float32.

How to resolve this:

  1. Modify the ONNX model before translation - cast the int32 inputs to float32 at the ONNX graph level. You can do this with a small preprocessing script using the onnx Python package:
import onnx
from onnx import TensorProto

model = onnx.load("pointpillars.onnx")

# Re-declare every int32 graph input as float32
for inp in model.graph.input:
    if inp.type.tensor_type.elem_type == TensorProto.INT32:
        inp.type.tensor_type.elem_type = TensorProto.FLOAT

# Also update any initializers or Cast nodes if needed
onnx.save(model, "pointpillars_float_inputs.onnx")
  2. Insert explicit Cast nodes if downstream operations depend on int32 semantics - add a Cast node right after each modified input to cast back to int32 internally. In many cases the graph will still simplify correctly, but verify with onnx.checker.check_model() and an onnxruntime inference test to ensure numerical equivalence.
  3. Adjust your preprocessing pipeline - since the model inputs are now declared as float32, make sure your calibration dataset and inference pipeline cast the integer tensors to float32 before feeding them into the model.
  4. Then proceed normally:
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")
hn, npz = runner.translate_onnx_model(
    "pointpillars_float_inputs.onnx",
    net_name="pointpillars",
    start_node_names=...,
    end_node_names=...,
)

Depending on the complexity of the PointPillars architecture (especially the Scatter/PillarScatter operation), you may encounter additional unsupported ops during parsing. In that case, consider splitting the model into Hailo-supported and CPU-offloaded subgraphs. If the issue persists after converting inputs to float32, please share the input/output names and shapes from your ONNX model (onnx.load(...).graph.input).

Thanks,


Thank you for reply @Michael.

I have tried changing the input types as you suggested. However, this still does not work: the ONNX nodes still expect to operate on int32 values, and the modified model also raises an error with onnx.checker.check_model(full_check=True).

As you suggested, I think splitting the model is the way to go.

Thanks,
