File "/home/hailo/.local/lib/python3.10/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_translator.py", line 385, in _layer_callback_from_vertex
if vertex.is_null_operation() and not is_flattened_global_maxpool:
File "/home/hailo/.local/lib/python3.10/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_graph.py", line 5307, in is_null_operation
or (self.op == "ReduceMean" and self.is_null_reduce_mean())
File "/home/hailo/.local/lib/python3.10/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_graph.py", line 5342, in is_null_reduce_mean
axes = self._convert_axes_to_nhwc(axes_info)
File "/home/hailo/.local/lib/python3.10/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_graph.py", line 2250, in _convert_axes_to_nhwc
return [nchw_to_nhwc_axis_mapping[self.input_format[axis]] for axis in axes]
File "/home/hailo/.local/lib/python3.10/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_graph.py", line 2250, in <listcomp>
return [nchw_to_nhwc_axis_mapping[self.input_format[axis]] for axis in axes]
TypeError: 'NoneType' object is not subscriptable
The failure occurs while translating a ReduceMean node: the SDK expects self.input_format to be set, but it is None, so indexing it during the axis conversion raises the TypeError above.
Why This Happens
Your model has a ReduceMean node working over a shape like [1, 26, 192] → [1, 26, 1].
The SDK checks whether this node is a "null-op" by mapping its axes into the NHWC layout.
If shape inference is disabled, or the model has been simplified in a way that drops shape information, input_format is never populated and the axis mapping crashes.
How to Fix It
1. Enable shape inference
If you’re using something like this:
disable_shape_inference=True
Change it to:
disable_shape_inference=False
This should let the SDK set the input_format properly.
I am seeing this exact error, but none of the suggested fixes work for me.
In my case, I have two ReduceMean operators following each other. Both have the 'axes' attribute set (1 and -2 respectively), I have set 'disable_shape_inference=False', and I still get the error no matter how I simplify the network.
Your crash is happening because the translator loses track of the tensor layout when you chain two ReduceMean ops with axes 1 and -2.
Here’s what I’d try:
First option - normalize your axes and use keepdims:
Convert that -2 to its positive equivalent (for a rank-3 tensor, -2 becomes axis 1)
Set keepdims=1 on both ReduceMean nodes to keep the tensor dimensions consistent
Throw in a Squeeze at the end if you need to drop those reduced dims
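The normalization step is just axis modulo rank. A tiny helper to make that concrete (the function name is mine, not part of the SDK or ONNX):

```python
def normalize_axis(axis: int, rank: int) -> int:
    """Map a possibly-negative ONNX axis to its non-negative equivalent."""
    if not -rank <= axis < rank:
        raise ValueError(f"axis {axis} is out of range for rank {rank}")
    return axis % rank

# For the rank-3 [1, 26, 192] tensor from the question:
print(normalize_axis(1, 3))   # axis 1 stays 1
print(normalize_axis(-2, 3))  # -2 also resolves to 1
```

Note that with keepdims=1 the rank stays 3 through the chain, so your axes 1 and -2 actually resolve to the same axis, which is worth double-checking in the model itself.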
Better approach - just fuse them: Since you’re doing back-to-back reductions anyway, why not just combine the axes into one ReduceMean? Way cleaner and sidesteps the whole layout issue.
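The fusion is numerically exact when keepdims=1, because a mean of means over disjoint axes equals the joint mean. A quick numpy sanity check, with illustrative axes 1 and 2 (your normalized axes may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 26, 192)).astype(np.float32)

# Two chained ReduceMean ops, keepdims=1, axes already normalized:
chained = x.mean(axis=1, keepdims=True).mean(axis=2, keepdims=True)  # [1, 1, 1]

# One fused ReduceMean over the union of the axes:
fused = x.mean(axis=(1, 2), keepdims=True)                           # [1, 1, 1]

assert np.allclose(chained, fused)
```

If the check passes for your axes, you can safely replace the pair of nodes with a single ReduceMean carrying the combined axes list.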
Also worth checking:
Make sure your axes are node attributes (int64) rather than input tensors (opset 18 moved ReduceMean axes to an input); the attribute form tends to translate more reliably
Try sticking with positive NHWC indices when you can (like {1,2} for spatial dims)
If all else fails, you could do Transpose → ReduceMean → Transpose but that’s probably overkill
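For completeness, the Transpose sandwich is easy to sanity-check in numpy before editing the graph: move the reduced axis to the last position, reduce there, and move it back.

```python
import numpy as np

x = np.random.default_rng(0).random((1, 26, 192)).astype(np.float32)

# Transpose -> ReduceMean(last axis) -> Transpose, equivalent to reducing axis 1:
t = np.transpose(x, (0, 2, 1))      # [1, 192, 26]
r = t.mean(axis=2, keepdims=True)   # [1, 192, 1]
y = np.transpose(r, (0, 2, 1))      # [1, 1, 192]

assert np.allclose(y, x.mean(axis=1, keepdims=True))
```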
The fused approach usually does the trick.
Let me know how it goes!