I have downloaded a fairly complex open-source model that I want to accelerate. Everything works in PyTorch as expected. First I exported the model to ONNX:
def export_to_onnx(self, onnx_path="amodel.onnx"):
    dummy_audio = torch.randn(1, 84, 13).cpu()
    dummy_video = torch.randn(1, 21, 112, 112).cpu()
    # Export the model to ONNX
    torch.onnx.export(
        self,
        (dummy_audio, dummy_video),
        onnx_path,
        export_params=True,
        opset_version=13,
        do_constant_folding=True,
        input_names=['audio', 'video'],
        output_names=['output'],
        dynamic_axes={'audio': {0: 'batch_size'}, 'video': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
    )
    print(f"Model exported to {onnx_path}")
The export succeeds, and I also verified the exported model with ONNX Runtime, so the ONNX file itself appears to be fine.
Then I ran this code from a Jupyter notebook:
from hailo_sdk_client import ClientRunner

chosen_hw_arch = "hailo8l"
onnx_model_name = "amodel"
onnx_path = "./amodel.onnx"

runner = ClientRunner(hw_arch=chosen_hw_arch)
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["audio", "video"],
    end_node_names=["output"],
    net_input_shapes={
        "audio": [1, 84, 13],
        "video": [1, 21, 112, 112],
    },
)
and got this error:
[info] Translation started on ONNX model amodel
[info] Restored ONNX model amodel (completion time: 00:00:00.17)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:05.74)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:18.68)
hailo_sdk_client.model_translator.exceptions.ParsingWithRecommendationException: Parsing failed. The errors found in the graph are:
UnexpectedNodeError in op /Unsqueeze: Unexpected node /Unsqueeze (Unsqueeze)
UnsupportedShuffleLayerError in op /Reshape: Failed to determine type of layer to create in node /Reshape
UnsupportedShuffleLayerError in op /visualFrontend/Transpose: Failed to determine type of layer to create in node /visualFrontend/Transpose
UnsupportedConv3DError in op /visualFrontend/frontend3D/frontend3D.0/Conv: /visualFrontend/frontend3D/frontend3D.0/Conv is 3D convolution with unsupported padding type None
UnsupportedShuffleLayerError in op /visualFrontend/Transpose_1: Failed to determine type of layer to create in node /visualFrontend/Transpose_1
UnsupportedShuffleLayerError in op /visualFrontend/Reshape: Failed to determine type of layer to create in node /visualFrontend/Reshape
UnsupportedShuffleLayerError in op /visualFrontend/Reshape_1: Failed to determine type of layer to create in node /visualFrontend/Reshape_1
UnsupportedShuffleLayerError in op /Reshape_1: Failed to determine type of layer to create in node /Reshape_1
UnsupportedModelError in op /visualTCN/net/net.0/net/net.4/Mul: In vertex /visualTCN/net/net.0/net/net.4/Mul_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.1/net/net.4/Mul: In vertex /visualTCN/net/net.1/net/net.4/Mul_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.2/net/net.4/Mul: In vertex /visualTCN/net/net.2/net/net.4/Mul_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.0/net/net.4/Add_1: In vertex /visualTCN/net/net.0/net/net.4/Add_1_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.3/net/net.4/Mul: In vertex /visualTCN/net/net.3/net/net.4/Mul_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.1/net/net.4/Add_1: In vertex /visualTCN/net/net.1/net/net.4/Add_1_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.4/net/net.4/Mul: In vertex /visualTCN/net/net.4/net/net.4/Mul_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.2/net/net.4/Add_1: In vertex /visualTCN/net/net.2/net/net.4/Add_1_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.3/net/net.4/Add_1: In vertex /visualTCN/net/net.3/net/net.4/Add_1_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedModelError in op /visualTCN/net/net.4/net/net.4/Add_1: In vertex /visualTCN/net/net.4/net/net.4/Add_1_input the contstant value shape (512, 1, 1) must be broadcastable to the output shape [1, 1, 21]
UnsupportedShuffleLayerError in op /crossA2V/self_attn/Transpose_2: Failed to determine type of layer to create in node /crossA2V/self_attn/Transpose_2
UnexpectedNodeError in op /crossA2V/self_attn/Squeeze_1: Unexpected node /crossA2V/self_attn/Squeeze_1 (Squeeze)
UnsupportedShuffleLayerError in op /crossV2A/self_attn/Transpose_2: Failed to determine type of layer to create in node /crossV2A/self_attn/Transpose_2
UnexpectedNodeError in op /crossV2A/self_attn/Squeeze_1: Unexpected node /crossV2A/self_attn/Squeeze_1 (Squeeze)
UnsupportedShuffleLayerError in op /selfAV/self_attn/Transpose_2: Failed to determine type of layer to create in node /selfAV/self_attn/Transpose_2
UnexpectedNodeError in op /selfAV/self_attn/Squeeze_1: Unexpected node /selfAV/self_attn/Squeeze_1 (Squeeze)
Please try to parse the model again, using these start node names: /Div, /audioEncoder/conv1/Conv
Please try to parse the model again, using these end node names: /crossA2V/Transpose, /Div_1, /crossA2V/self_attn/Unsqueeze_5, /crossV2A/self_attn/Mul_2
So I followed the recommendation and changed the code accordingly:
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["/audioEncoder/conv1/Conv", "/Div"],
    end_node_names=["/crossV2A/self_attn/Mul_2", "/Div_1", "/crossA2V/Transpose", "/crossA2V/self_attn/Unsqueeze_5"],
    net_input_shapes={
        "/audioEncoder/conv1/Conv": [1, 84, 13],
        "/Div": [1, 21, 112, 112],
    },
)
but now I get an error that seems to come from ONNX Runtime:
[info] Translation started on ONNX model amodel
[info] Restored ONNX model amodel (completion time: 00:00:00.16)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:06.52)
[warning] ONNX shape inference failed: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input '/Gather_output_0' is not a graph input, initializer, or output of a previous node.
I have also tried running the ONNX model through the ONNX simplifier, but got the same errors.
I do not have much experience with ML. I know that the model contains Conv3d, MaxPool3d, and Squeeze layers, but since the ONNX model was exported successfully, why can't I convert it to Hailo?
Is this model not supported? If so, when will it be supported?
Thanks for the help