1D form is not supported in layer ... of type EWAddLayer

I converted a neural network that uses Conv1D operations from Keras to TFLite. When trying to parse the TFLite file I get the following:

UnsupportedModelError in op functional_1_1/functional_5/activation_7_1/Relu;functional_1_1/functional_5/add_1_2/Add: 1D form is not supported in layer functional_1_1/functional_5/activation_7_1/Relu;functional_1_1/functional_5/add_1_2/Add of type EWAddLayer.

Any ideas? Thanks!

I tried converting my Keras model to both TFLite and ONNX, and both attempts failed with different errors.

The ONNX export had problems with Squeeze/Unsqueeze operations that did not come right after dense or conv layers. Those operations appear because each Conv1D op is lowered to a Conv2D op, so the graph has to add and then remove one extra dimension: Conv1D takes 3D input tensors, while Conv2D takes 4D ones. I confirmed this with the Netron app.
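To illustrate (a minimal sketch with made-up shapes, not my actual model), this is what the lowering looks like in TensorFlow terms, and where the extra ExpandDims/Squeeze (Unsqueeze/Squeeze in ONNX) nodes come from:

import tensorflow as tf

# Conv1D works on rank-3 tensors: (batch, steps, channels)
x1d = tf.random.normal((1, 128, 16))
y1d = tf.keras.layers.Conv1D(32, kernel_size=3, padding="same")(x1d)       # (1, 128, 32)

# The exporter lowers it to Conv2D, which needs rank-4 tensors: (batch, h, w, channels),
# so it wraps the op with ExpandDims before and Squeeze after:
x2d = tf.expand_dims(x1d, axis=1)                                          # (1, 1, 128, 16)
y2d = tf.keras.layers.Conv2D(32, kernel_size=(1, 3), padding="same")(x2d)  # (1, 1, 128, 32)
y2d = tf.squeeze(y2d, axis=1)                                              # back to (1, 128, 32)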

hailo_sdk_client.model_translator.exceptions.ParsingWithRecommendationException: Parsing failed. The errors found in the graph are:
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_18_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_18_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_143_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_143_1/convolution/ExpandDims (Unsqueeze)
 UnsupportedShuffleLayerError in op StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_12_1/MaxPool1d__929: Failed to determine type of layer to create in node StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_12_1/MaxPool1d__929
 UnsupportedShuffleLayerError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_128_1/convolution__345: Failed to determine type of layer to create in node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_128_1/convolution__345
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_63_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_63_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_9_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_9_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_80_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_80_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_15_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_15_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_112_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_112_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_21_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_21_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_159_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_159_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_27_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_27_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/conv1d_31_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/conv1d_31_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_3_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_3_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_37_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_37_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_7_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_7_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_69_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_69_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_13_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_13_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_101_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_101_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_19_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_19_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_133_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_133_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_25_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_25_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/conv1d_5_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/conv1d_5_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_1_2/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_1_2/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/global_average_pooling1d_1_1/Mean_Squeeze__2822: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/global_average_pooling1d_1_1/Mean_Squeeze__2822 (Squeeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/global_average_pooling1d_2_1/Mean_Squeeze__2824: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/global_average_pooling1d_2_1/Mean_Squeeze__2824 (Squeeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/global_average_pooling1d_3_1/Mean_Squeeze__2826: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/global_average_pooling1d_3_1/Mean_Squeeze__2826 (Squeeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/global_average_pooling1d_4_1/Mean_Squeeze__2818: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/global_average_pooling1d_4_1/Mean_Squeeze__2818 (Squeeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/global_average_pooling1d_1/Mean_Squeeze__2820: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/global_average_pooling1d_1/Mean_Squeeze__2820 (Squeeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_53_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_53_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_10_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_10_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_85_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_85_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_16_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_16_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_117_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_117_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_22_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_22_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_149_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_149_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_28_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_28_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/conv1d_21_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/conv1d_21_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_4_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_4_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_42_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_42_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_8_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_8_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_74_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_74_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_14_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_14_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_106_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_106_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_20_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_20_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_138_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_138_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_26_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_26_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/conv1d_10_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/conv1d_10_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_2_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_2_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_58_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/conv1d_58_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_11_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_1_1/max_pooling1d_11_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_90_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/conv1d_90_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_17_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_17_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_122_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/conv1d_122_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_23_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_3_1/max_pooling1d_23_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_154_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_154_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_29_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_4_1/max_pooling1d_29_1/MaxPool1d/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/conv1d_26_1/convolution/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/conv1d_26_1/convolution/ExpandDims (Unsqueeze)
 UnexpectedNodeError in op StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_5_1/MaxPool1d/ExpandDims: Unexpected node StatefulPartitionedCall/functional_1_1/functional_5/max_pooling1d_5_1/MaxPool1d/ExpandDims (Unsqueeze)
Please try to parse the model again, using these start node names: StatefulPartitionedCall/functional_1_1/functional_2_1/max_pooling1d_12_1/MaxPool1d__929, StatefulPartitionedCall/functional_1_1/functional_4_1/conv1d_128_1/convolution__345

As a workaround I migrated all the Conv1D layers in my network to Conv2D. So far, so good.
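In case it's useful to someone, the migration is mechanical: keep a singleton height dimension and use (1, k) kernels. A rough Keras sketch (layer sizes and input shape here are invented, not my actual architecture):

from tensorflow import keras

inputs = keras.Input(shape=(1, 1024, 8))                                         # was (1024, 8)
x = keras.layers.Conv2D(64, (1, 3), padding="same", activation="relu")(inputs)   # was Conv1D(64, 3)
x = keras.layers.MaxPooling2D(pool_size=(1, 2))(x)                               # was MaxPooling1D(2)
x = keras.layers.GlobalAveragePooling2D()(x)                                     # was GlobalAveragePooling1D()
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)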

Hi @engarlanded_boa,
Thank you for the update. So the status so far is that you are able to create the ONNX/TFLite model, but unable to parse it using the Hailo Parser?

Regards,

Hi @Omer,

You’re exactly right. Both the TFLite and ONNX formats failed to parse with the Hailo Parser.

Hi @engarlanded_boa,
The Hailo Parser doesn’t support parsing all layers. The Hailo8 is a neural accelerator, and therefore supports (as a rule of thumb) mostly neural operators.
In addition, the Hailo Parser supports layers shaped (N,C) or (N,H,W,C), meaning (excluding the batch dimension) that it supports 1-D layers and 3-D layers.

If your model holds layers that are not specified as supported layers in the Dataflow Compiler’s user guide, you need to specify the start/end nodes for the Parser, meaning that the preprocessing/postprocessing parts of the model would not run on the Hailo chip but on the host.
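With the Python API this looks roughly like the following (the node names below are placeholders, and the method names are taken from the Dataflow Compiler tutorials):

from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch='hailo8')
runner.translate_onnx_model(
    'model.onnx', 'model',
    start_node_names=['first_node_on_hailo'],   # everything before this runs on the host
    end_node_names=['last_node_on_hailo'],      # everything after this runs on the host
)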

What are the errors you are getting when trying to parse?

Regards,

Makes sense. The errors were the ones from above. I unblocked myself by converting the 1D operations into 2D ones with a singleton dimension, so TFLite wouldn’t introduce additional layers. So far it works, and I managed to quantize the model.
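For context, this is roughly the flow I’m running (method names as in the Dataflow Compiler tutorials; file names and the calibration data below are placeholders, not my real ones):

import numpy as np
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch='hailo8')
runner.translate_tf_model('combined_2d.tflite', 'model')
calib_data = np.random.rand(64, 1, 1024, 8).astype(np.float32)  # replace with real calibration samples
runner.optimize(calib_data)                                     # post-training quantization
hef = runner.compile()                                          # this is the stage that runs for a long time
with open('combined_2d_quant.hef', 'wb') as f:
    f.write(hef)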

However, the compile stage seems to hang. It’s been 20 minutes or so and the logs say the following:

[2024-08-15 14:15:18.684] [default] [info] Adding an output layer after fc1
[2024-08-15 14:15:18.684] [default] [info] Adding an output layer after fc4
[2024-08-15 14:15:18.685] [default] [info] Adding an output layer after fc5
[2024-08-15 14:15:18.685] [default] [info] Loading network parameters
[2024-08-15 14:15:18.938] [default] [info] Output order different size
[2024-08-15 14:15:18.939] [default] [info] Starting Hailo allocation and compilation flow
[2024-08-15 14:15:18.953] [default] [info] Model name: model
[2024-08-15 14:23:02.395] [default] [info] Trying to solve in single context
[2024-08-15 14:23:02.487] [default] [info] Single context flow failed: Resources presolve failed: lcus=(570/64), continuing in multi context
[2024-08-15 14:30:42.559] [default] [info] Finding the best partition to contexts...
[2024-08-15 14:30:48.294] [default] [info] Iteration #1 - Contexts: 10, Changed: context_0, Fast FPS: 403.973 (best: 0), Failed on: Automri finished with too many resources on context_0
[2024-08-15 14:30:49.146] [default] [info] Iteration #2 - Contexts: 10, Changed: context_0, Fast FPS: 403,973 (best: 0), Failed on: Automri finished with too many resources on context_0
[2024-08-15 14:30:49.987] [default] [info] Iteration #3 - Contexts: 10, Changed: context_0, Fast FPS: 404,829 (best: 0), Failed on: Automri finished with too many resources on context_0
[2024-08-15 14:30:50.906] [default] [info] Iteration #4 - Contexts: 10, Changed: context_0, Fast FPS: 404,703 (best: 0), Failed on: Automri finished with too many resources on context_0
....
....
[2024-08-15 14:59:54.228] [default] [info] Iteration #1.034 - Contexts: 14, Changed: context_12, Fast FPS: 332,055 (best: 0), Failed on: Automri finished with too many resources on context_12
[2024-08-15 14:59:56.291] [default] [info] Iteration #1.035 - Contexts: 14, Changed: context_12, Fast FPS: 331,561 (best: 0), Failed on: Automri finished with too many resources on context_12
[2024-08-15 14:59:58.344] [default] [info] Iteration #1.036 - Contexts: 14, Changed: context_12, Fast FPS: 331,561 (best: 0), Failed on: Automri finished with too many resources on context_12

Getting something new:

[2024-08-15 15:12:39.439] [default] [info] Iteration #1.201 - Contexts: 16, Changed: context_14, Fast FPS: 317,411 (best: 0), Failed on: Too many inputs/outputs for context_8, try to reduce number of inputs or outputs
Number of DDRs: 0
Number of inputs: 16
Number of outputs: 11

[2024-08-15 15:12:51.337] [default] [info] Iteration #1.202 - Contexts: 16, Changed: context_8, Fast FPS: 317,411 (best: 0), Failed on: Too many inputs/outputs for context_8, try to reduce number of inputs or outputs
Number of DDRs: 0
Number of inputs: 13
Number of outputs: 11

[2024-08-15 15:14:02.797] [default] [info] Iteration #1.203 - Contexts: 16, Changed: context_8, Fast FPS: 317,333 (best: 0), Failed on: Too many inputs/outputs for context_8, try to reduce number of inputs or outputs
Number of DDRs: 0
Number of inputs: 10
Number of outputs: 11

[2024-08-15 15:14:05.013] [default] [info] Iteration #1.204 - Contexts: 16, Changed: context_8, Fast FPS: 317,619 (best: 0), Failed on: Automri finished with too many resources on context_9
[2024-08-15 15:14:06.968] [default] [info] Iteration #1.205 - Contexts: 16, Changed: context_9, Fast FPS: 331,419 (best: 0), Failed on: Automri finished with too many resources on context_10
[2024-08-15 15:14:07.419] [default] [info] Iteration #1.206 - Contexts: 16, Changed: context_10, Fast FPS: 331,419 (best: 0), Failed on: Automri finished with too many resources on context_11
[2024-08-15 15:14:07.852] [default] [info] Iteration #1.207 - Contexts: 16, Changed: context_11, Fast FPS: 331,419 (best: 0), Failed on: Automri finished with too many resources on context_12
[2024-08-15 15:14:11.463] [default] [info] Iteration #1.208 - Contexts: 16, Changed: context_12, Fast FPS: 325,632 (best: 0), Failed on: Automri finished with too many resources on context_12
[2024-08-15 15:14:13.323] [default] [info] Iteration #1.209 - Contexts: 16, Changed: context_12, Fast FPS: 318,912 (best: 0), Failed on: Automri finished with too many resources on context_12
[2024-08-15 15:14:15.100] [default] [info] Iteration #1.210 - Contexts: 16, Changed: context_12, Fast FPS: 318,912 (best: 0), Failed on: Automri finished with too many resources on context_13
[2024-08-15 15:14:18.399] [default] [info] Iteration #1.211 - Contexts: 16, Changed: context_13, Fast FPS: 318,912 (best: 0), Failed on: Automri finished with too many resources on context_13
[2024-08-15 15:14:21.640] [default] [info] Iteration #1.212 - Contexts: 16, Changed: context_13, Fast FPS: 319,726 (best: 0), Failed on: Automri finished with too many resources on context_14
[2024-08-15 15:14:24.788] [default] [info] Iteration #1.213 - Contexts: 16, Changed: context_14, Fast FPS: 318,74 (best: 0), Failed on: Automri finished with too many resources on context_14
[2024-08-15 15:14:51.999] [default] [info] Iteration #1.214 - Contexts: 16, Changed: context_14, Fast FPS: 322,078 (best: 0), Failed on: Too many lcus (59 > 51,2), Failed on: Too many lcus (57 > 51,2), Failed on: Too many lcus (56 > 51,2)
[2024-08-15 15:14:54.085] [default] [info] Iteration #1.215 - Contexts: 16, Changed: context_9, Fast FPS: 326,486 (best: 0), Failed on: Automri finished with too many resources on context_10
[2024-08-15 15:14:54.422] [default] [info] Iteration #1.216 - Contexts: 16, Changed: context_10, Fast FPS: 327,943 (best: 0), Failed on: Automri finished with too many resources on context_10
[2024-08-15 15:14:55.147] [default] [info] Iteration #1.217 - Contexts: 16, Changed: context_10, Fast FPS: 327,943 (best: 0), Failed on: Automri finished with too many resources on context_10
[2024-08-15 15:14:55.519] [default] [info] Iteration #1.218 - Contexts: 16, Changed: context_10, Fast FPS: 327,943 (best: 0), Failed on: Automri finished with too many resources on context_11
[2024-08-15 15:14:56.289] [default] [info] Iteration #1.219 - Contexts: 16, Changed: context_11, Fast FPS: 327,943 (best: 0)
[2024-08-15 15:14:57.921] [default] [info] Iteration #1.220 - Contexts: 16, Changed: context_13, Fast FPS: 327,943 (best: 0), Failed on: Automri finished with too many resources on context_14
[2024-08-15 15:15:28.008] [default] [info] Iteration #1.221 - Contexts: 16, Changed: context_14, Fast FPS: 327,201 (best: 0), Failed on: Too many lcus (60 > 51,2), Failed on: Too many lcus (52 > 51,2), Failed on: Too many lcus (59 > 51,2), Failed on: Too many lcus (57 > 51,2)

Hi @engarlanded_boa,
The compilation process can take a long time (sometimes several hours), especially for a big model. So I don’t think this is a “hang”, just the CPU working on compiling the model. In any case, there’s a default timeout defined in case it runs far too long without finding a solution for the allocation.

Regards,

I wasn’t sure what to expect with the compilation times, so I really appreciate you pitching in. Thanks!

Hi @Omer,

The network compiled!

However, I still have some problems when trying to do inference on the hardware itself.
When I try to follow the tutorial, it throws an exception when creating an infer model.

from hailo_platform import VDevice

params = VDevice.create_params()  # device params, created as in the HailoRT tutorial

with VDevice(params) as vdevice:

    # Create an infer model from an HEF:
    infer_model = vdevice.create_infer_model('combined_2d_quant.hef')

Throws:

[HailoRT] [error] VStream model/softmax1 not found in sorted output names
[HailoRT] [error] VStream model/softmax1 not found in sorted output names
[HailoRT] [error] VStream model/softmax2 not found in sorted output names
[HailoRT] [error] VStream model/softmax1 not found in sorted output names
[HailoRT] [error] VStream model/softmax2 not found in sorted output names
[HailoRT] [error] VStream model/softmax2 not found in sorted output names
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INTERNAL_FAILURE(8)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INTERNAL_FAILURE(8)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INTERNAL_FAILURE(8)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INTERNAL_FAILURE(8)

I know that some of those softmax output layers were mapped to run on CPU. Maybe it has something to do with that?

I ran the following:

import hailo_platform
hef = hailo_platform.pyhailort.pyhailort.HEF('combined_2d_quant.hef')
hef.get_sorted_output_names()
['model/softmax_logits_postprocess1',
 'model/softmax_logits_postprocess2',
 'model/softmax_logits_postprocess3']
hef.get_output_stream_infos()
[StreamInfo("model/fc5_290"),
 StreamInfo("model/fc4_288"),
 StreamInfo("model/softmax2"),
 StreamInfo("model/softmax1"),
 StreamInfo("model/fc1_286")]
hef.get_output_vstream_infos()
[HailoRT] [error] VStream model/softmax1 not found in sorted output names
[HailoRT] [error] VStream model/softmax1 not found in sorted output names
[HailoRT] [error] VStream model/softmax2 not found in sorted output names
[HailoRT] [error] VStream model/softmax1 not found in sorted output names
[HailoRT] [error] VStream model/softmax2 not found in sorted output names
[HailoRT] [error] VStream model/softmax2 not found in sorted output names
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INTERNAL_FAILURE(8)

Hi @Omer,

I may have found some issues with how the output layers are resolved when populating metadata. I don’t know how to fix it, but I bypassed the sorting and checking altogether.

This fixed my first problem, but I still had issues using the InferVStreams class. When I tried to run inference on the pipeline, it kept throwing this error:

[2024-08-16 20:08:25.866] [17050] [17050] [HailoRT] [error] [inference_pipeline.cpp:80] [verify_network_inputs_and_outputs] CHECK failed - Not all outputs have been provided for network model/model
[2024-08-16 20:08:25.866] [17050] [17050] [HailoRT] [error] [inference_pipeline.cpp:198] [infer] CHECK_SUCCESS failed with status=HAILO_INVALID_ARGUMENT(2)

Apparently ConfiguredNetworkGroupBase::create_output_vstreams incorrectly assumes that the post-process ops that run on the host have unique names.

Here are the quick fixes I made so I could move forward:

If you want to reproduce the issues I can provide my HEF file.


Hi @engarlanded_boa,
Thank you for the info. If you look at the HN of your model, are the Softmax layers listed in the “output_layers_order”? For example:
(example screenshot omitted)

Regards,

@Omer

Yup.

"net_params": {
        "version": "1.0",
        "stage": "HN",
        "clusters_placement": [[]],
        "clusters_to_skip": [],
        "output_layers_order": ["model/softmax_logits_postprocess3", "model/softmax1", "model/softmax_logits_postprocess2", "model/softmax_logits_postprocess1", "model/softmax2"],
        "transposed_net": false,
        "net_scopes": ["model"]
    },

Interesting. Can you send the ONNX you used and the HEF you’re getting the error on?
You can do it either in DM or via email to [email protected]

Regards,