help converting onnx model

I have a classification model (MobileNet-V3-large-1x) in onnx format. I’ve used the hailo dfc-studio to create the model.har file.

I’ve tried to follow the steps here Creating Custom Hef using DFC/Model Zoo

Running the following…
hailo optimise model.har --hw-arch hailo8l --use-random-calib-set

I get the error…
[info] Current Time: 13:20:54, 09/04/24
[info] CPU: Architecture: x86_64, Model: Intel(R) Core™ i7-6700K CPU @ 4.00GHz, Number Of Cores: 8, Utilization: 17.7%
[info] Memory: Total: 15GB, Available: 10GB
[info] System info: OS: Linux, Kernel: 6.5.0-44-generic
[info] Hailo DFC Version: 3.28.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running hailo optimise model.har --hw-arch hailo8l --use-random-calib-set
usage: hailo [-h] [--version] {analyze-noise,compiler,params-csv,parser,profiler,optimize,tb,visualizer,tutorial,har,join,har-onnx-rt,runtime-profiler,dfc-studio,help} ...
hailo: error: argument {analyze-noise,compiler,params-csv,parser,profiler,optimize,tb,visualizer,tutorial,har,join,har-onnx-rt,runtime-profiler,dfc-studio,help}: invalid choice: 'optimise' (choose from 'analyze-noise', 'compiler', 'params-csv', 'parser', 'profiler', 'optimize', 'tb', 'visualizer', 'tutorial', 'har', 'join', 'har-onnx-rt', 'runtime-profiler', 'dfc-studio', 'help')

I’m a bit lost as to how to proceed - I can’t find any good tutorials or documentation on how to convert an ONNX model I’ve already trained.

Hey @oliver9523,

Welcome to the Hailo Community!

Let me offer two suggestions:

  1. There’s a small typo in your code. You’ve used “optimise” instead of “optimize”. This is likely the cause of the error you’re experiencing.
  2. To better understand how the DFC (Data Flow Compiler) works, I recommend working through the built-in tutorial notebooks (run `hailo tutorial` to open them) and the DFC user guide in the Hailo Developer Zone.

These resources should provide you with a solid foundation and help you avoid similar issues in the future.
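For reference, here is the corrected command (American spelling) in the context of the usual parse/optimize/compile flow. The subcommand names are taken from the CLI usage message in your error output; the output filenames are assumptions, so check what each step actually writes to disk:

```shell
# Step 1: parse the trained ONNX model into a HAR file
# (skip this if you already created model.har in DFC Studio)
hailo parser onnx model.onnx --hw-arch hailo8l

# Step 2: quantize/optimize -- note "optimize", not "optimise"
hailo optimize model.har --hw-arch hailo8l --use-random-calib-set

# Step 3: compile the optimized HAR to a HEF
# (assumption: the optimized HAR is written as model_optimized.har)
hailo compiler model_optimized.har --hw-arch hailo8l
```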

If you have any more questions, don’t hesitate to ask. We’re here to help!


Best regards,

ah British error - my bad… now onto the next error…

lib/python3.10/site-packages/hailo_sdk_client/post_fuser/algorithms/normalization_folding.py", line 142, in _fold_post_layer_normalization_layers
raise NormalizationFoldingException(
hailo_sdk_client.post_fuser.algorithms.exceptions.NormalizationFoldingException: normalization layer model/mul_and_add4 (translated from /backbone/features/features.4/conv/conv.5/Concat_1) is a standalone normalization in rank 2 which is not supported

The error you encountered:

NormalizationFoldingException: normalization layer model/mul_and_add4 ... is a standalone normalization in rank 2 which is not supported

indicates that a normalization layer in your model is not being fused correctly. This typically happens when the layer is a standalone normalization layer of rank 2, which is not supported in the Hailo SDK’s current optimization pipeline.

The Hailo SDK expects that certain layers, especially normalization layers, are part of a broader context in the network architecture where they can be folded into adjacent layers like convolution or activation layers. When a normalization layer appears standalone or with specific rank configurations, as in your case (rank 2), this optimization fails because such configurations are not supported by the SDK.

To resolve this:

  1. Model Refactoring: Ensure that the normalization layers are part of a supported structure, like being adjacent to convolution layers.
  2. Rank Adjustments: If possible, modify the normalization layer’s input/output dimensions to be compatible with the SDK. The current SDK has specific support for normalization layers integrated into rank-4 tensors, but not rank-2.
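To see why the SDK wants to fold a `mul_and_add` normalization into a neighbouring layer rather than run it standalone, here is a minimal pure-Python sketch (the function names and numbers are illustrative, not Hailo SDK API): the normalization's scale and shift can be absorbed into the adjacent layer's weight and bias, so no separate normalization layer has to be mapped to hardware.

```python
# conv(x) = w*x + b, followed by norm(y) = y*scale + shift.
# Folding: w' = w*scale, b' = b*scale + shift gives one fused layer
# that computes the same result as the two-layer chain.

def conv1x1(x, w, b):
    # toy 1x1 "convolution": elementwise affine map
    return [w * xi + b for xi in x]

def mul_and_add(y, scale, shift):
    # standalone normalization layer
    return [scale * yi + shift for yi in y]

def fold(w, b, scale, shift):
    # absorb the normalization into the preceding layer's parameters
    return w * scale, b * scale + shift

x = [1.0, 2.0, 3.0]
w, b = 0.5, 0.1
scale, shift = 2.0, -0.3

unfused = mul_and_add(conv1x1(x, w, b), scale, shift)
wf, bf = fold(w, b, scale, shift)
fused = conv1x1(x, wf, bf)
assert all(abs(u - f) < 1e-9 for u, f in zip(unfused, fused))
```

When the normalization has no adjacent convolution to fold into (a standalone rank-2 case like yours), there is nowhere for the scale/shift to go, which is what the exception is reporting.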

Check the SDK documentation for detailed requirements on layer support and model structures for more insights.

The next step now errors while compiling to hef from the har file:

[error] Mapping Failed (allocation time: 34s)
No successful assignment for: format_conversion2, concat18, feature_splitter9, shortcut_softmax1, reduce_max_softmax1, ew_sub_softmax1, reduce_sum_softmax1, ew_mult_softmax1, conv76

[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: No successful assignment for: format_conversion2, concat18, feature_splitter9, shortcut_softmax1, reduce_max_softmax1, ew_sub_softmax1, reduce_sum_softmax1, ew_mult_softmax1, conv76