I’m trying to deploy a YOLOv8 model trained on a custom dataset on my Raspberry Pi 5 AI Kit, but I got stuck during parsing.
UnsupportedShuffleLayerError in op /model.22/dfl/Reshape: Failed to determine type of layer to create in node /model.22/dfl/Reshape
UnsupportedShuffleLayerError in op /model.22/dfl/Transpose: Failed to determine type of layer to create in node /model.22/dfl/Transpose
UnsupportedShuffleLayerError in op /model.22/dfl/Reshape_1: Failed to determine type of layer to create in node /model.22/dfl/Reshape_1
Please try to parse the model again, using these end node names: /model.22/Concat_3
Apparently, it “failed to determine type of layer to create” for two Reshape nodes and one Transpose node. Is this due to the limitations on certain ONNX operations mentioned in the User Guide, ch. 5.1.3, or could it be something else?
I’m running the following Python script for parsing:
I am aware that the error message suggests changing the end_node, but this would cut off a large portion of the model. I don’t think that would work very well.
Should I try to retrain the model and hope it doesn’t use the unsupported ONNX operations? Would love any suggestions here.
For models that are in our Hailo Model Zoo, my first recommendation is to compare your model with the Model Zoo version.
Inside the Hailo AI Software Suite docker call:
hailomz parse model_name
This will download the source model, place it inside the /local/shared_with_docker/.hailomz/ directory, and parse it into a HAR file. You can then compare the original ONNX and the HAR file with your own model files using Netron.
You can also look at the YAML and ALLS scripts inside the /local/workspace/hailo_model_zoo directory. They will give you a good insight into how to convert the models.
Please note that a large portion of the graph does not necessarily mean a compute-intensive one. When you look at a network in Netron, some layers have only a “small” box but require many operations to execute, while other parts of the network have many boxes that require little compute and run better on a CPU.
We call these parts pre- and post-processing. For some popular models, like the YOLO family, we provide these functions; you can add them via the ALLS script. Have a look at the files for yolov8 and look for nms_postprocess.
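For reference, the relevant command in the Model Zoo’s yolov8s ALLS script looks roughly like this (the exact JSON config path can differ between Model Zoo versions, so treat it as a sketch):

```
nms_postprocess("../../postprocess_config/yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)
```

As I understand it, engine=cpu runs the box decoding/NMS on the host rather than on the device, which is exactly the split described above.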
If you want the easy route, you can use our retraining docker and convert the model using the Model Zoo.
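To put the “large portion” point in perspective: the DFL block the parser rejects (/model.22/dfl) just computes a softmax-weighted expectation over 16 distance bins per box side, which is trivial to run on the host. A pure-Python sketch (function name is mine; reg_max=16 is the standard YOLOv8 value):

```python
import math

def dfl_decode(logits):
    """Decode one box side's DFL logits into a distance.

    This is what the rejected Reshape -> Transpose -> Softmax
    pattern implements on-chip: a softmax over the reg_max bins,
    followed by the expected bin index.
    """
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return sum(i * e / total for i, e in enumerate(exps))

# Uniform logits: expectation is the mean of indices 0..15, i.e. 7.5
print(dfl_decode([0.0] * 16))
```

A few multiply-adds per box side is cheap on a CPU, even though the block takes up many nodes in Netron.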
@klausk Thanks for the info. I can see there are quite a few differences between the official yolov8s model from Ultralytics and the one from the Model Zoo.
It seems like all pretrained YOLO models in the Model Explorer conform to this “simplified Hailo format”. However, they are only available as .onnx. Is there any way to get hold of .pt versions of these “simplified” models?
That would probably make the retraining and compilation process a lot smoother.
Btw: I tried to use the Model Zoo retraining docker as you suggested, but I still get the same result. Must have missed something, I guess…
<Hailo Model Zoo INFO> Start run for network yolov8s ...
<Hailo Model Zoo INFO> Initializing the hailo8l runner...
<Hailo Model Zoo WARNING> Hailo8L support is currently at Preview on Hailo Model Zoo
[info] Translation started on ONNX model yolov8s
[info] Restored ONNX model yolov8s (completion time: 00:00:00.15)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.73)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:01.95)
.
.
.
hailo_sdk_client.model_translator.exceptions.ParsingWithRecommendationException: Parsing failed. The errors found in the graph are:
UnsupportedShuffleLayerError in op /model.22/dfl/Reshape: Failed to determine type of layer to create in node /model.22/dfl/Reshape
UnsupportedShuffleLayerError in op /model.22/dfl/Transpose: Failed to determine type of layer to create in node /model.22/dfl/Transpose
UnsupportedShuffleLayerError in op /model.22/dfl/Reshape_1: Failed to determine type of layer to create in node /model.22/dfl/Reshape_1
Please try to parse the model again, using these end node names: /model.22/Concat_3
Just popping in to say I’m getting the same errors. I had a yolov8s-based model originally trained on Supervisely and attempted to parse it using the DFC Parsing tutorial, but ran into the following errors:
ParsingWithRecommendationException: Parsing failed. The errors found in the graph are:
UnsupportedShuffleLayerError in op Reshape_290: Failed to determine type of layer to create in node Reshape_290
UnsupportedShuffleLayerError in op Transpose_291: Failed to determine type of layer to create in node Transpose_291
UnsupportedShuffleLayerError in op Reshape_295: Failed to determine type of layer to create in node Reshape_295
UnsupportedLogitsLayerError in op ArgMax_334: ArgMax layer ArgMax_334 has unsupported axis 2.
I checked out the model with Netron as recommended and was able to identify the nodes causing issues, but I’m not sure how one would go about resolving them, given that these nodes are generated by retraining with the yolov8 model.
I retrained the model using the Model Zoo’s retraining instructions, but the same errors popped up (in earlier nodes this time):
ParsingWithRecommendationException: Parsing failed. The errors found in the graph are:
UnsupportedShuffleLayerError in op /model.22/dfl/Reshape: Failed to determine type of layer to create in node /model.22/dfl/Reshape
UnsupportedShuffleLayerError in op /model.22/dfl/Transpose: Failed to determine type of layer to create in node /model.22/dfl/Transpose
UnsupportedShuffleLayerError in op /model.22/dfl/Reshape_1: Failed to determine type of layer to create in node /model.22/dfl/Reshape_1
Please try to parse the model again, using these end node names: /model.22/Concat_3
Compiling (as recommended above) doesn’t help, because the problem occurs in the parsing step, which is triggered automatically when the compiler detects that the model hasn’t been parsed.
Also, it would be helpful if more links were used in responses (e.g., when referencing the ALLS script and so on). I attempted to use links in my response, but they seem to be disabled for regular users.
Looking forward to some guidance on this. Thank you!
When you train a model using the retraining docker created with the Hailo Model Zoo and you provide the model’s YAML file to hailomz, the parsing step should be done correctly and automatically. The YAML file contains the parsing information and points to the ALLS script as well.
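If you do end up driving the parser by hand instead, the end node recommended in the log can be passed explicitly. A minimal sketch, assuming the DFC Python API available inside the suite docker (file names are placeholders; the optimization and compilation steps are omitted):

```python
from hailo_sdk_client import ClientRunner

# hw_arch matches the AI Kit's Hailo-8L module
runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model(
    "yolov8s.onnx",   # placeholder path to your exported model
    "yolov8s",
    end_node_names=["/model.22/Concat_3"],  # from the parser's recommendation
)
runner.save_har("yolov8s.har")
```

The cut-off tail (DFL decode and NMS) then has to be restored either on the host or via the nms_postprocess ALLS command.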
I was able to retrain and compile the model successfully after switching to the Hailo CLI commands rather than the Jupyter tutorial (not that there is any functional difference; it was just helpful to experiment with interactive mode, etc.).
The process aligned well with the instructions, but I’ll still provide a summary for reference, in case anyone is struggling to piece things together:
In my case, I had already trained the model on Supervisely, so I just used the “Convert to YoloV5” tool to export the training data and annotations in YOLO format (it’s YOLOv8-compatible, despite the name).
After exporting, I made sure the config file (referenced as dataset.yaml in the retraining instructions) matched the format of the coco128.yaml file, and updated the train/val/path stanzas to point to my exported data.
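For anyone following along, the file I ended up with was shaped like coco128.yaml; a sketch with hypothetical paths and class names:

```yaml
# dataset.yaml -- layout mirrors coco128.yaml; paths and names are examples
path: /workspace/datasets/my_export   # dataset root
train: images/train                   # train images, relative to path
val: images/val                       # val images, relative to path

names:
  0: widget
  1: gadget
```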
Within the training docker container, retrain the model and convert it to ONNX as per the instructions on the retraining page.
Compile (I moved my retraining results to my working directory before doing this).
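The compile step itself was a single hailomz call, something along these lines (flag names as in the Model Zoo docs at the time of writing; paths and class count are placeholders):

```
hailomz compile yolov8s --ckpt best.onnx --hw-arch hailo8l --calib-path /path/to/calib/images --classes 2
```

This runs parsing, optimization, and compilation in one go and produces the HEF for the device.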