Converting TFLite Model (EfficientDet Lite 0) to HAR/HEF Using Hailo SDK

Hello everyone,

I am working on converting a TFLite model trained with EfficientDet Lite 0 into a HAR file and then into HEF for deployment on Hailo.

Recently, I successfully converted a YOLOv8n model to HAR using the following approach:
Colab Sheet

  • Used from hailo_sdk_client import ClientRunner
  • Applied runner.translate_onnx_model to generate the HAR file

However, since my current model is in TFLite format, I am unsure of the correct workflow. Do I need to first convert the TFLite model to ONNX before translating it to HAR? Or is there a direct method within Hailo SDK for converting TFLite models?

Any guidance or recommended steps would be greatly appreciated!

Thank you in advance! :blush:


Hi @kasun.thushara.1800,

You can pass a tflite file directly to the parser, without having to convert to onnx.

You can see an example of how this is done in our DFC Tutorial:
https://hailo.ai/developer-zone/documentation/dataflow-compiler-v3-29-0/?page=tutorials_notebooks%2Fnotebooks%2FDFC_1_Parsing_Tutorial.html#Parsing-Example-from-TensorFlow-Lite
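
For illustration, a minimal sketch of parsing a TFLite file directly (the path, model name, and architecture below are placeholders; start/end nodes can often be left out, and the parser will suggest them if it needs explicit ones):

from hailo_sdk_client import ClientRunner

# Placeholder path and names, for illustration only
runner = ClientRunner(hw_arch="hailo8")
hn, npz = runner.translate_tf_model(
    "model.tflite",   # a .tflite file is accepted directly, no ONNX step needed
    "my_model",
)
runner.save_har("my_model.har")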


Thank you for the reply
When converting the TFLite model to a HAR file using hailo_sdk_client and the translate_tf_model function, I'm unsure whether I need to specify the start and end nodes explicitly.

I attempted to translate the pretrained EfficientDet-Lite0 model from the Hailo Model Zoo, using DFC 3.30.0.
The error message is:
UnsupportedSliceLayerError in op strided_slice_13: Found new axis or shrink axis in slice node strided_slice_13, which is not supported
UnsupportedShuffleLayerError in op resize/ExpandDims: Failed to determine type of layer to create in node resize/ExpandDims (RESHAPE)
UnsupportedShuffleLayerError in op resize/Squeeze1: Failed to determine type of layer to create in node resize/Squeeze1 (RESHAPE)
UnsupportedModelError in op strided_slice_6: 1D form is not supported in layer strided_slice_6 of type SliceLayer.
UnsupportedShuffleLayerError in op pad_to_bounding_box/ExpandDims: Failed to determine type of layer to create in node pad_to_bounding_box/ExpandDims (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_1: Failed to determine type of layer to create in node Reshape_1 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape: Failed to determine type of layer to create in node Reshape (RESHAPE)
UnsupportedReduceMaxLayerError in op Max: Failed to create reduce max layer at vertex Max. Reduce max is only supported on the features axis, and with keepdim=True
UnsupportedSliceLayerError in op strided_slice_203: Found new axis or shrink axis in slice node strided_slice_203, which is not supported
UnsupportedSliceLayerError in op strided_slice_19: Found new axis or shrink axis in slice node strided_slice_19, which is not supported
UnsupportedShuffleLayerError in op Reshape_3: Failed to determine type of layer to create in node Reshape_3 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_2: Failed to determine type of layer to create in node Reshape_2 (RESHAPE)
UnsupportedOperationError in op NonMaxSuppressionV5: NON_MAX_SUPPRESSION_V5 operation is unsupported
UnsupportedShuffleLayerError in op stack_4: Failed to determine type of layer to create in node stack_4 (RESHAPE)
UnsupportedSliceLayerError in op strided_slice_181: Found new axis or shrink axis in slice node strided_slice_181, which is not supported
UnsupportedShuffleLayerError in op stack_51: Failed to determine type of layer to create in node stack_51 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_5: Failed to determine type of layer to create in node Reshape_5 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_4: Failed to determine type of layer to create in node Reshape_4 (RESHAPE)
UnsupportedShuffleLayerError in op clip_by_value;clip_by_value/y;clip_by_value/Minimum;stack_3;clip_by_value/Minimum/y1: Failed to determine type of layer to create in node clip_by_value;clip_by_value/y;clip_by_value/Minimum;stack_3;clip_by_value/Minimum/y1 (RESHAPE)
UnsupportedSliceLayerError in op strided_slice_22: Found new axis or shrink axis in slice node strided_slice_22, which is not supported
UnsupportedSliceLayerError in op strided_slice_23: Found new axis or shrink axis in slice node strided_slice_23, which is not supported
UnsupportedSliceLayerError in op strided_slice_24: Found new axis or shrink axis in slice node strided_slice_24, which is not supported
UnsupportedSliceLayerError in op strided_slice_25: Found new axis or shrink axis in slice node strided_slice_25, which is not supported
UnsupportedShuffleLayerError in op Reshape_7: Failed to determine type of layer to create in node Reshape_7 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_6: Failed to determine type of layer to create in node Reshape_6 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_9: Failed to determine type of layer to create in node Reshape_9 (RESHAPE)
UnsupportedShuffleLayerError in op Reshape_8: Failed to determine type of layer to create in node Reshape_8 (RESHAPE)
Please try to parse the model again, using these start node names: pad_to_bounding_box/Pad
Please try to parse the model again, using these end node names:
class_net/class-predict_2/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_2/separable_conv2d;class_net/class-predict/bias,
class_net/class-predict_1/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_1/separable_conv2d;class_net/class-predict/bias,
class_net/class-predict/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict/bias,
box_net/box-predict_3/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_3/separable_conv2d;box_net/box-predict/bias,
class_net/class-predict_3/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_3/separable_conv2d;class_net/class-predict/bias,
class_net/class-predict_4/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_4/separable_conv2d;class_net/class-predict/bias,
box_net/box-predict_2/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_2/separable_conv2d;box_net/box-predict/bias,
box_net/box-predict/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict/bias,
box_net/box-predict_4/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_4/separable_conv2d;box_net/box-predict/bias,
box_net/box-predict_1/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_1/separable_conv2d;box_net/box-predict/bias
Any guidance would be greatly appreciated.

If you’re interested in the pre-trained version, you can download the already-compiled model from our ModelZoo. For example, for Hailo-8:

To understand the process, I first tested with a pretrained EfficientDet-Lite0 model (link) and later trained my own model. My goal is to successfully convert the model into a HEF file.

I used Netron to inspect the model and determine the start and end nodes. Here’s the script I used for translation:

from hailo_sdk_client import ClientRunner

# Model and target-hardware settings
model_name = "tflite_model"
model_path = "/content/drive/MyDrive/efficientdet-lite0.tflite"
chosen_hw_arch = "hailo8l"
runner = ClientRunner(hw_arch=chosen_hw_arch)

# Start/end nodes identified by inspecting the TFLite model in Netron
start_node_name = "pad_to_bounding_box/Pad"
end_node_names = [
    "class_net/class-predict/separable_conv2d",
    "class_net/class-predict_1/separable_conv2d",
    "class_net/class-predict_2/separable_conv2d",
    "class_net/class-predict_3/separable_conv2d",
    "class_net/class-predict_4/separable_conv2d",
    "box_net/box-predict/separable_conv2d",
    "box_net/box-predict_1/separable_conv2d",
    "box_net/box-predict_2/separable_conv2d",
    "box_net/box-predict_3/separable_conv2d",
    "box_net/box-predict_4/separable_conv2d"
]


try:
    # Translate the TFLite model directly into Hailo's internal representation
    hn, npz = runner.translate_tf_model(
        model_path,
        model_name,
        start_node_names=start_node_name,
        end_node_names=end_node_names
    )
    print("Model translation successful.")
except Exception as e:
    print(f"Error during model translation: {e}")
    raise


# Save the translated model as a HAR file
hailo_model_har_name = f"{model_name}_hailo_model.har"
try:
    runner.save_har(hailo_model_har_name)
    print(f"HAR file saved as: {hailo_model_har_name}")
except Exception as e:
    print(f"Error saving HAR file: {e}")  

However, I encountered the following error:

Error during model translation: End node class_net/class-predict/separable_conv2d wasn't found in TF model. Did you mean one of these? 
{'class_net/class-predict_4/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_4/separable_conv2d;class_net/class-predict/bias', 
'class_net/class-predict_3/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_3/separable_conv2d;class_net/class-predict/bias', 
...
}

Any guidance would be greatly appreciated.

The error message suggests that the layer names that you have picked are incorrect.



Could someone kindly confirm if the end nodes I identified are correct for EfficientDet-Lite0? If not, could you please guide me on the correct ones? Your guidance would be extremely helpful, and I truly appreciate any support you can provide. Thank you in advance!

To clarify further, the error message says that the layer names you used are not present in the network. This means there was a mistake in the names themselves; it is unrelated to which end nodes you selected.

These are the end-node names used by our ModelZoo:

- box_net/box-predict/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict/bias
- class_net/class-predict/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict/bias
- box_net/box-predict_1/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_1/separable_conv2d;box_net/box-predict/bias
- class_net/class-predict_1/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_1/separable_conv2d;class_net/class-predict/bias
- box_net/box-predict_2/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_2/separable_conv2d;box_net/box-predict/bias
- class_net/class-predict_2/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_2/separable_conv2d;class_net/class-predict/bias
- box_net/box-predict_3/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_3/separable_conv2d;box_net/box-predict/bias
- class_net/class-predict_3/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_3/separable_conv2d;class_net/class-predict/bias
- box_net/box-predict_4/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_4/separable_conv2d;box_net/box-predict/bias
- class_net/class-predict_4/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_4/separable_conv2d;class_net/class-predict/bias
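
For illustration, these names can be passed as the end_node_names list in the same translation call as before; a minimal sketch (reusing model_path, model_name, start_node_name, and runner from the earlier script):

end_node_names = [
    "box_net/box-predict/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict/bias",
    "class_net/class-predict/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict/bias",
    "box_net/box-predict_1/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_1/separable_conv2d;box_net/box-predict/bias",
    "class_net/class-predict_1/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_1/separable_conv2d;class_net/class-predict/bias",
    "box_net/box-predict_2/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_2/separable_conv2d;box_net/box-predict/bias",
    "class_net/class-predict_2/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_2/separable_conv2d;class_net/class-predict/bias",
    "box_net/box-predict_3/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_3/separable_conv2d;box_net/box-predict/bias",
    "class_net/class-predict_3/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_3/separable_conv2d;class_net/class-predict/bias",
    "box_net/box-predict_4/BiasAdd;box_net/box-predict/separable_conv2d;box_net/box-predict_4/separable_conv2d;box_net/box-predict/bias",
    "class_net/class-predict_4/BiasAdd;class_net/class-predict/separable_conv2d;class_net/class-predict_4/separable_conv2d;class_net/class-predict/bias",
]

# Re-run the translation with the ModelZoo end-node names
hn, npz = runner.translate_tf_model(
    model_path,
    model_name,
    start_node_names=start_node_name,
    end_node_names=end_node_names,
)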

Oh, thank you so much for your help! I really appreciate it. I was able to translate the model successfully.

For optimization, what should I specify as the meta_arch for the NMS model? Should it be efficientdetlite0 or efficientdet_lite0? I tried both names but couldn’t find the correct one.

nms_postprocess("/content/drive/MyDrive/nms_configs/efficientdet_nms_layer_config.json", meta_arch="efficientdet_lite", engine=cpu)

I would greatly appreciate any guidance on this

@kasun.thushara.1800
Please try meta_arch=ssd.
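
For context, a rough sketch of how that model script line could be applied during optimization (the HAR filename and the calibration data below are placeholders; real preprocessed 320x320 RGB images should be used for calibration):

import numpy as np
from hailo_sdk_client import ClientRunner

# Load the HAR produced by the parsing step (placeholder filename)
runner = ClientRunner(har="tflite_model_hailo_model.har", hw_arch="hailo8l")

# Model script: attach the NMS postprocess with the SSD meta-architecture
runner.load_model_script(
    'nms_postprocess("/content/drive/MyDrive/nms_configs/efficientdet_nms_layer_config.json", '
    'meta_arch=ssd, engine=cpu)\n'
)

# Calibration set: random data only as a placeholder
calib_data = np.random.rand(64, 320, 320, 3).astype(np.float32)
runner.optimize(calib_data)
runner.save_har("tflite_model_quantized.har")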


@shashi Thank you! It worked, and I was able to compile it.

However, I am unable to run my compiled model on the Raspberry Pi. I used two classes—‘sprite’ and ‘cola’—but I do not see any bounding boxes. I also used a labels.json file with the mentioned labels.

I will share my Colab notebook where I compiled the model from TFLite to HEF:
Colab Link

I would really appreciate any insights or suggestions. Thank you!

Hi @kasun.thushara.1800
Can you get the output stream info? Typically, these models output results in a format that needs further processing. See our guide on this topic: User Guide 2: Running Your First Object Detection Model on a Hailo Device Using DeGirum PySDK

I hope this is what you are asking about.

Hi @kasun.thushara.1800
I was talking about information on the model's outputs:

from hailo_platform import HEF

hef = HEF("your_hef_path")
output_vstream_info = hef.get_output_vstream_infos()

print("Outputs")
for output_info in output_vstream_info:
  print(output_info)

I ran this in the RPi environment and got this output:

Outputs
VStreamInfo("sprite_cola_model/ssd_nms_postprocess")

Hi @kasun.thushara.1800
Ok, this indicates that the output in fact is a single tensor after NMS. Did you get a chance to read our user guide? You can follow that to interpret the output. Alternatively, if you follow User Guide 3: Simplifying Object Detection on a Hailo Device Using DeGirum PySDK, you can integrate the model to PySDK and use the post-processor to get the desired outputs.
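
For reference, the single NMS output is typically organized per class, with each detection given as normalized [y_min, x_min, y_max, x_max, score] (please verify the exact layout against the guide above). A rough parsing sketch, using a dummy stand-in for the output tensor:

import numpy as np

# Dummy stand-in for the NMS output of one frame: one array per class,
# each row [y_min, x_min, y_max, x_max, score] in normalized coordinates
results = [
    np.array([[0.10, 0.20, 0.55, 0.60, 0.92]]),  # class 0: "sprite"
    np.empty((0, 5)),                            # class 1: "cola"
]
labels = ["sprite", "cola"]
img_w, img_h = 320, 320

for class_id, dets in enumerate(results):
    for y_min, x_min, y_max, x_max, score in dets:
        if score < 0.3:  # example confidence threshold
            continue
        box = (int(x_min * img_w), int(y_min * img_h),
               int(x_max * img_w), int(y_max * img_h))
        print(labels[class_id], round(float(score), 2), box)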

I configured NMS during compilation using the following settings. The “w” and “h” values were copied from the anchor-box widths and heights found here:

nms_layer_config = {
    "nms_scores_th": 0.001,
    "nms_iou_th": 0.5,
    "image_dims": [320, 320],
    "max_proposals_per_class": 100,
    "centers_scale_factor": 1,
    "bbox_dimensions_scale_factor": 1,
    "classes": 2,
    "background_removal": True,
    "background_removal_index": 0,
    "bbox_decoders": [
        {
            "w": [0.075, 0.10606602, 0.05303301, 0.09449408, 0.13363481,
                  0.06681741, 0.11905508, 0.1683693, 0.08418465],
            "h": [0.075, 0.05303301, 0.10606602, 0.09449408, 0.06681741,
                  0.13363481, 0.11905508, 0.08418465, 0.1683693],
            "reg_layer": "sprite_cola_model/conv65",
            "cls_layer": "sprite_cola_model/conv66"
        },
        {
            "w": [0.15, 0.21213203, 0.10606602, 0.18898816, 0.26726963,
                  0.13363481, 0.23811015, 0.3367386, 0.1683693],
            "h": [0.15, 0.10606602, 0.21213203, 0.18898816, 0.13363481,
                  0.26726963, 0.23811015, 0.1683693, 0.3367386],
            "reg_layer": "sprite_cola_model/conv74",
            "cls_layer": "sprite_cola_model/conv75"
        },
        # More bbox decoders...
    ]
}
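
For completeness, the dictionary above can be written to the JSON file that the nms_postprocess command points to, for example:

import json

# Write the NMS layer configuration to the path used in the model script
with open("/content/drive/MyDrive/nms_configs/efficientdet_nms_layer_config.json", "w") as f:
    json.dump(nms_layer_config, f, indent=4)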

I also modified my detection pipeline using GStreamer, as shown below:

class GStreamerDetectionApp(GStreamerApp):
    def __init__(self, app_callback, user_data):
        parser = get_default_parser()
        parser.add_argument(
            "--labels-json",
            default=None,
            help="Path to custom labels JSON file",
        )
        args = parser.parse_args()
        super().__init__(args, user_data)

        # Set Hailo model parameters
        self.batch_size = 1
        self.network_width = 320
        self.network_height = 320
        self.network_format = "RGB"
        nms_score_threshold = 0.001
        nms_iou_threshold = 0.5

I am still not able to see the bounding boxes properly in the output; the occasional box that does appear has very low accuracy and imprecise placement.
Thanks

Hi @kasun.thushara.1800
Unfortunately, I do not know the inner workings of the post-processor itself. Hopefully, someone from the Hailo team can help.