How can I train my own custom UNet_MobileNetV2 model?

Hello,

I couldn’t find a tutorial on GitHub for training a UNet with MobileNetV2 as the backbone. Could you provide guidance on how to train my own model from scratch?

Additionally, when exporting the trained model, do the node names need to match those provided by Hailo for compatibility? If so, is there a specific naming convention or structure I should follow to ensure smooth integration?

Thank you!

We provide retraining Dockers for some popular models in our Model Zoo. However, we usually start from a fully trained model exported to ONNX or TFLite.

Maybe one of our users can give you some hints.

Regarding the node names: you will need to give the parser the start and end node names of your model. There is no need to format them in any special way.
Try the following:

  • Parse the model using the CLI without start- and end-node names and see whether the parser gives you a recommendation.
  • Review your model in Netron to find the right start and end nodes and their names.
  • Try the DFC Studio preview in the Hailo AI Software Suite Docker:
hailo dfc-studio

Load the model and review it in the GUI. The layers are color-coded and should allow you to identify the start and end nodes and their names.
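Once you have the names, they are passed to the parser verbatim. A sketch using the DFC Python API as shown in the Hailo tutorials (the file name, node names, and input shape below are examples; parameter names may vary slightly between SDK versions):

```python
# Sketch only: requires the Hailo Dataflow Compiler (hailo_sdk_client),
# which ships with the Hailo AI Software Suite.
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")  # adjust to your target device

# The start/end node names are exactly the ONNX node names you found in
# Netron or in the parser's recommendation -- no reformatting needed.
runner.translate_onnx_model(
    "unet_mnv2.onnx",                      # illustrative file name
    "unet_mnv2",
    start_node_names=["input.1"],          # example names -- use your own
    end_node_names=["output"],
    net_input_shapes={"input.1": [1, 3, 224, 224]},
)
```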

Hi @user156, thank you for your reply.
I didn’t find a retraining Docker for U-Net in the Model Zoo GitHub repository. Will one be provided?

Sorry for the misunderstanding. We do not support training, except for the few retraining Dockers on GitHub.

I asked ChatGPT the following question, and it looks like it might be able to help you. Give it a try; the instructions looked quite good.

Can you help with training instructions for a UNet with MobileNetV2 as the backbone?

I have trained a UNet-MobileNetV2 model and successfully converted it to ONNX format. Now, I would like to convert the ONNX model to HEF format. Could you guide me through the process?

Additionally, after converting to HEF, is there an example program available to run the model? I searched the official GitHub repository, but most of the available scripts seem to focus on YOLO-based model conversion.

Any guidance or references would be greatly appreciated.

The Hailo AI Software Suite contains tutorials for the entire model conversion process. After you start the Docker, simply run the following command:

hailo tutorial

This will start a Jupyter notebook server with notebooks for each step.
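The notebooks walk through roughly the following flow. As a condensed sketch using the DFC Python API (file names are illustrative, and the random calibration set is a placeholder for real preprocessed images):

```python
# Sketch only: requires the Hailo Dataflow Compiler (hailo_sdk_client).
import numpy as np
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")

# 1. Parse: ONNX -> Hailo internal representation.
runner.translate_onnx_model("unet_mnv2.onnx", "unet_mnv2")

# 2. Optimize/quantize: needs a calibration dataset in NHWC layout.
#    Random data is a placeholder -- use real preprocessed images here.
calib_data = np.random.uniform(0, 255, (64, 224, 224, 3)).astype(np.float32)
runner.optimize(calib_data)

# 3. Compile to a HEF for the device.
hef = runner.compile()
with open("unet_mnv2.hef", "wb") as f:
    f.write(hef)
```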

You can run the model in the built-in emulator at each step of the conversion to validate it.
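In the tutorials the emulator is driven through an inference context; treat this as a sketch with names as they appear there (they may differ in your SDK version):

```python
# Sketch only: requires the Hailo Dataflow Compiler (hailo_sdk_client).
import numpy as np
from hailo_sdk_client import ClientRunner, InferenceContext

runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model("unet_mnv2.onnx", "unet_mnv2")

batch = np.random.uniform(0, 255, (1, 224, 224, 3)).astype(np.float32)

# Run the parsed (float) model in the emulator and compare its output
# against your training framework's output before quantizing.
with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    native_out = runner.infer(ctx, batch)
```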

For measuring FPS, PCIe bandwidth, and power, you can use the HailoRT CLI run command:

hailortcli run model.hef
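For a minimal example program, the HailoRT Python API (`hailo_platform`) can run a HEF roughly like this. This is a sketch following the HailoRT examples; the file name and input data are placeholders, and class names may vary by HailoRT version:

```python
# Sketch only: requires HailoRT (hailo_platform) and a Hailo device.
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams,
                            HailoStreamInterface, InputVStreamParams,
                            OutputVStreamParams, InferVStreams, FormatType)

hef = HEF("unet_mnv2.hef")  # illustrative file name

with VDevice() as device:
    params = ConfigureParams.create_from_hef(
        hef, interface=HailoStreamInterface.PCIe)
    network_group = device.configure(hef, params)[0]

    in_params = InputVStreamParams.make(
        network_group, format_type=FormatType.FLOAT32)
    out_params = OutputVStreamParams.make(
        network_group, format_type=FormatType.FLOAT32)

    in_info = hef.get_input_vstream_infos()[0]
    frame = np.zeros((1, *in_info.shape), dtype=np.float32)  # placeholder

    with network_group.activate(network_group.create_params()):
        with InferVStreams(network_group, in_params, out_params) as pipeline:
            results = pipeline.infer({in_info.name: frame})
            # results maps output vstream names to numpy arrays
```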

For your application, I would recommend looking at our TAPPAS examples and the Hailo Application Code Examples repository.

GitHub - Hailo Application Code Examples