Creating a Custom HEF Using the DFC / Hailo Model Zoo

This is a guide on how to create a custom HEF using the DFC or the Hailo Model Zoo.

First of all, let's make sure the Hailo Model Zoo is installed in the virtual environment. If it is not, please follow the README guide in the Hailo Model Zoo repository.

Now let's make sure the DFC is also installed in the same virtual environment. For more information on how to install the DFC, see the Dataflow Compiler section at https://hailo.ai/developer-zone/documentation/

Now let's start building the model:

Using the DFC:

hailo parser {onnx,tf} <model_path> --start-node-names <start_node_names> --end-node-names <end_node_names> --hw-arch {hailo8,hailo8r,hailo8l,hailo15h,hailo15m}
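For example, a concrete parser invocation might look like this (the model path and node names below are placeholders taken from a typical ONNX export; substitute the ones from your own model):

```shell
# Illustrative: parse an ONNX model for Hailo-8L
# (my_model.onnx, "images", and "output0" are placeholder names)
hailo parser onnx my_model.onnx \
    --start-node-names images \
    --end-node-names output0 \
    --hw-arch hailo8l
```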

This produces a HAR file that we use for the next step:

hailo optimize {har_path} [--hw-arch {hailo8,hailo8r,hailo8l,hailo15h,hailo15m}] (--calib-set-path CALIBRATION_SET_PATH OR --use-random-calib-set) [--model-script MODEL_SCRIPT]

The calibration set can be, for example, a TFRecord file; the model script is an .alls file, such as those found in the Hailo Model Zoo.
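A concrete optimize call might look like this (all paths are placeholders; `--use-random-calib-set` can replace the calibration set for a quick functional test, but real calibration data gives much better accuracy):

```shell
# Illustrative: optimize the parsed HAR (placeholder paths)
hailo optimize my_model.har \
    --hw-arch hailo8l \
    --calib-set-path calib_set.tfrecord \
    --model-script hailo_model_zoo/cfg/alls/generic/my_model.alls
```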

This produces model_name_optimized.har, which we compile with:

hailo compiler {har_path_optimized} [--hw-arch {hailo8,hailo8r,hailo8l,hailo15h,hailo15m}] [--model-script MODEL_SCRIPT (.alls file)] [--performance]

The --performance flag yields a better-performing model but significantly increases compilation time.
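Putting the last step together, a compile invocation might look like this (the HAR path is a placeholder; the output of a successful run is a .hef file):

```shell
# Illustrative: compile the optimized HAR into a HEF (placeholder path)
hailo compiler my_model_optimized.har \
    --hw-arch hailo8l \
    --performance
```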

Using the Hailo Model Zoo:

hailomz parse [--yaml YAML_PATH] [--ckpt CKPT_PATH (path to the ONNX or checkpoint to parse)] [--hw-arch HW_ARCH] [--start-node-names START_NODE_NAMES] [--end-node-names END_NODE_NAMES] [model_name]

Now we have the HAR file, which we use in the next step:

hailomz optimize {model_name} [--har HAR_PATH (our external HAR file)] [--yaml YAML_PATH] [--ckpt CKPT_PATH] [--hw-arch HW_ARCH] [--calib-path CALIB_PATH] [--model-script MODEL_SCRIPT_PATH] [--performance]

Now we have a quantized/optimized HAR, which we use in the final step:

hailomz compile [-h] [--yaml YAML_PATH] [--ckpt CKPT_PATH] [--hw-arch HW_ARCH] [--start-node-names START_NODE_NAMES [START_NODE_NAMES ...]] [--end-node-names END_NODE_NAMES [END_NODE_NAMES ...]]
[--har HAR_PATH] [--calib-path CALIB_PATH] [--model-script MODEL_SCRIPT_PATH] [--performance] [--resize RESIZE [RESIZE ...] (input resize)]
[model_name]
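As a worked example, an end-to-end Model Zoo flow for a retrained detector might look like this (yolov8s is used here because it exists in the Model Zoo; the checkpoint and calibration paths are placeholders):

```shell
# Illustrative end-to-end hailomz flow (placeholder paths)
hailomz parse yolov8s --ckpt my_yolov8s.onnx --hw-arch hailo8l
hailomz optimize yolov8s --har yolov8s.har --hw-arch hailo8l --calib-path /path/to/calib/images
hailomz compile yolov8s --har yolov8s.har --hw-arch hailo8l --performance
```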

For more information on any of these commands, run them with the -h option to see the full documentation of what each option represents.

Enjoy Creating Models and Share Your Experiences!
Please don’t hesitate to reach out and share your creations, questions, or concerns. We value your input and feedback, as it helps us continuously improve our offerings and support to better meet the needs of our users.

Looking forward to seeing the amazing models you develop and learning more about your experiences. Let us know how we can best assist you on your model creation journey!


@omria @Nadav I am getting an error while running hailo optimize /home/trinity-gpu/best.har --hw-arch hailo8l --use-random-calib-set --model-script /home/trinity-gpu/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8m.alls

The error is below:
Traceback (most recent call last):
  File "/home/trinity-gpu/anaconda3/envs/rpi5/bin/hailo", line 8, in <module>
    sys.exit(main())
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/tools/cmd_utils/main.py", line 111, in main
    ret_val = client_command_runner.run()
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/tools/cmd_utils/base_utils.py", line 68, in run
    return self._run(argv)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/tools/cmd_utils/base_utils.py", line 89, in _run
    return args.func(args)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/tools/optimize_cli.py", line 113, in run
    self._runner.optimize_full_precision(calib_data=dataset)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1996, in optimize_full_precision
    self._optimize_full_precision(calib_data=calib_data, data_type=data_type)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1999, in _optimize_full_precision
    self._sdk_backend.optimize_full_precision(calib_data=calib_data, data_type=data_type)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1483, in optimize_full_precision
    model, params = self._apply_model_modification_commands(model, params, update_model_and_params)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1374, in _apply_model_modification_commands
    model, params = command.apply(model, params, hw_consts=self.hw_arch.consts)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 379, in apply
    self._update_config_file(hailo_nn)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 539, in _update_config_file
    self._update_config_layers(hailo_nn)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 587, in _update_config_layers
    self._set_yolo_config_layers(hailo_nn)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 615, in _set_yolo_config_layers
    conv_layers = self._get_output_preds(hailo_nn)
  File "/home/trinity-gpu/anaconda3/envs/rpi5/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 597, in _get_output_preds
    raise AllocatorScriptParserException(
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.AllocatorScriptParserException: Error in the last layers of the model, expected conv but found LayerType.activation layer.

Hi @omria @Nadav, can you please solve this problem?

Hi @omria, can anyone solve this?

Thanks @omria for the instructions.

I have been struggling to find a clear path to retrain YOLOv8 models, especially segmentation models, and to generate a .hef file for a custom dataset with a number of classes different from the COCO dataset.

There are some things that are not clear to me. For example:

  • Is it important to know the start/end node names and use them in the parsing step? If yes, how do I get the correct node names? Can I use Triton to get the names? When I used Triton, it showed the input name as 'images' and the output names as 'output0' and 'output1'. Do I use these in the parsing step?

  • For the YAML file, do I need to modify the YAML file for my custom model? What exactly do I need to modify? I noticed that, for example, the yolov8n-seg.yaml in your ultralytics fork is different from the yolov8n_seg.yaml in the Hailo Model Zoo. Are these used for different purposes? My question is how to write a correct YAML file for my custom model to be passed in the optimize step.

  • What is the --model-script? What is it used for? Do I also need to write a correct model script for my custom model for the compilation process to work correctly?

I am not able to find clear answers to these questions in the community discussions. I am using the AI Suite Docker image to compile my custom ONNX model on a Windows machine with WSL2. However, I was never able to get a .hef file. I looked at the issues in this community and none of them worked for me.

I would be really grateful if you could provide detailed instructions on how to compile a customized model.

Thank you for your great products!

Hey @mzahana

To help you retrain YOLOv8 models (especially segmentation models) on a custom dataset and compile them into a .hef file, here are answers to each of your questions, along with some detailed steps to guide you through the process:

1. Start/End Node Names in Parsing Step

  • Importance: Yes, specifying the correct start and end node names is important to ensure that the model’s inputs and outputs are mapped correctly during parsing and compilation.
  • How to Find Node Names: You can use tools like Triton, Netron, or ONNX Graph Surgeon (from NVIDIA’s TensorRT) to inspect the model structure and obtain the node names.
    • In Triton, the input name (images) and output names (output0, output1) are commonly used for YOLO models. However, it's best to verify these by examining the model directly with Netron or a similar tool.
  • Setting Node Names: Use these node names (e.g., images for input, output0 and output1 for outputs) in the parsing step when configuring the Hailo-8 compilation pipeline.
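If you prefer the command line over a graphical viewer, the graph's input and output names can also be printed with a short Python one-liner (this assumes the onnx Python package is installed; best.onnx is a placeholder path):

```shell
# Illustrative: print ONNX graph input/output names (requires the `onnx` package)
python -c "
import sys, onnx
m = onnx.load(sys.argv[1])
print('inputs: ', [i.name for i in m.graph.input])
print('outputs:', [o.name for o in m.graph.output])
" best.onnx
```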

2. YAML File Configuration

  • Purpose: The YAML file provides metadata for the model and specifies details such as the number of classes, input dimensions, and anchor boxes (for object detection models).
  • Custom Modifications:
    • Update the YAML file to match your custom dataset:
      • Classes: Set the num_classes parameter to the number of classes in your dataset.
      • Anchors: Modify anchors if necessary, although for segmentation models, this may be less critical.
      • Input Shape: Ensure the input dimensions align with your custom model.
    • Different YAMLs: The YAML file in Hailo’s model zoo (yolov8n_seg.yaml) may have different settings than the one in Ultralytics’ YOLO repo (yolov8n-seg.yaml). Hailo’s version is often tailored for compatibility with their processing pipeline, so use the Hailo YAML file as a reference, but update it for your specific dataset.

3. --model-script Parameter

  • Definition: The --model-script parameter points to a model script (an .alls file, like the ones under hailo_model_zoo/cfg/alls) containing commands that steer optimization, for example input normalization, quantization parameters, and NMS postprocessing configuration.
  • Need for Custom Models: For custom models, you may need to adapt the model script if your preprocessing (e.g., normalization) differs from the default, or if the postprocessing (e.g., NMS) must be configured for your model. Ensure the script matches the format and structure required by Hailo's processing pipeline.
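As a rough illustration of what a model script contains (the commands and values below are hypothetical; take the real ones from the Model Zoo's .alls file for your architecture):

```
# Hypothetical .alls model-script fragment (values are placeholders)
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
model_optimization_flavor(optimization_level=2)
nms_postprocess(meta_arch=yolov8, engine=cpu)
```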

4. Compiling Custom Models with Hailo-8

  • Training and Export:
    1. Train or fine-tune your YOLOv8 model using Ultralytics’ YOLOv8 framework.
    2. Export the trained model to ONNX format.
  • Compilation with Hailo AI Suite:
    1. Place the ONNX model in the Hailo AI Suite Docker environment.
    2. Use the hailo_model_optimizer tool to optimize the model, providing the modified YAML file and --model-script if applicable.
    3. Run the hailo_model_compiler to compile the model into a .hef file. Ensure you specify the correct start and end nodes here.

Example YAML and Compilation Command

  • Modify the YAML for your model:
    num_classes: <your_number_of_classes>
    input_shape: [640, 640]  # Adjust according to your model
    # other parameters as needed
    
  • Example command to compile:
    hailo_model_optimizer --model my_model.onnx --yaml-path custom_model.yaml --model-script custom_script.py
    hailo_model_compiler --hef-path my_model.hef --model-optimized custom_model_optimized.hmo
    

Troubleshooting Tips

  • Node Errors: If errors persist related to node names, verify that you are using the correct start and end nodes.
  • Community and Documentation: Check Hailo’s documentation and community forums, as updates or additional resources may be available for custom compilation workflows.

These steps should help you compile a custom YOLOv8 segmentation model for use with Hailo-8. If you have further questions, please feel free to reach out!

Thanks @omria

I followed the steps in this response. I think it is an AI-generated response, as I could not find a way to execute the hailo_model_optimizer and the hailo_model_compiler commands. I could not find them in the documentation.

I am still not able to generate .HEF files for my custom yolov8n_seg following your suggested steps.

I have submitted a post regarding the issues I am facing. The post title is “Custom Yolov8n_seg: No valid partition found”, but it’s pending approval.

I also tried to send you the ONNX file to investigate it if possible, but the DM service does not allow onnx attachments.

I hope to be able to get a solution for my issues. Otherwise, the Hailo accelerators are not usable to me.

Thanks.