Dataflow compiler best practice

Are there any recommendations for best practices when running a model through the Dataflow Compiler pipeline (parsing, optimization, compilation, etc.)?

Parsing -

  1. If using an ONNX model, simplify it first with the onnx-simplifier Python package.
  2. Mostly for detection models (but not only), the end nodes of the model should be the last neural operations; everything after them belongs to the postprocessing stage.
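As a sketch of the first parsing step, the simplifier is published on PyPI as onnxsim (the older name onnx-simplifier also works) and can be run from the command line; the file names below are placeholders:

```shell
# Install the simplifier (published on PyPI as onnxsim)
pip install onnxsim

# Simplify the model before feeding it to the parser;
# "model.onnx" / "model_simplified.onnx" are placeholder file names
onnxsim model.onnx model_simplified.onnx
```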
Optimization/quantization -

  3. The model should have its real pre-trained weights, not randomly initialized ones.
  4. The calibration set should consist of real images that are a subset of the training dataset.
  5. Calibration images should either be normalized before running optimization, or normalized on-chip by adding a normalization layer through an alls model script command.
  6. The model should be trained with Batch Normalization, which limits the range of the output values from the activation layers and helps quantization.
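As a minimal sketch of the calibration-set point above, the images can be collected into a NumPy array and normalized with the same mean/std used in training. The random data, shapes, and normalization values below are placeholders; in practice the images come from the training set, and if you instead add an on-chip normalization layer via an alls command, skip the normalization here and feed raw images:

```python
import numpy as np

# Placeholder: 64 RGB calibration images, 224x224, values in [0, 255].
# In practice these would be loaded from a subset of the training dataset.
calib_images = np.random.randint(0, 256, size=(64, 224, 224, 3)).astype(np.float32)

# Example ImageNet-style normalization values; use the exact
# mean/std from your own training pipeline.
mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
std = np.array([58.395, 57.12, 57.375], dtype=np.float32)

calib_set = (calib_images - mean) / std

# Save for use as the calibration dataset during optimization
np.save("calib_set.npy", calib_set)
```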
Compilation -

  Not always necessary, but to get the best FPS from a compiled model, you can use the following alls command:

  performance_param(compiler_optimization_level=max)
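Putting the normalization point and the compilation tip together, a model script (.alls) might look like the following sketch; the layer name and normalization values are illustrative and must match your model and training pipeline:

```
# model_script.alls (illustrative)
# Add an on-chip normalization layer instead of normalizing calibration images
normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])

# Trade longer compilation time for higher FPS
performance_param(compiler_optimization_level=max)
```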

How do I download it? I need it.

Hi @ruige_hf ,

To obtain the Dataflow Compiler, there are two options available:

  1. Install the Hailo AI Software Suite:
    You can find a detailed step-by-step guide for installation in the following wiki:
    Hailo AI Software Suite Installation - RidgeRun Developer Wiki
  2. Install the Dataflow Compiler Only:
    If you prefer to install only the Dataflow Compiler, you can download it from the Developer Zone:
    Hailo Developer Zone - Software Downloads
    Follow this guide for installation:
    Dataflow Compiler Installation Guide
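As a rough sketch of option 2, the Dataflow Compiler is distributed as a Python wheel and is typically installed into a virtual environment; the wheel file name below is a placeholder, so use the exact file you downloaded from the Developer Zone:

```shell
# Create and activate a fresh virtual environment
python3 -m venv hailo_dfc_env
. hailo_dfc_env/bin/activate

# Install the downloaded wheel (placeholder file name)
pip install hailo_dataflow_compiler-*.whl
```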

I hope this information helps!

Best regards,

Oscar Mendez
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.ai
Developers wiki: Hailo AI Platform Guide
Website: www.ridgerun.ai