Hi,
I tried to convert the model to HEF using the following commands:
STEP 1: Parsing
(.venv) taruna@taruna-desktop:~/Hailo/models
$ hailomz parse --yaml=/home/taruna/Hailo/hailo_model_zoo/hailo_model_zoo/cfg/networks/ssd_mobilenet_v1.yaml --hw-arch="hailo8l"
<Hailo Model Zoo INFO> Start run for network ssd_mobilenet_v1 ...
<Hailo Model Zoo INFO> Initializing the runner...
[info] Translation started on Tensorflow model ssd_mobilenet_v1
[info] Start nodes mapped from original model: 'Preprocessor/sub': 'ssd_mobilenet_v1/input_layer1'.
[info] End nodes mapped from original model: 'BoxPredictor_0/BoxEncodingPredictor/BiasAdd', 'BoxPredictor_0/ClassPredictor/BiasAdd', 'BoxPredictor_1/BoxEncodingPredictor/BiasAdd', 'BoxPredictor_1/ClassPredictor/BiasAdd', 'BoxPredictor_2/BoxEncodingPredictor/BiasAdd', 'BoxPredictor_2/ClassPredictor/BiasAdd', 'BoxPredictor_3/BoxEncodingPredictor/BiasAdd', 'BoxPredictor_3/ClassPredictor/BiasAdd', 'BoxPredictor_4/BoxEncodingPredictor/BiasAdd', 'BoxPredictor_4/ClassPredictor/BiasAdd', 'BoxPredictor_5/BoxEncodingPredictor/BiasAdd', 'BoxPredictor_5/ClassPredictor/BiasAdd'.
[info] Translation completed on Tensorflow model ssd_mobilenet_v1 (completion time: 00:00:00.18)
[info] Saved HAR to: /home/taruna/Hailo/models/ssd_mobilenet_v1.har
STEP 2: Optimizing
(.venv) taruna@taruna-desktop:~/Hailo/models
$ hailo optimize /home/taruna/Hailo/models/ssd_mobilenet_v1.har --calib-set-path calib_set.npy
[info] Current Time: 18:35:51, 11/22/24
[info] CPU: Architecture: x86_64, Model: 12th Gen Intel(R) Core(TM) i7-12700, Number Of Cores: 20, Utilization: 0.1%
[info] Memory: Total: 23GB, Available: 18GB
[info] System info: OS: Linux, Kernel: 6.8.0-40-generic
[info] Hailo DFC Version: 3.29.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running `hailo optimize /home/taruna/Hailo/models/ssd_mobilenet_v1.har --calib-set-path calib_set.npy`
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] Starting Mixed Precision
[info] Mixed Precision is done (completion time is 00:00:00.16)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:07<00:00, 8.34entries/s]
[info] Statistics Collector is done (completion time is 00:00:08.10)
[info] Starting Fix zp_comp Encoding
[info] Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /home/taruna/Hailo/models/ssd_mobilenet_v1_optimized.har
STEP 3: Compiling
(.venv) taruna@taruna-desktop:~/Hailo/models
$ hailo compiler /home/taruna/Hailo/models/ssd_mobilenet_v1_optimized.har
[info] Current Time: 18:37:17, 11/22/24
[info] CPU: Architecture: x86_64, Model: 12th Gen Intel(R) Core(TM) i7-12700, Number Of Cores: 20, Utilization: 0.1%
[info] Memory: Total: 23GB, Available: 18GB
[info] System info: OS: Linux, Kernel: 6.8.0-40-generic
[info] Hailo DFC Version: 3.29.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running `hailo compiler /home/taruna/Hailo/models/ssd_mobilenet_v1_optimized.har`
[info] Compiling network
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[info] Using Single-context flow
[info] Resources optimization guidelines: Strategy -> GREEDY Objective -> MAX_FPS
[info] Resources optimization params: max_control_utilization=75%, max_compute_utilization=75%, max_compute_16bit_utilization=75%, max_memory_utilization (weights)=75%, max_input_aligner_utilization=75%, max_apu_utilization=75%
[info] Using Single-context flow
[info] Resources optimization guidelines: Strategy -> GREEDY Objective -> MAX_FPS
[info] Resources optimization params: max_control_utilization=75%, max_compute_utilization=75%, max_compute_16bit_utilization=75%, max_memory_utilization (weights)=75%, max_input_aligner_utilization=75%, max_apu_utilization=75%
Validating context_0 layer by layer (100%)
● Finished
[info] Solving the allocation (Mapping), time per context: 59m 59s
Context:0/0 Iteration 4: Trying parallel mapping...
cluster_0 cluster_1 cluster_2 cluster_3 cluster_4 cluster_5 cluster_6 cluster_7 prepost
worker0 V V * * V V * * V
worker1 V V * * V V * * V
worker2 V V * * V V * * V
worker3 V V * * V V * * V
00:03
Reverts on cluster mapping: 0
Reverts on inter-cluster connectivity: 0
Reverts on pre-mapping validation: 0
Reverts on split failed: 0
[info] Iterations: 4
Reverts on cluster mapping: 0
Reverts on inter-cluster connectivity: 0
Reverts on pre-mapping validation: 0
Reverts on split failed: 0
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Cluster | Control Utilization | Compute Utilization | Memory Utilization |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | cluster_0 | 56.3% | 35.9% | 27.3% |
[info] | cluster_1 | 100% | 96.9% | 82% |
[info] | cluster_4 | 100% | 79.7% | 44.5% |
[info] | cluster_5 | 43.8% | 35.9% | 51.6% |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Total | 75% | 62.1% | 51.4% |
[info] +-----------+---------------------+---------------------+--------------------+
[info] Successful Mapping (allocation time: 12s)
[info] Compiling context_0...
[info] Bandwidth of model inputs: 2.05994 Mbps, outputs: 1.38943 Mbps (for a single frame)
[info] Bandwidth of DDR buffers: 0.0 Mbps (for a single frame)
[info] Bandwidth of inter context tensors: 0.0 Mbps (for a single frame)
[info] Compiling context_0...
[info] Bandwidth of model inputs: 2.05994 Mbps, outputs: 1.38943 Mbps (for a single frame)
[info] Bandwidth of DDR buffers: 0.0 Mbps (for a single frame)
[info] Bandwidth of inter context tensors: 0.0 Mbps (for a single frame)
[info] Building HEF...
[info] Successful Compilation (compilation time: 2s)
[info] Compilation complete
[info] Saved HEF to: /home/taruna/Hailo/models/ssd_mobilenet_v1.hef
[info] Saved HAR to: /home/taruna/Hailo/models/ssd_mobilenet_v1_compiled.har
Question
After completing these steps, when I run the pipeline with the compiled model, it throws this error:
gst-launch-1.0 filesrc location=$TAPPAS_WORKSPACE/apps/h8/gstreamer/resources/mp4/detection.mp4 name=src_0 ! decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue ! hailonet hef-path=/home/vk/Downloads/ssd_mobilenet_v1.hef is-active=true ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailofilter so-path=$TAPPAS_WORKSPACE/apps/h8/gstreamer/libs/post_processes/libmobilenet_ssd_post.so qos=false ! queue ! hailooverlay ! videoconvert ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
Redistribute latency...
Redistribute latency...
Redistribute latency...
terminate called after throwing an instance of 'std::invalid_argument'
what(): No tensor with name ssd_mobilenet_v1/nms1
Aborted
However, if I instead use the pre-built HEF downloaded directly from the Model Zoo repo, the same pipeline works fine, so is there something missing or wrong in my compilation steps?
gst-launch-1.0 filesrc location=$TAPPAS_WORKSPACE/apps/h8/gstreamer/resources/mp4/detection.mp4 name=src_0 ! decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue ! hailonet hef-path=/home/vk/Desktop/ssd_mobilenet_v1.hef is-active=true ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailofilter so-path=$TAPPAS_WORKSPACE/apps/h8/gstreamer/libs/post_processes/libmobilenet_ssd_post.so qos=false ! queue ! hailooverlay ! videoconvert ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
Redistribute latency...
Redistribute latency...
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Redistribute latency...
New clock: GstSystemClock
Last question: if I download an arbitrary ONNX model from the onnx/models repo, I sometimes get an "unsupported dynamic" error and sometimes a different one. I haven't been able to figure out the exact workflow to follow so that I can convert any model to HEF.