Hi,
I have a YOLO11m model trained at 1920x1920 on a custom dataset with 10 classes, which I exported to ONNX at 1088x1920 (H, W) with opset 11.
I want to convert that ONNX model to a Hailo .hef using the Hailo AI Suite Docker, to run inference on a Hailo-8.
I tried to follow the YOLOv8 retraining example given in the Hailo Model Zoo Git repo.
For the calibration dataset I am using the same validation set that was used during training. It has more than 3000 images at various resolutions, going up to 2K.
In the tutorial I saw that I have to generate a TFRecord using the create_coco_tfrecord.py script, so I generated a TFRecord for my custom dataset with COCO-style annotations.
Then I tried to run the model compilation with hailomz, after modifying yolov11m.yaml, base/yolo.yaml, and yolov11m_nms_config.yaml to use an input size of 1088x1920:
hailomz compile --ckpt atr_yolo11m_hailo_op11_1088x1920.onnx \
  --calib-path val.tfrecord --yaml yolov11m.yaml --classes 10 --hw-arch hailo8 --performance
I get this error:
"tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__ReduceDataset_Targuments_0_Tstate_1_output_types_1_device_/job:localhost/replica:0/task:0/device:CPU:0}} Error in user-defined function passed to MapDataset:3 transformation with iterator: ReduceIterator::Root::Prefetch::FiniteTake:
Paddings must be non-negative: 0 -88 [[{{node PadV2}}]] [Op:ReduceDataset] name:
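My reading of this error (an assumption based on the "0 -88" values, not something confirmed by Hailo): the TFRecord preprocessing pads each image up to the network input size, so any image with a side larger than the 1088x1920 target produces a negative pad amount. A minimal sketch of the arithmetic:

```python
def pad_amounts(img_h, img_w, target_h=1088, target_w=1920):
    """Padding needed to bring an image up to the network input size.

    Negative values mean the image is larger than the target in that
    dimension, which is exactly what the PadV2 op rejects.
    """
    return target_h - img_h, target_w - img_w


# A 1176-pixel-tall, 1920-pixel-wide image would produce the -88 and 0
# seen in the traceback:
print(pad_amounts(1176, 1920))  # -> (-88, 0)
```

If that assumption holds, it would also explain why the COCO val2017 TFRecord compiles fine: those images are all small enough to be padded up, while my dataset contains 2K images that exceed the target.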
Just to give it a try, when I used a TFRecord generated from the COCO val2017 dataset, I don't see this error and the compilation proceeds.
In the tutorial I also saw that the calibration dataset can be a directory of images, so I pointed hailomz --calib-path at the directory containing the validation images of my custom dataset. This time hailomz proceeds with model optimization, but then fails at the end of compilation with the following error:
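Since optimization only needs representative inputs, one workaround I'm considering (my own idea, not from the tutorial) is to letterbox all calibration images to exactly 1088x1920 before pointing --calib-path at them, so no image ever exceeds the network input. A sketch of just the letterbox geometry (the actual resize/pad would use PIL or OpenCV on top of these numbers):

```python
def letterbox_geometry(img_h, img_w, target_h=1088, target_w=1920):
    """Scale factor and top/left padding that fit (img_h, img_w) into
    (target_h, target_w) while preserving aspect ratio."""
    scale = min(target_h / img_h, target_w / img_w)
    new_h, new_w = round(img_h * scale), round(img_w * scale)
    pad_top = (target_h - new_h) // 2
    pad_left = (target_w - new_w) // 2
    return scale, (new_h, new_w), (pad_top, pad_left)


# e.g. a 2K (1440x2560) image scales down to fit the 1088x1920 input:
print(letterbox_geometry(1440, 2560))  # -> (0.75, (1080, 1920), (4, 0))
```

With this, the padded region is symmetric and never negative, matching what the YOLO preprocessing does at inference time.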
512/512 ━━━━━━━━━━━━━━━━━━━━ 751s 1s/step - _distill_loss_yolov11m/conv102: 0.4609 - _distill_loss_yolov11m/conv105: 0.6831 - _distill_loss_yolov11m/conv67: 0.3585 - _distill_loss_yolov11m/conv71: 0.3621 - _distill_loss_yolov11m/conv74: 0.5544 - _distill_loss_yolov11m/conv83: 0.6819 - _distill_loss_yolov11m/conv87: 0.3968 - _distill_loss_yolov11m/conv90: 0.4695 - _distill_loss_yolov11m/conv99: 0.6219 - total_distill_loss: 4.5892
[info] Model Optimization Algorithm Quantization-Aware Fine-Tuning is done (completion time is 00:50:42.38)
[info] Starting Layer Noise Analysis
Full Quant Analysis: 100%|█████████████████████████████████████████| 8/8 [02:00<00:00, 15.10s/iterations]
[info] Model Optimization Algorithm Layer Noise Analysis is done (completion time is 00:02:04.91)
[info] Model Optimization is done
[info] Saved HAR to: /local/shared_with_docker/yolov11/yolov11m.har
Using generic alls script found in /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov11m.alls because there is no specific hardware alls
[info] Loading model script commands to yolov11m from /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov11m.alls
[info] ParsedPerformanceParam command, setting optimization_level(max=2)
[info] Appending model script commands to yolov11m from string
[info] ParsedPerformanceParam command, setting optimization_level(max=2)
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[info] Adding an output layer after conv71
[info] Adding an output layer after conv74
[info] Adding an output layer after conv87
[info] Adding an output layer after conv90
[info] Adding an output layer after conv102
[info] Adding an output layer after conv105
[info] Building optimization options for network layers…
[info] Successfully built optimization options - 51s 2ms
[error] Mapping Failed (allocation time: 51s)
Performance Flow requires automatic resource utilization[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: Performance Flow requires automatic resource utilization
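One thing I plan to try (an assumption on my part, not a confirmed fix): dropping the --performance flag, since the log shows it raises optimization_level to max and the error message suggests that flow requires the compiler's automatic resource allocation. The retry would look like this, with the other flags unchanged from my original run:

```shell
# Hypothetical workaround: same compile command without --performance.
# "val_images/" stands in for my calibration image directory.
hailomz compile --ckpt atr_yolo11m_hailo_op11_1088x1920.onnx \
  --calib-path val_images/ --yaml yolov11m.yaml \
  --classes 10 --hw-arch hailo8
```

Is that the right direction, or is there a model-script setting that makes --performance work here?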
This is confusing to me: the calibration path contains only images, with no ground truths. How does the Hailo Model Zoo run the quantization process and compile the model without ground truths?
@HAILO, please let me know the correct and exact way to compile a custom-trained YOLO11 model (and, if possible, YOLO26), along with the best parameters for the highest possible accuracy. Let's assume only one model will be running at a time on the Hailo chip.
Thank you.