Model training on GPU

Hello

I trained YOLOv8 (detection) on a custom dataset using a GPU (RTX 4070) and exported it to ONNX format (opset=11),

but that model didn't compile in my virtual environment.

I checked GitHub (hailo_model_zoo/training/yolov8 at master · hailo-ai/hailo_model_zoo · GitHub).

Does Hailo support only CPU-trained models?

If not, how can I compile a GPU-trained model?

Welcome to the Hailo Community!

No, you can train a model using the GPU. The page you linked shows how the retraining Docker container is created with GPU support (see the --gpus all parameter).

What is the issue/error you see?

I recommend using the Hailo AI Software Suite Docker. You can use the hailomz command as shown on the page you linked, or write your own model-conversion script if you want to add additional processing or validation code.

Thank you for your reply

I ran into this error:

hailo_model_optimization.acceleras.utils.acceleras_exceptions.NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8n/conv63 due to unsupported required slope. Desired shift is 12.0, but op has only 8 data bits. This error raises when the data or weight range are not balanced. Mostly happens when using random calibration-set/weights, the calibration-set is not normalized properly or batch-normalization was not used during training.
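A rough way to read the numbers in this error (a back-of-the-envelope sketch, not Hailo's documented quantizer internals): with 8-bit quantization, the narrower or more unbalanced an activation range is, the larger the power-of-two shift needed to represent it, and the error fires when that shift exceeds the 8 available data bits.

```python
import math

# Toy illustration (NOT Hailo's actual implementation): approximate the
# power-of-two shift needed to represent an output range of width `rng`
# with 8-bit data. A balanced [0, 1] range fits; a collapsed range does not.
def required_shift(rng: float) -> int:
    return math.ceil(math.log2(255.0 / rng))

print(required_shift(1.0))       # -> 8, a sigmoid-like [0, 1] range fits in 8 bits
print(required_shift(1.0 / 16))  # -> 12, a collapsed range demands a shift of 12
```

This is consistent with the fix that appears later in the thread: forcing the output range of the affected layers to [0.0, 1.0] keeps the required shift within what 8 data bits can express.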

The model was trained on the GPU, and I used this command:

hailomz compile --ckpt /hailo_compile/BK/YOLOv8/Hailo/250404_ver_0.0.1_Seoyeon_yolov8_640/weights/best.onnx --calib-path /hailo_compile/BK/YOLOv8/data/Seoyeon_demo/images/augmented_images --yaml /hailo_compile/BK/YOLOv8/data/Seoyeon_demo/seoyeon_hailo.yaml --classes 2

The model best.onnx is the one I mentioned above.

When I run the command, the generated .alls file looks like this:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("../../postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)
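For context, the normalization1 line tells the device to apply (x - mean) / std per channel on-chip, with mean 0 and std 255 here. That implies the calibration images passed via --calib-path should be raw 0-255 pixels; pre-normalizing them yourself would feed the quantizer a mismatched range. A minimal sketch of the same arithmetic:

```python
# normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0]) in the .alls file
# applies (x - mean) / std on-device, mapping raw 0-255 pixels to [0, 1].
def normalize(pixel: float, mean: float = 0.0, std: float = 255.0) -> float:
    return (pixel - mean) / std

print(normalize(0))    # -> 0.0
print(normalize(255))  # -> 1.0
```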

I asked ChatGPT, which suggested the likely cause was the quantization of the normalization and conv63 layers, but the error message remained the same even after applying the suggested changes to those parts.

Please have a look at the following post, which covers the same issue and includes an answer from my colleague that solved it.

Hailo Community - Problem with model optimization

Please let us know whether this worked for you as well.

Thank you for the link.

I didn't fully understand it, but I replaced the existing sigmoid output activations with quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0]), and the GPU-trained model compiled into a .hef file!

I then tried the .hef file on a Raspberry Pi 5 (Hailo-8), and it worked.

Below are the before and after versions of the .alls file, for others to refer to.

Spec

  1. model: yolov8n
  2. Hailo version: 4.20
  3. environment: Docker image with a venv created inside
  4. inference test device: Raspberry Pi 5 with Hailo-8

Before

(this worked for CPU-trained models, but not for GPU-trained models)

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("../../postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)

After

(this worked for GPU-trained models)

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])
nms_postprocess("../../postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)