Problem With Model Optimization

I thought this line in the alls file is responsible for normalization:

model_script_commands = [
    'normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])']

Please correct me if I am wrong
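For context, a `normalization(mean, std)` model-script command of this form applies the usual per-channel `(x - mean) / std`, so `[0, 0, 0]` / `[255, 255, 255]` maps 0–255 pixel values into [0, 1]. A minimal NumPy sketch of that arithmetic (illustrative only, not the DFC implementation):

```python
import numpy as np

# Illustrative: the normalization layer computes (x - mean) / std per channel.
mean = np.array([0.0, 0.0, 0.0])
std = np.array([255.0, 255.0, 255.0])

pixel = np.array([0.0, 128.0, 255.0])  # one RGB pixel in the 0-255 range
normalized = (pixel - mean) / std
print(normalized)  # 0, ~0.502, 1
```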

Correct, this is the line that performs the normalization.

So how can we modify it to get rid of this error?

I understood it as: the difference between input and output exceeds the representable number range, if that's correct. But the network seems valid, so I would interpret that large difference as a "clear decision" of that node/path.

I don't really know what to do with the compiler's message. The input data was normalized, as was the calibration data produced by hailomz. Batch normalization is used by default in YOLOv8.

@Nadav, do you have any indication of what to do to get the model compiled?

Many thanks!

I might have a solution. I'm not sure if it's general, but I think it's worth checking.
It seems that on datasets with a low number of classes and/or a low number of pictures, some nodes in YOLOv8 get almost nullified.
I’ve used this command to force a wider dynamic range on the outputs:
quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])
This is relevant for YOLOv8s models trained from our retraining dockers. These are the end nodes before the NMS layer.
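My reading of why this helps (a sketch of standard affine quantization, not the DFC internals): the quantization scale is normally derived from the calibration statistics, so a near-nullified output gives a degenerate, very narrow range, while `force_range_out=[0.0, 1.0]` pins the range regardless of the data. The helper below is illustrative:

```python
import numpy as np

def quantize_uint8(x, range_min, range_max):
    """Affine uint8 quantization over a fixed dynamic range (illustrative)."""
    scale = (range_max - range_min) / 255.0
    q = np.clip(np.round((x - range_min) / scale), 0, 255).astype(np.uint8)
    return q, scale

# Calibration on a near-nullified output sees only a tiny value range...
acts = np.array([0.0, 0.001, 0.002])
q_narrow, s_narrow = quantize_uint8(acts, acts.min(), acts.max())

# ...while force_range_out=[0.0, 1.0] fixes the range regardless of the data.
q_forced, s_forced = quantize_uint8(acts, 0.0, 1.0)

print(s_narrow, s_forced)  # the forced scale spans the full [0, 1] interval
```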

Many thanks for your response!

I've checked with the YOLOv8n model that I trained with your retraining docker (it seems to have the same end nodes) and can confirm that it worked for me!

I haven't tested the performance yet, but compilation worked.

(I will search the docs to see if there is an explanation of what force_range_out actually does 🙂)

Where did you use this command line: quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])?
I want to replicate your steps.
Thank you

I've added it to the YOLO alls file. You can get the path when you start compiling; there is a line that outputs which alls file is used.

In the docker container it is

/local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/<YoloVersionSize>.alls

But I am having issues running the resulting file. I get a message that the HEF file is corrupted. I wonder if that is because of the version bump from 4.17 to 4.18, but I need to look into it more. It would be interesting to know if you have the same issue after compiling.

I modified the line for my case to quantization_param([conv63], force_range_out=[0.0, 1.0]) because conv42 and conv53 didn't cause an issue in my model. It compiled successfully; whether the .hef file runs properly, I'll be able to say on Monday.

Yes… if the HEF was compiled on a more recent version (3.28) than the one compatible with your runtime, you could get this. We will improve the error message for that case.

I can confirm, this solved the same issue I had with a yolov8s model that I trained with the Hailo Model Zoo retraining docker.

Just for information, my model had 30 classes and was trained with 1000 images over 100 epochs.

This is what I did to the file below:
/home/abc/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8s.alls

quantization_param([conv42, conv53, conv63], force_range_out=[0.0, 1.0])
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("../../postprocess_config/yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)
After doing this, it compiled the model. I still need to test it on the Pi 5.
Thanks for the help. I will post whether the model works on the Pi 5.
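As a side note on the change_output_activation(conv, sigmoid) lines above: sigmoid squashes the raw outputs into (0, 1), which is consistent with forcing force_range_out=[0.0, 1.0] on the same end nodes. The function itself, for reference:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: squashes any real value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5
print(sigmoid(10.0))  # ~0.99995, close to the forced upper bound of 1.0
```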

I ran the command after transferring it from the server:
python basic_pipelines/detection.py --labels-json resources/mf-labels.json --hef resources/yolov8s.hef --input /dev/video0

I got this error:
hailomuxer name=hmux v4l2src device=/dev/video0 name=src_0 ! video/x-raw, width=640, height=480, framerate=30/1 ! queue name=queue_scale max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! videoscale n-threads=2 ! queue name=queue_src_convert max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! videoconvert n-threads=3 name=src_convert qos=false ! video/x-raw, format=RGB, width=640, height=640, pixel-aspect-ratio=1/1 ! tee name=t ! queue name=bypass_queue max-size-buffers=20 max-size-bytes=0 max-size-time=0 ! hmux.sink_0 t. ! queue name=queue_hailonet max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! videoconvert n-threads=3 ! hailonet hef-path=resources/yolov8s.hef batch-size=2 nms-score-threshold=0.3 nms-iou-threshold=0.45 output-format-type=HAILO_FORMAT_TYPE_FLOAT32 force-writable=true ! queue name=queue_hailofilter max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! hailofilter so-path=/home/armtronix/hailo-rpi5-examples/basic_pipelines/…/resources/libyolo_hailortpp_post.so config-path=resources/mf-labels.json qos=false ! queue name=queue_hmuc max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! hmux.sink_1 hmux. ! queue name=queue_hailo_python max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! queue name=queue_user_callback max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! identity name=identity_callback ! queue name=queue_hailooverlay max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! hailooverlay ! queue name=queue_videoconvert max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! videoconvert n-threads=3 qos=false ! queue name=queue_hailo_display max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=False signal-fps-measurements=true
[HailoRT] [error] CHECK failed - HEF file length does not match
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
[HailoRT] [error] Failed parsing HEF file
[HailoRT] [error] Failed creating HEF
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
CHECK_EXPECTED_AS_STATUS failed with status=26
Segmentation fault

I was able to run the model after downgrading the Dataflow Compiler to 3.27.

I can also confirm that it works wonderfully! After downgrading the Dataflow Compiler, inference speed on the Raspberry Pi 5 with YOLOv8n is down to about 20 ms with batch size 1, without any optimization. That is an amazing speed improvement!

This also worked for me with my YOLOv8m model.

How can I check the version compatibility between the DFC and HailoRT?

Hey @mmmsk,

The DFC version 3.28 is compatible with HailoRT version 4.18, while DFC version 3.27 works with HailoRT version 4.17.

If you have these versions installed, you can check them by running the command hailo --version, which will display the currently installed versions of HailoRT and DFC.
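The pairings mentioned in this thread can be captured in a tiny lookup helper. Note the table below contains only the two pairs stated here, not the full official compatibility matrix, and the names are illustrative:

```python
# DFC -> HailoRT pairings mentioned in this thread; consult the Developer Zone
# compatibility table for the authoritative, complete list.
DFC_TO_HAILORT = {
    "3.28": "4.18",
    "3.27": "4.17",
}

def compatible(dfc, hailort):
    """True if this DFC/HailoRT pair matches one of the pairings above."""
    return DFC_TO_HAILORT.get(dfc) == hailort

print(compatible("3.27", "4.18"))  # False: HEFs from DFC 3.27 pair with HailoRT 4.17
```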

@omria
Thanks for the answer.
I'm using DFC 3.27 for compilation and HailoRT 4.18 for inference.
Are these versions compatible? In my environment, the DFC and HailoRT are installed in separate, independent environments.

Hi @mmmsk,
If you check the Developer Zone documentation for the SW suite, you'll find the compatibility table:
