Hailo Community
How to apply 16-bits quantization to all the convolution layers in the model
Guides › optimization
victorc · August 26, 2024
You can add the following line to your model script (.alls) file:

quantization_param({conv*}, precision_mode=a16_w16)

This sets 16-bit precision for both activations (a16) and weights (w16) on every layer whose name matches the conv* pattern.
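The {conv*} argument is a wildcard over layer names. As a rough illustration of the matching semantics (assumed here to be glob-style, analogous to Python's fnmatch; the layer names below are hypothetical examples, not from a real HN graph):

```python
from fnmatch import fnmatch

# Hypothetical layer names as they might appear in a parsed model graph.
layers = ["conv1", "conv2", "conv15", "fc1", "output_layer1"]

# Layers a glob-style pattern like conv* would select (assumption:
# the alls wildcard behaves like shell-style globbing over layer names).
selected = [name for name in layers if fnmatch(name, "conv*")]
print(selected)  # ['conv1', 'conv2', 'conv15']
```

Only the matching convolution layers receive the a16_w16 precision mode; the remaining layers keep the default quantization.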