Thanks for your reply.
I read these documents, but I’m still confused.
To clarify: my model is YOLOv8n, but with 3 classes and a different input size (1024x1024), and it was trained on my custom dataset.
So how can I convert it from ONNX to HEF? I also need to do it with int16 precision.
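For the int16 part, from what I can find in the Dataflow Compiler docs, precision seems to be requested per layer in the model script (.alls) with a ``quantization_param`` command. A hedged sketch, where the layer name is a placeholder and I'm not certain this is the complete syntax:

```
quantization_param(yolov8n/conv1, precision_mode=a16_w16)
```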
After reading these docs, I have more questions:
start_node_names (list of str, optional): Name of the first ONNX node to parse.
end_node_names (list of str, optional): List of ONNX nodes after which parsing can stop, once all of them have been parsed.
I understand that I can get these node names from my ONNX file, but in YOLOv8n I have:
Input: images [batch x channels x height x width] →
Node name: /model.0/conv/Conv
Input(s): images, model.0.conv.weight, model.0.conv.bias
Output(s): /model.0/conv/Conv_output_0
Op type: Conv
…
…
…
some convs, concat, split … etc
at the end:
…
Node name: /model.22/Div_1
Input(s): /model.22/Add_2_output_0, /model.22/Constant_11_output_0
Output(s): /model.22/Div_1_output_0
Op type: Div
Node name: /model.22/Sub_1
Input(s): /model.22/Add_1_output_0, /model.22/Sub_output_0
Output(s): /model.22/Sub_1_output_0
Op type: Sub
Node name: /model.22/Concat_4
Input(s): /model.22/Div_1_output_0, /model.22/Sub_1_output_0
Output(s): /model.22/Concat_4_output_0
Op type: Concat
Node name: /model.22/Constant_12
Input(s):
Output(s): /model.22/Constant_12_output_0
Op type: Constant
Node name: /model.22/Mul_2
Input(s): /model.22/Concat_4_output_0, /model.22/Constant_12_output_0
Output(s): /model.22/Mul_2_output_0
Op type: Mul
Node name: /model.22/Sigmoid
Input(s): /model.22/Split_output_1
Output(s): /model.22/Sigmoid_output_0
Op type: Sigmoid
Node name: /model.22/Concat_5
Input(s): /model.22/Mul_2_output_0, /model.22/Sigmoid_output_0
Output(s): output0
Op type: Concat
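To make sense of ``end_node_names``, I wrote a small stdlib check over a hand-copied subset of the dump above: a graph "end" is a node whose outputs feed no other node. (This is just my own sanity check, not the Hailo parser; from the examples it also looks like end nodes are often set to earlier conv outputs, so the box decoding is left to the NMS postprocess instead of the parsed graph.)

```python
# Node list hand-copied from the ONNX dump above: (name, inputs, outputs).
nodes = [
    ("/model.22/Div_1", ["/model.22/Add_2_output_0", "/model.22/Constant_11_output_0"], ["/model.22/Div_1_output_0"]),
    ("/model.22/Sub_1", ["/model.22/Add_1_output_0", "/model.22/Sub_output_0"], ["/model.22/Sub_1_output_0"]),
    ("/model.22/Concat_4", ["/model.22/Div_1_output_0", "/model.22/Sub_1_output_0"], ["/model.22/Concat_4_output_0"]),
    ("/model.22/Mul_2", ["/model.22/Concat_4_output_0", "/model.22/Constant_12_output_0"], ["/model.22/Mul_2_output_0"]),
    ("/model.22/Sigmoid", ["/model.22/Split_output_1"], ["/model.22/Sigmoid_output_0"]),
    ("/model.22/Concat_5", ["/model.22/Mul_2_output_0", "/model.22/Sigmoid_output_0"], ["output0"]),
]

# Every tensor that some node consumes as an input.
consumed = {t for _, ins, _ in nodes for t in ins}

# A node is an endpoint if none of its outputs is consumed downstream.
end_nodes = [name for name, _, outs in nodes
             if not any(o in consumed for o in outs)]
print(end_nodes)  # only Concat_5's output (output0) feeds no other node
```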
So… I found some instructions saying that I should use a configuration file (.yaml). What is it?
anchors:
scale_factors:
- 0.5
- 0.5
regression_length: 15
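If I understand the docs, ``regression_length`` relates to the DFL (distribution focal loss) box encoding that YOLOv8 uses: each box side is predicted as a distribution over bins and decoded as the distribution's expected value. A stdlib sketch of that decode as I read it (my illustration, not Hailo's code; the bin count here is an assumption):

```python
import math

def dfl_decode(logits):
    """Decode one box side from a DFL distribution: softmax over the
    bins, then the expected bin index (gives sub-bin precision)."""
    m = max(logits)                          # stabilize the softmax
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    probs = [e / s for e in exps]
    return sum(i * p for i, p in enumerate(probs))

# 16 bins (indices 0..15, so a "regression length" of 15 as max index,
# or 16 as bin count); a strong peak at bin 4 decodes to roughly 4.
logits = [0.0] * 16
logits[4] = 8.0
print(dfl_decode(logits))
```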
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("../../postprocess_config/yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)
performance_param(optimize_for_power=True)
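If I read these alls commands right, the normalization and the added sigmoid are simple element-wise operations; a stdlib sketch of the arithmetic as I understand it (not Hailo code):

```python
import math

def normalize(pixel, mean, std):
    # normalization([0, 0, 0], [255, 255, 255]) in the alls maps
    # raw 0..255 pixel values into 0..1 on-chip.
    return (pixel - mean) / std

def sigmoid(x):
    # change_output_activation(..., sigmoid) appends this activation
    # to the listed conv outputs.
    return 1.0 / (1.0 + math.exp(-x))

print(normalize(255.0, 0.0, 255.0))  # 1.0
print(sigmoid(0.0))                  # 0.5
```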
{
"nms_scores_th": 0.2,
"nms_iou_th": 0.7,
"image_dims": [
640,
640
],
"max_proposals_per_class": 100,
"classes": 80,
"regression_length": 16,
"background_removal": false,
"bbox_decoders": [
{
"name": "yolov8n/bbox_decoder41",
"stride": 8,
"reg_layer": "yolov8n/conv41",
"cls_layer": "yolov8n/conv42"
},
{
"name": "yolov8n/bbox_decoder52",
…
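Since my model has 3 classes and a 1024x1024 input instead of the zoo defaults (80 classes, 640x640), I assume the NMS JSON needs matching edits; a stdlib sketch of what I think the adaptation looks like (field names copied from the JSON above; keeping everything else unchanged is my assumption, and the decoder/layer names would still have to match the actual model):

```python
import json

# Top-level fields copied from the zoo's yolov8 NMS config above.
config = {
    "nms_scores_th": 0.2,
    "nms_iou_th": 0.7,
    "image_dims": [640, 640],
    "max_proposals_per_class": 100,
    "classes": 80,
    "regression_length": 16,
    "background_removal": False,
}

# Adapt to the custom model: 3 classes, 1024x1024 input.
config["classes"] = 3
config["image_dims"] = [1024, 1024]

print(json.dumps(config, indent=2))
```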
Why are the normalization and the output activations being changed?
I saw on the graph, after converting from ONNX to HAR, that my architecture changes, and I don't know why.
Why do I have to use images from my own dataset when using a zoo model?
.. _Datasets:
Datasets
========
| The Hailo Model Zoo works with TFRecord files which store the images and labels of the dataset for evaluation and calibration.
| The instructions on how to create the TFRecord files are given below. By default, datasets are stored in the following path:

.. code-block::

    ~/.hailomz

It is recommended that the user defines the data directory path by setting the ``HMZ_DATA`` environment variable.

.. code-block::

    export HMZ_DATA=/new/path/for/storage/
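Trying the ``HMZ_DATA`` part out, it seems to be just an ordinary environment variable (the path below is a placeholder):

```shell
# Placeholder path: point HMZ_DATA at wherever the TFRecords should live.
export HMZ_DATA=/data/hailo_datasets
echo "$HMZ_DATA"
```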
Should I prepare the data in the same format as in the examples (labels, images, train/val folders)?
And how do I use the config files?
.. _yaml_description:

Model Zoo YAML Description
==========================

Properties
----------

* | **network**
* | **normalization_in_net** *(boolean)*\ : Whether or not the network includes an on-chip normalization layer. If so, the normalization layer will appear in the .alls file that is used. Default: ``False``.
  | Example alls command: ``normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])``
  | If the alls doesn't include the required normalization, then the MZ (and the user application) will apply normalization before feeding inputs to the network.
* | **padding_color** *(['integer', …])*\ : In the training environments, the input images to the model have used this color to indicate "padding" around resized images. Default: ``114`` for YOLO architectures, ``0`` for others.
* | **bgr2rgb** *(boolean)*\ : In some training frameworks, the models are trained on BGR inputs. When we want to feed RGB images to the network (whether in the MZ or in the user application), we need to transform the images from RGB to BGR. The MZ automatically inserts this layer into the on-chip model. We have already set the "bgr2rgb" flag in the yaml files that correspond to the relevant retraining dockers. Default: ``False``.

- ``evaluation`` and ``postprocessing`` properties aren't needed for compilation, as they are used by the Model Zoo for model evaluation (which isn't supported yet for retrained models). Also, the ``info`` field is just used for description.
- Only in the YOLOv4 family, the ``evaluation.classes`` and ``postprocessing.anchors.sizes`` fields are used for compilation, so these values should be updated even if just for compilation.
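To check my understanding of the ``bgr2rgb`` flag described above: the inserted layer just reverses the channel order per pixel. A minimal stdlib sketch (my illustration, not the MZ implementation):

```python
def bgr2rgb(pixel):
    """Reverse the channel order of one (B, G, R) pixel to (R, G, B);
    per-pixel, this is all the inserted color-conversion layer does."""
    b, g, r = pixel
    return (r, g, b)

print(bgr2rgb((10, 20, 30)))  # (30, 20, 10)
```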
…there is too much information out there; I'm a little lost and don't know how to convert from ONNX to HEF correctly.