Hailo Model Zoo - Failing to compile a DamoYolo

Hello !

I’m very (VERY) new to Hailo and I’ve been trying to use the Model Zoo to compile a simple damoyolo detection model to benchmark. I followed the user guide, but I must be doing something wrong because it fails no matter what parameters/adjustments I try.

I tried 2 different models (damoyolo_tinynasL25_S and damoyolo_tinynasL35_M), either by using the ONNX referenced in the .hailomz/ directory (specifying only the model name, without an .onnx), or by downloading an .onnx myself and passing it with --ckpt.
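
For example, these are the two forms I used (paths are mine; the second form is the --ckpt variant):

# using the Model Zoo reference ONNX (downloaded into ~/.hailomz automatically)
hailomz compile damoyolo_tinynasL25_S --calib-path ~/rendu/COCO/val2017_resized/
# using a local .onnx instead
hailomz compile damoyolo_tinynasL25_S --ckpt damoyolo_tinynasL25_S.onnx --calib-path ~/rendu/COCO/val2017_resized/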

I tried directly with the Model Zoo (sorry for the print()s everywhere in between; I tried to debug it myself to see where exactly the issue was, but I didn’t find much apart from a version “unknown” during the metadata load):

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailomz compile damoyolo_tinynasL25_S --calib-path ~/rendu/COCO/val2017_resized/
<Hailo Model Zoo INFO> Start run for network damoyolo_tinynasL25_S ...
<Hailo Model Zoo INFO> Initializing the hailo8 runner...
----------parse_model()------------
model_name :  damoyolo_tinynasL25_S
input_framework :  onnx
model_path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL25_S/pretrained/2022-12-19/damoyolo_tinynasL25_S.onnx
tensor_shapes :  None
start_node_names :  None
end_node_names :  ['Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339']
------NetParser.run()--------
------NetParser.run()----1----
Translating onnx model
model path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL25_S/pretrained/2022-12-19/damoyolo_tinynasL25_S.onnx
self._start_nodes :  None
self._end_nodes :  ['Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339']
tensor_shapes :  None
✅ Real Start Node Names : images
✅ Real End Node Names   : ['output', '848']
✅ Real Net Input Shapes : {'images': [1, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL25_S
[info] Restored ONNX model damoyolo_tinynasL25_S (completion time: 00:00:00.18)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:01.48)
[info] Start nodes mapped from original model: 'images': 'damoyolo_tinynasL25_S/input_layer1'.
[info] End nodes mapped from original model: 'Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339'.
[info] Translation completed on ONNX model damoyolo_tinynasL25_S (completion time: 00:00:01.91)
Done translating model
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL25_S.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to damoyolo_tinynasL25_S from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/hailo8/base/damoyolo_tinynasL25_S.alls
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[warning] Reducing compression ratio to 0 because the number of parameters in the network is not large enough (16M and need at least 20M). Can be enforced using model_optimization_config(compression_params, auto_4bit_weights_ratio=0.200)
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:00.51)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:20<00:00,  3.14entries/s]
[info] Model Optimization Algorithm Statistics Collector is done (completion time is 00:00:21.26)
[info] Output layer damoyolo_tinynasL25_S/conv60 with sigmoid activation was detected. Forcing its output range to be [0, 1] (original range was [4.871636974712601e-06, 0.7469313740730286]).
[info] Output layer damoyolo_tinynasL25_S/conv72 with sigmoid activation was detected. Forcing its output range to be [0, 1] (original range was [1.6503779534104979e-06, 0.8708891272544861]).
[info] Output layer damoyolo_tinynasL25_S/conv83 with sigmoid activation was detected. Forcing its output range to be [0, 1] (original range was [2.5452700356254354e-06, 0.935403048992157]).
[info] Starting Fix zp_comp Encoding
[info] Model Optimization Algorithm Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL25_S.har
[info] Loading model script commands to damoyolo_tinynasL25_S from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/hailo8/base/damoyolo_tinynasL25_S.alls
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[info] Running Auto-Merger
[info] Auto-Merger is done
compiler: ../src/allocator/context_split.cpp:51: static std::shared_ptr<allocator::ContextSplit> allocator::ContextSplit::Create(std::shared_ptr<utils::GraphInterface<network_graph::NetworkNode, network_graph::NetworkEdge> >, const NodesByContextMap&, const allocator::ContextPlacements&): Assertion `validate_context_placements(*result)' failed.

[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed with unexpected crash

I also tried playing with the .yaml files for both models, putting the (supposedly) correct values for start_node and end_node, but it didn’t really change much; it just crashed later down the road.
Here is the run with “images” and [“output”, “1171”] as the start/end node values in hailo_model_zoo/cfg/networks/damoyolo_tinynasL35_M.yaml:
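
Concretely, I edited the parser section of the network .yaml to something like this (I’m not 100% sure this is the exact layout the Model Zoo expects; I copied the structure of the existing file, so treat it as a sketch):

parser:
  nodes:
  - images
  - - output
    - '1171'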

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailomz compile damoyolo_tinynasL35_M --calib-path ~/rendu/COCO/val2017_resized/
<Hailo Model Zoo INFO> Start run for network damoyolo_tinynasL35_M ...
<Hailo Model Zoo INFO> Initializing the hailo8 runner...
----------parse_model()------------
model_name :  damoyolo_tinynasL35_M
input_framework :  onnx
model_path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL35_M/pretrained/2022-12-19/damoyolo_tinynasL35_M.onnx
tensor_shapes :  None
start_node_names :  images
end_node_names :  ['output', '1171']
------NetParser.run()--------
------NetParser.run()----1----
Translating onnx model
model path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL35_M/pretrained/2022-12-19/damoyolo_tinynasL35_M.onnx
self._start_nodes :  images
self._end_nodes :  ['output', '1171']
tensor_shapes :  None
✅ Real Start Node Names : images
✅ Real End Node Names   : ['output', '1171']
✅ Real Net Input Shapes : {'images': [1, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL35_M
[info] Restored ONNX model damoyolo_tinynasL35_M (completion time: 00:00:00.34)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.67)
[info] Unable to simplify the model: Parsing failed. The errors found in the graph are:
 UnsupportedShuffleLayerError in op Reshape_533: Failed to determine type of layer to create in node Reshape_533
 UnsupportedShuffleLayerError in op Reshape_538: Failed to determine type of layer to create in node Reshape_538
Please try to parse the model again, using these end node names: Slice_520, Concat_521
[info] According to recommendations, retrying parsing with end node names: ['Slice_520', 'Concat_521'].
Translating onnx model
model path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL35_M/pretrained/2022-12-19/damoyolo_tinynasL35_M.onnx
self._start_nodes :  images
self._end_nodes :  ['Slice_520', 'Concat_521']
tensor_shapes :  None
✅ Real Start Node Names : images
✅ Real End Node Names   : ['output', '1171']
✅ Real Net Input Shapes : {'images': [1, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL35_M
[info] Restored ONNX model damoyolo_tinynasL35_M (completion time: 00:00:00.26)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.46)
[info] Start nodes mapped from original model: 'images': 'damoyolo_tinynasL35_M/input_layer1'.
[info] End nodes mapped from original model: 'Slice_520', 'Concat_521'.
[info] Translation completed on ONNX model damoyolo_tinynasL35_M (completion time: 00:00:03.40)
Done translating model
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL35_M.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to damoyolo_tinynasL35_M from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/damoyolo_tinynasL35_M.alls
[info] Starting Model Optimization
[warning] line 2: layers ['damoyolo_tinynasL35_M/output_layer3', 'damoyolo_tinynasL35_M/output_layer4', 'damoyolo_tinynasL35_M/output_layer5', 'damoyolo_tinynasL35_M/output_layer6'] could not be found in scope.
Traceback (most recent call last):
  File "/home/p20c02/.venv_hailo/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 250, in compile
    runner = _ensure_optimized(runner, logger, args, network_info)
  File "/home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
    optimize_model(
  File "/home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 360, in optimize_model
    runner.optimize(calib_feed_callback)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2130, in optimize
    self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1972, in _optimize
    self._sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1122, in full_quantization
    self.setup_quantization(self.calibration_data, data_type, work_dir=work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1044, in setup_quantization
    config = self.apply_quantization_script(data, data_type)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 991, in apply_quantization_script
    return self._apply_quantization_script_with_flavor()
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1009, in _apply_quantization_script_with_flavor
    config = verify_commands(self._model, mo_commands, flavor_config)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/hailo_nn/model_optimization/configuration_verifier.py", line 65, in verify_commands
    verifier = ModelConfigurationValidator(hn_model, commands, allocation_mode, pre_quantization_mode)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/hailo_nn/model_optimization/configuration_verifier.py", line 120, in __init__
    self._config = ModelOptimizationConfig(**commands)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ModelOptimizationConfig
finetune -> __root__
  ; loss_factors length must match loss_layer_names length; policy from line 2, batch_size from line 2, loss_layer_names from line 2, loss_factors from line 2; command was post_quantization_optimization(finetune,policy=enabled,batch_size=4,loss_layer_names=[conv81,conv95,conv109,output_layer1,output_layer2,output_layer3,output_layer4,output_layer5,output_layer6],loss_factors=[1,1,1,2,2,2,2,2,2]) (type=value_error)

I also tried the Hailo CLI commands directly (parser, then optimize/quantization), but it’s giving mixed results:
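
The sequence I was trying to reproduce by hand was roughly this (parse, then optimize/quantize, then compile with hailo compiler; the exact flags and the name of the optimized HAR may differ):

hailo parser onnx damoyolo_tinynasL25_S.onnx --hw-arch hailo8
hailo optimize damoyolo_tinynasL25_S.har --calib-set-path ~/rendu/COCO/val2017_resized/
hailo compiler damoyolo_tinynasL25_S_optimized.har    # or whatever HAR the optimize step saved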

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailo parser onnx damoyolo_tinynasL25_S.onnx 
------NetParser.run()--------
------NetParser.run()----1----
Translating onnx model
model path :  damoyolo_tinynasL25_S.onnx
self._start_nodes :  None
self._end_nodes :  None
tensor_shapes :  None
✅ Real Start Node Names : input
✅ Real End Node Names   : ['boxes', 'scores', 'classes']
✅ Real Net Input Shapes : {'input': [0, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL25_S
[info] Restored ONNX model damoyolo_tinynasL25_S (completion time: 00:00:00.19)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:01.93)
[info] Unable to simplify the model: Parsing failed. The errors found in the graph are:
 UnsupportedLogitsLayerError in op /1/ArgMax: ArgMax layer /1/ArgMax has unsupported axis 2.
 UnsupportedShuffleLayerError in op /0/head/Transpose_1: Failed to determine type of layer to create in node /0/head/Transpose_1
 UnsupportedShuffleLayerError in op /0/head/integral/Reshape: Failed to determine type of layer to create in node /0/head/integral/Reshape
 UnsupportedShuffleLayerError in op /0/head/integral/Reshape_1: Failed to determine type of layer to create in node /0/head/integral/Reshape_1
 UnsupportedShuffleLayerError in op /0/head/Transpose_5: Failed to determine type of layer to create in node /0/head/Transpose_5
 UnsupportedShuffleLayerError in op /0/head/Transpose_9: Failed to determine type of layer to create in node /0/head/Transpose_9
Please try to parse the model again, using these end node names: /1/ReduceMax, /0/head/Concat_32, /0/head/Unsqueeze_54, /0/head/Concat_36, /0/head/Concat_40, /0/head/Gather_24, /0/head/Softmax_2, /0/head/Gather_22, /0/head/Slice_12, /0/head/Softmax_1, /0/head/Softmax
Parsing failed with recommendations for end node names: ['/1/ReduceMax', '/0/head/Concat_32', '/0/head/Unsqueeze_54', '/0/head/Concat_36', '/0/head/Concat_40', '/0/head/Gather_24', '/0/head/Softmax_2', '/0/head/Gather_22', '/0/head/Slice_12', '/0/head/Softmax_1', '/0/head/Softmax'].
Would you like to parse again with the recommendation? (y/n) 
y
[info] According to recommendations, retrying parsing with end node names: ['/1/ReduceMax', '/0/head/Concat_32', '/0/head/Unsqueeze_54', '/0/head/Concat_36', '/0/head/Concat_40', '/0/head/Gather_24', '/0/head/Softmax_2', '/0/head/Gather_22', '/0/head/Slice_12', '/0/head/Softmax_1', '/0/head/Softmax'].
Translating onnx model
model path :  damoyolo_tinynasL25_S.onnx
self._start_nodes :  None
self._end_nodes :  ['/1/ReduceMax', '/0/head/Concat_32', '/0/head/Unsqueeze_54', '/0/head/Concat_36', '/0/head/Concat_40', '/0/head/Gather_24', '/0/head/Softmax_2', '/0/head/Gather_22', '/0/head/Slice_12', '/0/head/Softmax_1', '/0/head/Softmax']
tensor_shapes :  None
✅ Real Start Node Names : input
✅ Real End Node Names   : ['boxes', 'scores', 'classes']
✅ Real Net Input Shapes : {'input': [0, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL25_S
[info] Restored ONNX model damoyolo_tinynasL25_S (completion time: 00:00:00.14)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.10)
[info] Unable to simplify the model: The original node name /0/head/Concat_32 in end_node_names is missing in the HN.
Traceback (most recent call last):
  File "/home/p20c02/.venv_hailo/bin/hailo", line 8, in <module>
    sys.exit(main())
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 116, in main
    return a.run()
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 64, in run
    ret_val = self._run(argv)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 111, in _run
    return args.func(args)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/tools/parser_cli.py", line 229, in run
    return self._handle_recommendation_exception(err, args, net_name, tensor_shapes, command, save_model)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/tools/parser_cli.py", line 246, in _handle_recommendation_exception
    runner = self._parse(net_name, args, tensor_shapes)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/tools/parser_cli.py", line 317, in _parse
    runner.translate_onnx_model(
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1177, in translate_onnx_model
    parser.translate_onnx_model(
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 262, in translate_onnx_model
    raise e from None
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 239, in translate_onnx_model
    parsing_results = self._parse_onnx_model_to_hn(
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 320, in _parse_onnx_model_to_hn
    return self.parse_model_to_hn(
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/parser/parser.py", line 372, in parse_model_to_hn
    hailo_nn = fuser.convert_model()
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/model_translator/fuser/fuser.py", line 112, in convert_model
    self._finalize_fused_model()
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/model_translator/fuser/fuser.py", line 473, in _finalize_fused_model
    self._output_graph.update_output_layers_order(self._end_node_names)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/hailo_nn/hailo_nn.py", line 917, in update_output_layers_order
    raise InvalidHNError(
hailo_sdk_common.hailo_nn.exceptions.InvalidHNError: The original node name /0/head/Concat_32 in end_node_names is missing in the HN.

And with the L35_M it gets through the parser (though it recommends some seemingly arbitrary end node names for some reason) but crashes at the optimize step:

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailo parser onnx damoyolo_tinynasL35_M.onnx 
------NetParser.run()--------
------NetParser.run()----1----
Translating onnx model
model path :  damoyolo_tinynasL35_M.onnx
self._start_nodes :  None
self._end_nodes :  None
tensor_shapes :  None
✅ Real Start Node Names : images
✅ Real End Node Names   : ['output', '1171']
✅ Real Net Input Shapes : {'images': [1, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL35_M
[info] Restored ONNX model damoyolo_tinynasL35_M (completion time: 00:00:00.34)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.67)
[info] Unable to simplify the model: Parsing failed. The errors found in the graph are:
 UnsupportedShuffleLayerError in op Reshape_533: Failed to determine type of layer to create in node Reshape_533
 UnsupportedShuffleLayerError in op Reshape_538: Failed to determine type of layer to create in node Reshape_538
Please try to parse the model again, using these end node names: Concat_521, Slice_520
Parsing failed with recommendations for end node names: ['Concat_521', 'Slice_520'].
Would you like to parse again with the recommendation? (y/n) 
y
[info] According to recommendations, retrying parsing with end node names: ['Concat_521', 'Slice_520'].
Translating onnx model
model path :  damoyolo_tinynasL35_M.onnx
self._start_nodes :  None
self._end_nodes :  ['Concat_521', 'Slice_520']
tensor_shapes :  None
✅ Real Start Node Names : images
✅ Real End Node Names   : ['output', '1171']
✅ Real Net Input Shapes : {'images': [1, 3, 640, 640]}
[info] Translation started on ONNX model damoyolo_tinynasL35_M
[info] Restored ONNX model damoyolo_tinynasL35_M (completion time: 00:00:00.26)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.48)
[info] Start nodes mapped from original model: 'images': 'damoyolo_tinynasL35_M/input_layer1'.
[info] End nodes mapped from original model: 'Concat_521', 'Slice_520'.
[info] Translation completed on ONNX model damoyolo_tinynasL35_M (completion time: 00:00:03.40)
[warning] hw_arch parameter not given, using the default hw_arch hailo8.
If another device is the target, please run again using one of hailo8, hailo8r, hailo8l, hailo15h, hailo15m, hailo15l, hailo10h
Done translating model
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL35_M.har
(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailo optimize --calib-set-path ~/rendu/COCO/val2017_resized/ --model-script /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/damoyolo_tinynasL35_M.alls damoyolo_tinynasL35_M.har 
------ClientRunner.load_har()-------
damoyolo_tinynasL35_M.har
---------HailoArchiveLoader._init()-----------
path :  damoyolo_tinynasL35_M.har
_set_metadata()
['damoyolo_tinynasL35_M.hn', 'damoyolo_tinynasL35_M.npz', 'damoyolo_tinynasL35_M.original_model_meta.json', 'damoyolo_tinynasL35_M.postprocess.onnx', 'damoyolo_tinynasL35_M.metadata.json']
filename :  ['damoyolo_tinynasL35_M.metadata.json']
------metadata----------
{'state': 'hailo_model', 'force_weightless_model': False, 'model_name': 'damoyolo_tinynasL35_M', 'sdk_version': '3.30.0', 'hw_arch': 'hailo8', 'hn': 'damoyolo_tinynasL35_M.hn', 'params': 'damoyolo_tinynasL35_M.npz', 'original_model_meta': 'damoyolo_tinynasL35_M.original_model_meta.json', 'postprocess': 'damoyolo_tinynasL35_M.postprocess.onnx'}
----HailoArchive.load()-----
har_path :  damoyolo_tinynasL35_M.har
temp_dir :  /tmp/tmpvz7g0d6g
-------------HailoArchiveLoader.load()--------------
temp_dir :  /tmp/tmpvz7g0d6g
State :  hailo_model
model_name :  damoyolo_tinynasL35_M
original_model_path :  None
get_sdk_version()
--------3.30.0-------------
native_hn :  None
[info] Loading model script commands to damoyolo_tinynasL35_M from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/damoyolo_tinynasL35_M.alls
[info] Starting Model Optimization
[warning] line 2: layers ['damoyolo_tinynasL35_M/output_layer3', 'damoyolo_tinynasL35_M/output_layer4', 'damoyolo_tinynasL35_M/output_layer5', 'damoyolo_tinynasL35_M/output_layer6'] could not be found in scope.
Traceback (most recent call last):
  File "/home/p20c02/.venv_hailo/bin/hailo", line 8, in <module>
    sys.exit(main())
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 116, in main
    return a.run()
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 64, in run
    ret_val = self._run(argv)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 111, in _run
    return args.func(args)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/tools/optimize_cli.py", line 120, in run
    self._runner.optimize(dataset, work_dir=args.work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2130, in optimize
    self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1972, in _optimize
    self._sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1122, in full_quantization
    self.setup_quantization(self.calibration_data, data_type, work_dir=work_dir)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1044, in setup_quantization
    config = self.apply_quantization_script(data, data_type)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 991, in apply_quantization_script
    return self._apply_quantization_script_with_flavor()
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1009, in _apply_quantization_script_with_flavor
    config = verify_commands(self._model, mo_commands, flavor_config)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/hailo_nn/model_optimization/configuration_verifier.py", line 65, in verify_commands
    verifier = ModelConfigurationValidator(hn_model, commands, allocation_mode, pre_quantization_mode)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/hailo_sdk_common/hailo_nn/model_optimization/configuration_verifier.py", line 120, in __init__
    self._config = ModelOptimizationConfig(**commands)
  File "/home/p20c02/.venv_hailo/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ModelOptimizationConfig
finetune -> __root__
  ; loss_factors length must match loss_layer_names length; policy from line 2, batch_size from line 2, loss_layer_names from line 2, loss_factors from line 2; command was post_quantization_optimization(finetune,policy=enabled,batch_size=4,loss_layer_names=[conv81,conv95,conv109,output_layer1,output_layer2,output_layer3,output_layer4,output_layer5,output_layer6],loss_factors=[1,1,1,2,2,2,2,2,2]) (type=value_error)

At some point (I don’t really know how or why), by running hailomz compile damoyolo_tinynasL35_M --calib-path ~/rendu/COCO/val2017/, I managed to generate a .hef file (while I’d like to have both S and M to compare), but it must have been a fluke because it was basically doing nothing, sitting at 0 everywhere:

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailomz eval --hef damoyolo_tinynasL35_M.hef --target hardware --calib-path ~/rendu/COCO/val2017/ damoyolo_tinynasL35_M
<Hailo Model Zoo INFO> Start run for network damoyolo_tinynasL35_M ...
<Hailo Model Zoo INFO> Initializing the runner...
<Hailo Model Zoo INFO> Chosen target is hardware
[info] Translation started on ONNX model damoyolo_tinynasL35_M
[info] Restored ONNX model damoyolo_tinynasL35_M (completion time: 00:00:00.33)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.  (This may have returned Python scalars in past versions.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:02.62)
[info] Start nodes mapped from original model: 'images': 'damoyolo_tinynasL35_M/input_layer1'.
[info] End nodes mapped from original model: 'Mul_396', 'Sigmoid_416', 'Mul_436', 'Sigmoid_456', 'Mul_476', 'Sigmoid_496'.
[info] Translation completed on ONNX model damoyolo_tinynasL35_M (completion time: 00:00:03.43)
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL35_M.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to damoyolo_tinynasL35_M from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/damoyolo_tinynasL35_M.alls
<Hailo Model Zoo INFO> Initializing the dataset ...
<Hailo Model Zoo INFO> Running inference...
Processed: 5000images [01:44, 47.94images/s]
creating index...
index created!
Loading and preparing results...
Converting ndarray to lists...
(355411, 7)
0/355411
DONE (t=1.25s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=4.29s).
Accumulating evaluation results...
DONE (t=0.78s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
<Hailo Model Zoo INFO> Done 5000 images AP=0.000 AP50=0.000

Sorry for the long paragraph. Thank you for your help and have a great day !

Here are the versions I’m working with :

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ pip list | grep hailo
hailo-dataflow-compiler      3.30.0
hailo-model-zoo              2.14.0               /home/p20c02/rendu/hailo_model_zoo
hailo-tappas-dot-visualizer  3.31.0               /home/p20c02/rendu/tappas/tools/trace_analyzer/dot_visualizer
hailo-tappas-run-apps        3.31.0               /home/p20c02/rendu/tappas/tools/run_app
hailort                      4.20.0

Cordially,

Welcome to the Hailo Community!

Are you able to compile the Hailo Model Zoo version of the model? I just compiled both models successfully using the Hailo AI Software Suite Docker.

hailomz compile --hw-arch hailo8 damoyolo_tinynasL35_M
hailomz compile --hw-arch hailo8 damoyolo_tinynasL25_S

I also ran eval and got valid results.

hailomz eval --hef damoyolo_tinynasL35_M.hef --target hardware damoyolo_tinynasL35_M

Make sure you have the COCO dataset TFRecords.

GitHub - Hailo Model Zoo - DATA
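
If you still need to create the TFRecords, the Model Zoo ships scripts for that; from memory it is something like this (double-check the exact arguments on the DATA page linked above):

python hailo_model_zoo/datasets/create_coco_tfrecord.py val2017
python hailo_model_zoo/datasets/create_coco_tfrecord.py calib2017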

If this does not work, I would recommend trying the Hailo AI Software Suite Docker. That removes potential issues with dependencies.

This does not look right. These models have 6 outputs (3 Mul_nnn and 3 Sigmoid_nnn).

I do not get this in my output.

Note: If you have your own version of the model (not trained with the Hailo retraining Docker), the start and end node names will likely be different and you will need to pick the right ones. You can use netron.app to compare the Model Zoo ONNX with yours. Netron also supports HAR files.

https://netron.app/
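
Once you have identified the correct nodes in Netron, you can pass them to the parser explicitly. A rough sketch (flag names as I recall them, so check hailo parser onnx --help; the end node names below are the Model Zoo ones, yours will likely differ):

hailo parser onnx my_damoyolo.onnx --hw-arch hailo8 \
    --start-node-names images \
    --end-node-names Mul_239 Sigmoid_259 Mul_279 Sigmoid_299 Mul_319 Sigmoid_339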

Hello,

Thanks for your answer !

I reinstalled hailo_model_zoo and, this time, I managed to generate a .hef file consistently by using hailomz compile --hw-arch hailo8 damoyolo_tinynasL25_S and hailomz compile --hw-arch hailo8 damoyolo_tinynasL35_M.
I assume my issue was coming from my calibration dataset and that it somehow caused all the crashes? (Not sure.)

However, the generated .hef files consistently score 0 everywhere on eval and I don’t really understand why. I’m using all the base parameters from the .yaml config files, the .alls for optimization, the .onnx from the pretrained folders, and the TFRecord files from .hailomz (models_files/coco/2023-08-03/coco_val2017.tfrecord and models_files/coco/2023-08-03/coco_calib2017.tfrecord) for quantization and eval.

Here is my compilation for damoyolo_tinynasL25_S (same results on L35_M) :

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailomz compile --hw-arch hailo8 damoyolo_tinynasL25_S
<Hailo Model Zoo INFO> Start run for network damoyolo_tinynasL25_S ...
<Hailo Model Zoo INFO> Initializing the hailo8 runner...
------NetParser.run()--------
------NetParser.run()----1----
------NetParser.run()---2-----
------NetParser.run()---3-----
Translating onnx model
model path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL25_S/pretrained/2022-12-19/damoyolo_tinynasL25_S.onnx
self._start_nodes :  None
self._end_nodes :  ['Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339']
tensor_shapes :  None
[info] Translation started on ONNX model damoyolo_tinynasL25_S
[info] Restored ONNX model damoyolo_tinynasL25_S (completion time: 00:00:00.19)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:01.68)
[info] Start nodes mapped from original model: 'images': 'damoyolo_tinynasL25_S/input_layer1'.
[info] End nodes mapped from original model: 'Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339'.
[info] Translation completed on ONNX model damoyolo_tinynasL25_S (completion time: 00:00:02.03)
Done translating model
------NetParser.run()---4-----
------NetParser.run()---5-----
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL25_S.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to damoyolo_tinynasL25_S from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/hailo8/base/damoyolo_tinynasL25_S.alls
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[warning] Reducing compression ratio to 0 because the number of parameters in the network is not large enough (16M and need at least 20M). Can be enforced using model_optimization_config(compression_params, auto_4bit_weights_ratio=0.200)
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:00.36)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:20<00:00,  3.11entries/s]
[info] Model Optimization Algorithm Statistics Collector is done (completion time is 00:00:21.51)
[info] Output layer damoyolo_tinynasL25_S/conv60 with sigmoid activation was detected. Forcing its output range to be [0, 1] (original range was [2.7490600587043446e-06, 0.9074400067329407]).
[info] Output layer damoyolo_tinynasL25_S/conv72 with sigmoid activation was detected. Forcing its output range to be [0, 1] (original range was [7.37098616809817e-07, 0.898878276348114]).
[info] Output layer damoyolo_tinynasL25_S/conv83 with sigmoid activation was detected. Forcing its output range to be [0, 1] (original range was [6.07195772772684e-07, 0.9397953152656555]).
[info] Starting Fix zp_comp Encoding
[info] Model Optimization Algorithm Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL25_S.har
[info] Loading model script commands to damoyolo_tinynasL25_S from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/hailo8/base/damoyolo_tinynasL25_S.alls
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[info] Running Auto-Merger
[info] Auto-Merger is done
[info] Running Auto-Merger
[info] Auto-Merger is done
[info] Using Single-context flow
[info] Resources optimization guidelines: Strategy -> GREEDY Objective -> MAX_FPS
[info] Resources optimization params: max_control_utilization=97,5%, max_compute_utilization=97,5%, max_compute_16bit_utilization=97,5%, max_memory_utilization (weights)=90%, max_input_aligner_utilization=97,5%, max_apu_utilization=97,5%
[info] Using Single-context flow
[info] Resources optimization guidelines: Strategy -> GREEDY Objective -> MAX_FPS
[info] Resources optimization params: max_control_utilization=97,5%, max_compute_utilization=97,5%, max_compute_16bit_utilization=97,5%, max_memory_utilization (weights)=90%, max_input_aligner_utilization=97,5%, max_apu_utilization=97,5%

Validating context_0 layer by layer (100%)

 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  + 
 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  + 
 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  + 
 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  + 
 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  + 
 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  + 

● Finished                                                               

[info] Solving the allocation (Mapping), time per context: 59m 59s
Context:0/0 Iteration 28: Trying parallel splits...   
          cluster_0  cluster_1  cluster_2  cluster_3  cluster_4  cluster_5  cluster_6  cluster_7  prepost 
 worker0                                                                                                  
 worker1                                                                                                  
Context:0/0 Iteration 32: Trying parallel mapping...                                                      
          cluster_0  cluster_1  cluster_2  cluster_3  cluster_4  cluster_5  cluster_6  cluster_7  prepost 
 worker0  X          X          *          *          *          *          *          *          V       
 worker1  X          V          *          *          *          *          *          *          V       
 worker2  X          V          *          *          *          *          *          *          V       
 worker3  V          V          V          V          V          V          V          V          V       
Reverts on pre-mapping validation: 22
  01:40 on split failed: 0
Reverts on cluster mapping: 1
Reverts on inter-cluster connectivity: 2
Reverts on pre-mapping validation: 25
Reverts on split failed: 0

[info] Iterations: 32
Reverts on cluster mapping: 4
Reverts on inter-cluster connectivity: 2
Reverts on pre-mapping validation: 25
Reverts on split failed: 0
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Cluster   | Control Utilization | Compute Utilization | Memory Utilization |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | cluster_0 | 75%                 | 57,8%               | 85,9%              |
[info] | cluster_1 | 75%                 | 95,3%               | 80,5%              |
[info] | cluster_2 | 87,5%               | 100%                | 85,2%              |
[info] | cluster_3 | 93,8%               | 100%                | 56,3%              |
[info] | cluster_4 | 93,8%               | 82,8%               | 83,6%              |
[info] | cluster_5 | 100%                | 79,7%               | 75%                |
[info] | cluster_6 | 87,5%               | 78,1%               | 93%                |
[info] | cluster_7 | 68,8%               | 62,5%               | 89,8%              |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Total     | 85,2%               | 82%                 | 81,2%              |
[info] +-----------+---------------------+---------------------+--------------------+
[info] Successful Mapping (allocation time: 3m 30s)
[info] Compiling context_0...
[info] Bandwidth of model inputs: 9.375 Mbps, outputs: 9.54895 Mbps (for a single frame)
[info] Bandwidth of DDR buffers: 0.0 Mbps (for a single frame)
[info] Bandwidth of inter context tensors: 0.0 Mbps (for a single frame)
[info] Building HEF...
[info] Successful Compilation (compilation time: 9s)
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL25_S.har
<Hailo Model Zoo INFO> HEF file written to damoyolo_tinynasL25_S.hef

Eval :

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailomz eval --hef damoyolo_tinynasL25_S.hef --target hardware damoyolo_tinynasL25_S
<Hailo Model Zoo INFO> Start run for network damoyolo_tinynasL25_S ...
<Hailo Model Zoo INFO> Initializing the runner...
<Hailo Model Zoo INFO> Chosen target is hardware
------NetParser.run()--------
------NetParser.run()----1----
------NetParser.run()---2-----
------NetParser.run()---3-----
Translating onnx model
model path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/damoyolo_tinynasL25_S/pretrained/2022-12-19/damoyolo_tinynasL25_S.onnx
self._start_nodes :  None
self._end_nodes :  ['Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339']
tensor_shapes :  None
[info] Translation started on ONNX model damoyolo_tinynasL25_S
[info] Restored ONNX model damoyolo_tinynasL25_S (completion time: 00:00:00.18)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:01.51)
[info] Start nodes mapped from original model: 'images': 'damoyolo_tinynasL25_S/input_layer1'.
[info] End nodes mapped from original model: 'Mul_239', 'Sigmoid_259', 'Mul_279', 'Sigmoid_299', 'Mul_319', 'Sigmoid_339'.
[info] Translation completed on ONNX model damoyolo_tinynasL25_S (completion time: 00:00:01.85)
Done translating model
------NetParser.run()---4-----
------NetParser.run()---5-----
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/damoyolo_tinynasL25_S.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to damoyolo_tinynasL25_S from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/hailo8/base/damoyolo_tinynasL25_S.alls
<Hailo Model Zoo INFO> Initializing the dataset ...
<Hailo Model Zoo INFO> Running inference...
Processed: 5000images [00:42, 118.08images/s]
creating index...
index created!
Loading and preparing results...
Converting ndarray to lists...
(407311, 7)
0/407311
DONE (t=1.56s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=4.33s).
Accumulating evaluation results...
DONE (t=0.81s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
<Hailo Model Zoo INFO> Done 5000 images AP=0.000 AP50=0.000

Do you have any idea what I’m doing wrong?

Cordially,

Hi @breadcrumbs
Did you attach the appropriate postprocessor?

Hello !

I’m using the one defined in the damoyolo.yaml file in cfg/base (nanodet_split).
Here is hailo_model_zoo/cfg/base/damoyolo.yaml :

base:
- base/coco.yaml
quantization:
  calib_set:
  - models_files/coco/2023-08-03/coco_calib2017.tfrecord
preprocessing:
  network_type: detection
  meta_arch: yolo_v5
  padding_color: 0
  centered: false
postprocessing:
  device_pre_post_layers:
    sigmoid: true
  nms_iou_thresh: 0.7
  score_threshold: 0.05
  meta_arch: nanodet_split
  anchors:
    scale_factors:
    - 0.0
    - 0.0
    regression_length: 16
    strides:
    - 8
    - 16
    - 32
info:
  source: https://github.com/tinyvision/DAMO-YOLO
  framework: pytorch
  training_data: coco train2017
  validation_data: coco val2017
  eval_metric: mAP
  license_url: https://github.com/tinyvision/DAMO-YOLO/blob/master/LICENSE
  license_name: Apache-2.0
parser:
  normalization_params:
    normalize_in_net: true
    mean_list:
    - 0.0
    - 0.0
    - 0.0
    std_list:
    - 1.0
    - 1.0
    - 1.0
evaluation:
  classes: 80
  dataset_name: coco_2017_detection

I just tried to compile/eval a yolov7_tiny and a yolox_tiny and they work correctly (as in, they produce results). The issue seems to be exclusive to the damoyolo models I’m working with, and I can’t figure out why:

(.venv_hailo) p20c02@p20c02:~/rendu/hailo_model_zoo$ hailomz eval --hef yolov7_tiny.hef --target hardware yolov7_tiny
<Hailo Model Zoo INFO> Start run for network yolov7_tiny ...
<Hailo Model Zoo INFO> Initializing the runner...
<Hailo Model Zoo INFO> Chosen target is hardware
------NetParser.run()--------
------NetParser.run()----1----
------NetParser.run()---2-----
------NetParser.run()---3-----
Translating onnx model
model path :  /home/p20c02/.hailomz/data/models_files/ObjectDetection/Detection-COCO/yolo/yolov7_tiny/pretrained/2023-04-25/yolov7_tiny.onnx
self._start_nodes :  None
self._end_nodes :  None
tensor_shapes :  None
✅ Real Start Node Names : images
✅ Real End Node Names   : ['output', '298', '318']
✅ Real Net Input Shapes : {'images': [1, 3, 640, 640]}
[info] Translation started on ONNX model yolov7_tiny
[info] Restored ONNX model yolov7_tiny (completion time: 00:00:00.06)
/home/p20c02/.venv_hailo/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py:73: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  sizes = (None, np.float32, np.uint8, np.int8, np.uint16, np.int16, np.int32, np.int64, str, np.bool,
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.28)
[info] NMS structure of yolov5 (or equivalent architecture) was detected. Default values of NMS anchors were loaded to NMS config json
[info] Start nodes mapped from original model: 'images': 'yolov7_tiny/input_layer1'.
[info] End nodes mapped from original model: 'Transpose_149', 'Transpose_165', 'Transpose_181'.
[info] Translation completed on ONNX model yolov7_tiny (completion time: 00:00:00.61)
Done translating model
------NetParser.run()---4-----
[info] Appending model script commands to yolov7_tiny from string
[info] Added nms postprocess command to model script.
------NetParser.run()---5-----
[info] Saved HAR to: /home/p20c02/rendu/hailo_model_zoo/yolov7_tiny.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to yolov7_tiny from /home/p20c02/rendu/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov7_tiny.alls
<Hailo Model Zoo INFO> Initializing the dataset ...
<Hailo Model Zoo INFO> Running inference...
[info] Setting NMS score threshold to 0.001
Processed: 5000images [00:32, 155.72images/s]
creating index...
index created!
Loading and preparing results...
Converting ndarray to lists...
(500000, 7)
0/500000
DONE (t=1.90s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=16.33s).
Accumulating evaluation results...
DONE (t=3.80s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.357
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.545
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.383
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.175
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.398
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.503
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.297
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.484
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.516
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.293
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.573
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.699
<Hailo Model Zoo INFO> Done 5000 images AP=35.653 AP50=54.542

Cordially,

Hi @breadcrumbs
The postprocessing in Damo-YOLO is different from other architectures. You can see a reference implementation here: hailo_examples/postprocessors/DamoYOLO/HailoDetectorDamoYOLO.py at main · DeGirum/hailo_examples. We integrated Damo-YOLO into our PySDK. We can help you get your model working if you are interested.

Oh, I knew I was doing something wrong! So I’m supposed to load my model with the DeGirum SDK for the correct postprocessor to be used, specifically for damoyolo.
Ok, thank you !

And yes, if you have advice on how to use DeGirum to make something very simple that I could play with, benchmark, and trial-and-error with later, I’d be thankful.

Thank you very much! I think it’s OK to close the ticket now.

Hi @breadcrumbs
We will provide a working example in our repo for Damo-YOLO soon. You can see our examples and the user guides we posted on the forum to get an idea of what you can do with PySDK+Hailo. For example, here is a guide on building a face recognition pipeline: A Comprehensive Guide to Building a Face Recognition System
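
In the meantime, here is a minimal sketch of what inference through PySDK typically looks like (the model name and zoo below are placeholders until the Damo-YOLO example is published; check our model zoo for the real entries):

import degirum as dg

# Placeholder model name / zoo; replace with the actual Damo-YOLO entry once available.
# A cloud zoo may also require a token argument.
model = dg.load_model(
    model_name="damoyolo_tinynasL25_S--640x640_quant_hailort_hailo8_1",
    inference_host_address="@local",  # run on the locally attached Hailo device
    zoo_url="degirum/hailo",
)

result = model("sample.jpg")  # run inference on an image file
print(result.results)         # list of detections: bbox, score, label
result.image_overlay          # annotated image for quick visual inspection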
