Compilation of 1920 imgsz model

I'm running a Raspberry Pi 5 with the Hailo-8L HAT.
I'm currently experimenting with different models to detect objects from a long distance.
I retrained a YOLOv8m model on one object class at an image size of 1920x1920,
then exported it to .onnx with the same input size,
but I ran into an issue compiling the model:
yolo detect train data=/dataset.yaml model=yolov8m.pt epochs=300 imgsz=1920 batch=4 pretrained=False
yolo export model=\best.pt imgsz=1920 format=onnx opset=11
hailomz compile yolov8m --ckpt /best.onnx --calib-path /calibration_images/ --classes 1 --hw-arch hailo8l

So my question is: can the Raspberry Pi with the Hailo-8L HAT handle real-time detection on high-resolution video? If yes, please help me with this error:

Preparing calibration data…
[info] Loading model script commands to yolov8m from /home/pi/nn/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8m.alls
[info] Loading model script commands to yolov8m from string
Traceback (most recent call last):
File "/home/pi/.local/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/home/pi/nn/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/home/pi/nn/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/home/pi/nn/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 219, in compile
_ensure_optimized(runner, logger, args, network_info)
File "/home/pi/nn/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 73, in _ensure_optimized
optimize_model(runner, calib_feed_callback, logger, network_info, args.results_dir, model_script, args.resize,
File "/home/pi/nn/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 294, in optimize_model
optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
File "/home/pi/nn/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 289, in optimize_full_precision_model
runner.optimize_full_precision(calib_data=calib_feed_callback)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 1597, in optimize_full_precision
self._optimize_full_precision(calib_data=calib_data, data_type=data_type)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 1600, in _optimize_full_precision
self._sdk_backend.optimize_full_precision(calib_data=calib_data, data_type=data_type)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1284, in optimize_full_precision
model, params = self._apply_model_modification_commands(model, params, update_model_and_params)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1200, in _apply_model_modification_commands
model, params = command.apply(model, params, hw_consts=self.hw_arch.consts)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 323, in apply
self._update_config_file(hailo_nn)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 456, in _update_config_file
self._update_config_layers(hailo_nn)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 498, in _update_config_layers
self._set_yolo_config_layers(hailo_nn)
File "/home/pi/.local/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 533, in _set_yolo_config_layers
raise AllocatorScriptParserException("Cannot infer bbox conv layers automatically. "
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.AllocatorScriptParserException: Cannot infer bbox conv layers automatically. Please specify the bbox layer in the json configuration file

The Hailo-8L can run models with a high resolution input.

Setting aside the error for now: I am not sure training a network at a larger resolution is the right approach for what you are trying to achieve. I assume the object will only cover a small area of the image. That means the network has to learn to detect the same object in many different parts of the image, and large parts of the network end up doing duplicate work.

Have a look at the tiling examples in our Tappas repository. They achieve the same long-distance detection capability while keeping the network itself smaller.

GitHub - Hailo Tappas - Tiling

Without thinking through all the details, I suspect performance for the tiling approach will be higher, especially given the single PCIe lane and the Hailo-8L's fewer compute clusters, which likely force the 1920x1920 model into multiple contexts. You can then use the --batch-size parameter to infer the tiles more efficiently and reduce the context-switching overhead.
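As a rough sketch (the HEF name and the numbers below are placeholders, not taken from this thread), a quick batched-throughput check for a smaller-input tiled model could look like:

```shell
# Hypothetical throughput check for a tiled, smaller-input model.
# yolov8m_640.hef is a placeholder file name; tune --batch-size to your tile count.
hailortcli run yolov8m_640.hef --batch-size 8 --frames-count 200
```

This runs the model on generated data, so it isolates raw inference throughput from the rest of the pipeline.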

Regarding the error: it looks like it is related to the NMS postprocess. I would first try to convert the model without NMS and check whether my assumption about network size vs. performance holds. For the performance comparison you can use the hailortcli run command, which needs neither real input data nor NMS.
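If you do want to keep the on-chip NMS, the exception is asking you to name the bbox conv layers explicitly in the NMS JSON config referenced by the model script. A minimal sketch of such a file, assuming the usual three-scale YOLOv8 head — every value here, and in particular the decoder names and conv layer names, is a placeholder you must replace with the real layers from your parsed model:

```json
{
  "nms_scores_th": 0.2,
  "nms_iou_th": 0.7,
  "image_dims": [1920, 1920],
  "classes": 1,
  "background_removal": false,
  "bbox_decoders": [
    { "name": "bbox_decoder_8",  "stride": 8,  "reg_layer": "yolov8m/conv57", "cls_layer": "yolov8m/conv58" },
    { "name": "bbox_decoder_16", "stride": 16, "reg_layer": "yolov8m/conv70", "cls_layer": "yolov8m/conv71" },
    { "name": "bbox_decoder_32", "stride": 32, "reg_layer": "yolov8m/conv82", "cls_layer": "yolov8m/conv83" }
  ]
}
```

The yolov8m.alls model script then points its nms_postprocess command at this file. Treat the schema above as an assumption and compare it against the stock yolov8m NMS config shipped in the Model Zoo before editing.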