I ran into problems while creating the calibration dataset. This is the script I use to build the calibration set:
import os
import numpy as np
import cv2

input_dir = r"C:\Users\AYAKA\calib"                  # directory of source images
output_file = r"C:\Users\AYAKA\calib\calib_set.npy"  # output file
target_size = (640, 640)

# Collect and sort all image file paths
image_files = sorted([f for f in os.listdir(input_dir) if f.lower().endswith(('.png', '.jpg', '.jpeg'))])

# Read and process each image
image_list = []
for img_name in image_files:
    img_path = os.path.join(input_dir, img_name)
    image = cv2.imread(img_path)
    if image is None:
        print(f"Skipping {img_name}, unable to read.")
        continue
    # Resize to the network input size
    image = cv2.resize(image, target_size, interpolation=cv2.INTER_AREA)
    image_list.append(np.asarray(image, dtype=np.uint8))

# If any valid images were found, save them all to one .npy file
if len(image_list) > 0:
    calib_array = np.stack(image_list, axis=0)
    np.save(output_file, calib_array)
    print(f"Saved {output_file} with shape {calib_array.shape}")
else:
    print("No valid images found!")
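Before handing the file to the optimizer, it helps to sanity-check what the script actually writes. A minimal sketch with dummy zero arrays standing in for real images (paths and sizes here are illustrative only):

```python
import os
import tempfile
import numpy as np

# Simulate the script's output: N dummy uint8 "images" at the target size.
num_images, height, width = 5, 640, 640
image_list = [np.zeros((height, width, 3), dtype=np.uint8) for _ in range(num_images)]

# np.stack adds a leading batch axis, producing one (N, H, W, 3) array.
calib_array = np.stack(image_list, axis=0)

out_path = os.path.join(tempfile.mkdtemp(), "calib_set.npy")
np.save(out_path, calib_array)

loaded = np.load(out_path)
print(loaded.shape)  # (5, 640, 640, 3)
```

So with 160 source images the script produces a single file holding one 4-D array of shape (160, 640, 640, 3), which matches the shape reported in the error below.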
I have already converted the YOLOv8 .pt model to ONNX. Here is the command and the resulting error:
pi@AYAKA:~$ hailo optimize yolov8n.har --hw-arch hailo8 --calib-set-path calib/ --output-har-path yolov8n_quantized_model.har
[warning] Cannot use graphviz, so no visualizations will be created
[info] For NMS architecture yolov8 the default engine is cpu. For other engine please use the 'engine' flag in the nms_postprocess model script command. If the NMS has been added during parsing, please parse the model again without confirming the addition of the NMS, and add the command manually with the desired engine.
[info] The layer yolov8n/conv41 was detected as reg_layer.
[info] The layer yolov8n/conv42 was detected as cls_layer.
[info] The layer yolov8n/conv52 was detected as reg_layer.
[info] The layer yolov8n/conv53 was detected as cls_layer.
[info] The layer yolov8n/conv62 was detected as reg_layer.
[info] The layer yolov8n/conv63 was detected as cls_layer.
[info] Using the default score threshold of 0.001 (range is [0-1], where 1 performs maximum suppression) and IoU threshold of 0.7 (range is [0-1], where 0 performs maximum suppression).
Changing the values is possible using the nms_postprocess model script command.
[info] The activation function of layer yolov8n/conv42 was replaced by a Sigmoid
[info] The activation function of layer yolov8n/conv53 was replaced by a Sigmoid
[info] The activation function of layer yolov8n/conv63 was replaced by a Sigmoid
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's less data than the recommended amount (1024), and there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:01.46)
[warning] Optimize Softmax Bias: Dataset didn't have enough data for sample_size of 4 Quantizing using calibration size of 1
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
Traceback (most recent call last):
File "/home/pi/.local/bin/hailo", line 8, in <module>
sys.exit(main())
File "/home/pi/.local/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 116, in main
return a.run()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 64, in run
ret_val = self._run(argv)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_platform/tools/hailocli/main.py", line 111, in _run
return args.func(args)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_client/tools/optimize_cli.py", line 120, in run
self._runner.optimize(dataset, work_dir=args.work_dir)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2128, in optimize
self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1970, in _optimize
self._sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1125, in full_quantization
self._full_acceleras_run(self.calibration_data, data_type)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1319, in _full_acceleras_run
optimization_flow.run()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 306, in wrapper
return func(self, *args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 335, in run
step_func()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
result = method(*args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 124, in parent_wrapper
func(self, *args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 353, in step1
self.pre_quantization_optimization()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
result = method(*args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 400, in pre_quantization_optimization
self._collect_stats()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
result = method(*args, **kwargs)
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 475, in _collect_stats
stats_collector.run()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/algorithms/optimization_algorithm.py", line 54, in run
return super().run()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/algorithms/algorithm_base.py", line 145, in run
self._setup()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/algorithms/stats_collection/stats_collection.py", line 76, in _setup
self.validate_shapes()
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/algorithms/stats_collection/stats_collection.py", line 86, in validate_shapes
layer.validate_shape(input_data[key])
File "/home/pi/.local/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_io.py", line 199, in validate_shape
raise BadInputsShape(self.full_name, input_shape, data_shape)
hailo_model_optimization.acceleras.utils.acceleras_exceptions.BadInputsShape: Data shape (160, 640, 640, 3) for layer yolov8n/input_layer1 doesn't match network's input shape (640, 640, 3)
I have modified the dataset-creation code several times, but I still cannot feed in multiple calibration samples. I hope you can help me figure out where the problem lies.
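One reading of the traceback: when `--calib-set-path` points at a directory, each file in it may be loaded as a single sample. Since `calib_set.npy` was written into that same `calib/` directory, the stacked (160, 640, 640, 3) array would then be validated as if it were one image, failing the per-sample check against (640, 640, 3). If that reading is right, two things seem worth trying: pointing `--calib-set-path` directly at the `.npy` file instead of the directory, or keeping one `.npy` per image in the directory. The second layout can be sketched with dummy data (this is an assumption about the loader's behavior, not something confirmed here against the Hailo docs):

```python
import os
import tempfile
import numpy as np

# Hypothetical layout: one .npy file per calibration sample, each with the
# per-sample shape (640, 640, 3) that the error message says the network expects.
calib_dir = tempfile.mkdtemp()
num_samples = 3
for i in range(num_samples):
    sample = np.zeros((640, 640, 3), dtype=np.uint8)  # stands in for one resized image
    np.save(os.path.join(calib_dir, f"img_{i:04d}.npy"), sample)

# Every file in the directory now matches the network's per-sample input shape.
shapes = [np.load(os.path.join(calib_dir, name)).shape
          for name in sorted(os.listdir(calib_dir))]
print(shapes)  # [(640, 640, 3), (640, 640, 3), (640, 640, 3)]
```

Either way, keeping the combined `calib_set.npy` out of the directory that `--calib-set-path` scans seems like the first thing to rule out.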