I am using HailoRT 4.20.0 with Python 3.10, and I am trying to prepare image data for the infer_pipeline.infer() function:
import cv2
import numpy as np

# Load the image, resize to the 608x608 network input, and convert BGR -> RGB
image_bgr = cv2.imread("image.png")
resized_bgr_image = cv2.resize(image_bgr, (608, 608))
image_resized_rgb = cv2.cvtColor(resized_bgr_image, cv2.COLOR_BGR2RGB)
image_resized_rgb_uint8 = image_resized_rgb.astype(np.uint8)

# Add the batch dimension: (608, 608, 3) -> (1, 608, 608, 3), i.e. NHWC
data_nhwc = np.expand_dims(image_resized_rgb_uint8, axis=0)
data_nhwc_contiguous = np.ascontiguousarray(data_nhwc)
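
For context, the pipeline is created and called roughly like this (a minimal sketch; the PCIe interface, the UINT8/FLOAT32 format types, and the fact that input_data is the contiguous array from above are my assumptions about the relevant setup, not something fixed by the question):

from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

hef = HEF("yolov3.hef")
with VDevice() as target:
    # Configure the network group over PCIe
    configure_params = ConfigureParams.create_from_hef(hef=hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, configure_params)[0]

    # Quantized UINT8 input, FLOAT32 output (assumed format types)
    input_vstreams_params = InputVStreamParams.make(network_group, format_type=FormatType.UINT8)
    output_vstreams_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    input_info = hef.get_input_vstream_infos()[0]
    input_data = data_nhwc_contiguous  # assumption: the preprocessed (1, 608, 608, 3) uint8 array

    with InferVStreams(network_group, input_vstreams_params, output_vstreams_params) as infer_pipeline:
        with network_group.activate(network_group.create_params()):
            results = infer_pipeline.infer({input_info.name: input_data})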
When I call
results = infer_pipeline.infer({input_info.name: input_data})
I get this error:
[HailoRT] [error] CHECK failed - Memory size of vstream yolov3/input_layer1 does not match the frame count! (Expected 1108992, got 0)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_ARGUMENT(2)
However, the equivalent hailortcli command works fine on the same image:
hailortcli run yolov3.hef --input-files "yolov3/input_layer1=image.bin" --csv result.csv
Running streaming inference (yolov3.hef):
Transform data: true
Type: auto
Quantized: true
Network yolov3/yolov3: 100% | 153 | FPS: 30.59 | ETA: 00:00:00
> Writing inference results to 'result.csv'... done.
> Inference result:
Network group: yolov3
Frames count: 153
FPS: 30.59
Send Rate: 271.39 Mbit/s
Recv Rate: 949.86 Mbit/s
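
For reference, image.bin can be produced from the same preprocessed array as shown below (a minimal sketch; that hailortcli simply takes the raw NHWC uint8 bytes is my assumption):

# Assumption: image.bin is the preprocessed frame dumped as raw bytes.
# A (1, 608, 608, 3) uint8 array is 1 * 608 * 608 * 3 = 1,108,992 bytes,
# which matches the "Expected 1108992" size in the error above.
data_nhwc_contiguous.tofile("image.bin")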
What is wrong with my approach?