Hi all, I'm trying to run inference on a custom-trained yolov8m model. My input is 448x448x3, so I understand the expected 602112, but I have no idea why it thinks the input buffer size is 0. I have confirmed that the images going in are 448x448x3 to meet that requirement:
Processed image size: (448, 448)
Processed image nparray shape: (448, 448, 3)
Enqueueing batch
Batch enqueued
[HailoRT] [error] CHECK failed - Input buffer size 0 is different than expected 602112 for input 'yolov8m/input_layer1'
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_OPERATION(6)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_OPERATION(6)
Here’s the utils python file I’m using. I don’t remember where I found it:
In my case, I receive the following output from the log:
[2024-09-24 13:02:14.287] [2964] [HailoRT] [error] [infer_model.cpp:791] [validate_bindings] CHECK failed - Input buffer size 0 is different than expected 1228800 for input 'yolov5m_wo_spp/input_layer1'
[2024-09-24 13:02:14.287] [2964] [HailoRT] [error] [infer_model.cpp:855] [run_async] CHECK_SUCCESS failed with status=HAILO_INVALID_OPERATION(6)
[2024-09-24 13:02:14.287] [2964] [HailoRT] [info] [async_infer_runner.cpp:86] [shutdown] Pipeline was aborted. Shutting it down
[2024-09-24 13:02:14.287] [2964] [HailoRT] [error] [infer_model.cpp:596] [run_async] CHECK_SUCCESS failed with status=HAILO_INVALID_OPERATION(6)
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [async_infer_runner.cpp:86] [shutdown] Pipeline was aborted. Shutting it down
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [async_infer_runner.cpp:86] [shutdown] Pipeline was aborted. Shutting it down
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:527] [execute_deactivate] enqueue() in element PushQEl_nms1yolov5m_wo_spp/conv93_132 was aborted, got status = HAILO_SHUTDOWN_EVENT_SIGNALED(57)
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:527] [execute_deactivate] enqueue() in element PushQEl_nms0yolov5m_wo_spp/conv84_132 was aborted, got status = HAILO_SHUTDOWN_EVENT_SIGNALED(57)
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:527] [execute_deactivate] enqueue() in element PushQEl_nms2yolov5m_wo_spp/conv74_132 was aborted, got status = HAILO_SHUTDOWN_EVENT_SIGNALED(57)
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:527] [execute_deactivate] enqueue() in element PushQEl3yolov5m_wo_spp/input_layer1 was aborted, got status = HAILO_SHUTDOWN_EVENT_SIGNALED(57)
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:527] [execute_deactivate] enqueue() in element EntryPushQEl0yolov5m_wo_spp/input_layer1 was aborted, got status = HAILO_SHUTDOWN_EVENT_SIGNALED(57)
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:43] [~BaseQueueElement] Queue element EntryPushQEl0yolov5m_wo_spp/input_layer1 has 0 frames in his Queue on destruction
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:43] [~BaseQueueElement] Queue element PushQEl3yolov5m_wo_spp/input_layer1 has 0 frames in his Queue on destruction
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:43] [~BaseQueueElement] Queue element PushQEl_nms2yolov5m_wo_spp/conv74_132 has 0 frames in his Queue on destruction
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:43] [~BaseQueueElement] Queue element PushQEl_nms0yolov5m_wo_spp/conv84_132 has 0 frames in his Queue on destruction
[2024-09-24 13:02:14.288] [2964] [HailoRT] [info] [queue_elements.cpp:43] [~BaseQueueElement] Queue element PushQEl_nms1yolov5m_wo_spp/conv93_132 has 0 frames in his Queue on destruction
Looks like you're hitting a snag with that "Input buffer size 0" error. It usually pops up when the input buffer isn't prepared correctly before inference runs, and it often comes down to how the input queue is being fed.
Here's what I'd check:
Make sure your image batch (the nparray) has the right shape when you put it into the input queue. It should be contiguous in memory; try nparray.flatten() if it isn't already.
Double-check how the input stream is set up in HailoAsyncInference. The batch size and input dimensions (448x448x3) should match what your HEF expects.
Take a look at your preprocessing step and make sure it isn't changing the image size, layout, or data type before inference.
Try something like this when queuing up the image:
batch_array.append(nparray.flatten())
And give the input stream configuration another once-over to make sure the dimensions and data types line up.
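If it helps, here's a minimal sketch of what the enqueue step could look like for a batch of 1. It reuses the names from your post (nparray, batch_array, input_queue); whether the queue wants a flattened array or the original (448, 448, 3) shape depends on how your utils file copies data into the Hailo input buffer, so treat it as a starting point rather than a drop-in fix:

import numpy as np

# Assumes a uint8 input layer and a batch size of 1; adjust if your HEF differs.
frame = np.ascontiguousarray(nparray, dtype=np.uint8)   # enforce a contiguous buffer
batch_array = [frame.flatten()]                          # or [frame] if the queue keeps the HWC shape
input_queue.put(batch_array)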
Let me know if you’re still stuck after trying these out!
Based on your description, it appears you’re on the right track with flattening the image array. However, the issue might lie in how you’re handling batches or populating the input queue. Here are a couple of suggestions to help troubleshoot:
Batch Dimensions: Double-check that your batch_array has the correct shape and aligns with your model’s expected input format (typically batch_size × input_size).
Queue Format: The input_queue.put(batch_array) method may require a specific input structure. It could expect a single flattened array or a particular data format. I’d recommend reviewing the relevant documentation or code to confirm the expected input format for this queue.
The key is ensuring that your entire batch is structured in a way that matches what your model and queue system expect. If you’ve already verified these aspects, please provide more details about your model’s input requirements and any error messages you’re seeing. This will help us pinpoint the issue more accurately.
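One concrete way to verify both points is to read the expected input shape straight from the HEF and compare byte counts right before the put() call. This is only a sketch: it assumes your utils file sits on top of the HailoRT Python API (hailo_platform), the file name is illustrative, and the uint8 assumption should be checked against your compiled model:

import numpy as np
from hailo_platform import HEF

hef = HEF("yolov8m.hef")                         # illustrative path
info = hef.get_input_vstream_infos()[0]
print(info.name, info.shape)                     # should report the 448x448x3 input layer

# 602112 in the error message equals 448*448*3, i.e. one uint8 HWC frame;
# a float32 frame of the same shape would be 2408448 bytes and would also mismatch.
frame = np.ascontiguousarray(nparray, dtype=np.uint8)
assert frame.nbytes == 448 * 448 * 3, f"got {frame.nbytes} bytes, expected 602112"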
I've stripped everything down to the bare bones so I can follow each step, but the execution is a black box, and the error is clearly misleading because the buffer is not actually 0. I still can't get past it, even when using a batch size of 1 to keep things simpler.
I have tried it flattened, not flattened, with a batch dimension, without a batch dimension, with different data types, etc.
-I-----------------------------------------------
-I- Network Name
-I-----------------------------------------------
-I- IN: yolov8m/input_layer1
-I-----------------------------------------------
-I- OUT: yolov8m/yolov8_nms_postprocess
-I-----------------------------------------------
[ WARN:0@0.715] global cap.cpp:164 open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.10.0-dev) /app/Hailo-Application-Code-Examples/runtime/cpp/object_detection/general_detection_inference/opencv-4.x/modules/videoio/src/cap_images.cpp:274: error: (-215:Assertion failed) number < max_number in function 'icvExtractPattern'
terminate called after throwing an instance of 'char const*'
Aborted (core dumped)
Maybe the model is bad? It's a retrained yolov8m model, but it should be pretty standard. Here's the ONNX visualizer view of the input for the model the HEF was converted from:
It sounds like you’re encountering multiple issues when running inference on your retrained YOLOv8m model with the Hailo SDK. Here are some suggestions to help debug and resolve the problems:
Buffer Error: The buffer issue you’re seeing could be due to a mismatch in the input/output dimensions or data types. From the ONNX visualizer, your model expects input with the shape float32[1, 3, 448, 448] (batch size 1, 3 channels, 448x448 image), and output float32[1, 6, 4116]. Make sure you’re passing inputs with the correct shape and normalization (values between 0 and 1).
OpenCV Error: The (-215: Assertion failed) in icvExtractPattern is raised by OpenCV's image-sequence VideoCapture backend (CV_IMAGES in cap_images.cpp), so the path or filename pattern being handed to the capture is probably not what that backend expects. Double-check that the image itself loads correctly by simplifying the pipeline to basic image loading and display:
#include <opencv2/opencv.hpp>
#include <iostream>

// Basic sanity check: load and display a single image, bypassing VideoCapture entirely.
cv::Mat img = cv::imread("path_to_image");
if (img.empty()) {
    std::cerr << "Failed to load image" << std::endl;
    return -1;
}
cv::imshow("Image", img);
cv::waitKey(0);
Input Normalization: Make sure you’re resizing the input to 448x448 and normalizing the pixel values:
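A rough Python version of that preprocessing is below. Two caveats: the path and variable names are illustrative, and whether you should scale to 0-1 in software depends on whether normalization was folded into the HEF at compile time (the HEF may well expect raw uint8), so verify which variant your pipeline needs:

import cv2
import numpy as np

img = cv2.imread("path_to_image.jpg")                  # illustrative path
img = cv2.resize(img, (448, 448))                      # match the model's input resolution
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)             # most YOLO exports are trained on RGB
img_float = img.astype(np.float32) / 255.0             # 0-1 normalization, if the model expects float input
img_uint8 = np.ascontiguousarray(img)                  # raw uint8, if normalization happens on-chip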
Hailo SDK Debugging: Ensure the HEF file was generated and loaded correctly in the Hailo CLI. You could also try a simpler inference pipeline to isolate where the issue is occurring.
Post-Processing: Ensure your post-processing logic (like NMS or bounding box extraction) matches the model’s output format.
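As a side note, the ONNX output shape [1, 6, 4116] is consistent with a stock YOLOv8 head at 448x448 with 2 classes (an inference on my part; verify against your training config), and the HEF output will look different anyway because yolov8_nms_postprocess runs on-device:

# Why [1, 6, 4116] is plausible for a stock YOLOv8 head at 448x448 with 2 classes:
strides = [8, 16, 32]
cells = sum((448 // s) ** 2 for s in strides)    # 56*56 + 28*28 + 14*14 = 4116 prediction cells
channels = 4 + 2                                 # 4 box coordinates + 2 class scores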
If the retrained model introduces changes like different image sizes or class definitions, this could lead to issues with your current pipeline. Testing the model with ONNX Runtime or a standalone Python inference might help rule out model-specific issues; see the sketch below. Let me know if you need further details on debugging a specific part!
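For example, a quick ONNX Runtime check along these lines should confirm the exported graph behaves as the visualizer suggests (the file path is illustrative):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov8m.onnx")            # illustrative path
inp = sess.get_inputs()[0]
print(inp.name, inp.shape, inp.type)                   # expect [1, 3, 448, 448], float32

dummy = np.random.rand(1, 3, 448, 448).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])                      # expect [(1, 6, 4116)] per the visualizer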