Hi there,
I’ve been trying to get into the Hailo-8 workflow and started from a very simple CNN model.
Even though such things are not strictly part of my job description, I’ve always managed to get CNN models from any framework and model type running on different runtime engines. But I’m struggling and failing with the Hailo tools, and, let’s say, I’m not happy with the tooling so far.
I’m using the DFC docker image (hailo_ai_sw_suite_2024-10). I managed to parse the model and save it as a HAR file without any complaints. In the optimize step, I receive odd TensorFlow messages:
I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci
I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype int32
[[{{node Placeholder/_0}}]]
I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype float and shape [1,224,224,1]
[[{{node Placeholder/_0}}]]
and I don’t know whether these are relevant. I do end up with a quantized model file, though, which I can compile and map to the Hailo-8 device.
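For reference, here is a minimal sketch of what I ran for the parse/optimize/compile steps (file names, the `input` node name, and the random calibration set are placeholders for my actual ones):

```python
import numpy as np
from hailo_sdk_client import ClientRunner

# Parse: translate the ONNX model into a Hailo HAR.
# The ONNX input node name and shape below are placeholders.
runner = ClientRunner(hw_arch='hailo8')
runner.translate_onnx_model(
    'super-resolution-10.onnx', 'super-resolution-10',
    net_input_shapes={'input': [1, 1, 224, 224]})
runner.save_har('super-resolution-10.har')

# Optimize: quantize using a calibration set in NHWC layout.
# Random data here just stands in for my real calibration images.
calib_data = np.random.rand(64, 224, 224, 1).astype(np.float32)
runner.optimize(calib_data)
runner.save_har('super-resolution-10_quantized.har')

# Compile: produce the HEF that gets loaded onto the device.
hef_binary = runner.compile()
with open('super-resolution-10.hef', 'wb') as f:
    f.write(hef_binary)
```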
When I try to use the model for inference on the physical device, I receive error messages:
[HailoRT] [error] CHECK failed - Memory size of vstream super-resolution-10/input_layer1 does not match the frame count! (Expected 1605632, got 7168)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_ARGUMENT(2)
…
hailo_platform.pyhailort.pyhailort.HailoRTInvalidArgumentException: Exception encountered when calling layer 'hw_inference_model' (type HWInferenceModel). Invalid argument. See hailort.log for more information
Call arguments received by layer 'hw_inference_model' (type HWInferenceModel):
• inputs=tf.Tensor(shape=(8, 224, 1), dtype=float32)
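For completeness, this is roughly my inference script (simplified; the dummy zero frame and file name stand in for my real preprocessing):

```python
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

hef = HEF('super-resolution-10.hef')
with VDevice() as target:
    # Configure the network group on the device over PCIe.
    params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, params)[0]
    ng_params = network_group.create_params()

    # Host-side vstream formats: uint8 in, float32 out.
    in_params = InputVStreamParams.make(network_group, format_type=FormatType.UINT8)
    out_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    in_info = hef.get_input_vstream_infos()[0]
    frame = np.zeros((1, 224, 224, 1), dtype=np.uint8)  # dummy frame, batch dim included

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(ng_params):
            results = pipeline.infer({in_info.name: frame})
```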
The input tensor shape was supposed to be (1, 224, 224, 1) or (1, 1, 224, 224) with dtype uint8. I have no idea where to look now, or at which stage I messed things up.
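The only consistency check I’ve found so far is to dump what the compiled HEF itself expects on the host side; assuming I’m reading the vstream infos correctly, that looks like this:

```python
from hailo_platform import HEF

hef = HEF('super-resolution-10.hef')
# Print the host-side (vstream) shape and format HailoRT expects per stream.
for info in hef.get_input_vstream_infos():
    print('in: ', info.name, info.shape, info.format.type)
for info in hef.get_output_vstream_infos():
    print('out:', info.name, info.shape, info.format.type)
```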
I’ve read through the Hailo Dataflow Compiler User Guide and the tutorial notebooks shipped with the docker image, and pretty much followed what was done there.
Any useful directions on how to debug the consistency of each step, and what’s important to check to make sure everything is on the right track?
Thanks!