Hailort.log unaligned buffer

Hello,

When trying to run the following code:

# import degirum and degirum_tools
import degirum as dg, degirum_tools

# set model name, inference host address, zoo url, token, and image source
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
image_source = "./assets/ThreePersons.jpg"
inference_host_address = "@local"  # set to @local if you want to run inference on your local machine
zoo_url = "degirum/models_hailort"
# token = degirum_tools.get_token()  # paste your token here, or leave it empty if running on a local machine
token = ""

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token
)

# perform AI model inference on given image source
print(f" Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print('Inference Results \n', inference_result)  # numeric results
print(inference_result)
print("Press 'x' or 'q' to stop.")

# show results of inference
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result)

I get this error:
[2025-02-08 22:48:50.741] [3117] [HailoRT] [warning] [vdma_stream.cpp:372] [read_async_impl] read_async() was provided an unaligned buffer (address=0x7fff58325000), which causes performance degradation. Use buffers algined to 16384 bytes for optimal performance
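For context, the message means the start address of the buffer handed to read_async() is not a multiple of 16384. The snippet below is only an illustration of what an "aligned" buffer is (assuming NumPy); in this setup the buffers are allocated internally by HailoRT/DeGirum, not by the script above:

import numpy as np

ALIGN = 16384          # alignment requested by the HailoRT warning
SIZE = 640 * 640 * 3   # example payload: one 640x640 RGB frame

# over-allocate by one alignment unit, then slice so the view
# starts on a 16384-byte boundary
raw = np.empty(SIZE + ALIGN, dtype=np.uint8)
offset = (-raw.ctypes.data) % ALIGN
aligned = raw[offset : offset + SIZE]

assert aligned.ctypes.data % ALIGN == 0  # address is a multiple of 16384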

Anybody else encountering this?

Hi @capytala.ro
We have not seen this error before. Is this an error or a warning? In other words, does the inference still work or does the program crash/quit?

Here is a screenshot of the hailort.log file.

Hi Shashi, the inference works just fine. The only downside is that this warning shows up and fills up the log file, so I ended up disabling logging with “export HAILORT_LOGGER_PATH=NONE”.
I am running a Hailo-8L board on a Raspberry Pi 5 (4 GB model).
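If it helps anyone, the same workaround can also be applied from inside the script, assuming the variable is set before degirum (and with it the HailoRT library) is first imported:

import os

# must run before degirum/HailoRT is imported,
# otherwise the logger is already initialized with the default log path
os.environ["HAILORT_LOGGER_PATH"] = "NONE"

import degirum as dg  # hailort.log is no longer written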

Hi @capytala.ro
Thanks for this information. We will try to replicate this; so far, we have not encountered this error. Could you check whether this happens only for this model or for all models?
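For example, a loop like the sketch below would exercise several models in one run (the model list is a placeholder; fill in the models actually present in your zoo):

import degirum as dg

# placeholder list -- replace with models from your zoo
models_to_test = [
    "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1",
    # add more model names here
]

for name in models_to_test:
    model = dg.load_model(
        model_name=name,
        inference_host_address="@local",
        zoo_url="degirum/models_hailort",
        token="",
    )
    model("./assets/ThreePersons.jpg")  # any HailoRT warnings go to hailort.log
    print(f"{name}: inference completed")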

Hi @shashi,
This happens with every model I try.