I am trying to run inference on Hailo-8 hardware with a custom-trained YOLOv8n model, using the code shown below:
from hailo_sdk_client import ClientRunner, InferenceContext

model_name = 'yolov8n'
compiled_model_har_path = f'{model_name}_compiled_model.har'
runner = ClientRunner(hw_arch='hailo8', har=compiled_model_har_path)
with runner.infer_context(InferenceContext.SDK_HAILO_HW) as ctx:
    nms_output = runner.infer(ctx, test_dataset_new)
It gives the following error message:
[HailoRT] [error] CHECK failed - UserBuffQEl13yolov8n_official/conv41 (D2H) failed with status=HAILO_TIMEOUT(4) (timeout=10000ms)
[HailoRT] [error] Failed waiting for threads with status HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - UserBuffQEl6yolov8n_official/conv63 (D2H) failed with status=HAILO_TIMEOUT(4) (timeout=10000ms)
[HailoRT] [error] CHECK failed - UserBuffQEl15yolov8n_official/conv52 (D2H) failed with status=HAILO_TIMEOUT(4) (timeout=10000ms)
[HailoRT] [error] CHECK failed - UserBuffQEl12yolov8n_official/conv42 (D2H) failed with status=HAILO_TIMEOUT(4) (timeout=10000ms)
[HailoRT] [error] Failed waiting for threads with status HAILO_TIMEOUT(4)
[HailoRT] [error] Failed waiting for threads with status HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - UserBuffQEl11yolov8n_official/conv62 (D2H) failed with status=HAILO_TIMEOUT(4) (timeout=10000ms)
[HailoRT] [error] Failed waiting for threads with status HAILO_TIMEOUT(4)
[HailoRT] [error] Failed waiting for threads with status HAILO_TIMEOUT(4)
Backend TkAgg is interactive backend. Turning interactive mode on.
To avoid compatibility issues, it's best to leave the DFC/Model Zoo venv for their own tools. Once you have a compiled model, use the HEF in a separate venv with HailoRT for inference.
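As a rough sketch of that separate-venv flow, assuming HailoRT's Python package (`hailo_platform`) is installed there, a HEF named `yolov8n.hef` has been exported from the compiled HAR, and the device is attached over PCIe (the file name, dummy input, and exact call sequence below are assumptions based on the standard HailoRT streaming-inference pattern; check the HailoRT examples for your installed version):

```python
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams,
                            HailoStreamInterface, InputVStreamParams,
                            OutputVStreamParams, InferVStreams, FormatType)

# Load the compiled HEF (path is an assumption for this sketch)
hef = HEF('yolov8n.hef')

with VDevice() as device:
    # Configure the device with the network from the HEF
    configure_params = ConfigureParams.create_from_hef(
        hef, interface=HailoStreamInterface.PCIe)
    network_group = device.configure(hef, configure_params)[0]
    network_group_params = network_group.create_params()

    # Build input/output virtual-stream parameters
    input_params = InputVStreamParams.make(
        network_group, format_type=FormatType.FLOAT32)
    output_params = OutputVStreamParams.make(
        network_group, format_type=FormatType.FLOAT32)

    # Dummy frame shaped like the model's input; replace with real data
    input_info = hef.get_input_vstream_infos()[0]
    frame = np.zeros((1, *input_info.shape), dtype=np.float32)

    with network_group.activate(network_group_params):
        with InferVStreams(network_group, input_params, output_params) as pipeline:
            results = pipeline.infer({input_info.name: frame})
```

Running inference this way, through HailoRT directly rather than through the DFC's `SDK_HAILO_HW` infer context, keeps the compilation and runtime environments isolated, which is the point of the advice above.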