Hello,
I would like to report an issue where Hailo Model Zoo (hailomz compile) consistently fails to detect the GPU and always falls back to CPU-only optimization, despite CUDA 13 and cuDNN 9.16 being properly installed and functional on the system.
This appears to be caused by the TensorFlow version bundled inside the Hailo AI Software Suite, which does not recognize the GPU on Ubuntu 24.04.
Below are all relevant system details and findings.
System Information

Hardware
- GPU: NVIDIA RTX 5060 Laptop GPU
- CPU: Intel Core Ultra 9 275HX
- RAM: 48 GB

Operating System
- Ubuntu 24.04 Desktop (clean installation)
- Kernel: 6.14.0-36-generic
- Secure Boot: Disabled
NVIDIA / CUDA / cuDNN Setup
NVIDIA Driver
- Driver Version: 580.95.05
- CUDA Version: 13.0
CUDA Toolkit Installation
Installed following NVIDIA’s official instructions:
```
sudo dpkg -i cuda-repo-ubuntu2404-13-0-local_13.0.2-580.95.05-1_amd64.deb
sudo apt install cuda-toolkit-13-0
```
Toolkit confirmed installed:
/usr/local/cuda-13.0/bin/nvcc
cuDNN Installation
Installed from NVIDIA's official .deb repository:

```
sudo apt install cudnn9 cudnn9-cuda-13 libcudnn9-dev-cuda-13
```

The headers exist at `/usr/include/x86_64-linux-gnu/cudnn_version.h`, and the libraries are visible via:

```
ldconfig -p | grep libcudnn
```
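As a side note, the version number a checker reads comes from the `CUDNN_MAJOR` / `CUDNN_MINOR` / `CUDNN_PATCHLEVEL` macros in that header. A small illustrative parser (sample header text inlined here; on the real system one would read `/usr/include/x86_64-linux-gnu/cudnn_version.h` instead) shows how 9.16 is encoded:

```python
import re

# Sample of the macros found in cudnn_version.h (values illustrative).
SAMPLE_HEADER = """
#define CUDNN_MAJOR 9
#define CUDNN_MINOR 16
#define CUDNN_PATCHLEVEL 0
"""

def cudnn_version(header_text: str) -> str:
    """Extract MAJOR.MINOR.PATCHLEVEL from cudnn_version.h-style text."""
    fields = dict(re.findall(
        r"#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)\s+(\d+)", header_text))
    return "{MAJOR}.{MINOR}.{PATCHLEVEL}".format(**fields)

print(cudnn_version(SAMPLE_HEADER))  # 9.16.0
```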
Hailo AI Software Suite
Installed using:
hailo_ai_sw_suite_2025-10.run
Hailo virtual environment:
~/hailo_ai_sw_suite/hailo_venv
The system requirements script reports everything OK:

```
V | GPU driver version is 580.
V | CUDA version is 13.0.
V | CUDNN version is 9.16.
V | All required packages found.
```
Problem: hailomz compile always says “no available GPU”
Running:

```
hailomz compile yolov8s \
  --ckpt best.onnx \
  --hw-arch hailo8 \
  --calib-path ./calib_images \
  --classes 1
```

results in:

```
[warning] Reducing optimization level to 0 ... because there's no available GPU
```
This happens even though:
- `nvidia-smi` shows the GPU is available
- CUDA Toolkit 13.0 is installed
- cuDNN 9.16 is installed
- The Hailo system check reports success
Root Cause Identified: TensorFlow inside hailo_venv cannot load CUDA
Hailo determines GPU availability via:
hailo_model_optimization/acceleras/utils/tf_utils.py
TensorFlow checks GPU usability roughly like this:

```python
gpus = tf.config.list_physical_devices("GPU")
tf.config.experimental.set_memory_growth(gpus[0], True)
with tf.device("/GPU:0"):
    tf.constant(1.0)
```
I tested this manually inside hailo_venv:

```
python - << 'EOF'
import tensorflow as tf
print("TF version:", tf.__version__)
print("Physical GPUs:", tf.config.list_physical_devices("GPU"))
EOF
```

The output shows TensorFlow cannot load CUDA:

```
Could not find cuda drivers on your machine, GPU will not be used.
Skipping registering GPU devices...
Physical GPUs: []
```
So TensorFlow cannot use CUDA or cuDNN at runtime, even though:
- The system CUDA installation is correct
- cuDNN is installed and detected by `ldconfig`
- Hailo's own system checker recognizes CUDA and cuDNN
This causes:

`has_gpu = False` → `optimization_level = 0`
Therefore Hailo Model Optimization always runs on CPU.
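My suspicion is a soname mismatch: a TensorFlow wheel built against an older CUDA dlopens version-specific sonames at runtime and never sees the CUDA 13 libraries installed system-wide. A quick `ctypes` probe (the sonames below are illustrative; the exact names depend on which CUDA the bundled TF was built against) shows which libraries actually resolve on the machine:

```python
import ctypes

def can_dlopen(soname: str) -> bool:
    """Return True if the shared library resolves via the dynamic loader."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# Illustrative sonames: a TF wheel built against CUDA 12 looks for the
# *.so.12 variants and will not pick up the CUDA 13 libraries present here.
for soname in ("libcudart.so.12", "libcudart.so.13", "libcudnn.so.9"):
    print(f"{soname}: {'found' if can_dlopen(soname) else 'NOT found'}")
```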
Questions
1. Which CUDA / cuDNN versions are officially supported by the TensorFlow included in the Hailo AI Suite?
The system checker validates only minimum versions, but TensorFlow runtime compatibility appears to be different.
2. Is there a supported method to replace or override the bundled TensorFlow wheel with one compatible with CUDA 13.0 + cuDNN 9.16? For example:
- A custom-compiled TensorFlow 2.18+
- A wheel built against CUDA 13
- NVIDIA-provided `nvidia-tensorflow` builds
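For context, the kind of override I have in mind would look like the sketch below (untested against the Suite, likely unsupported, and it may break the dependency pins inside hailo_venv; `tensorflow[and-cuda]` currently ships CUDA 12 user-mode wheels, which should still run on the newer 580 driver since drivers are backward compatible):

```shell
# Hypothetical, likely unsupported: swap the bundled TF wheel inside hailo_venv.
source ~/hailo_ai_sw_suite/hailo_venv/bin/activate
# The [and-cuda] extra pulls matching nvidia-* runtime wheels via pip.
pip install --upgrade "tensorflow[and-cuda]"
# Verify whether the replacement wheel now sees the GPU.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```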
3. Will future Hailo SDK releases provide TensorFlow builds compatible with Ubuntu 24.04 + CUDA 13?
GPU acceleration significantly impacts performance during model optimization.
Thank you
I’m happy to provide more logs or environment details if needed.
Thank you for your assistance.