[FIX] Running Hailo Dataflow Compiler on GPU under WSL

Running the Hailo Dataflow Compiler with GPU support on Windows under WSL is not officially supported, as evidenced by a few existing topics.

HOWEVER, the root issue is quite simple: the select_least_used_gpu() function does not return any GPUs because the default memory usage threshold is very small.
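To see what's going on, you can run the same nvidia-smi query the selector uses internally (see the patched code further down) and compute used/total yourself; if that ratio is above the default max_memory_utilization threshold (0.05 in the code below), the selector returns no GPU:

# Same query the GPU selector runs internally; check used/total per GPU
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv,noheader,nounits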

You can fix this with a simple one-liner from inside the container. It needs to be re-run each time you restart the container:

sed -i 's/max_memory_utilization=[0-9.]\+/max_memory_utilization=0.5/g' $HOME/.local/lib/python3.10/site-packages/hailo_model_optimization/acceleras/utils/nvidia_smi_gpu_selector.py
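To confirm the edit took effect, a quick sanity check against the same file:

# Should now show max_memory_utilization=0.5 in the function signature
grep -n max_memory_utilization $HOME/.local/lib/python3.10/site-packages/hailo_model_optimization/acceleras/utils/nvidia_smi_gpu_selector.py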

However, it’s better to patch the function so that it returns at least one GPU whenever one is available. I suggest building your own Docker image; I based mine on this one: (HailoConverter/Dockerfile at main · cyclux/HailoConverter · GitHub)

Make a file patch.txt with the following content in the same directory as the Dockerfile:

# Patched version to fix issue with GPU not working on WSL, goes in
# /home/$USER/.local/lib/python3.10/site-packages/hailo_model_optimization/acceleras/utils/nvidia_smi_gpu_selector.py

import subprocess

from hailo_model_optimization.acceleras.utils.logger import default_logger

logger = default_logger()


def select_least_used_gpu(max_memory_utilization=0.05):
    """
    Select the GPU with the lowest memory usage below the threshold.
    If none are below threshold, selects the least used GPU anyway.

    Args:
        max_memory_utilization: Maximum memory usage ratio (0-1). Default is 0.05 (5%)

    Returns:
        str: GPU index as string, e.g., '0' or '2', or None if no GPUs found
    """
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,memory.used,memory.total", "--format=csv,noheader,nounits"],
            stdout=subprocess.PIPE,
            text=True,
            check=True,
        )

        gpu_list = result.stdout.strip().split("\n")
        
        if not gpu_list or gpu_list[0] == '':
            logger.debug("No GPUs detected.")
            return None

        # Parse all GPUs and sort by usage ratio
        gpus = []
        for line in gpu_list:
            parts = line.split(",")
            index = int(parts[0].strip())
            name = parts[1].strip()
            used_mem = int(parts[2].strip())
            total_mem = int(parts[3].strip())
            usage_ratio = used_mem / total_mem
            gpus.append((usage_ratio, index, name, used_mem, total_mem))
        
        gpus.sort()  # Sort by usage_ratio (first element in tuple)

        # Select least used GPU
        usage_ratio, index, name, used_mem, total_mem = gpus[0]
        selected_gpu = str(index)
        
        if usage_ratio <= max_memory_utilization:
            logger.info(f"Selected GPU {selected_gpu} ({name}) - {usage_ratio:.1%} usage, {used_mem}MB/{total_mem}MB")
        else:
            logger.info(f"All GPUs above {max_memory_utilization:.1%} threshold. Selected GPU {selected_gpu} ({name}) - least used at {usage_ratio:.1%}, {used_mem}MB/{total_mem}MB")
        
        logger.debug(f"Selected GPU {selected_gpu} ({name})")
        return selected_gpu

    except subprocess.CalledProcessError as e:
        logger.debug(f"nvidia-smi command failed: {e}")
        return None
    except Exception as e:
        logger.debug(f"Unexpected error during GPU selection: {e}")
        return None

Apply it as a layer when building your Docker image (after the installation of the DFC):

# Apply Patch
COPY patch.txt /home/hailo/.local/lib/python3.10/site-packages/hailo_model_optimization/acceleras/utils/nvidia_smi_gpu_selector.py
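For reference, building and running the patched image could then look roughly like this (the image tag is just an example, not something from the HailoConverter repo):

# Build the patched image from the directory containing the Dockerfile and patch.txt
docker build -t hailo-dfc-patched .

# Run it with GPU access (add your usual volume mounts and options)
docker run --rm -it --runtime=nvidia --gpus all hailo-dfc-patched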

If it still isn’t working, make sure you’re running the Docker container with the NVIDIA runtime. Run this under your WSL distro and verify that nvidia-smi returns a reasonable result:

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
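Once the DFC container itself is up, you can also call the selector directly to check that it now picks a GPU; with the patch applied it should print an index such as 0 instead of None:

python3 -c "from hailo_model_optimization.acceleras.utils.nvidia_smi_gpu_selector import select_least_used_gpu; print(select_least_used_gpu())"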

Perhaps the developers can fix this issue in the next release.

Welcome to the Hailo Community!

Thank you for sharing.

I have forwarded the information to our developers.