Multi-device Hailo-8 Python API

Good afternoon!
At the moment I am testing the code from the HailoRT 4.20.0 User Guide using the multi-device Python API. The model is yolov8m.hef, compiled from the Hailo Model Zoo.
When I run this code, I get no detection results at all.
Could you please tell me what the problem might be?

import numpy as np
from hailo_platform import (
    HEF,
    Device,
    VDevice,
    HailoStreamInterface,
    InferVStreams,
    ConfigureParams,
    InputVStreamParams,
    OutputVStreamParams,
    InputVStreams,
    OutputVStreams,
    FormatType,
    HailoSchedulingAlgorithm
)
import cv2
from threading import Thread


def infer(network_group, input_vstreams_params, output_vstreams_params, input_data):
    rep_count = 100
    with InferVStreams(network_group, input_vstreams_params, output_vstreams_params) as infer_pipeline:
        for i in range(rep_count):
            infer_results = infer_pipeline.infer(input_data)
            print(f"Iteration {i + 1}/{rep_count}")
            for stream_name, result in infer_results.items():
                if hasattr(result, 'shape'):
                    print(f"Results for {stream_name}: shape = {result.shape}")
                else:
                    print(f"Results for {stream_name}: {result}")

def create_vdevice_and_infer(hef):
    params = VDevice.create_params()
    params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN
    params.multi_process_service = True
    params.device_count = 2
    params.group_id = "SHARED"

    with VDevice(params=params) as target:
        configure_params = ConfigureParams.create_from_hef(hef=hef,
                                                           interface=HailoStreamInterface.PCIe)
        # model_name = hef.get_network_group_names()[0]
        # batch_size = 1
        # configure_params[model_name].batch_size = batch_size
        
        network_groups = target.configure(hef, configure_params)
        network_group = network_groups[0]

        input_vstreams_params = InputVStreamParams.make(
            network_group,
            format_type=FormatType.FLOAT32,
        )
        output_vstreams_params = OutputVStreamParams.make(
            network_group,
            format_type=FormatType.FLOAT32
        )

        input_vstream_info = hef.get_input_vstream_infos()[0]        
        image_height, image_width, channels = input_vstream_info.shape

        image_path = 'yolo_on_hailo8/input-images/input_image0.jpeg'
        image = cv2.imread(image_path)
        if image is None:
            raise FileNotFoundError(f"Image {image_path} not found")

        resized_image = cv2.resize(image, (image_width, image_height))
        rgb_image = cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB)
        # Add batch dimension and normalize to float32 in [0, 1]
        input_data = np.expand_dims(rgb_image, axis=0).astype(np.float32) / 255.0
        input_data = {input_vstream_info.name: input_data}
        
        infer(network_group, input_vstreams_params, output_vstreams_params, input_data)


if __name__ == "__main__":
    hef_path = "yolov8x.hef"
    hef = HEF(hef_path)

    infer_process = Thread(target=create_vdevice_and_infer, args=(hef,))
    infer_process.start()
    infer_process.join()
    print('Done inference')

Hey @alex.mtnkv ,

There are several potential reasons why your YOLOv8m model compiled from Hailo Model Zoo is not producing detection results. Here are the most likely causes and how to fix them:

Possible Causes & Fixes

1. Post-Processing is Missing

The YOLOv8 model outputs raw predictions that need post-processing to obtain actual bounding boxes, class IDs, and confidence scores. Take a look at our Python Async API, which is the latest and recommended way to run inference from Python.
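As a rough illustration (not the Model Zoo's actual post-processing), a minimal confidence-plus-NMS pass over raw YOLO-style predictions might look like the sketch below; the box layout, scores, and the 0.45 IoU threshold are assumptions for the example:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; boxes are [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the best box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping the kept one
    return keep

# Toy example: two heavily overlapping boxes and one separate box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 60, 60]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
print(nms(boxes, scores))  # the lower-scoring overlapping box is suppressed
```

Without a step like this (or the NMS that can be compiled into the HEF itself), the raw tensors will never show up as detections.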

2. Incorrect Preprocessing of Input Image

Your preprocessing pipeline must match what was expected during model training and compilation. The issue might be in:

  • Normalization: The input image is converted to float32 and divided by 255 (input_data.astype(np.float32) / 255.0). However, YOLOv8m models typically expect uint8 inputs.
  • Data Format: Some models expect NCHW format, but your input may be in NHWC.

Fix
Modify the preprocessing to match the model:

input_data = np.expand_dims(rgb_image, axis=0).astype(np.uint8)  # Keep as uint8
input_data = {input_vstream_info.name: input_data}
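If the compiled model turns out to expect NCHW rather than NHWC, the layout conversion is a single transpose; the 640x640 shape below is only an illustrative assumption:

```python
import numpy as np

nhwc = np.zeros((1, 640, 640, 3), dtype=np.uint8)  # batch, height, width, channels
nchw = np.transpose(nhwc, (0, 3, 1, 2))            # batch, channels, height, width
print(nchw.shape)  # (1, 3, 640, 640)
```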

Check the expected format in your YOLOv8m.hef by running:

print(hef.get_input_vstream_infos())

3. Incorrect Data Format in Output Streams

Your output format is set to FLOAT32, but YOLO models compiled to HEF typically produce quantized UINT8 or INT8 outputs.

Fix
Modify the output format:

output_vstreams_params = OutputVStreamParams.make(
    network_group,
    format_type=FormatType.UINT8  # Change to UINT8 or INT8
)
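Keep in mind that quantized UINT8/INT8 values are only meaningful after dequantization. A minimal sketch of the arithmetic is below; the zero-point and scale are made-up illustrations, since the real values come from the model's quantization info:

```python
import numpy as np

qp_zp, qp_scale = 128.0, 0.02  # hypothetical quantization parameters
raw = np.array([128, 178, 78], dtype=np.uint8)  # hypothetical raw quantized output
dequant = (raw.astype(np.float32) - qp_zp) * qp_scale
print(dequant)  # [ 0.  1. -1.]
```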

4. Checking Inference Output

You are printing only the shape of the results:

print(f"Results for {stream_name}: shape = {result.shape}")

This does not confirm that valid detections exist.

Fix
Print actual values:

print(f"Results for {stream_name}: {result}")
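A quick way to confirm whether any valid detections exist is to count rows whose score clears a confidence threshold. The (N, 5) [x1, y1, x2, y2, score] layout and the 0.25 threshold below are assumptions for illustration; adapt them to the actual output tensor layout of your HEF:

```python
import numpy as np

# Hypothetical post-processed result: one confident box, one noise box
result = np.array([
    [10, 10, 50, 50, 0.92],
    [20, 20, 60, 60, 0.10],
], dtype=np.float32)

conf_thresh = 0.25
detections = result[result[:, 4] >= conf_thresh]  # keep confident rows only
print(f"{len(detections)} detection(s) above {conf_thresh}")
```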