Multi Network Process Initialization Issue in Python

Hi,
I am trying to work with multiple processes. When I set
multi_process_service = True while creating the VDevice, it does not work.

This is the only line I added in HailoAsyncInference from HailoApplicationCode.

I am trying to implement the example from the website:

import numpy as np
from functools import partial
from hailo_platform import VDevice, HailoSchedulingAlgorithm, FormatType

number_of_frames = 4
timeout_ms = 10000

def example_callback(completion_info, bindings):
    # Called when an async job finishes. The result is available via
    # bindings.output().get_buffer() unless completion_info.exception is set.
    if completion_info.exception:
        # handle exception
        pass

def infer(should_use_multi_process_service):
    # Create a VDevice
    params = VDevice.create_params()
    params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN
    params.group_id = "SHARED"
    if should_use_multi_process_service:
        params.multi_process_service = True

    with VDevice(params) as vdevice:

        # Create an infer model from an HEF:
        infer_model = vdevice.create_infer_model('../hefs/resnet_v1_18.hef')

        # Set optional infer model parameters
        infer_model.set_batch_size(2)

        # For a single input / output model, the input / output object
        # can be accessed with a name parameter ...
        infer_model.input("input_layer1").set_format_type(FormatType.FLOAT32)
        # ... or without
        infer_model.output().set_format_type(FormatType.FLOAT32)

        # Once the infer model is set, configure the infer model
        with infer_model.configure() as configured_infer_model:
            for _ in range(number_of_frames):
                # Create bindings for it and set buffers
                bindings = configured_infer_model.create_bindings()
                bindings.input().set_buffer(np.empty(infer_model.input().shape).astype(np.float32))
                bindings.output().set_buffer(np.empty(infer_model.output().shape).astype(np.float32))

                # Wait for the async pipeline to be ready, and start an async inference job
                configured_infer_model.wait_for_async_ready(timeout_ms=timeout_ms)

                # Any callable can be passed as callback (lambda, function, functools.partial), as long
                # as it has a keyword argument "completion_info"
                job = configured_infer_model.run_async([bindings], partial(example_callback, bindings=bindings))

            # Wait for the last job
            job.wait(timeout_ms)

I have 2 processes, as mentioned here…
https://hailo.ai/developer-zone/documentation/hailort-v4-19-0/?page=inference%2Finference.html#multi-process-service
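The two-process setup can be sketched with the standard multiprocessing module. This is only a skeleton: run_inference is a placeholder, and in the real code each process would create its own VDevice (with params.group_id = "SHARED" and params.multi_process_service = True) and run the infer() pipeline from the snippet above.

```python
import multiprocessing as mp

def run_inference(process_name):
    # Placeholder: in the real code, each process creates its own VDevice
    # (params.group_id = "SHARED", params.multi_process_service = True)
    # and runs the async inference loop shown above.
    return f"{process_name}: done"

def launch_two_processes():
    # Two independent OS processes; the HailoRT multi-process service
    # arbitrates their access to the shared device.
    with mp.Pool(processes=2) as pool:
        return pool.map(run_inference, ["proc-1", "proc-2"])
```
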

I am using a Hailo-8L on a Raspberry Pi.
HailoRT version is 4.18.0.

Hey @saurabh

It looks like the error you’re encountering (HAILO_RPC_FAILED) is related to the HailoRT multi-process service not being active. Here’s how you can troubleshoot and resolve the issue:

1. Ensure HailoRT Service is Running:

The HailoRT multi-process service needs to be running in the background to manage multiple processes for inference. Here’s how you can check if the service is running:

Step 1: Verify the status of the HailoRT service:

systemctl status hailort

If it’s not running, start it with the following command:

sudo systemctl start hailort

Step 2: If you encounter issues starting the service or it’s not installed, you can reinstall it:

sudo apt install hailort-service
sudo systemctl enable hailort
sudo systemctl start hailort
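If you want to script this check (for example, at the start of your application), you can wrap systemctl in a small helper. This is a sketch; it assumes a systemd unit named hailort, as in the commands above, and the helper names are hypothetical.

```python
import subprocess

def service_is_active(systemctl_output: str) -> bool:
    # `systemctl is-active <unit>` prints "active" when the unit is running.
    return systemctl_output.strip() == "active"

def check_hailort_service(unit: str = "hailort") -> bool:
    # Only meaningful on a machine with systemd and HailoRT installed.
    result = subprocess.run(
        ["systemctl", "is-active", unit],
        capture_output=True, text=True,
    )
    return service_is_active(result.stdout)
```
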

2. Ensure Version Compatibility:

Double-check that the version of HailoRT (v4.18.0 in your case) and other Hailo components (like TAPPAS) are compatible with the multi-process inference feature.

  • HailoRT supports the multi-process service, but it’s important that your entire SDK and related components are up to date and on matching versions.

3. Code Configuration:

From the code you’ve shared, your implementation looks correct, but ensure that this line is present where you create the VDevice:

params.multi_process_service = True

This line should be included when multi-process inference is intended.

4. Test with Single Process:

To isolate whether the issue is specifically with the multi-process service, try running your inference pipeline with single-process mode by commenting out the multi-process line:

# params.multi_process_service = True

If the inference works in single-process mode, the issue is most likely related to the multi-process service configuration.

5. Check HailoRT Logs:

If the issue persists, check the HailoRT logs for any additional details:

journalctl -u hailort
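To scan those logs programmatically, something like this sketch works. filter_errors does plain keyword matching, and recent_service_errors assumes systemd’s journalctl is available on the machine; both function names are hypothetical.

```python
import subprocess

def filter_errors(lines):
    # Keep journal lines that look like errors or failures.
    keywords = ("error", "fail")
    return [line for line in lines if any(k in line.lower() for k in keywords)]

def recent_service_errors(unit: str = "hailort"):
    # Requires systemd; pulls the last hour of logs for the unit.
    result = subprocess.run(
        ["journalctl", "-u", unit, "--since", "1 hour ago", "--no-pager"],
        capture_output=True, text=True,
    )
    return filter_errors(result.stdout.splitlines())
```
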

Let me know if this helps, or if you need any more assistance!

Best regards,
Omri

@omria
Single-process mode is working; I only have the issue when I set params.multi_process_service = True.

systemctl status hailort
sudo systemctl start hailort

Running these commands fixes the issue.
