How to set VDevice Network Group Name

With my infer method below, my HailoRT log is filling up with:

[2025-08-14 15:59:27.259] [2844] [HailoRT] [info] [hef.cpp:1994] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: model
[2025-08-14 15:59:27.364] [2844] [HailoRT] [info] [hef.cpp:1994] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: model
[2025-08-14 15:59:27.463] [2844] [HailoRT] [info] [hef.cpp:1994] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: model
[2025-08-14 15:59:27.564] [2844] [HailoRT] [info] [hef.cpp:1994] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: model


# Imports this snippet relies on (HailoRT Python API plus NumPy):
import numpy as np
from functools import partial
from hailo_platform import HEF, VDevice, HailoSchedulingAlgorithm, FormatType

number_of_frames = 100  # placeholder value
timeout_ms = 10000      # placeholder value


def infer(should_use_multi_process_service=False, model_path=None):
    # Create a VDevice
    params = VDevice.create_params()
    params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN
    params.group_id = "SHARED"
    if should_use_multi_process_service:
        params.multi_process_service = should_use_multi_process_service

    with VDevice(params) as vdevice:
        hef = HEF(model_path)
        network_group_names = hef.get_network_group_names()

        # Create an infer model from an HEF:
        infer_model = vdevice.create_infer_model(model_path, name=network_group_names[0])

        # Set optional infer model parameters
        infer_model.set_batch_size(1)

        # For a single input / output model, the input / output object
        # can be accessed without a name; a name can also be passed explicitly
        if "model.hef" in model_path:
            infer_model.input("model/input_layer1").set_format_type(FormatType.UINT8)
            infer_model.output().set_format_type(FormatType.FLOAT32)
        else:
            infer_model.input("vits_indoor_224_224/input_layer1").set_format_type(FormatType.UINT8)
            infer_model.output().set_format_type(FormatType.UINT8)

        # Once the infer model is set, configure the infer model
        with infer_model.configure() as configured_infer_model:
            for _ in range(number_of_frames):
                # Create bindings for it and set buffers
                bindings = configured_infer_model.create_bindings()
                bindings.input().set_buffer(
                    np.empty(infer_model.input().shape, dtype=np.uint8))
                output_dtype = np.float32 if "model.hef" in model_path else np.uint8
                bindings.output().set_buffer(
                    np.empty(infer_model.output().shape, dtype=output_dtype))

                # Wait for the async pipeline to be ready, and start an async inference job
                configured_infer_model.wait_for_async_ready(timeout_ms=10000)

                # Any callable can be passed as callback (lambda, function, functools.partial), as long
                # as it has a keyword argument "completion_info"
                job = configured_infer_model.run_async([bindings], partial(example_callback, bindings=bindings))

            # Wait for the last job
            job.wait(timeout_ms)
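As a side note, the callback contract mentioned in the code comment ("any callable can be passed as callback ... as long as it has a keyword argument `completion_info`") can be sketched in plain Python. `example_callback` and the dummy values below are illustrative stand-ins, not HailoRT API:

```python
from functools import partial

results = []


def example_callback(completion_info, bindings=None):
    # Real code would check completion_info for errors and then read the
    # output buffer back from bindings; here we only record the call.
    results.append((completion_info, bindings))


# Bind the extra argument up front, as run_async does above; the runtime
# later invokes the callable with completion_info as a keyword argument.
callback = partial(example_callback, bindings="dummy-bindings")
callback(completion_info="success")
print(results)  # [('success', 'dummy-bindings')]
```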

I found this forum post: PythonAPI: hailort.log polluted by 'No name was given' every video-frame - #3 by omria

But that approach does not fit the way I am currently running inference, and I can't find any documentation on setting the network group name through the VDevice object as I use it above. Could someone give me some guidance? The HailoRT log is spammed with this message on every inference, which makes debugging hard. Thank you!

Hey @connor.malley,

Welcome back to the Hailo Community!

For both the Python and C++ inference APIs, we suggest following the approach in our application code examples. Please check out how we do it here: Hailo-Application-Code-Examples/runtime/hailo-8/python/common/hailo_inference.py at main · hailocs/Hailo-Application-Code-Examples · GitHub
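A separate workaround for the log spam itself is to raise HailoRT's logger level via environment variables set before `hailo_platform` is imported. The variable names below appear in the HailoRT documentation, but treat them as assumptions and verify them against your installed HailoRT version:

```python
# Assumed HailoRT environment variables (verify against your HailoRT version):
# set them before importing hailo_platform so the logger picks them up.
import os

os.environ["HAILORT_LOGGER_PATH"] = "NONE"              # assumed: disable the hailort.log file
os.environ["HAILORT_CONSOLE_LOGGER_LEVEL"] = "warning"  # assumed: console shows warning and above

# ... only now import hailo_platform and run the infer() code from the question.
```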