Hi Hailo team,
I’ve been trying to run inference on a Raspberry Pi 5 with the Hailo-8L using both my own .hef and the officially provided cas_vit_s.hef, but I consistently receive the following error:

[HailoRT] [error] Trying to write to vstream before its network group is activated
HailoRTNetworkGroupNotActivatedException
Here’s what I’ve confirmed:
- My .hef file is compiled for HAILO8L, verified via hailortcli parse-hef
- I’m using a known-good .hef (cas_vit_s.hef) from the Model Zoo
- I’m calling network_group.activate() before creating InferVStreams
- I’m passing a uint8, contiguous NumPy tensor of the correct size (e.g., (384, 384, 384, 3) for cas_vit_s)
- I’ve tested using a minimal script (see below) and rebooted

Still, inference fails with the same activation error:
from pathlib import Path
import numpy as np
from hailo_platform import (
    HEF, VDevice, ConfigureParams, HailoStreamInterface,
    InputVStreamParams, OutputVStreamParams, InferVStreams
)

hef = HEF("cas_vit_s.hef")
vdev = VDevice()
config = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
network_group = vdev.configure(hef, config)[0]
network_group.activate()

input_params = InputVStreamParams.make(network_group)
output_params = OutputVStreamParams.make(network_group)
input_name = list(input_params.keys())[0]
output_name = list(output_params.keys())[0]

dummy_input = np.random.randint(0, 255, size=(384, 384, 384, 3), dtype=np.uint8)
dummy_input = np.ascontiguousarray(dummy_input)

with InferVStreams(network_group, input_params, output_params) as pipeline:
    output = pipeline.infer({input_name: dummy_input})
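For what it’s worth, this is how I double-checked the expected input layout directly from the .hef (I believe get_input_vstream_infos() / get_output_vstream_infos() are the right pyhailort calls; hailortcli parse-hef reports the same shapes):

from hailo_platform import HEF

hef = HEF("cas_vit_s.hef")
# Print the name and shape of every input/output vstream the HEF expects.
for info in hef.get_input_vstream_infos():
    print("input:", info.name, info.shape)
for info in hef.get_output_vstream_infos():
    print("output:", info.name, info.shape)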
Any idea what might be causing this consistent activation error, even in a minimal test?
Thanks so much for your help — and for this great platform.
I think I found the issue. I had missed this: with network_group.activate():. Calling activate() only returns an activation object, and the network group is only active while that with block is open. So this script now runs inference:
from pathlib import Path

import numpy as np  # type: ignore
from hailo_platform import (  # type: ignore
    HEF,
    ConfigureParams,
    HailoStreamInterface,
    InferVStreams,
    InputVStreamParams,
    OutputVStreamParams,
    VDevice,
)

# Load known-good Hailo-8L model
hef_path = Path("cas_vit_s.hef")
if not hef_path.exists():
    raise FileNotFoundError(hef_path)

hef = HEF(str(hef_path))
vdev = VDevice()
config = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
network_group = vdev.configure(hef, config)[0]

input_params = InputVStreamParams.make(network_group)
output_params = OutputVStreamParams.make(network_group)
input_name = list(input_params.keys())[0]
output_name = list(output_params.keys())[0]

# Create dummy input: batch of 384 NHWC images (384x384x3)
dummy_input = np.random.randint(0, 255, size=(384, 384, 384, 3), dtype=np.uint8)
dummy_input = np.ascontiguousarray(dummy_input)

with network_group.activate():
    with InferVStreams(network_group, input_params, output_params) as pipeline:
        output = pipeline.infer({input_name: dummy_input})
        result = output[output_name]
        print("✅ Inference succeeded")
        print("Result shape:", result.shape)
        print("Sample output:", result[0][:10])
omria
May 27, 2025, 1:37pm
Hey @Mats_Gustafsson,
Welcome to the Hailo Community!
Nice catch with the error. I would look into this (async inference, which is faster):
from typing import List, Generator, Optional, Tuple, Dict
from pathlib import Path
from functools import partial
import queue

from loguru import logger
import numpy as np
from hailo_platform import (HEF, VDevice,
                            FormatType, HailoSchedulingAlgorithm)

IMAGE_EXTENSIONS: Tuple[str, ...] = ('.jpg', '.png', '.bmp', '.jpeg')


class HailoAsyncInference:
    def __init__(
        self, hef_path: str, input_queue: queue.Queue,
        output_queue: queue.Queue, batch_size: int = 1,
        input_type: Optional[str] = None, output_type: Optional[Dict[str, str]] = None,
        send_original_frame: bool = False) -> None:
        """
        Initialize the HailoAsyncInference class with the provided HEF model
        file path and input/output queues.
This file has been truncated.
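Roughly, the wrapper is driven by pushing frames into the input queue from the main thread while a worker thread runs the inference loop. A sketch of that usage, assuming the full class (not shown above) exposes a run() method and puts (frame, result) pairs on the output queue as in the complete example; names and the queue format may differ slightly:

import queue
import threading
import numpy as np

input_queue: queue.Queue = queue.Queue()
output_queue: queue.Queue = queue.Queue()

# Assumes the HailoAsyncInference class from the (truncated) snippet above.
hailo_inference = HailoAsyncInference(
    hef_path="cas_vit_s.hef",
    input_queue=input_queue,
    output_queue=output_queue,
    batch_size=1,
)

# Run the inference loop in the background while the main thread feeds frames.
worker = threading.Thread(target=hailo_inference.run)
worker.start()

# Feed one dummy 384x384x3 frame as a batch of one, then a sentinel (None)
# to signal that there is no more input (sentinel convention assumed).
frame = np.random.randint(0, 255, size=(384, 384, 3), dtype=np.uint8)
input_queue.put([frame])
input_queue.put(None)

# Collect the result produced by the worker (output format assumed).
frame_out, result = output_queue.get()
print("Got result of type:", type(result))
worker.join()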
Thanks @omria!
I will definitely look into async inference. This is all pretty new to me, so I need to take one small step at a time.
Best regards, Mats