How to increase the HAILO_TIMEOUT value when using vstream C++ code

If you are using one of our C++ code examples based on vstreams to run inference on the Hailo-8 and you are getting errors such as:

[HailoRT] [error] Got HAILO_TIMEOUT while waiting for output stream buffer yolov8s_pose/conv58
[HailoRT] [error] Got HAILO_TIMEOUT while waiting for output stream buffer yolov8s_pose/conv70
[HailoRT] [error] Got HAILO_TIMEOUT while waiting for output stream buffer yolov8s_pose/conv57
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwReadEl2yolov8s_pose/conv58 (D2H) failed with status=HAILO_TIMEOUT(4)

then it means that reading from the output vstream took longer than the default timeout of HAILO_DEFAULT_VSTREAM_TIMEOUT_MS (10 s) before the inference produced any data.
This can happen if the capture system feeding frames to the Hailo device pauses for more than 10 seconds.
To cope with this issue, you can set the output vstream timeout to HAILO_INFINITE.
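
For reference, the default timeout is a constant in the HailoRT headers. Assuming the 10 s figure above, its definition looks like the sketch below; check hailo/hailort.h in your HailoRT version for the exact form:

#define HAILO_DEFAULT_VSTREAM_TIMEOUT_MS (10000) // 10 seconds, in milliseconds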

If we take the general_detection_inference example from the hailo-ai/Hailo-Application-Code-Examples repository on GitHub (Hailo-Application-Code-Examples/runtime/cpp/object_detection/general_detection_inference at main),
you'll need to replace the code below, found at line 388 of detection_inference.cpp:

auto vstreams_exp = VStreamsBuilder::create_vstreams(*network_group, QUANTIZED, FORMAT_TYPE);
if (!vstreams_exp) {
    std::cerr << "Failed creating vstreams " << vstreams_exp.status() << std::endl;
    return vstreams_exp.status();
}
auto vstreams = vstreams_exp.release();

with:

// Input VStream Params: keep the default timeout on the input side
auto input_vstream_params = network_group->make_input_vstream_params(QUANTIZED, FORMAT_TYPE, HAILO_DEFAULT_VSTREAM_TIMEOUT_MS, HAILO_DEFAULT_VSTREAM_QUEUE_SIZE);
if (!input_vstream_params) {
    std::cerr << "Failed to make input_vstream_params: " << input_vstream_params.status() << std::endl;
    return input_vstream_params.status();
}

// Output VStream Params: note the HAILO_INFINITE timeout on the output side
auto output_vstream_params = network_group->make_output_vstream_params(QUANTIZED, FORMAT_TYPE, HAILO_INFINITE, HAILO_DEFAULT_VSTREAM_QUEUE_SIZE);
if (!output_vstream_params) {
    std::cerr << "Failed to make output_vstream_params: " << output_vstream_params.status() << std::endl;
    return output_vstream_params.status();
}

// Input VStreams
auto input_vstreams_exp = VStreamsBuilder::create_input_vstreams(*network_group, input_vstream_params.value());
if (!input_vstreams_exp) {
    std::cerr << "Failed to create input vstreams: " << input_vstreams_exp.status() << std::endl;
    return input_vstreams_exp.status();
}

// Output VStreams
auto output_vstreams_exp = VStreamsBuilder::create_output_vstreams(*network_group, output_vstream_params.value());
if (!output_vstreams_exp) {
    std::cerr << "Failed to create output vstreams: " << output_vstreams_exp.status() << std::endl;
    return output_vstreams_exp.status();
}

// Release the Expected wrappers into the Input/Output VStream vectors
auto input_vstreams = input_vstreams_exp.release();
auto output_vstreams = output_vstreams_exp.release();
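
If waiting forever is undesirable (for example, you prefer the application to fail on a truly stalled pipeline), you can pass a larger finite timeout in milliseconds instead of HAILO_INFINITE. A minimal sketch using the same call as above; the 60 s value is only an illustrative assumption:

// Hypothetical: wait up to 60 s for output data instead of indefinitely
constexpr uint32_t OUTPUT_VSTREAM_TIMEOUT_MS = 60 * 1000;
auto output_vstream_params = network_group->make_output_vstream_params(QUANTIZED, FORMAT_TYPE, OUTPUT_VSTREAM_TIMEOUT_MS, HAILO_DEFAULT_VSTREAM_QUEUE_SIZE);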

You’ll have to pass input_vstreams and output_vstreams to run_inference():

status = run_inference<uint8_t>(
    std::ref(input_vstreams),
    std::ref(output_vstreams),
    input_path,
    write_time_vec, inference_time, postprocess_end_time,
    frame_count, org_height, org_width, image_num);
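
Since run_inference() now receives separate input and output vstream vectors rather than the pair returned by create_vstreams(), its declaration has to be adjusted to match. A minimal sketch of what that declaration could look like; the first two parameters are the point here, while the trailing parameter types merely mirror the variables passed above and are assumptions, not the exact types from the example:

// Hedged sketch of the adjusted declaration; timing/geometry parameter
// types are assumed for illustration only.
template <typename T>
hailo_status run_inference(std::vector<hailort::InputVStream> &input_vstreams,
                           std::vector<hailort::OutputVStream> &output_vstreams,
                           const std::string &input_path,
                           std::vector<std::chrono::duration<double>> &write_time_vec,
                           std::chrono::duration<double> &inference_time,
                           std::chrono::time_point<std::chrono::system_clock> &postprocess_end_time,
                           size_t frame_count, uint32_t org_height, uint32_t org_width,
                           size_t image_num);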