Hi everyone! I have a Raspberry Pi 5 with a Hailo-8 HAT, two image sensors, and a yolov8pose.hef model. I am currently able to get my frame data in a C++ project without problems and to run sequential inference on the images, following the code example at Hailo-Application-Code-Examples/runtime/hailo-8/cpp/pose_estimation/yolov8_pose/yolov8pose_example.cpp at main · hailo-ai/Hailo-Application-Code-Examples · GitHub.
However, for performance reasons I need to perform parallel inference: I get one frame from each sensor (assume they are already synchronized), send both to the Hailo device for processing, and get the two results at the same time. In the code example, I had trouble understanding this line:
auto input_thread(std::async(write_all, std::ref(input_vstream[0]), input_path, std::ref(write_time_vec), std::ref(frames), std::ref(cmd_img_num)));
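My understanding is that this wraps the example's write_all helper in a std::async call, so the frames are written to the first input vstream concurrently with the tasks that read the outputs, and input_thread is a std::future whose .get() later returns the write status. Below is a minimal sketch of that pattern as I read it; the types and the write_all signature are placeholders I inferred from the call site, not the exact ones from the example:

```cpp
#include <cstdint>
#include <future>
#include <string>
#include <vector>

// Placeholder stand-ins for hailort::InputVStream and the example's frame type;
// the real write_all preprocesses images and pushes them into the vstream.
struct InputVStream {};
using Frame = std::vector<uint8_t>;

int write_all(InputVStream &vstream, std::string input_path,
              std::vector<double> &write_time_vec,
              std::vector<Frame> &frames, std::string &cmd_img_num) {
    // ... read images from input_path, preprocess, write to vstream ...
    return 0;
}

int main() {
    std::vector<InputVStream> input_vstream(1);
    std::vector<double> write_time_vec;
    std::vector<Frame> frames;
    std::string input_path = "images/";
    std::string cmd_img_num = "100";

    // Mirrors the example's call: with the default launch policy write_all may run
    // on another thread (std::launch::async would force a dedicated thread).
    auto input_thread(std::async(write_all, std::ref(input_vstream[0]), input_path,
                                 std::ref(write_time_vec), std::ref(frames),
                                 std::ref(cmd_img_num)));

    // ... output/postprocessing tasks would run here ...

    return input_thread.get();  // blocks until write_all finishes
}
```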
I tried using two input threads, passing std::ref(input_vstream[0]) to one and std::ref(input_vstream[1]) to the other, but I got a segmentation fault (a simplified sketch of what I tried is shown below). Is this the correct way to obtain parallel inference?
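In simplified form, the approach I tried looks roughly like this. The helpers write_sensor_frames and read_results are placeholders for my own per-sensor write/read loops, and whether the configured network group really exposes two input vstreams for a single .hef is exactly the part I am unsure about:

```cpp
#include <future>
#include <vector>

// Placeholders for hailort::InputVStream / hailort::OutputVStream and my own
// per-sensor write/read loops; names are illustrative only.
struct InputVStream {};
struct OutputVStream {};

int write_sensor_frames(InputVStream &vstream, int sensor_id) {
    // ... write this sensor's synchronized frames into the vstream ...
    return 0;
}

int read_results(OutputVStream &vstream, int sensor_id) {
    // ... read and postprocess the pose results for this sensor ...
    return 0;
}

int main() {
    // In my real code these come from the configured network group, as in the example;
    // I am not sure two input vstreams actually exist there, hence my question.
    std::vector<InputVStream>  input_vstream(2);
    std::vector<OutputVStream> output_vstream(2);

    // One writer thread per sensor: this is where I hit the segmentation fault.
    auto in0 = std::async(std::launch::async, write_sensor_frames, std::ref(input_vstream[0]), 0);
    auto in1 = std::async(std::launch::async, write_sensor_frames, std::ref(input_vstream[1]), 1);

    // One reader thread per sensor.
    auto out0 = std::async(std::launch::async, read_results, std::ref(output_vstream[0]), 0);
    auto out1 = std::async(std::launch::async, read_results, std::ref(output_vstream[1]), 1);

    return in0.get() | in1.get() | out0.get() | out1.get();
}
```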
Thanks