Hi community,
I’m inferencing the yolov8 instance segmentation model and depth model parallel (schedular) on the device Hailo-8, I manually modified from (hailo_models_zoo) post-processing functions for the segmentation model, But the issue is that the post-processing function takes a longer time than the model inference time, I see potential solutions is using libyolo_post.so functions available on Tappas, using GStreamer can significantly reduce post-processing time, but I planing use multiprocessing instead of GStreamer pipelines, is there possible suggestions using libyolo**_post.so function without GStreamer?