Tensors are in different format in hailonet and Hailo async Inference callback

Hi,
As explained in tappas/docs/write_your_own_application/write-your-own-python-postprocess.rst (hailo-ai/tappas on GitHub), calling `my_array = np.array(my_tensor, copy=False)` automatically dequantizes the tensor from its integer format to float.
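To illustrate what that dequantization does under the hood, here is a minimal NumPy sketch of the standard affine scheme, `float = (int - zero_point) * scale`. The `qp_scale` and `qp_zp` values below are hypothetical placeholders; the real values come from the tensor's quantization info in the HEF.

```python
import numpy as np

# Hypothetical quantization parameters -- real values come from the
# tensor's quantization metadata, not from hard-coded constants.
qp_scale = 0.25
qp_zp = 128

# Raw uint8 output as it would arrive from the device
quantized = np.array([128, 132, 120], dtype=np.uint8)

# Affine dequantization: float = (int - zero_point) * scale
dequantized = (quantized.astype(np.float32) - qp_zp) * qp_scale
print(dequantized)  # [ 0.  1. -2.]
```

This is the same transformation that `np.array(my_tensor, copy=False)` applies for you in a TAPPAS Python postprocess, so you should not apply it a second time manually.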

Note that we have a yolov8 pose-estimation post-processing plugin implemented in C++: tappas/core/hailo/libs/postprocesses/pose_estimation/yolov8pose_postprocess.cpp at master · hailo-ai/tappas · GitHub
You can use it by adding the GStreamer element

hailofilter name=pose-estimation so-path=$landmarks_postprocess_so

to your pipeline after hailonet.

$landmarks_postprocess_so should point to $TAPPAS_WORKSPACE/apps/h8/gstreamer/libs/post_processes/libyolov8pose.so
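For context, a pipeline using this element might look like the sketch below. This is only an illustrative fragment: the video source, the HEF filename (`yolov8s_pose.hef`), and the exact set of surrounding elements are placeholders, and element properties can differ between TAPPAS versions.

```shell
# Illustrative sketch only -- source, HEF path, and sink are placeholders.
landmarks_postprocess_so=$TAPPAS_WORKSPACE/apps/h8/gstreamer/libs/post_processes/libyolov8pose.so

gst-launch-1.0 \
    filesrc location=input.mp4 ! decodebin ! videoconvert ! \
    hailonet hef-path=yolov8s_pose.hef ! \
    hailofilter name=pose-estimation so-path=$landmarks_postprocess_so ! \
    hailooverlay ! videoconvert ! autovideosink
```

The key point is simply that `hailofilter` comes directly after `hailonet`, so the postprocess receives the raw network output tensors.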