Hey, I want to build my own custom postprocessing .so

@omria
I have figured out the post-processing issue with yolov8_pose on my trained model.
See the keypoints output type: those are different.

To confirm this, I built another HEF with all layers at 8-bit, and that one works.
In my case I need the most accurate and precise values I can get.
So can you tell me where I need to modify the code to handle this 16-bit data? I applied modifications in a few places, but nothing changed.

Actual issue with 16 bits -
In the C++ post-processing, the keypoint values are read as uint8_t because of how it is implemented.
Tensors are treated as uint8_t everywhere; I believe that's the issue.

get_xtensor returns a uint8 view by default. This can be changed to:

auto output_keypoints = common::get_xtensor_uint16(raw_keypoints[i]);        

But this alone is not working.

Can you tell me exactly which places need to change?