Hi, I have recently been adapting some new models for HailoRT and have some confusion about its input and output tensors, so I am asking for help here.
Here are my steps:
- Create a `ConfiguredInferModel` from an `InferModel`.
- Obtain the `shape`, `format`, and `frame_size` of the input and output tensors from the `InferModel`.
- Allocate memory for the input and output tensors on the Host heap with a size of `frame_size`.
- Bind the input and output tensors to the allocated memory using `set_buffer` through `ConfiguredInferModel::Bindings`.
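
If it helps, here is roughly what my code looks like for these steps (a simplified sketch based on the HailoRT C++ infer-model API; error handling is mostly omitted and `"model.hef"` is a placeholder):

```cpp
#include "hailo/hailort.hpp"

#include <chrono>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using namespace hailort;

int main()
{
    // Step 1: create the InferModel / ConfiguredInferModel ("model.hef" is a placeholder).
    auto vdevice = VDevice::create().expect("Failed to create vdevice");
    auto infer_model = vdevice->create_infer_model("model.hef").expect("Failed to create infer model");
    auto configured_infer_model = infer_model->configure().expect("Failed to configure infer model");
    auto bindings = configured_infer_model.create_bindings().expect("Failed to create bindings");

    // Steps 2-4: query frame_size, allocate Host buffers on the heap, and bind them with set_buffer.
    std::map<std::string, std::vector<uint8_t>> buffers;
    for (const auto &name : infer_model->get_input_names()) {
        buffers[name].resize(infer_model->input(name)->get_frame_size());
        bindings.input(name)->set_buffer(MemoryView(buffers[name].data(), buffers[name].size()));
    }
    for (const auto &name : infer_model->get_output_names()) {
        buffers[name].resize(infer_model->output(name)->get_frame_size());
        bindings.output(name)->set_buffer(MemoryView(buffers[name].data(), buffers[name].size()));
    }

    // Fill the input buffers with frame data here, then run one inference.
    auto status = configured_infer_model.run(bindings, std::chrono::milliseconds(1000));
    (void)status;
    return 0;
}
```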
I have the following doubts about this process:
- Knowing the `shape` and `format.type` of a tensor, can I assume that the `frame_size` on the Host is already determined and equals its byte size? In other words, do I need to consider the memory layout on the Device side when allocating it?
- Based on the above steps, when parsing the output, do I only need to care about the `format.order` for the Host's memory layout? For example, with `HAILO_FORMAT_ORDER_FCR`, do I just need to iterate over my allocated memory according to `[N, H, W, C]`?
- After `set_buffer`, when I want to run inference on the next frame, do I still need to call `set_buffer` again?
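
To make the first two questions concrete, this is the kind of check and iteration I have in mind (a hypothetical helper; `stream` and `output_buffer` would come from the sketch above, and I have not confirmed that any of this is actually correct):

```cpp
#include "hailo/hailort.hpp"

#include <cassert>
#include <cstdint>
#include <vector>

using namespace hailort;

// Hypothetical helper illustrating questions 1 and 2.
void check_and_parse_output(InferModel::InferStream &stream, const std::vector<uint8_t> &output_buffer)
{
    auto shape  = stream.shape();   // hailo_3d_image_shape_t {height, width, features}
    auto format = stream.format();  // hailo_format_t {type, order, flags}

    // Question 1: is frame_size always just the packed byte size implied by shape and format.type?
    size_t bytes_per_element = (HAILO_FORMAT_TYPE_UINT16 == format.type) ? 2 :
                               (HAILO_FORMAT_TYPE_FLOAT32 == format.type) ? 4 : 1;
    size_t packed_size = static_cast<size_t>(shape.height) * shape.width * shape.features * bytes_per_element;
    assert(packed_size == stream.get_frame_size());  // <-- is this always true for the Host-side buffer?

    // Question 2: for HAILO_FORMAT_ORDER_FCR + UINT8, is walking the Host buffer as [N, H, W, C] enough?
    if ((HAILO_FORMAT_ORDER_FCR == format.order) && (HAILO_FORMAT_TYPE_UINT8 == format.type)) {
        for (uint32_t h = 0; h < shape.height; h++) {
            for (uint32_t w = 0; w < shape.width; w++) {
                for (uint32_t c = 0; c < shape.features; c++) {
                    size_t idx = (static_cast<size_t>(h) * shape.width + w) * shape.features + c;  // packed NHWC (N == 1)
                    uint8_t value = output_buffer[idx];
                    (void)value;  // post-processing would go here
                }
            }
        }
    }
}
```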
Furthermore, assuming a shape of [1, 1, 2, 2] for a UINT8 FCR tensor, I should allocate 1 * 1 * 2 * 2 * sizeof(uint8_t) = 4 bytes on the Host. Is the possible memory layout on the Device the same as in the example below?
/**
* FCR means first channels (features) are sent to HW:
* - Host side: [N, H, W, C]
* - Device side: [N, H, W, C]:
* - Input - channels are expected to be aligned to 8 bytes
* - Output - width is padded to 8 bytes
*/
- Host (entries shown as value(addr); index order is N,H,W,C)
  flattened: [ 1(0x00) , 2(0x01) , 3(0x02) , 4(0x03) ]
  index:     0,0,0,0 | 0,0,0,1 | 0,0,1,0 | 0,0,1,1
- Device (Input)
  flattened: [ 1(0x00) ... 2(0x08) ... 3(0x10) ... 4(0x18) ]
  index:     0,0,0,0 | 0,0,0,1 | 0,0,1,0 | 0,0,1,1
- Device (Output)
  flattened: [ 1(0x00) , 2(0x01) ... 3(0x08) , 4(0x09) ]
  index:     0,0,0,0 | 0,0,0,1 | 0,0,1,0 | 0,0,1,1
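
For the Host row above, this is just the packed [N, H, W, C] indexing I have in mind (plain C++, only to show how I read my own table; the Device rows are the part I am unsure about):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // 1 x 1 x 2 x 2 (N, H, W, C) UINT8 tensor, packed on the Host: 4 bytes in total.
    const uint8_t host_buffer[4] = {1, 2, 3, 4};
    const uint32_t H = 1, W = 2, C = 2;
    for (uint32_t h = 0; h < H; h++) {
        for (uint32_t w = 0; w < W; w++) {
            for (uint32_t c = 0; c < C; c++) {
                size_t offset = (static_cast<size_t>(h) * W + w) * C + c;  // packed NHWC offset (single batch)
                std::printf("value %u at 0x%02zx, index 0,%u,%u,%u\n",
                            static_cast<unsigned>(host_buffer[offset]), offset, h, w, c);
            }
        }
    }
    return 0;
}
// Prints: value 1 at 0x00, value 2 at 0x01, value 3 at 0x02, value 4 at 0x03,
// matching the Host row above.
```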
If there are any mistakes in my understanding, please tell me. Thank you!