My task is to translate an ONNX model to HEF.
Initially, the input shape of the ONNX model is dynamic: ['batch', 3, 'ax1', 'ax2'].
I tested this model with onnxruntime in Python 3:
import onnxruntime

ort_sess = onnxruntime.InferenceSession("./ppocr_det.onnx")
print(ort_sess.get_inputs()[0])
# x: a preprocessed image tensor in NCHW layout, e.g. shape [1, 3, 640, 640]
ort_inputs = {ort_sess.get_inputs()[0].name: x}
ort_outs = ort_sess.run(None, ort_inputs)
and it returned correctly.
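For reference, here is a minimal sketch of how the dynamic input shape can be confirmed, assuming the onnx package is installed:

import onnx

model = onnx.load("./ppocr_det.onnx")
dims = model.graph.input[0].type.tensor_type.shape.dim
print([d.dim_param or d.dim_value for d in dims])  # ['batch', 3, 'ax1', 'ax2']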
Everything went well until I tried to convert it to a HAR file and test it in the emulator:
from hailo_sdk_client import ClientRunner, InferenceContext

chosen_hw_arch = "hailo8l"
model_name = "ppocr_v4"
onnx_path = "ppocr_det.onnx"
hailo_model_har = f"{model_name}_hailo_model.har"

# Translate the ONNX model to HAR
runner = ClientRunner(hw_arch=chosen_hw_arch)
hn, npz = runner.translate_onnx_model(
    onnx_path,
    model_name,
    start_node_names=["x"],
    end_node_names=["sigmoid_0.tmp_0"],
    net_input_shapes={"x": [1, 3, 640, 640]},  # fix the dynamic dims
)
runner.save_har(hailo_model_har)
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Test the native (unquantized) model in the emulator
runner = ClientRunner(har=hailo_model_har)
images_path = "../data"
img = cv2.imread("IMG_0599.jpg", cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = img / 255.0
img = img.astype(np.float32)
img = cv2.resize(img, (640, 640))
img = np.array([img])  # add batch dim -> [1, 640, 640, 3] (NHWC)
with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    native_res = runner.infer(ctx, img)
At this point I got a completely incorrect result.
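For reference, this is roughly how I compare the two outputs on the same image (a sketch; I assume the ONNX output is NCHW and that runner.infer returns a single NHWC array here):

ort_x = np.transpose(img, (0, 3, 1, 2))  # the ONNX model wants NCHW
ort_outs = ort_sess.run(None, {ort_sess.get_inputs()[0].name: ort_x})
onnx_out = np.transpose(ort_outs[0], (0, 2, 3, 1))  # back to NHWC to match native_res
print(np.abs(onnx_out - native_res).max())  # large for me, i.e. completely different results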
My questions are:
- The input shape of the ONNX model is [1, 3, 640, 640], while the input shape of the HAR file I obtained is [-1, 640, 640, 3]. Is this correct? What do these dimensions refer to, respectively?
- I tried to use the same image as input for both the ONNX and the HAR model. For the HAR file, I skipped the transpose step and kept the [h, w, c] layout as input. Unfortunately, I got a completely different result. How can I fix it?
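To make the second question concrete, these are the two candidate input layouts (the names are just for illustration, building on img from the code above):

x_nhwc = img                              # [1, 640, 640, 3], what I currently feed to the HAR model
x_nchw = np.transpose(img, (0, 3, 1, 2))  # [1, 3, 640, 640], what the ONNX model expects

Which of these should runner.infer receive in the SDK_NATIVE context, and is any other preprocessing step required?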