@omria,
I still have trouble getting the hef output right.
I am testing this simple network (trained to return its input):
```python
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = self.conv(x)
        return x
```
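Since the conv uses kernel_size=3, stride=1, padding=1, it is shape-preserving, so both models should return a tensor with the same shape as the input. A quick check with the standard conv output-size formula:

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    # standard conv output size: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# with my export dimensions (1, 3, 384, 216) the spatial size is unchanged
print(conv_out(384), conv_out(216))  # → 384 216
```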
After conversion, the dimensions and weights in the har file match those in the onnx file. However, when I run the hef model, the output does not match the one from the onnx model.
Here are the outputs of the onnx and hef models for a test input image:
[Onnx output screenshot]
[Hef output screenshot]
Do you have any idea what the problem could be?
I tried to rearrange the tensor dimensions in different ways, but I always get the same kind of output.
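For example, one of the comparisons I tried looks like this (I am assuming the hef returns a channels-last (H, W, C) array while onnx returns NCHW; the random arrays below are just stand-ins for the real outputs):

```python
import numpy as np

# stand-ins for the real outputs: onnx gives (1, 3, 384, 216) = NCHW,
# the hef (I assume) gives (384, 216, 3) = HWC
onnx_out = np.random.rand(1, 3, 384, 216).astype(np.float32)
hef_out = np.random.rand(384, 216, 3).astype(np.float32)

# move the onnx channels last, then compare element-wise
onnx_hwc = np.transpose(onnx_out[0], (1, 2, 0))  # (384, 216, 3)
print(np.abs(onnx_hwc - hef_out).max())
```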
Further info:
The onnx file is generated by

```python
model = Autoencoder()
model.load_state_dict(torch.load("model.tar"))
dummy_in = torch.randn(1, 3, 384, 216)
torch.onnx.export(model, dummy_in, 'model.onnx', export_params=True, opset_version=15)
```
and the hef file by

```shell
hailo parser onnx --hw-arch hailo8l model.onnx
hailo optimize --hw-arch hailo8l --use-random-calib-set model.har
hailo compiler --hw-arch hailo8l model_optimized.har
```
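I also wonder whether the values I get back from the hef are still quantized. If the output comes back as uint8, I would have to dequantize it before comparing against the onnx floats; a sketch with made-up quantization parameters (the real scale and zero-point would come from the hef's output stream info, the names qp_scale/qp_zp here are my own):

```python
import numpy as np

# hypothetical quantization parameters, just for illustration
qp_scale, qp_zp = 0.0172, 114.0

quantized = np.array([0, 114, 255], dtype=np.uint8)
# affine dequantization: float = (q - zero_point) * scale
dequantized = (quantized.astype(np.float32) - qp_zp) * qp_scale
print(dequantized)
```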
The hef is run on a Raspberry Pi by

```python
import cv2
from picamera2.devices import Hailo

hailo = Hailo("model.hef")
img_test = cv2.imread("test.png")
out = hailo.run(img_test)
```
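One more thing I am unsure about: cv2.imread returns an (H, W, 3) uint8 BGR array at whatever size test.png happens to be, while the network was exported for a 384x216 input. A numpy-only sketch of the resize I suspect is needed before calling hailo.run (the target size and layout are assumptions on my part):

```python
import numpy as np

# stand-in for cv2.imread("test.png"): an arbitrary-size HWC uint8 image
img = np.zeros((480, 640, 3), dtype=np.uint8)

# nearest-neighbour resize to the 384x216 input the model was exported with
# (cv2.resize(img, (216, 384)) would do the same, more accurately)
rows = np.linspace(0, img.shape[0] - 1, 384).astype(int)
cols = np.linspace(0, img.shape[1] - 1, 216).astype(int)
resized = img[rows][:, cols]
print(resized.shape)  # → (384, 216, 3)
```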