I want to run inference using my custom-trained YOLOv8 model. What should I do?
I have a .pt file and an ONNX file.
Can I run inference using onnxruntime,
or must I first convert the ONNX file to an HEF file and then run inference using, e.g., these code examples?
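If you only want to test the model on a host machine (not on the Hailo device), you can run the ONNX file directly with onnxruntime; conversion to HEF is needed only for the Hailo hardware. Below is a minimal sketch of that, assuming a standard 640×640 YOLOv8 ONNX export; the preprocessing uses a plain nearest-neighbour resize for brevity (YOLOv8 normally uses letterbox padding), and the model path and image are placeholders.

```python
import numpy as np

def preprocess(img, size=640):
    """Resize an HWC uint8 image to the model's NCHW float32 input.
    Plain nearest-neighbour resize, numpy-only, to keep the sketch
    dependency-free; real YOLOv8 preprocessing uses letterbox padding."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size   # source row index for each output row
    xs = np.arange(size) * w // size   # source column index for each output column
    resized = img[ys][:, xs]
    x = resized.astype(np.float32) / 255.0        # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))[None]          # HWC -> NCHW, add batch dim
    return x

def run_onnx(model_path, img):
    # Deferred import so preprocess() above works without onnxruntime installed.
    import onnxruntime as ort
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: preprocess(img)})
```

Note that the raw YOLOv8 ONNX outputs still need decoding and NMS before you get final boxes.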
I tried converting ONNX to HEF, but I ran into some problems, for example:
raise FileNotFoundError(f"Couldn’t find dataset in {data_path}. Please refer to docs/DATA.rst.")
FileNotFoundError: Couldn’t find dataset in /home/usser/.hailomz/data/models_files/coco/2021-06-18/coco_calib2017.tfrecord. Please refer to docs/DATA.rst.
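This error just means the Model Zoo is looking for its default COCO calibration tfrecord, which you never downloaded. For a custom-trained model you normally want to calibrate on images from your own dataset instead; the Model Zoo CLI can usually be pointed at a folder of your own images (check `hailomz compile --help` for the calibration-path option in your version). Conceptually, calibration just needs a stack of representative input images; a hedged numpy sketch of preparing such a set (exact shape, dtype, and normalization depend on your model script, so treat these as assumptions):

```python
import numpy as np

def build_calib_set(images, size=640):
    """Stack already-loaded RGB uint8 images into one float32 array of shape
    (N, size, size, 3) -- roughly what quantization calibration consumes.
    Nearest-neighbour resize in pure numpy; swap in PIL/OpenCV for real use."""
    out = []
    for img in images:
        h, w = img.shape[:2]
        ys = np.arange(size) * h // size   # source row per output row
        xs = np.arange(size) * w // size   # source column per output column
        out.append(img[ys][:, xs].astype(np.float32))
    return np.stack(out)
```

A few dozen images that look like your deployment data are typically enough for calibration; using COCO images for a model trained on different data can hurt quantization accuracy, which is also why the tooling asks for your dataset.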
If you have not done so yet, please also work through the tutorials in the Hailo AI Software Suite about the model conversion workflow. Inside the Docker, simply type:
Why are the normalization and the output changing?
I saw in the graph, after converting from ONNX to HAR, that my architecture changes, and I don't know why.
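One common reason the graph looks different after conversion (this is a general compiler technique, not a claim about your exact model): input normalization can be folded into the first convolution's weights and bias, so the explicit normalization node disappears, and the YOLO post-processing head is often cut or replaced. The folding is a mathematical identity; a numpy sketch with made-up values showing that normalize-then-convolve equals convolving with rescaled weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 3, 8, 8)).astype(np.float32)   # fake input image, NCHW
w = rng.random((4, 3, 1, 1)).astype(np.float32)   # 1x1 conv weights
b = rng.random(4).astype(np.float32)              # conv bias
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def conv1x1(x, w, b):
    # A 1x1 convolution is a per-pixel matmul over channels.
    return np.einsum("nchw,ocij->nohw", x, w) + b[None, :, None, None]

# Path A: explicit normalization layer, then convolution.
a = conv1x1((x - mean[None, :, None, None]) / std[None, :, None, None], w, b)

# Path B: no normalization layer; fold mean/std into weights and bias.
w_folded = w / std[None, :, None, None]
b_folded = b - np.einsum("ocij,c->o", w, mean / std)
b_out = conv1x1(x, w_folded, b_folded)
```

Both paths produce the same output, which is why a converted graph can drop the normalization node without changing the model's math.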
Why do I have to use images from my own dataset when using the zoo model?
Should I prepare the data in the same format as in the examples? Labels, images, a train / val folder structure?
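The examples typically assume a YOLO-style layout with images and labels split into train and val folders, but the exact folder names come from whichever example you follow, so the layout below is an assumption, not the definitive format. A small sketch that checks a dataset root against such a layout:

```python
from pathlib import Path

# Hypothetical YOLO-style layout; adjust names to match the example you follow.
EXPECTED = [
    "images/train", "images/val",
    "labels/train", "labels/val",
]

def check_dataset(root):
    """Return the expected sub-folders that are missing under root."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).is_dir()]
```

Running this before conversion catches layout mistakes early, instead of partway through a long calibration run.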
And how to use config files?
…there is too much information out there; it makes me a little lost, and I don't know how to convert ONNX to HEF correctly.