I compiled the Inception-v3 model from PyTorch to ONNX.
Optimized it through DFC, got the HEF model.
Running it with hailortcli run measures FPS with no problem.
I understand how to get frames in the example applications, but I do not understand how to run inference on the model.
The examples include postprocessing functions for YOLO models, but they are written in C++.
Can I run inference on models using Python only?
Even if the performance turns out worse, I want to know how to do it.
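Yes, this is possible with the HailoRT Python API. Below is a hedged sketch of single-frame inference on a HEF. The package and class names (`hailo_platform`, `VDevice`, `InferVStreams`, etc.) follow Hailo's published Python examples, but exact signatures can vary between HailoRT versions, so verify them against the API reference for your installed release:

```python
import numpy as np

# Guard the import so the sketch degrades gracefully on machines
# without a HailoRT installation or a Hailo device.
try:
    from hailo_platform import (HEF, VDevice, ConfigureParams,
                                HailoStreamInterface, InferVStreams,
                                InputVStreamParams, OutputVStreamParams)
    HAILO_AVAILABLE = True
except ImportError:
    HAILO_AVAILABLE = False


def infer_single_frame(hef_path: str, frame: np.ndarray) -> dict:
    """Run one preprocessed frame through a compiled HEF and
    return the raw output tensors keyed by output vstream name.
    Assumes HailoRT and a PCIe-attached device are available."""
    hef = HEF(hef_path)
    with VDevice() as target:
        params = ConfigureParams.create_from_hef(
            hef, interface=HailoStreamInterface.PCIe)
        network_group = target.configure(hef, params)[0]
        in_params = InputVStreamParams.make(network_group)
        out_params = OutputVStreamParams.make(network_group)
        # Input name/shape/dtype must match the HEF's vstream info.
        in_name = hef.get_input_vstream_infos()[0].name
        with network_group.activate():
            with InferVStreams(network_group, in_params,
                               out_params) as pipeline:
                # Add a batch dimension of 1 for a single frame.
                return pipeline.infer({in_name: np.expand_dims(frame, 0)})
```

The returned dictionary holds raw output tensors, so any pre- and post-processing (resizing, normalization, decoding) is done separately in NumPy or OpenCV.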
I combined the inference part of the Python example in the HailoRT documentation with the pre- and post-processing parts of the YOLOv5 source code on GitHub.
I tested it on both Windows and Linux, and it works well.
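For reference, the YOLOv5-style post-processing can also be kept in pure Python. Here is a minimal NumPy sketch of the non-maximum-suppression step; the (x1, y1, x2, y2) box layout and the 0.45 IoU threshold are illustrative assumptions, not taken from the original code:

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU between one box and an array of boxes, all (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thres: float = 0.45):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thres]
    return keep

# Two heavily overlapping boxes and one distant box:
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] - the lower-scoring overlap is suppressed
```

The same pattern (vectorized NumPy plus a small greedy loop) covers the rest of the decoding, so no C++ postprocessing is needed.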