Running inference on my own model

I exported an Inception-v3 model from PyTorch to ONNX, optimized it through the DFC (Dataflow Compiler), and got a HEF model.
`hailortcli run` measures FPS on it with no problem.
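For reference, the PyTorch-to-ONNX step looked roughly like this (a minimal sketch; the file name, weights argument, and opset version are illustrative, so adjust them to your setup):

```python
# Minimal sketch of the PyTorch -> ONNX export (names and opset are illustrative).
import torch
import torchvision

model = torchvision.models.inception_v3(weights="IMAGENET1K_V1")
model.eval()  # disable training-only behavior (aux classifier, dropout) for export

# Inception-v3 expects 299x299 RGB input
dummy_input = torch.randn(1, 3, 299, 299)

torch.onnx.export(
    model,
    dummy_input,
    "inception_v3.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```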

I understand how to get frames in the example applications, but I do not understand how to run inference on the model.

There are post-processing functions for the YOLO models in the examples, but they are written in C++.

Can I run inference on models using Python only?
Either way, I want to know how to do it.

Hey @avoqun,

Are you using the Raspberry Pi AI Kit? If so, the official Python API will be released very soon in the next update.

For reference on how to use it, be sure to check out the examples provided in this repo:

Let me know if you have any other questions!
Regards

Perhaps you could pin a topic about the release of the Python API for Raspberry Pi with a rough timeframe. It seems like a lot of people are asking!

Thanks for the suggestion; we will do that along with the upcoming release.

Yes, you can run YOLO with Python.

I combined the inference part from the Python example in the HailoRT documentation with the pre- and post-processing parts of the YOLOv5 source code on GitHub.

I tested it on both Windows and Linux and it works well.
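Roughly, the inference part looks like this (a minimal sketch following the Python inference tutorial in the HailoRT documentation; the HEF path and the random input frame are placeholders, and the YOLOv5 pre/post-processing is left out):

```python
# Minimal sketch of Python-only inference with the HailoRT API
# ("yolov5.hef" is a placeholder; real pre/post-processing is omitted).
import numpy as np
from hailo_platform import (HEF, VDevice, HailoStreamInterface, ConfigureParams,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

hef = HEF("yolov5.hef")

with VDevice() as target:
    # Configure the device with the compiled network group
    configure_params = ConfigureParams.create_from_hef(
        hef=hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, configure_params)[0]
    network_group_params = network_group.create_params()

    input_vstreams_params = InputVStreamParams.make(
        network_group, format_type=FormatType.FLOAT32)
    output_vstreams_params = OutputVStreamParams.make(
        network_group, format_type=FormatType.FLOAT32)

    input_info = hef.get_input_vstream_infos()[0]
    height, width, channels = input_info.shape

    # Stand-in for a real preprocessed frame (e.g. letterboxed to the model size)
    frame = np.random.rand(1, height, width, channels).astype(np.float32)

    with InferVStreams(network_group, input_vstreams_params,
                       output_vstreams_params) as infer_pipeline:
        with network_group.activate(network_group_params):
            results = infer_pipeline.infer({input_info.name: frame})

    # results maps output vstream names to numpy arrays;
    # YOLOv5 post-processing (box decoding + NMS) would run on these.
    for name, tensor in results.items():
        print(name, tensor.shape)
```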