Hi, I’m currently working on deploying a RetinaNet object detection model, trained with PyTorch (torchvision), onto a Raspberry Pi 5 with the AI HAT+ (Hailo-8). Before going further, I wanted to ask the community whether RetinaNet is realistically deployable on Hailo-8 and what the expected deployment setup looks like.

Specifically, is the typical approach to run only the neural network forward pass on the Hailo accelerator while keeping preprocessing and postprocessing steps, such as anchor decoding and non-maximum suppression, on the CPU? Or are there supported ways to offload more of the pipeline? I’m not attempting to modify or redesign the RetinaNet architecture; I only want to deploy it as-is.

Any guidance on known limitations, unsupported layers, or common pitfalls when exporting from PyTorch to ONNX and compiling to HEF would be greatly appreciated.
Hi @gigi,
We don’t currently have a RetinaNet model in the Model Zoo, but we do provide RetinaFace, which is architecturally quite similar (single‑stage, FPN‑based, anchor‑based). There’s nothing about RetinaNet itself that should make it incompatible with Hailo‑8, so you generally shouldn’t run into major issues during compilation.
For the compilation flow (ONNX → HAR → HEF), our best advice is to follow the Dataflow Compiler (DFC) tutorials. You can find more details in the DFC guide: