Semantic Segmentation Model on Pi 5 AI Hat

I have developed a UNet model with input shape (3, 320, 412) and output shape (2, 320, 412), i.e. each pixel is classified as either 1 or 0. The ONNX model works fine on my PC.

However, when I try to compile it to run on my Raspberry Pi 5 with the AI Hat (Hailo-8L), I run into some issues.

It either results in a model requiring input (64, 3, 320, 412), i.e. 64 frames (a batch?), or the output I get is either a blank black mask or a black-and-white striped mask. I am using GStreamer.

Where should I start in trying to get this to work properly? I am very new to the world of models, machine vision, etc.

Hey @Robert_Speedy,

I’d start by running this command:

hailortcli parse-hef {hef}

This will show you the inputs and outputs of your model. Once you have that info, you'll need to write a post-processing function in your GStreamer app. If you're working with hailo-ai/hailo-apps-infra on GitHub, you'll need to write it in C++, as we do.
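For a two-class segmentation head like this, the heart of the post-processing is just a per-pixel argmax over the class channel. Here's a minimal NumPy sketch (standalone, not Hailo-specific; the channel-first (2, 320, 412) layout is assumed from the ONNX model, and the function name is mine) that you can use to sanity-check your decoding logic on the PC before porting it to C++:

```python
import numpy as np

def decode_binary_mask(logits: np.ndarray) -> np.ndarray:
    """Turn a (2, H, W) two-class score map into a 0/255 display mask.

    Assumes channel-first layout like the original ONNX output
    (2, 320, 412); if the compiled model emits NHWC instead,
    transpose first, e.g. logits.transpose(2, 0, 1).
    """
    class_ids = np.argmax(logits, axis=0)      # (H, W), values 0 or 1
    return (class_ids * 255).astype(np.uint8)  # 0 = background, 255 = foreground

# Random scores standing in for real model output.
dummy = np.random.randn(2, 320, 412).astype(np.float32)
mask = decode_binary_mask(dummy)
```

A stripy or blank mask often means this decoding step is reading the buffer with the wrong layout or dtype, so comparing the on-device raw output against this reference can help localize the problem.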

Run that command first and share the output with me, and I can help guide you through the next steps. If you're still getting incorrect results after implementing the post-processing, then it's worth double-checking your compilation to make sure everything is configured correctly.