Interpretation of FastSAM_s Output from HEF Model on Hailo: Missing Postprocessing or Prompt Support?

Hello Hailo Community,

I’m currently using the FastSAM_s model provided in your model zoo (link: hailo_model_zoo), and I’ve deployed the corresponding compiled .hef model on a Raspberry Pi 5 (Hailo-8).

The model accepts inputs of shape 640 x 640 x 3, which works as expected. After running inference on an image, the output tensor I receive has shape 20 x 20 x 64 with float values. According to the Hailo profiler tool, this seems to match what the model was compiled to output.
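For context, this is roughly how I run inference and dump the output shapes (a minimal sketch using the hailo_platform / pyhailort Python API; the .hef path is a placeholder from my setup, and parameter names may differ slightly between HailoRT versions):

```python
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

hef = HEF("fastsam_s.hef")  # placeholder path to the compiled model

with VDevice() as target:
    params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, params)[0]

    in_params = InputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)
    out_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    input_name = hef.get_input_vstream_infos()[0].name
    frame = np.zeros((1, 640, 640, 3), dtype=np.float32)  # dummy 640x640x3 input

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate():
            results = pipeline.infer({input_name: frame})
            for name, tensor in results.items():
                print(name, tensor.shape)  # this is where I see the 20x20x64 output
```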

However, I’m unsure how to interpret this output. It doesn’t appear to be a usable segmentation mask (as I would expect from FastSAM), and I couldn’t find any documentation or example code explaining how to postprocess it into actual masks. Is there an official postprocessing function or head model available that converts this output into binary or instance segmentation masks?
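In the meantime, my working assumption (and it is only an assumption) is that, since FastSAM_s is built on YOLOv8s-seg, the 20 x 20 x 64 tensor is the DFL box-regression branch for the stride-32 scale rather than a mask, and that the class-score, mask-coefficient and mask-prototype branches are separate outputs. Under that assumption, decoding the 64-channel map would look roughly like this numpy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decode_dfl_boxes(reg_map, stride=32, reg_max=16):
    """Decode a (H, W, 4*reg_max) DFL regression map into xyxy boxes in pixels.

    Assumes a YOLOv8-style head: per cell, 4 sides x reg_max bins; the softmax
    expectation over the bins gives the distance from the cell centre to each
    box side, in units of the stride."""
    h, w, _ = reg_map.shape
    reg = reg_map.reshape(h, w, 4, reg_max)
    dist = (softmax(reg, axis=-1) * np.arange(reg_max)).sum(axis=-1)  # (H, W, 4): l, t, r, b
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    cx, cy = xs + 0.5, ys + 0.5
    x1 = (cx - dist[..., 0]) * stride
    y1 = (cy - dist[..., 1]) * stride
    x2 = (cx + dist[..., 2]) * stride
    y2 = (cy + dist[..., 3]) * stride
    return np.stack([x1, y1, x2, y2], axis=-1)  # (H, W, 4) in 640x640 input coordinates

# Actual masks would still need the other branches (again, assuming the YOLOv8-seg layout):
#   masks = sigmoid(mask_coeffs @ prototypes)   # (N, 160, 160)
# then crop each mask to its box, threshold at ~0.5, and upsample to 640x640.
```

Am I on the right track here, or is the official postprocess structured differently?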

Additionally, I’m wondering if there are any Hailo examples available where I can provide point prompts, bounding boxes, or textual prompts, similar to the full capabilities of the original FastSAM model.
For reference, here’s the original FastSAM repository.
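From reading that repo, prompting there looks like pure post-processing on the "segment everything" results (e.g. a point prompt just selects masks containing the point), so I assume something similar could be done on the Hailo output once masks are reconstructed. A rough sketch of what I have in mind, with a simple smallest-mask heuristic of my own for point prompts:

```python
import numpy as np

def point_prompt(masks, point):
    """Select a mask for a single foreground point prompt.

    masks: (N, H, W) boolean masks at input resolution, from 'everything' mode.
    point: (x, y) in the same resolution.
    Among all masks containing the point, return the smallest one (assumed to be
    the most specific object); this is my heuristic, not the official logic."""
    x, y = int(point[0]), int(point[1])
    hits = [i for i, m in enumerate(masks) if m[y, x]]
    if not hits:
        return None
    return masks[min(hits, key=lambda i: masks[i].sum())]

def box_prompt(masks, box):
    """Select the mask with the highest IoU against an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = map(int, box)
    box_mask = np.zeros(masks.shape[1:], dtype=bool)
    box_mask[y1:y2, x1:x2] = True
    ious = [(m & box_mask).sum() / max((m | box_mask).sum(), 1) for m in masks]
    return masks[int(np.argmax(ious))]
```

Text prompts in the original repo go through CLIP, so I assume that part would have to run on the host CPU regardless.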
Thanks in advance for your help! I’d appreciate any pointers or code examples you might have.

Best regards,
Joshua

Hey @Joshuaa,

You can start from this example.