Hello Hailo Community,
I am working with the following setup and would appreciate guidance on the correct workflow.
Current hardware / environment
- Raspberry Pi 5
- Raspberry Pi AI Kit (M.2 HAT+ with Hailo AI Module, Hailo-8 family)
- Camera pipeline is already working
- Object recognition with camera input is functional on the Raspberry Pi
- Development host: Ubuntu 24.04 x86_64 with Docker-based Hailo AI Software Suite
What is working
- The Hailo device is recognized
- Camera-based object detection examples run successfully
- The ONNX → HEF compilation flow has been verified with simple test models
What I am trying to do
I am an echocardiographer and would like to test whether Hailo can distinguish anatomical structures in echocardiography. Specifically:
- PSAX (parasternal short-axis) echocardiography images
- Classification or recognition of:
  - AV (aortic valve)
  - PV (pulmonary valve)
- The initial goal is not clinical-grade performance, but to verify whether this anatomical distinction is feasible on Hailo.
My understanding so far
- To run a custom model on Hailo, an ONNX model is required as input to the Hailo compiler
- The ONNX model must be created by training a neural network (e.g., classification or detection) on labeled images
- Hailo itself does not provide medical or echocardiography-specific ONNX models
Questions
- Is my understanding correct that a custom-trained ONNX model is required for this use case?
- For a first attempt, would frame-level classification (PSAX_AV vs. PSAX_PV) be the recommended approach over detection/segmentation?
- Are there reference ONNX models, operator constraints, or best practices recommended for training and exporting ONNX models for custom image classification tasks on Hailo?
- Are there known limitations or considerations when using grayscale or ultrasound-like images with Hailo?
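On the grayscale question, one detail I am already assuming on my side (not Hailo-specific guidance): most ImageNet-pretrained backbones expect 3-channel input, so single-channel ultrasound frames would presumably be replicated across channels before training and inference. A minimal NumPy sketch of that preprocessing step:

```python
import numpy as np

def gray_to_rgb(frame: np.ndarray) -> np.ndarray:
    """Replicate a single-channel ultrasound frame (H, W) across
    three channels (H, W, 3) to match an RGB-trained backbone."""
    if frame.ndim != 2:
        raise ValueError("expected a 2-D grayscale frame")
    return np.repeat(frame[:, :, None], 3, axis=2)

# Example: a dummy 8-bit 256x256 ultrasound-like frame
frame = np.zeros((256, 256), dtype=np.uint8)
rgb = gray_to_rgb(frame)
print(rgb.shape)  # → (256, 256, 3)
```

If Hailo supports single-channel input directly, I would be glad to know and skip this step.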
Any guidance, documentation pointers, or practical advice from similar use cases would be greatly appreciated.
Thank you in advance.