I am using the Raspberry Pi 5 with the 24 TOPS AI Hat.
Here is what I want to do.
There is currently one camera, and I want to feed its stream into a YOLO model. The output of that YOLO model then needs to go into another YOLO model and potentially a classification model. How can I implement this on the Pi?
Please note that the cascade is conditional: depending on the detected class, the output may need to go to a different model.
Something like below:

YOLO --> YOLO --> Classification
  |
  +----> Classification
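To make the routing concrete, here is a rough sketch of the control flow I have in mind. The `run_*` and `crop` functions are placeholders standing in for the actual HEF inference calls (e.g. via HailoRT), and the class names are made up for illustration; only the branching logic is the point:

```python
def run_yolo_primary(frame):
    # Placeholder: would run the first YOLO HEF and return detections.
    return [{"cls": "vehicle", "bbox": (10, 10, 100, 100)},
            {"cls": "person", "bbox": (120, 30, 60, 150)}]

def run_yolo_secondary(roi):
    # Placeholder: second-stage YOLO on a cropped detection.
    return [{"cls": "license_plate", "bbox": (5, 5, 40, 15)}]

def run_classifier(roi):
    # Placeholder: classification model on a cropped detection.
    return "some_label"

def crop(frame, bbox):
    # Placeholder: with a real frame this would slice the image array.
    return frame

def cascade(frame):
    """Route each first-stage detection to the right second-stage model."""
    results = []
    for det in run_yolo_primary(frame):
        roi = crop(frame, det["bbox"])
        if det["cls"] == "vehicle":
            # vehicle -> second YOLO, then classify its detections
            for det2 in run_yolo_secondary(roi):
                roi2 = crop(roi, det2["bbox"])
                results.append((det["cls"], det2["cls"], run_classifier(roi2)))
        elif det["cls"] == "person":
            # person -> classification model directly
            results.append((det["cls"], None, run_classifier(roi)))
    return results

print(cascade(frame=None))
```

So the question is essentially how to structure this on the Hailo device: separate network groups run sequentially from Python, or something pipeline-based like GStreamer.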
I’ve been able to convert my model to HEF and get it running with the basic rpi-example object detection, but I'm unsure how to proceed with cascading multiple models.
Ideally, I also do not want inference running on the stream at all times, only when there is a significant change from the previous frame, in the hope of reducing power consumption.
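For the power-saving part, I'm picturing a cheap frame-difference gate along these lines (NumPy-only sketch; in practice this would run on a downscaled grayscale frame from the camera, and the threshold value here is just an assumption to tune):

```python
import numpy as np

def changed_enough(prev, curr, threshold=8.0):
    """Return True when the mean absolute pixel difference exceeds threshold."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > threshold

# Toy frames to show the gating decision:
prev = np.zeros((4, 4), dtype=np.uint8)
static = np.zeros((4, 4), dtype=np.uint8)      # identical frame
moved = np.full((4, 4), 50, dtype=np.uint8)    # large change everywhere

print(changed_enough(prev, static))  # False: skip inference this frame
print(changed_enough(prev, moved))   # True: run the cascade
```

Is that a reasonable approach on this hardware, or is there a built-in motion/scene-change mechanism I should use instead?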