Implementing an Edge Layer for Key-Frame Selection and Raw Video Streaming on Raspberry Pi 5 + Hailo-8

Hello!

I’m working on a project that uses a Raspberry Pi 5 with a Hailo-8 accelerator for real-time object detection and scene monitoring.

At the edge layer, the goal is to:

  1. Run a YOLOv8m model on the Hailo accelerator for local inference.

  2. Select key frames based on object activity or scene changes (e.g., when a new detection or risk condition occurs).

  3. Send only those selected frames to another device for higher-level processing.

  4. Stream the raw video feed simultaneously for visualization or backup.
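To make step 2 concrete, here is a minimal, self-contained sketch of one possible key-frame rule: flag a frame whenever the set of confidently detected classes changes (a new object appears or a tracked one disappears). All names here (`Detection`, `KeyFrameSelector`) are illustrative, not part of any Hailo API:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """Minimal stand-in for one YOLOv8m detection result."""
    label: str
    confidence: float


class KeyFrameSelector:
    """Flags a frame as a key frame when the set of confidently
    detected classes changes between consecutive frames."""

    def __init__(self, min_confidence: float = 0.5):
        self.min_confidence = min_confidence
        self.prev_labels: set = set()

    def is_key_frame(self, detections) -> bool:
        labels = {d.label for d in detections
                  if d.confidence >= self.min_confidence}
        changed = labels != self.prev_labels  # new or vanished class
        self.prev_labels = labels
        return changed
```

With this rule, the first frame containing any detection is a key frame, repeated frames of an unchanged scene are not, and a class disappearing also triggers selection. Risk conditions or motion thresholds could be added as extra predicates.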

I’d like some guidance on how to structure the edge-layer pipeline so that it can select and transmit key frames efficiently while simultaneously streaming the raw video feed.
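One common structure for doing both at once (a sketch, not anything Hailo-specific): capture each frame once, then fan it out to two independent consumers via bounded queues, so a slow key-frame upload can never stall the live stream. The queue names below are illustrative:

```python
import queue

# One bounded queue per consumer branch (names are illustrative).
raw_q = queue.Queue(maxsize=8)    # feeds the raw-video streamer/encoder
infer_q = queue.Queue(maxsize=8)  # feeds Hailo inference + key-frame logic


def fan_out(frame) -> None:
    """Called once per captured frame; hands it to both branches.
    If a branch's queue is full, the frame is dropped for that branch
    so the capture loop never blocks."""
    for q in (raw_q, infer_q):
        try:
            q.put_nowait(frame)
        except queue.Full:
            pass  # drop rather than stall the camera
```

Each branch then runs in its own thread or process: one drains `raw_q` into an encoder/streamer, the other drains `infer_q` through inference and the key-frame check. Dropping on a full queue is a deliberate back-pressure choice for live video, where a stale frame is worth less than a fresh one.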

Any insights, architecture examples, or best practices for this type of edge setup would be greatly appreciated.

Thank you!

Hey @EZEKIEL_ESPIRITU,

Welcome to the Hailo Community!

I’d recommend checking out the hailo-ai/hailo-apps-infra repo on GitHub.

It has everything you need to get started. If you’re using a custom YOLOv8m model, just provide it; otherwise the pipeline uses the default yolov8m. Then, in the callback, you can process the frames and detection results however you’d like!
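To sketch what "process the frames and results in the callback" could look like: the real callback in hailo-apps-infra receives the GStreamer buffer and Hailo detection metadata (check the repo's detection example for the exact signature and helper imports). The stub below mimics only the control flow, with a hypothetical per-frame function that queues key frames for a separate uploader thread while the raw stream flows on untouched:

```python
import queue

key_frame_q = queue.Queue()  # consumed by a hypothetical uploader thread
_prev_labels = set()


def app_callback(frame_id, labels):
    """Stand-in for the per-frame callback. In the real app, `labels`
    would be extracted from the Hailo detection metadata attached to
    the GStreamer buffer. A frame is selected when detected classes
    change; every frame still continues to the raw-stream branch."""
    global _prev_labels
    current = set(labels)
    if current != _prev_labels:  # activity / scene change -> key frame
        key_frame_q.put((frame_id, sorted(current)))
    _prev_labels = current
```

The uploader thread then drains `key_frame_q` and sends selected frames to the downstream device over whatever transport fits (HTTP, MQTT, ZeroMQ), keeping network latency out of the inference path.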

Hope this helps!