RPi CM5 K3s cluster and hailo_ai_sw_suite docker image for arm64

Hello,
I’ve set up a clean K3s environment on Raspberry Pis running default Bookworm (desktop and Wi-Fi disabled), intentionally not on Docker to preserve resources. I could really use some help getting hold of the latest hailo_ai_sw_suite Docker image for arm64.

The environment is based on the https://computeblade.com/ chassis and dev boards; all blades are running CM5 modules: 3x blades for control plane/etcd, 5x blades as workers with Hailo-8s, and 4x blades with 2TB of storage each for Longhorn.
In front of the blades I have 2x Pi 5s running HAProxy, and 3x Pi 5s running Rancher.

The goal is to run the Hailo AI SW suite on the workers and share their resources across projects. The downloadable version from the site appears to be x86-only and does not include the original Dockerfile, so I can’t rebuild it with Docker for arm64.
Of course, I could be missing something, as I am fairly new to everything but the Pis, so any help is greatly appreciated.
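
(For reference, here’s how I checked the architecture after loading the image on my laptop; the tag name below is a placeholder for whatever `docker load` reported:)

```shell
# Print the OS/architecture of the loaded suite image
# (the tag is a placeholder for whatever `docker load` printed)
docker image inspect hailo_ai_sw_suite:latest --format '{{.Os}}/{{.Architecture}}'
# reports linux/amd64 on my machine, hence the x86 conclusion
```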

Hey @Seaman55,

The AI Software Suite Docker image is unfortunately x86-only, as it’s too resource-intensive for the Raspberry Pi.

If you want to use Docker on your RPi, I’d recommend building your own custom container. You won’t need the DFC (which doesn’t work on RPi anyway), and you only need the core Tappas package rather than the full suite.

You can run Docker directly on the RPi after completing the standard RPi and Hailo installation (which is straightforward).

For your Docker container, include just these components:

  1. HailoRT driver
  2. HailoRT
  3. HailoRT Python bindings
  4. Hailo-tappas-core
  5. Hailo-tappas-core Python bindings
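
As a rough sketch, a minimal Dockerfile could look like the following. Treat the package file names and layout as placeholders — download the arm64 .deb packages matching your HailoRT version from the Developer Zone first:

```dockerfile
# Sketch only: base image matches the Bookworm hosts; package names are placeholders
FROM arm64v8/debian:bookworm-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy in the arm64 .deb packages downloaded from the Hailo Developer Zone
COPY debs/ /tmp/debs/
RUN apt-get update && apt-get install -y /tmp/debs/*.deb \
    && rm -rf /tmp/debs /var/lib/apt/lists/*
```

Note that the kernel driver itself (item 1) has to be loaded on the host, not inside the container — the container only needs the runtime libraries and bindings, plus access to the host’s Hailo device node.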

Actually, I’m not using Docker at all in the Kubernetes environment, and don’t intend to. It’s not K3s on Docker on RPi, or Docker on K3s on RPi; it’s just clean K3s running on RPi. I’m only running Docker on my laptop, and solely to attempt rebuilding the Docker image for arm64. The Kubernetes cluster is Raspberry Pi, true, but the Hailo-8 compute nodes add up to 20 arm64 cores, 20 GB of memory, and 5 Hailo-8 modules for processing, with 8 TB of storage available in Longhorn. Once I prove this works, I’ll increase it to 32 cores and 32 GB of memory and add 3 more Hailo modules for processing.

I guess from what you’re saying, my only path forward is to build the Docker image from my laptop for arm64, including only these components:

  1. HailoRT driver
  2. HailoRT
  3. HailoRT Python bindings
  4. Hailo-tappas-core
  5. Hailo-tappas-core Python bindings

Correct?
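
Something like this from my laptop, if I’ve understood correctly (the registry and tag here are placeholders for my own):

```shell
# Cross-build the arm64 image from the x86 laptop and push it to the
# cluster's registry (registry/tag are placeholders)
docker buildx create --name arm-builder --use
docker buildx build --platform linux/arm64 \
  -t registry.example.local/hailo-runtime:arm64 \
  --push .
```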

This mainly comes from my being fairly certain that a container image can be deployed on K3s and assigned to all the nodes in the cluster if needed… I’m not sure whether a standalone Docker container can do the same.
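
For what it’s worth, the way I picture deploying it to every Hailo worker is a DaemonSet with a node selector, something like the sketch below (the label, image name, and device path handling are my assumptions):

```yaml
# Sketch: run the Hailo runtime image on every node labeled hailo=true
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hailo-runtime
spec:
  selector:
    matchLabels:
      app: hailo-runtime
  template:
    metadata:
      labels:
        app: hailo-runtime
    spec:
      nodeSelector:
        hailo: "true"          # label I'd apply to the 5 Hailo worker blades
      containers:
        - name: hailo-runtime
          image: registry.example.local/hailo-runtime:arm64  # placeholder
          securityContext:
            privileged: true   # simplest way to reach the device; a device plugin would be cleaner
          volumeMounts:
            - name: hailo-dev
              mountPath: /dev/hailo0
      volumes:
        - name: hailo-dev
          hostPath:
            path: /dev/hailo0
```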