Request: TensorFlow Lite Delegate

So I bought the Raspberry Pi AI Kit, assuming it would work as easily as the Coral Edge TPU USB Accelerator. Unfortunately, that is not the case, and now I’m stuck trying to figure out the dependencies of the whole Hailo stack.

Wouldn’t it be great if one could use the AI Kit (Hailo-8L) directly with TensorFlow Lite, just like one would use the Coral Edge TPU?

So my request is: please add a delegate to TensorFlow Lite to support the Hailo-8.
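To make the request concrete, here is a minimal sketch of the delegate pattern as it works today with the Coral Edge TPU, and how a hypothetical Hailo delegate could slot into the exact same API. This assumes `tflite_runtime` is installed; `libhailo_delegate.so` is an invented name for illustration only, not a real library.

```python
# Sketch: loading a hardware delegate into a TF Lite interpreter.
# Assumes tflite_runtime is available; the Hailo delegate name below
# is hypothetical -- no such delegate exists today (that is the request).
try:
    from tflite_runtime.interpreter import Interpreter, load_delegate
except ImportError:
    Interpreter = load_delegate = None  # tflite_runtime not installed


def make_interpreter(model_path, delegate_lib):
    """Create a TF Lite interpreter that offloads ops to a delegate."""
    if load_delegate is None:
        raise RuntimeError("tflite_runtime is not installed")
    return Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate(delegate_lib)],
    )


# Today, for the Coral Edge TPU:
#   interpreter = make_interpreter("model_edgetpu.tflite", "libedgetpu.so.1")
# What this request asks for (hypothetical):
#   interpreter = make_interpreter("model.tflite", "libhailo_delegate.so")
```

The point of the pattern is that the application code stays identical across accelerators; only the delegate shared library and the compiled model change.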

Not necessarily. Each framework comes with its own set of strengths and weaknesses. It often depends on the specific use case.

I will check with our R&D team. They have likely already looked at it, and I suspect there are some good reasons why we do not support this.

What are you stuck on? Maybe we can answer your specific questions, and soon you may be happily using the Hailo-8L.


My main goal is to get the Hailo-8L working on a Raspberry Pi 5, running Ubuntu 24.04, as that is the setup our industrial customers use. I’ll pop the details of the issues in another post.

I understand this, but on the other hand it causes fragmentation in the market, where we would rather see standardisation.

If a customer comes to us with an existing model, we would like to deploy that model on any platform: AMD64 CPU, Nvidia, Coral TPU, Hailo, MediaTek, etc. TensorFlow and TF Lite look like a good standard to support for this. But I’m open to suggestions.

The only thing we want to avoid is having to build and maintain a pipeline for each specific hardware accelerator - especially if these pipelines need manual intervention, with context and knowledge of the model, like they currently do.

What I want to see is an ML workload that is as portable as a Linux amd64 binary: it runs on any CPU supporting the amd64 instruction set.

I second this; it would save a lot of headaches for me.


That is not going to happen any time soon.

Exactly the issue. We are not just talking about CPUs, but CPUs, GPUs, and specialized hardware like the Hailo-8. Even on CPUs you have different instruction sets and extensions, like AVX and NEON.

While this might be inconvenient, it also allows us to innovate and find unique solutions. And it keeps us engineers employed. Would you still work on AI if it was simple, or would you want to work on the next big thing? :slight_smile:

That is the price you pay to run a neural network on a low-power, high-efficiency chip at the price of a Hailo-8 instead of an expensive GPU. That may not always be the economic thing to do: if you only have one system and plenty of space, you are better off paying for an expensive GPU. But once you want to scale, paying for the extra software development pays off many times over in reduced hardware cost.