I’ve been looking through the hailort library and a few questions came up:
- Is it possible to define a custom `hailort::net_flow::Op`?
- If so, how do I know which target architecture to compile the code for, or is it just the host arch?
- How would this new Op be accelerated by the device? Or does the acceleration come from running on a dedicated co-processor that doesn’t have to handle OS-level work?
For context:
I have a processing pipeline built with Nx (https://github.com/elixir-nx/nx), which can compile Elixir (https://github.com/elixir-lang/elixir) down to StableHLO, and I also have access to an AOT compiler built on IREE.
The idea would be to provide a custom Op that receives as input the bytecode, metadata, and input data for the IREE runtime, and returns the output. I could then call this hailort-based library from Elixir.