Supported operators

Dear Hailo,

I am using a Raspberry Pi 5, and I built a model with the PyTorch framework.
The model can run inference on the RPi CPU, but it is slow.

So I would like to use the Hailo-8L to speed things up, but I want to know whether the operators I use are available on that NPU.
For example, are STFT or the GeLU activation function supported on the Hailo-8L?

Does it matter whether the model comes from PyTorch, TensorFlow, or TFLite? Is there a way to check whether a model will work?
And if it doesn't work, is there a way to make it work?

Thank you!

Hey @whrltjr93,

Welcome to the Hailo Community!

Thank you for your question! Below is an overview of the supported operators and frameworks for Hailo devices, specifically the Hailo-8L, covering whether a model that uses STFT or GeLU activations will be compatible with the Hailo-8L NPU and potential workarounds where it is not.


1. Supported Operators on Hailo NPUs

Hailo devices, including the Hailo-8L, support a broad range of operations. As per the documentation:

  • Supported Activations:

    • Mish, Hard-Swish, SiLU
    • GeLU (preview)
    • PReLU, Tanh, Exp, and Sqrt
  • Supported Operators:

    • Convolutions, Depthwise Convolutions
    • Element-wise operations: Add, Subtract, Multiply, Divide
    • Pooling: Average Pooling (extended support)
    • Other operators: Expand, ReduceL2, Square, InstanceNormalization

Unfortunately, STFT (Short-Time Fourier Transform) is not listed among the natively supported operations. However, Hailo’s flexible compiler might allow for customization or preprocessing to transform operations that aren’t directly supported into compatible formats.


2. Framework Compatibility

Hailo devices are compatible with multiple frameworks, including:

  • TensorFlow / TensorFlow Lite
  • PyTorch
  • Keras and ONNX models

The framework you use (PyTorch, TensorFlow, or TFLite) won’t significantly affect compatibility, as long as the operations used are available after the model is converted and compiled using the Hailo Dataflow Compiler.
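
In practice, a PyTorch model is first exported to ONNX, and the resulting .onnx file is what the Hailo Dataflow Compiler parses. Here is a minimal sketch of that export step; the model and input shape are placeholders, not your actual network:

```python
import torch

# Placeholder model; substitute your own trained network.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.GELU(),
)
model.eval()

# Dummy input matching the model's expected input shape.
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the .onnx file is what the Hailo Dataflow Compiler parses.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```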


3. Workaround for Unsupported Operators

If certain operators like STFT are not natively supported, consider these options:

  1. Preprocess STFT off-chip: Perform the STFT on the CPU or a separate processing unit, and feed the transformed data into the Hailo NPU for further inference.

  2. Approximate Operators Using Supported Layers: GeLU, for example, can be approximated with other supported non-linear activations such as SiLU or Hard-Swish if required (see the sketch after this list).

  3. Custom Models and Tuning: With Hailo’s Dataflow Compiler, there is room to optimize or adapt unsupported layers into compatible operations during compilation. You may experiment with ONNX conversion or replacing unsupported blocks using TensorFlow Lite’s operators.
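
To make option 2 concrete, here is a minimal sketch comparing GeLU against SiLU and Hard-Swish on the same inputs. The printed deviations show how close (or rough) the substitution is over a typical activation range; whether it is acceptable depends on your model, and fine-tuning with the replacement activation is often advisable:

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-4.0, 4.0, steps=9)

gelu = F.gelu(x)          # exact GeLU
silu = F.silu(x)          # x * sigmoid(x), a common stand-in
hswish = F.hardswish(x)   # piecewise approximation

# Inspect the worst-case deviation over this range.
print("max |GeLU - SiLU|:      ", (gelu - silu).abs().max().item())
print("max |GeLU - Hard-Swish|:", (gelu - hswish).abs().max().item())
```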


4. Summary and Recommendation

  • GeLU is supported, though still in preview, meaning it is available but may undergo further optimization.
  • STFT is not currently supported natively on the Hailo-8L; it is recommended to perform it outside the NPU as a preprocessing step.
  • Your use of PyTorch is fully supported, along with other frameworks such as TensorFlow and TFLite.

Let me know if further clarification is needed!

Best Regards,
Omri

Thank you for your kind response.

There are still parts that I don’t fully understand.
Is there a way to directly check for unsupported operators in a PyTorch (.pth) model that I've created? For example, not just activation functions or STFT, but various PyTorch functions such as view_as_complex or hann_window. Do I need to manually compare these with the functions listed in the Hailo Dataflow Compiler User Guide?

So, even though you said Hailo's flexible compiler might allow transformations for unsupported operations, does that mean the user has to manually replace them with supported operators? It doesn't automatically convert unsupported operations into similar ones during compilation, right?

Lastly, if I were to preprocess the STFT outside the NPU, would the following approach be correct?
Currently, I’m using an RPI5, and my plan is to first perform the STFT transformation outside the NPU. Then, I’ll create a new model that takes the STFT-transformed data as input, run that model on the NPU, and finally, apply the inverse STFT (ISTFT) as post-processing. Is this the correct approach? Thank you.

Is there a way to request the addition of currently unsupported operators (while using the same hardware)? If they are added to the compiler, does that mean the NPU device’s firmware would also need to be updated?

Since it’s my first time using an NPU, there are many things I don’t know, which is why I have so many questions.

Have a great day!

Best Regards,
Kiseok

Hey @whrltjr93,

Checking Operator Compatibility

  1. Use the Hailo Parser to identify unsupported operators:
     hailo parser onnx --hw-arch hailo8l model.onnx --verbose
  2. Alternatively, use ONNX Runtime for compatibility verification
  3. Use PyTorch hooks to trace operators during the forward pass (a simpler ONNX-level check is sketched below)
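
As a rough first pass before running the Hailo parser, you can list the unique operator types in an exported ONNX graph and compare them against the tables in the DFC User Guide. A minimal sketch; the KNOWN_SUPPORTED set below is illustrative only, and the User Guide remains the authoritative list:

```python
import onnx

model = onnx.load("model.onnx")

# Collect every operator type that appears in the graph.
op_types = sorted({node.op_type for node in model.graph.node})

# Illustrative subset only; consult the DFC User Guide for the real list.
KNOWN_SUPPORTED = {"Conv", "Add", "Mul", "Relu", "Gelu", "AveragePool"}

for op in op_types:
    status = "ok" if op in KNOWN_SUPPORTED else "check manually"
    print(f"{op}: {status}")
```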

Handling Unsupported Operations

  • No automatic operator transformation is available in Hailo DFC
  • Options for unsupported operators (like STFT or view_as_complex):
    1. Preprocess externally
    2. Replace manually with supported operators
    3. Use CPU for pre/post-processing steps

Recommended Workflow for STFT

  1. Preprocess: Run STFT on CPU (e.g., RPI5)
  2. Inference: Process transformed data on Hailo NPU
  3. Postprocess: Apply ISTFT on CPU (see the sketch below)
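
Here is a minimal sketch of that three-step pipeline. run_npu_inference is a hypothetical placeholder for the actual HailoRT call, and the STFT parameters are illustrative:

```python
import numpy as np
import torch

N_FFT, HOP = 512, 128

def run_npu_inference(spec: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder: replace with the actual HailoRT call.
    # An identity pass-through keeps this sketch runnable end to end.
    return spec

# 1. Preprocess on the CPU: STFT, then split into real/imag channels,
#    since the NPU operates on real-valued tensors rather than complex ones.
audio = torch.randn(16000)  # dummy 1-second waveform at 16 kHz
window = torch.hann_window(N_FFT)
spec = torch.stft(audio, n_fft=N_FFT, hop_length=HOP,
                  window=window, return_complex=True)
npu_input = torch.view_as_real(spec).permute(2, 0, 1).numpy()  # (2, freq, time)

# 2. Inference on the Hailo NPU (placeholder above).
npu_output = run_npu_inference(npu_input)

# 3. Postprocess on the CPU: rebuild the complex tensor and apply ISTFT.
out = torch.from_numpy(npu_output).permute(1, 2, 0).contiguous()
restored = torch.istft(torch.view_as_complex(out), n_fft=N_FFT,
                       hop_length=HOP, window=window)
print(restored.shape)  # roughly the original waveform length
```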

Important Notes

  • GeLU is currently in preview mode
  • New operator support requires firmware and compiler updates
  • Submit feature requests through Hailo support for needed operators

This approach lets you effectively use the Hailo-8L while managing unsupported operations. Need any clarification on specific parts?