Can I use the RPi 5 Hailo NPU with libraries other than Hailo's own?

So I am a newbie on the subject, but I learn more each day and have started to understand a bit about the different libraries out there and the usage of different models. What I’m asking is: is it possible to make use of the NPU on the RPi 5 when I write a script using dlib, ArcFace, or whatever other model frameworks are possible?

I don’t know how to make sure that the processing is done by the NPU and not the CPU on the RPi 5. My question might be strangely formulated, but I hope you guys understand what I’m asking.

Do I have to somehow make my programs compliant with the Hailo NPU?

Hey @oktober.yildiz,

Here’s a quick guide for using models with Hailo:

  1. Get Your Model Ready

    • Option 1: Use Pre-compiled Models

      • Visit Hailo Model Zoo
      • Choose from optimized models in HEF format
      • Includes popular architectures for:
        • Object detection
        • Classification
        • Segmentation
        • Pose estimation
    • Option 2: Compile Your Own Model

      • Supported frameworks: TensorFlow, PyTorch, ONNX
      • Use Hailo’s Data Flow Compiler (DFC) to convert to HEF format
      • Ensures optimization for Hailo hardware
  2. Run Inference (3 Methods)

    • Python API: Best for quick prototyping and development (see the sketch after this list)
    • C++ API: Optimal for production and performance
    • GStreamer pipelines: Great for video processing pipelines
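
For instance, here’s a minimal sketch of the Python route, assuming HailoRT’s hailo_platform package is installed and you have a compiled model.hef (the file name and the dummy input are illustrative, not from a specific example):

    import numpy as np
    from hailo_platform import (HEF, VDevice, HailoStreamInterface,
                                ConfigureParams, InputVStreamParams,
                                OutputVStreamParams, InferVStreams,
                                FormatType)

    hef = HEF('model.hef')  # hypothetical path to your compiled model
    with VDevice() as target:
        # Configure the device for this network
        params = ConfigureParams.create_from_hef(hef=hef, interface=HailoStreamInterface.PCIe)
        network_group = target.configure(hef, params)[0]
        network_group_params = network_group.create_params()

        in_params = InputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)
        out_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

        # Build a dummy batch matching the model's input shape
        input_info = hef.get_input_vstream_infos()[0]
        frame = np.zeros((1, *input_info.shape), dtype=np.float32)

        # Activate the network group and run inference on the NPU
        with network_group.activate(network_group_params):
            with InferVStreams(network_group, in_params, out_params) as pipeline:
                results = pipeline.infer({input_info.name: frame})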

Check out these examples for implementation:

Would you like more details about any specific part?

I think it’s very clear, thank you for your answer.

Since you asked, I thought I’d take the opportunity to ask about specifics.

Do I have to use the models under the hailo_models folder (hailo_model_zoo/hailo_models at master · hailo-ai/hailo_model_zoo · GitHub)? Or can I use the ones under the training folder? What’s the difference between them?

Or can I use any model? 🙂

Essentially you can use any model you want; you just need to compile it to the HEF format (Hailo Executable File). We provide ready-to-use models, and the training folder is there to help if you want to retrain a model with different datasets or classes. Alternatively, you can also use a fully custom model.
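
For example, converting a custom ONNX model with the DFC’s Python API looks roughly like this (a sketch based on the DFC tutorials; the file names and the random calibration data are placeholders you would replace with your own):

    import numpy as np
    from hailo_sdk_client import ClientRunner

    # Translate the ONNX model into Hailo's internal representation
    runner = ClientRunner(hw_arch='hailo8')  # Hailo-8 is the chip on the 26 TOPS AI HAT+
    runner.translate_onnx_model('my_model.onnx', 'my_model')

    # Quantize/optimize using a small calibration set (dummy NHWC data here)
    calib_data = np.random.rand(64, 224, 224, 3).astype(np.float32)
    runner.optimize(calib_data)

    # Compile to a HEF and save it for deployment on the Pi
    hef = runner.compile()
    with open('my_model.hef', 'wb') as f:
        f.write(hef)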

So, if I run a model compiled to the HEF format, will the RPi recognize this and let the AI HAT+ do the inference, or do I need to set a bunch of environment variables?

You can check out our example code to see how we handle both the inference and post-processing steps. It’ll give you a good idea of the proper way to run everything.
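
No special environment variables are needed: as long as the HailoRT driver and runtime are installed on the Pi (on Raspberry Pi OS they come with the hailo-all package), opening a device through HailoRT will use the HAT. You can confirm the device is detected with hailortcli scan, or from Python (a sketch, assuming the HailoRT bindings are installed):

    from hailo_platform import Device

    # Lists the Hailo devices HailoRT can see; an empty list means the
    # PCIe driver does not detect the HAT.
    print(Device.scan())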

So I have the AI HAT+, which I’m guessing is the 26 TOPS one, and it is integrated into the board. Now I’m trying to create a development environment on my host machine, where I do not have access to any Hailo PCIe chip, and it’s a bit confusing.

The instructions fail; for example, when I try to install the Dataflow Compiler:

Building wheels for collected packages: pwlf, pygraphviz, pyDOE
  Building wheel for pwlf (setup.py) ... done
  Created wheel for pwlf: filename=pwlf-2.3.0-py3-none-any.whl size=16657 sha256=847fce03d2b8672fd56ef0e56e9c33be8f957033d68f903857978e8e50251f05
  Stored in directory: /home/ddkrd/.cache/pip/wheels/83/da/81/eb904e6d4045d0fd6922019cae45e52e83e475a06df5e7f9ae
  Building wheel for pygraphviz (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for pygraphviz (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [58 lines of output]
      running bdist_wheel
      running build
      running build_py
      creating build/lib.linux-x86_64-cpython-310/pygraphviz
      copying pygraphviz/testing.py -> build/lib.linux-x86_64-cpython-310/pygraphviz
      copying pygraphviz/agraph.py -> build/lib.linux-x86_64-cpython-310/pygraphviz
      copying pygraphviz/graphviz.py -> build/lib.linux-x86_64-cpython-310/pygraphviz
      copying pygraphviz/__init__.py -> build/lib.linux-x86_64-cpython-310/pygraphviz
      copying pygraphviz/scraper.py -> build/lib.linux-x86_64-cpython-310/pygraphviz
      creating build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_readwrite.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_repr_mimebundle.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_subgraph.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/__init__.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_drawing.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_html.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_node_attributes.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_unicode.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_close.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_attribute_defaults.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_edge_attributes.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_clear.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_string.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_layout.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_scraper.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      copying pygraphviz/tests/test_graph.py -> build/lib.linux-x86_64-cpython-310/pygraphviz/tests
      running egg_info
      writing pygraphviz.egg-info/PKG-INFO
      writing dependency_links to pygraphviz.egg-info/dependency_links.txt
      writing top-level names to pygraphviz.egg-info/top_level.txt
      reading manifest file 'pygraphviz.egg-info/SOURCES.txt'
      reading manifest template 'MANIFEST.in'
      warning: no files found matching '*.swg'
      warning: no files found matching '*.png' under directory 'doc'
      warning: no files found matching '*.html' under directory 'doc'
      warning: no files found matching '*.txt' under directory 'doc'
      warning: no files found matching '*.css' under directory 'doc'
      warning: no previously-included files matching '*~' found anywhere in distribution
      warning: no previously-included files matching '*.pyc' found anywhere in distribution
      warning: no previously-included files matching '.svn' found anywhere in distribution
      no previously-included directories found matching 'doc/build'
      adding license file 'LICENSE'
      writing manifest file 'pygraphviz.egg-info/SOURCES.txt'
      copying pygraphviz/graphviz.i -> build/lib.linux-x86_64-cpython-310/pygraphviz
      copying pygraphviz/graphviz_wrap.c -> build/lib.linux-x86_64-cpython-310/pygraphviz
      running build_ext
      building 'pygraphviz._graphviz' extension
      creating build/temp.linux-x86_64-cpython-310/pygraphviz
      x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DSWIG_PYTHON_STRICT_BYTE_CHAR -I/home/ddkrd/.virtualenvs/HAILOVENV/include -I/usr/include/python3.10 -c pygraphviz/graphviz_wrap.c -o build/temp.linux-x86_64-cpython-310/pygraphviz/graphviz_wrap.o
      pygraphviz/graphviz_wrap.c:9: warning: "SWIG_PYTHON_STRICT_BYTE_CHAR" redefined
          9 | #define SWIG_PYTHON_STRICT_BYTE_CHAR
            |
      <command-line>: note: this is the location of the previous definition
      pygraphviz/graphviz_wrap.c:3023:10: fatal error: graphviz/cgraph.h: No such file or directory
       3023 | #include "graphviz/cgraph.h"
            |          ^~~~~~~~~~~~~~~~~~~
      compilation terminated.
      error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for pygraphviz
  Building wheel for pyDOE (setup.py) ... done
  Created wheel for pyDOE: filename=pyDOE-0.3.8-py3-none-any.whl size=18185 sha256=d22eba9fcd30360750c2fa5f4b3e4b566e8a5139ee5006110278d3fe4905b102
  Stored in directory: /home/ddkrd/.cache/pip/wheels/ce/b6/d7/c6b64746dba6433c593e471e0ac3acf4f36040456d1d160d17
Successfully built pwlf pyDOE
Failed to build pygraphviz
ERROR: Could not build wheels for pygraphviz, which is required to install pyproject.toml-based projects

I am not dumb but I am starting to feel like it.

Linux pop-os 6.9.3-76060903-generic #202405300957~1726766035~22.04~4092a0e SMP PREEMPT_DYNAMIC Thu S x86_64 x86_64 x86_64 GNU/Linux
ddkrd@pop-os:~/Projects/HAILO-AI/hailo_model_zoo$ which python3.10
/usr/bin/python3.10

Is there something I am missing? Does the NVIDIA driver version HAVE to be 525, or can it be 525+?

The Docker container variant of the software suite states that no retraining can be done within it, so I am assuming that I need the software suite on my host machine, where the installation is failing.

I’d like to point out that installing the Dataflow Compiler does work when I follow the instructions in Dataflow Compiler v3.28.0.

I’m wondering about this because I’ve never set up a proper development environment before, let alone one with as many dependencies as this, and I’m starting to feel a bit frustrated with myself. Does the Docker container that you provide suffice? I’ve now installed Ubuntu 22.04 on my host system, but installing the exact NVIDIA driver version 525 seems impossible; the nvidia-driver-525 package, for example, doesn’t contain any files and just upgrades to 535. This is costing me so much valuable time: instead of attempting to “develop”, I am now stuck in dependency/prerequisite hell.

I would appreciate some pointers here, please.

Hey @oktober.yildiz,

Regarding the DFC and the development environment for inference on the Raspberry Pi (RPi): the DFC is x86-specific and cannot be used on the RPi itself. Your development workflow should involve running the DFC on an x86 machine, either in a Docker container or directly on the host, and then deploying the compiled HEF and your application to the RPi for inference.
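
As a side note on the log you posted: the pygraphviz failure (fatal error: graphviz/cgraph.h: No such file or directory) usually means the Graphviz development headers are missing on the host. On Ubuntu, installing the graphviz and graphviz-dev packages with apt before re-running the DFC installation typically resolves it.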

For more info, please check out: