Data flow compiler for custom trained network

Hello there,

I am receiving the RPi5 AI Kit in the next few days and wanted to get an overview of how to run (custom) models on it.

From the documentation I understood that a HEF file is required and that it can be compiled using the Dataflow Compiler. But I cannot find the compiler anywhere…

Where can I get the compiler to transform my custom network?

Many thanks

Marcus


Hi @emasty, I suggest starting from this post:
Getting Started with RPI5-Hailo8L - General - Hailo Community

I did notice that topic but am wondering what solutions there are.

Offering an AI accelerator without the capability to run anything except a few pretrained vision-based networks… it is hard for me to understand the use case and intention.

I figured out that the model zoo whl* includes some kind of compiler for a limited set of vision-based models. As far as I understand, you can fine-tune those networks and compile them, even though the requirements clearly state that you need the Dataflow Compiler. It looks like you don’t actually need it for some of the networks.

But still, being limited to a few selected networks for very specific use cases defeats the purpose of the AI kit.

I just saw a post mentioning that Hailo is working on a solution. Hopefully that is a matter of days and not months…

*You have to downgrade pip to 22.0.x to install the whl; otherwise the installation fails (e.g. with the latest pip version).


Ok, unfortunately I noticed that even retraining networks covered by the model zoo doesn’t work.

The hailomz command is there, but it complains that the module hailo_sdk_common is not available, which, I guess, is part of the Dataflow Compiler.

Is there any way to access the required packages to at least compile a network covered by the model-zoo?

As mentioned in this post, the compiler is not released to the general public.

Hi there,
did you manage to find the Dataflow Compiler?
David

I also have this SDK client issue when trying to run cd hailo_model_zoo; pip install -e .

My current understanding is that the Dataflow Compiler is unfortunately not available as of now.

You can run the pretrained networks from the model zoo; that’s about it. You cannot get your retrained or your custom network running on the device, as the HEF format is proprietary and the compiler is not provided.

Hailo seems to be working on it, but no word on timeline.
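
In the meantime, running one of the precompiled HEFs from the model zoo with the HailoRT Python bindings looks roughly like this. This is just a minimal sketch based on my understanding of the hailo_platform API; "model.hef" is a placeholder for whatever network you downloaded, and the exact vstream parameter options may differ between HailoRT versions:

import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, InferVStreams,
                            InputVStreamParams, OutputVStreamParams,
                            HailoStreamInterface, FormatType)

hef = HEF("model.hef")  # placeholder: any precompiled HEF from the model zoo

with VDevice() as target:
    # Configure the device for this network
    configure_params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, configure_params)[0]

    in_params = InputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)
    out_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    # Build a dummy input with the shape the HEF expects
    input_info = hef.get_input_vstream_infos()[0]
    dummy_input = {input_info.name: np.zeros((1, *input_info.shape), dtype=np.float32)}

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(network_group.create_params()):
            results = pipeline.infer(dummy_input)
            print({name: out.shape for name, out in results.items()})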

I want to run my custom ONNX model on Hailo, but I am not able to get hailomz working. During the installation of the model zoo I get this error: raise ModuleNotFoundError("hailo_sdk_client was not installed or you are not "

Hi @basuroyrohan, have you installed the Dataflow Compiler into the same virtual env?
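
A quick sanity check you can run from inside the env to see whether the SDK packages are actually importable (plain Python, nothing Hailo-specific):

# Check whether the Hailo packages are visible from the active virtual env
import importlib.util

for mod in ("hailo_sdk_client", "hailo_sdk_common", "hailo_model_zoo"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod}: {'found' if spec is not None else 'MISSING'}")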

Hi,
I installed Dataflow Compiler version 3.28 with TensorFlow 2.4 and Python 3.9.
When I run the example with the following code:

runner = ClientRunner(hw_arch=chosen_hw_arch)
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["input.1"],
    end_node_names=["191"],
    net_input_shapes={"input.1": [1, 3, 224, 224]},
)

I got the following error:

NotFoundError                             Traceback (most recent call last)
Cell In[4], line 1
----> 1 runner = ClientRunner(hw_arch=chosen_hw_arch)
      2 hn, npz = runner.translate_onnx_model(
      3     onnx_path,
      4     onnx_model_name,
    (...)
      7     net_input_shapes={"input.1": [1, 3, 224, 224]},
      8 )

File ~/anaconda3/envs/TensorFlow2.4/lib/python3.9/site-packages/hailo_sdk_client/runner/client_runner.py:130, in ClientRunner.__init__(self, hn, hw_arch, har)
    127 self._sdk_backend = None
    129 # Waiting for params
--> 130 HSimWrapper().load()
    131 self._cached_model = None
    132 self._sub_models = None

File ~/anaconda3/envs/TensorFlow2.4/lib/python3.9/site-packages/hailo_sdk_common/paths_manager/SimWrapper.py:15, in HSimWrapper.load(self)
     13 def load(self):
     14     if self._hsim is None:
---> 15         self._load()

File ~/anaconda3/envs/TensorFlow2.4/lib/python3.9/site-packages/hailo_sdk_common/paths_manager/SimWrapper.py:19, in HSimWrapper._load(self)
     17 def _load(self):
     18     hsim_path = SDKPaths().join_sdk_client("emulator/emulator/lib/HSim.so")
---> 19     self._hsim = tf.load_op_library(hsim_path)

File ~/anaconda3/envs/TensorFlow2.4/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py:54, in load_op_library(library_filename)
     31 @tf_export('load_op_library')
     32 def load_op_library(library_filename):
     33     """Loads a TensorFlow plugin, containing custom ops and kernels.
     34
     35     Pass "library_filename" to a platform-specific mechanism for dynamically
    (...)
     52         RuntimeError: when unable to load the library or get the python wrappers.
     53     """
---> 54     lib_handle = py_tf.TF_LoadLibrary(library_filename)
     55     try:
     56         wrappers = _pywrap_python_op_gen.GetPythonWrappers(
     57             py_tf.TF_GetOpList(lib_handle))

NotFoundError: /home/brad/anaconda3/envs/TensorFlow2.4/lib/python3.9/site-packages/hailo_sdk_client/emulator/emulator/lib/HSim.so: undefined symbol: cudaPeekAtLastError

Can anybody give me a suggestion?

Hi @brad.pai,
Our supported flow uses virtual environments (venv), not conda. Can you use that? In addition, it seems that you’ve changed the TF version that the DFC comes with. The DFC depends tightly on TF, so you should stick to the version it ships with.
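
For reference, once the DFC is installed into a clean virtualenv with the TF version it ships with, the ONNX-to-HEF flow looks roughly like this. This is only a minimal sketch: the ONNX path, model name and calibration file are placeholders, and calib_dataset stands for a NumPy array of preprocessed calibration images whose exact shape and layout depend on your parsed model:

import numpy as np
from hailo_sdk_client import ClientRunner

onnx_path = "model.onnx"        # placeholder
onnx_model_name = "my_model"    # placeholder

runner = ClientRunner(hw_arch="hailo8l")  # Hailo-8L on the RPi5 AI Kit

# 1. Parse the ONNX model into Hailo's internal representation
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["input.1"],
    end_node_names=["191"],
    net_input_shapes={"input.1": [1, 3, 224, 224]},
)

# 2. Quantize/optimize with a calibration set
# Placeholder: a saved NumPy array of preprocessed images, e.g. shape (64, 224, 224, 3)
calib_dataset = np.load("calib_set.npy")
runner.optimize(calib_dataset)

# 3. Compile and write the HEF to disk
hef = runner.compile()
with open("model.hef", "wb") as f:
    f.write(hef)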