Running the Hailo Model Zoo project

Hi Folks,
I am trying to run the Hailo Model Zoo project, but I need the Data Flow Compiler first, as stated here:

Where can we find it?

Thanks,
David


Hi @dudibs,

All our packages are available in the Developer Zone: https://hailo.ai/developer-zone/software-downloads/

Regards,
Omri

@dudibs Please note that the Data Flow Compiler (DFC) is not yet available to Community users. We are actively working on its release and ensuring it is well-documented for easy use.
Stay tuned for updates.
I apologize for the inconvenience and appreciate your patience.

ok thanks.
Can I install the model zoo project and use it without the compiler?

Hi,
You can’t install it without the DFC. However, you can download our pre-compiled networks from the Model Zoo GitHub.
If you have a specific network you want to use, start with the pre-compiled version. We will open the DFC soon.
Thank you for your patience and understanding.


Hi Gilad,
Thanks for the response.

I am managing to run the Hailo application code examples; everything works smoothly. However, I wish to use the compiled SegFormer that exists in the model zoo, available here:

https://github.com/hailo-ai/hailo_model_zoo/blob/master/docs/public_models/HAILO8/HAILO8_semantic_segmentation.rst

It is not in TAPPAS, but I wish to interact with it: send an image to the model, get the result, and get a sense of the accuracy and inference time.

How can I interact with this compiled model (segformer_b0_bn)? Are there any instructions on how to build an application around this model so I can run inference with it?

In the model zoo there is also a link from each model to the repo it originated from; this is a good starting point.
Alternatively, you can implement a pipeline for the SegFormer based on the TAPPAS GStreamer plugins. There are general instructions at this link.
This would include writing/translating the post-processing function for the SegFormer, either from the model zoo or from the original repo.
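As a sketch of what such a post-processing function typically involves: the core step for semantic segmentation is an argmax over the class axis of the output tensor. The exact output layout of segformer_b0_bn (channel order, and whether the logits are already upsampled to the input resolution) is an assumption here and should be checked against the model zoo implementation:

```python
import numpy as np

def segmentation_postprocess(logits: np.ndarray) -> np.ndarray:
    """Collapse a (height, width, num_classes) logit map into a
    per-pixel class-ID mask by taking the argmax over the class axis.

    Assumes a channels-last layout; verify this against the actual
    output shape reported for segformer_b0_bn.
    """
    return np.argmax(logits, axis=-1).astype(np.uint8)

# Toy example: a 4x4 output map with 3 classes.
rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 4, 3)).astype(np.float32)
mask = segmentation_postprocess(logits)
print(mask.shape)  # (4, 4)
```

The model zoo version may add steps such as resizing the mask back to the original image size; this shows only the classification step itself.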

Hi,
Please note that if you are using H8L (RPi AI kit) you’ll need to download the H8L version available here:

To check the inference time on your system you can use the hailortcli tool.
For example (running on my laptop, NOT an RPi!):

hailortcli run '/home/giladn/Downloads/segformer_b0_bn.hef' 
Running streaming inference (/home/giladn/Downloads/segformer_b0_bn.hef):
  Transform data: true
    Type:      auto
    Quantized: true
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
Network segformer_b0_bn/segformer_b0_bn: 100% | 38 | FPS: 7.59 | ETA: 00:00:00
> Inference result:
 Network group: segformer_b0_bn
    Frames count: 38
    FPS: 7.59
    Send Rate: 95.51 Mbit/s
    Recv Rate: 31.84 Mbit/s

You can run with a different batch size to gain more performance:

hailortcli run '/home/giladn/Downloads/segformer_b0_bn.hef' --batch-size 3
Running streaming inference (/home/giladn/Downloads/segformer_b0_bn.hef):
  Transform data: true
    Type:      auto
    Quantized: true
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
Network segformer_b0_bn/segformer_b0_bn: 100% | 54 | FPS: 10.78 | ETA: 00:00:00
> Inference result:
 Network group: segformer_b0_bn
    Frames count: 54
    FPS: 10.79
    Send Rate: 135.72 Mbit/s
    Recv Rate: 45.24 Mbit/s

For post-processing you can check out the model zoo post-process: hailo_model_zoo/core/postprocessing/segmentation_postprocessing.py
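For visualizing the result, a common follow-up step is mapping each class ID to a color and blending the colored mask over the input image. The three-class palette below is purely hypothetical; the real class list depends on the dataset segformer_b0_bn was trained on:

```python
import numpy as np

# Hypothetical 3-class palette (background, class 1, class 2) as RGB rows.
PALETTE = np.array([[0, 0, 0], [0, 255, 0], [255, 0, 0]], dtype=np.uint8)

def colorize_and_blend(image: np.ndarray, mask: np.ndarray,
                       alpha: float = 0.5) -> np.ndarray:
    """Map per-pixel class IDs to palette colors and alpha-blend
    the colored mask over the original RGB image."""
    color_mask = PALETTE[mask]                 # (H, W, 3) via fancy indexing
    blended = (1 - alpha) * image + alpha * color_mask
    return blended.astype(np.uint8)

# Toy example: gray image with a small "class 1" region in the mask.
image = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
out = colorize_and_blend(image, mask)
print(out.shape)  # (4, 4, 3)
```

The blend factor `alpha` is just a display choice; the accuracy numbers should of course be computed on the raw class-ID mask, not the blended image.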

thanks. I will do so.

How do I know if I have the regular H8 unit or the H8L?

Running lspci provides this info about the unit itself:

57:00.0 Co-processor: Hailo Technologies Ltd. Hailo-8 AI Processor (rev 01)

I think it’s not the L version, right?

Use this command:
hailortcli fw-control identify

thanks. It’s a regular 8.

I will try to add the post-process aspects of the compiled SegFormer to TAPPAS… is it a doable task for someone unfamiliar with Hailo software, such as myself?

$ hailortcli fw-control identify
Executing on device: 0000:57:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.17.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: HLLWMB0214600587
Part Number: HM218B1C2LA
Product Name: HAILO-8 AI ACCELERATOR M.2 B+M KEY MODULE

Why not? Give yourself some credit 👌

@dudibs Take the semantic segmentation post-process in TAPPAS as a starting point: tappas/core/hailo/libs/postprocesses/semantic_segmentation/semantic_segmentation.cpp

You can use our semantic segmentation app from an older TAPPAS version for pipeline example:
https://github.com/hailo-ai/tappas/blob/v3.26.2/apps/h8/gstreamer/general/segmentation/semantic_segmentation.sh

You will need to change the output layer name in the post-process function (add the segformer_b0_bn prefix).
TIP: hailortcli parse-hef segformer_b0_bn.hef

To recompile TAPPAS postprocesses, see the instructions in tappas/docs/write_your_own_application/write-your-own-postprocess.rst at 4341aa360b7f8b9eac9b2d3b26f79fca562b34e4 · hailo-ai/tappas · GitHub

I’d be happy to get updates on your progress!

Thanks for that info, Gilad. I have tried to follow your instructions but I am stuck.
Thus, my plan now is to:

  1. First get TAPPAS up and running and get a feel for how the semantic segmentation model that already exists there works.
  2. Add the SegFormer to it.
  3. Recompile the TAPPAS postprocesses.
  4. Use the SegFormer output from TAPPAS.

Can you please instruct me on how to achieve step 1? Do you have any step-by-step instructions on how to install and use TAPPAS?

Thanks
David

I am trying to run TAPPAS first, to get a feel for how it works; for instance, to operate the model that currently works there.