Running the Hailo Model Zoo project

Hi Folks,
I am trying to run the Hailo Model Zoo project, but I need the Data Flow Compiler first, as stated here:

Where can we find it?

Thanks,
David


Hi @dudibs,

All our packages are available in the Developer Zone: https://hailo.ai/developer-zone/software-downloads/

Regards,
Omri

@dudibs Please note that the Data Flow Compiler (DFC) is not yet available to Community users. We are actively working on its release and ensuring it is well-documented for easy use.
Stay tuned for updates.
I apologize for the inconvenience and appreciate your patience.

OK, thanks.
Can I install the model zoo project and use it without the compiler?

Hi,
You can’t install it without the DFC. However, you can download our pre-compiled networks from the Model Zoo GitHub.
If you have a specific network you want to use, start with the pre-compiled version. We will open the DFC soon.
Thank you for your patience and understanding.


Hi Gilad,
Thanks for the response.

I am managing to run the Hailo application code examples, and everything works smoothly. However, I wish to use the compiled SegFormer from the Model Zoo, available here:

https://github.com/hailo-ai/hailo_model_zoo/blob/master/docs/public_models/HAILO8/HAILO8_semantic_segmentation.rst

It is not in TAPPAS, but I wish to interact with it: send an image to the model, get the result, and get a sense of the accuracy and inference time.

How can I interact with this compiled model (segformer_b0_bn)? Are there any instructions on how to build an application around this model so I can infer from it?

In the Model Zoo, each model also links to the repo it originated from; that is a good starting point.
Alternatively, you can implement a pipeline for the SegFormer based on the TAPPAS GStreamer plugins. There are general instructions at this link.
This would include writing/translating the post-processing function for the SegFormer, either from the Model Zoo or from the original repo.
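To make the post-processing part concrete, here is a minimal sketch in Python of what a SegFormer-style segmentation post-process typically does. It assumes the network output is a per-pixel class-logits tensor of shape (H, W, num_classes) and uses a made-up 19-class palette; check the Model Zoo implementation for the exact output layout and class set.

import numpy as np

# Hypothetical 19-class color palette; replace with the class colors your application uses.
PALETTE = np.random.default_rng(0).integers(0, 256, size=(19, 3), dtype=np.uint8)

def segmentation_postprocess(logits: np.ndarray) -> np.ndarray:
    """Convert an (H, W, num_classes) logits tensor into an (H, W, 3) RGB class mask.

    Assumes channel-last per-pixel class scores; verify against the
    Model Zoo segmentation post-processing for the real layout.
    """
    class_map = np.argmax(logits, axis=-1)   # (H, W) class index per pixel
    return PALETTE[class_map]                # (H, W, 3) color mask

def overlay(frame: np.ndarray, mask: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend the color mask over the original frame for visualization."""
    blended = alpha * mask.astype(np.float32) + (1.0 - alpha) * frame.astype(np.float32)
    return blended.astype(np.uint8)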

Hi,
Please note that if you are using the H8L (RPi AI Kit), you’ll need to download the H8L version, available here:

To check the inference time on your system you can use the hailortcli tool.
For example (running on my laptop, NOT an RPi!):

hailortcli run '/home/giladn/Downloads/segformer_b0_bn.hef' 
Running streaming inference (/home/giladn/Downloads/segformer_b0_bn.hef):
  Transform data: true
    Type:      auto
    Quantized: true
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
Network segformer_b0_bn/segformer_b0_bn: 100% | 38 | FPS: 7.59 | ETA: 00:00:00
> Inference result:
 Network group: segformer_b0_bn
    Frames count: 38
    FPS: 7.59
    Send Rate: 95.51 Mbit/s
    Recv Rate: 31.84 Mbit/s

You can run with a different batch size to gain more performance:

hailortcli run '/home/giladn/Downloads/segformer_b0_bn.hef' --batch-size 3
Running streaming inference (/home/giladn/Downloads/segformer_b0_bn.hef):
  Transform data: true
    Type:      auto
    Quantized: true
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
Network segformer_b0_bn/segformer_b0_bn: 100% | 54 | FPS: 10.78 | ETA: 00:00:00
> Inference result:
 Network group: segformer_b0_bn
    Frames count: 54
    FPS: 10.79
    Send Rate: 135.72 Mbit/s
    Recv Rate: 45.24 Mbit/s

For post-processing you can check out the Model Zoo post-process: hailo_model_zoo/core/postprocessing/segmentation_postprocessing.py
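If you want to prototype outside GStreamer first (push one image through the model and inspect the raw output before applying the post-processing sketched earlier), something along these lines with the HailoRT Python API should work. This is only a sketch: the class and parameter names follow the documented hailo_platform examples but may differ between HailoRT versions, and the dummy input must be replaced with a real image resized to the network input.

import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, InferVStreams,
                            InputVStreamParams, OutputVStreamParams,
                            HailoStreamInterface, FormatType)

HEF_PATH = "segformer_b0_bn.hef"  # pre-compiled model downloaded from the Model Zoo

hef = HEF(HEF_PATH)
target = VDevice()

# Configure the network group on the device (PCIe interface assumed).
configure_params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
network_group = target.configure(hef, configure_params)[0]
network_group_params = network_group.create_params()

input_params = InputVStreamParams.make(network_group, format_type=FormatType.UINT8)
output_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

input_info = hef.get_input_vstream_infos()[0]
height, width, channels = input_info.shape
frame = np.zeros((1, height, width, channels), dtype=np.uint8)  # replace with a real resized image

with InferVStreams(network_group, input_params, output_params) as infer_pipeline:
    with network_group.activate(network_group_params):
        results = infer_pipeline.infer({input_info.name: frame})

output_name = hef.get_output_vstream_infos()[0].name
print(output_name, results[output_name].shape)  # raw tensor to feed into the post-processing

For timing, you can wrap the infer() call with time.perf_counter(), although the hailortcli streaming numbers above are more representative of sustained throughput.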

Thanks, I will do so.

How do I know if I have the regular H8 unit or the H8L?

Running lspci provides this info about the unit itself:

57:00.0 Co-processor: Hailo Technologies Ltd. Hailo-8 AI Processor (rev 01)

I think it’s not the L version, right?

Use this command:
hailortcli fw-control identify

Thanks. It’s a regular 8.

I will try to add the post-process aspects of the compiled SegFormer to TAPPAS. Is it a doable task for someone unfamiliar with Hailo software, such as myself?

$ hailortcli fw-control identify
Executing on device: 0000:57:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.17.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: HLLWMB0214600587
Part Number: HM218B1C2LA
Product Name: HAILO-8 AI ACCELERATOR M.2 B+M KEY MODULE

Why not? Give yourself some credit 👌

@dudibs Take the semantic segmentation post-process in TAPPAS as a starting point: tappas/core/hailo/libs/postprocesses/semantic_segmentation/semantic_segmentation.cpp

You can use our semantic segmentation app from an older TAPPAS version for pipeline example:
https://github.com/hailo-ai/tappas/blob/v3.26.2/apps/h8/gstreamer/general/segmentation/semantic_segmentation.sh

You will need to change the output layer name in the post-process function (add the segformer_b0_bn prefix).
TIP: hailortcli parse-hef segformer_b0_bn.hef
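If it is easier than reading the parse-hef output, the same vstream names (including the network prefix) can also be listed from Python, assuming the hailo_platform package is installed; a quick sketch:

from hailo_platform import HEF

# Prints the input/output vstream names exactly as HailoRT exposes them,
# e.g. "segformer_b0_bn/<layer_name>", plus their shapes.
hef = HEF("segformer_b0_bn.hef")
for info in hef.get_input_vstream_infos():
    print("input: ", info.name, info.shape)
for info in hef.get_output_vstream_infos():
    print("output:", info.name, info.shape)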

To recompile TAPPAS post-processes, see the instructions in docs/write_your_own_application/write-your-own-postprocess.rst in the hailo-ai/tappas GitHub repo (commit 4341aa360b7f8b9eac9b2d3b26f79fca562b34e4).

I’d be happy to get updates on your progress!

Thanks for that info, Gilad. I have tried to follow your instructions but I am stuck.
So my plan now is to:

  1. First get TAPPAS up and running and get a feeling for how the existing semantic segmentation model there works.
  2. Add the SegFormer to it.
  3. Recompile the TAPPAS post-processes.
  4. Use the SegFormer output from TAPPAS.

Can you please instruct me on how to achieve step 1? Do you have any step-by-step instructions on how to install TAPPAS and use it?

Thanks
David

I am trying to run TAPPAS first, to get a feeling for how it works, for instance by operating the current model that works there.

Hi Gilad,
I have installed TAPPAS via the Docker image and it is working fine. I can run all the code and applications.
But the segmentation folder that exists on GitHub ([https://github.com/hailo-ai/tappas/blob/v3.26.2/apps/h8/gstreamer/general/segmentation/semantic_segmentation.sh](https://github.com/hailo-ai/tappas/blob/v3.26.2/apps/h8/gstreamer/general/segmentation/semantic_segmentation.sh))
does not exist in the Docker installation/container.
This is the version I installed:
tappas_3.28.0_ubuntu22_docker_x86_64
from the resources page.
With which TAPPAS Docker image can I get the segmentation example up and running?
Dudi

Hi, it was removed in newer versions.
You can take the code from our GitHub using the link I sent above.
Just copy the segmentation directory into your new Docker container. Note that it will not include the HEF file, which is downloaded as part of the installation flow.
You can download it from our Model Zoo, or try running directly with the SegFormer. You will have to change the HEF path and the output layer as described above.

Hi Gilad,
Before I try to integrate the SegFormer, I am first trying to integrate your old semantic segmentation app into TAPPAS.

  1. I installed TAPPAS manually (not Docker) from here: https://github.com/hailo-ai/tappas/blob/v3.26.2/docs/installation/manual-install.rst - everything works fine.
  2. I took the old directory of the semantic segmentation app from the link you provided: [https://github.com/hailo-ai/tappas/blob/v3.26.2/apps/h8/gstreamer/general/segmentation/semantic_segmentation.sh](https://github.com/hailo-ai/tappas/blob/v3.26.2/apps/h8/gstreamer/general/segmentation/semantic_segmentation.sh)
  3. I added the HEF file of the model and an mp4 file I wish to run to the resources directory of the semantic segmentation directory.
  4. I fixed ``semantic_segmentation.sh`` to include the right paths to the HEF file and to the mp4 file. I am using the fcn8_resnet_v1_18.hef model from the Model Zoo.
  5. Then I added this whole directory (basically the running script and the resources directory with the mp4 file and the model HEF file) to the TAPPAS directory /tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/general/segmentation/
  6. I checked that I have the right post-process files and info for this semantic segmentation and everything looks fine, i.e. the file libsemantic_segmentation.so exists in tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/libs/post_processes
  7. When running ./semantic_segmentation.sh I am getting this:
(mytappas) **nuchailo1@nuchailo1**:**~/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/general/segmentation**$ ./semantic_segmentation.sh 
Running
gst-launch-1.0 filesrc location=/home/nuchailo1/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/general/segmentation/resources/3029469-hd_1920_1080_24fps.mp4 ! decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailonet hef-path=/home/nuchailo1/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/general/segmentation/resources/fcn8_resnet_v1_18.hef ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailofilter so-path=/home/nuchailo1/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/libs/post_processes//libsemantic_segmentation.so qos=false ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailooverlay qos=false ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! videoconvert ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
Redistribute latency...
terminate called after throwing an instance of 'std::invalid_argument'
  what():  No tensor with name argmax1
./semantic_segmentation.sh: line 94: 13103 Aborted (core dumped) gst-launch-1.0 filesrc location=/home/nuchailo1/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/general/segmentation/resources/3029469-hd_1920_1080_24fps.mp4 ! decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailonet hef-path=/home/nuchailo1/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/general/segmentation/resources/fcn8_resnet_v1_18.hef ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailofilter so-path=/home/nuchailo1/Documents/tappas_manual/tappas_v3.28.0/apps/h8/gstreamer/libs/post_processes//libsemantic_segmentation.so qos=false ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailooverlay qos=false ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! videoconvert ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false

What am I doing wrong?

Can we have a short phone call?