RPi5 minimal code for python project integration

I have the RPi5 and the Raspberry Pi AI HAT+ (16 TOPS version). As a starting point, I tried to run the RPi5 examples and the code examples, and I also managed to successfully build pyhailort from source in a Python virtual environment. However, the examples contain quite complex code, so my question is: is there documentation of the basic Python functions somewhere, or a minimal code example for pyhailort? I would like to integrate it into a bigger Python project for object detection or image classification. Are other model types supported as well? Please excuse me if I misinterpreted any of the functions or capabilities of these devices. Thanks in advance.

Hi @0LI,

Welcome to the Hailo Community!

Absolutely, you can find our basic inference implementation in our utils.py file here: Hailo-Application-Code-Examples/runtime/python/utils.py at main · hailo-ai/Hailo-Application-Code-Examples · GitHub

This utility file serves as a foundation and is used across most of our example applications.

Please let me know if you need any clarification or additional assistance!

Hi @0LI,
We developed a Python SDK to simplify the integration of Hailo8/Hailo8L into edge AI applications. Instructions to get started, along with tutorials, are in this repo: DeGirum/hailo_examples. Let us know if you need further help getting started.


Thank you @omria @shashi,
I will take a look at both options. Can you please give me an example of how to use the https://github.com/hailo-ai/Hailo-Application-Code-Examples/blob/main/runtime/python/utils.py file? Thanks in advance.

I took a look at https://github.com/DeGirum/hailo_examples and installed it successfully, but my question is: can I load a local model (I want my project to work offline)? That would be very helpful. My second question is: what is the minimal code to use https://github.com/hailo-ai/Hailo-Application-Code-Examples/blob/main/runtime/python/utils.py as a library (via import utils)? Thanks in advance.

Hi @0LI
I can answer the parts related to DeGirum PySDK. We are glad to hear that you installed our package successfully. You can easily load a local model in PySDK: download the model folder from our AI Hub and load it from a local folder. We will add these instructions to our repo soon and keep you posted. In the meantime, it would be helpful if you could run our basic example and confirm that it works for you.

I tried an example of DeGirum PySDK with this code:

import degirum as dg, degirum_tools

inference_host_address = "@local"   # run inference on the local Hailo device
zoo_url = "degirum/models_hailort"  # model zoo to load the model from

model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
image_source = '../assets/ThreePersons.jpg'

# load the model from the model zoo
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url)

print(f" Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)  # run inference on the image

print('Inference Results \n', inference_result)

print("Press 'x' or 'q' to stop.")

# show the annotated image in a display window
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result)

With the following output:

 Running inference using 'yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1' on image source '../assets/ThreePersons.jpg'
Inference Results 
 - bbox: [50.766868591308594, 11.557273864746094, 260.00616455078125, 422.25885009765625]
  category_id: 0
  label: person
  score: 0.9210436940193176
- bbox: [425.75750732421875, 20.109336853027344, 639.944091796875, 353.2565612792969]
  category_id: 0
  label: person
  score: 0.888812780380249
- bbox: [204.74891662597656, 45.846923828125, 453.3245544433594, 401.99920654296875]
  category_id: 0
  label: person
  score: 0.8193221092224121

Press 'x' or 'q' to stop.
qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in "/home/oli/hailo/degirum_env/lib/python3.11/site-packages/cv2/qt/plugins"

I also got a window with the image and boxes around the objects, but I could not close the program with Ctrl+C, Ctrl+D, q, x, or any other key combination - I had to close the whole terminal.


Hi @0LI
Thanks for confirming that the basic example works. Did you click on the image window and then press x or q? Pressing x or q in the Python notebook does not close the window (this is an OpenCV behavior).

Thank you, it worked. And how can I extract data from inference_result (similar to inference_result.image_overlay)?

Hi @0LI
inference_result.results contains the list of result dictionaries. For example, if you inspect inference_result.results for the example above, you should see something like this:

[{'bbox': [50.34862518310547,
   11.273429870605469,
   259.01898193359375,
   421.975830078125],
  'category_id': 0,
  'label': 'person',
  'score': 0.9210436940193176},
 {'bbox': [425.5379333496094, 20.12468719482422, 640.0, 353.85784912109375],
  'category_id': 0,
  'label': 'person',
  'score': 0.888812780380249},
 {'bbox': [217.6569366455078,
   45.100433349609375,
   453.69573974609375,
   402.4808654785156],
  'category_id': 0,
  'label': 'person',
  'score': 0.8193221092224121}]

You can then access individual results as needed. For example, you can do inference_result.results[0]['bbox'] and see

[50.34862518310547, 11.273429870605469, 259.01898193359375, 421.975830078125]

or len(inference_result.results), which returns 3, telling you that the model detected three objects.
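If it helps, here is a small self-contained sketch of pulling fields out of that list. The detections below are copied from the example output above, so it runs without a model or Hailo device:

```python
# Sketch: working with the list of detection dictionaries returned in
# inference_result.results. The detections are copied from the example
# above so the snippet runs standalone, without a model or device.
results = [
    {"bbox": [50.35, 11.27, 259.02, 421.98], "category_id": 0, "label": "person", "score": 0.921},
    {"bbox": [425.54, 20.12, 640.0, 353.86], "category_id": 0, "label": "person", "score": 0.889},
    {"bbox": [217.66, 45.10, 453.70, 402.48], "category_id": 0, "label": "person", "score": 0.819},
]

num_detections = len(results)  # number of detected objects

# keep only confident detections and pull out the fields you need
confident = [det for det in results if det["score"] >= 0.85]
for det in confident:
    x1, y1, x2, y2 = det["bbox"]  # top-left and bottom-right corners
    print(f"{det['label']}: score={det['score']:.2f}, box=({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")

print(num_detections, len(confident))  # → 3 2
```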


Thank you. I am reading help(degirum), and it mentions a use case where I can run a local model on the local machine (Hailo8L in my case). How should I modify my code?

Hi @0LI
Please check our repo now. I added a models folder and modified the tutorial to show how you can load a model from a local folder instead of a cloud model zoo.
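For orientation, a local model folder for PySDK generally looks something like the sketch below. The file names are illustrative (based on the example model used in this thread), so match them to the folder you actually downloaded, and point zoo_url at the parent models/ directory:

```
models/
└── yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1/
    ├── yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1.hef    (compiled Hailo model)
    ├── yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1.json   (model specification)
    └── labels_coco.json                                           (label map)
```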


Thank you, this helped me a lot for now :+1:

Hi @0LI
Glad we could be of help. Please feel free to reach out if you need help in porting custom models or in application development. Our aim with PySDK is to make edge AI as simple as possible.

That was my next question. I explored the example models in the repo, and if I understand correctly, each model folder contains three files:

  1. the HEF model file
  2. a JSON file that maps label numbers to text
  3. a JSON file that contains the model specification

My question is: if I download an object detection or image classification model from the Hailo model hub, where will I get these two JSON files?

Hi @0LI
If you want to port your own model, you need to create these two files. The label file is generally easy to make, as you already know the classes predicted by the model. The model JSON file specifies the model path and the pre-processing and post-processing parameters. For popular models like YOLOv8, you can use our model JSON example as a starting point. The entries in the JSON are self-explanatory, but we will soon publish a guide explaining the model-porting process. Generally, the most difficult parts are related to post-processing. For now, if you need help with a specific model, please let us know and we can prioritize it.
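To illustrate the easy half of this, here is a hedged sketch that generates a label file for a custom model. The schema assumed here (a JSON object mapping class-id strings to label names) follows the example models in the repo, and the filename is hypothetical, so verify both against a label file shipped with a model you already have:

```python
# Hedged sketch: generating the label file for a custom model.
# Assumed schema: a JSON object mapping class-id strings to label names,
# as in the repo's example models -- verify against a real label file.
import json

labels = {str(i): name for i, name in enumerate(["person", "bicycle", "car"])}

with open("labels_my_model.json", "w") as f:  # hypothetical filename
    json.dump(labels, f, indent=2)

print(labels["0"])  # → person
```

The model-specification JSON is more involved and model-specific, so for that file the repo's YOLOv8 example is the better starting point.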