I’m new to embedded development and have mainly worked on my computer using Python (VSCode). I previously completed a project detecting and identifying animals in forests using YOLO and ResNet, with a bit of fine-tuning on a custom dataset (built from YouTube images).
But this time, I want to build an embedded AI module for detecting vehicles in infrared. I’m thinking about using a Raspberry Pi 5 + Hailo AI Kit + an infrared camera, like the Raspberry Pi Camera V3 NoIR.
Is it feasible to train a custom AI model (e.g., YOLO) and deploy it on this setup? I’d like to build my own dataset, or reuse images from one I already have, to train the model to detect and identify vehicles.
Has anyone worked on similar projects or have tips/resources to share?
Any guidance or resources would be greatly appreciated.
Sounds like a great opportunity to learn a lot. I would recommend getting the kit and playing with the example applications first.
YOLO is an excellent choice for your project. We offer robust support for YOLO with example applications, and the model is included in the Hailo Model Zoo. The YOLO models come with retraining Docker support, which streamlines the process compared to bringing in an unsupported model.
We also have a webinar available in the Developer Zone that covers retraining. Although it’s a bit dated, most of the content is still relevant. There might be slight differences, but it will provide you with a solid starting point. You can access the webinar here:
Would you recommend me watching the webinar first to get an overview, or should I begin by navigating through the Hailo Model Zoo and experimenting with the example applications? What would be the best approach to get started efficiently?
I just came across an update regarding the Hailo Raspberry Pi AI Kit, including the release of Hailo Python API, picamera2 examples, and the CLIP zero-shot classification app.
However, I wanted to clarify something: Is it possible to fine-tune a model like YOLO with my own dataset and deploy it on the Hailo-8L AI processor via the Python API? Or would the fine-tuning need to be done externally (e.g., on a PC) before importing the model into the Hailo environment?
I also noticed that the Hailo Dataflow Compiler isn’t available to the general public. Does that limit the ability to deploy fine-tuned models on Hailo hardware?
Thanks in advance for any insights you can provide.
Have some fun with the example applications first. Understanding the end goal and having fun is as important as efficiency.
Yes, the retraining will be done on a PC with a good GPU. The model will be converted after the training into a HEF file to run inference on the Hailo-8.
You can download the Hailo AI Software Suite from the Developer Zone. You need to be signed in to be able to download the software.
I’m also a novice who’s lately been deploying computer vision models for some hobbyist projects. I’ll share what’s worked for me, but would love to hear any feedback about what I’m doing wrong or could be doing better.
Fine-tune an object detection model for my project.
In my case, I wanted a squirrel detector. So I used this pre-existing Roboflow dataset and then created a simple Colab notebook to train my model. Colab offers free GPUs (though they're not always available) and it’s easy to use.
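For reference, the dataset config that YOLO training reads looks roughly like this (a Roboflow export generates this file for you; the paths and class names below are made up for illustration):

```yaml
# data.yaml — hypothetical example of a YOLO dataset config
path: datasets/squirrels      # dataset root
train: images/train           # training images, relative to `path`
val: images/val               # validation images
names:
  0: squirrel
  1: bird
  2: cat
```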
I exported the model in ONNX format and downloaded it.
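The train-then-export flow in my notebook boils down to a few lines. Here's a minimal sketch assuming the Ultralytics package (`pip install ultralytics`); the dataset path and hyperparameters are placeholders, not values from my actual run:

```python
# Sketch of the Colab training + ONNX export flow.
# DATA_YAML / EPOCHS / IMAGE_SIZE are placeholder values — tune for your dataset.
DATA_YAML = "dataset/data.yaml"   # dataset config (Roboflow export includes one)
EPOCHS = 100
IMAGE_SIZE = 640

def train_and_export():
    # Import kept inside the function so the sketch reads standalone;
    # requires `pip install ultralytics` and a GPU runtime to actually run.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # start from pretrained COCO weights
    model.train(data=DATA_YAML, epochs=EPOCHS, imgsz=IMAGE_SIZE)
    # Writes an .onnx file next to the trained weights
    # (e.g., runs/detect/train/weights/best.onnx), ready to download.
    model.export(format="onnx")
```

Call `train_and_export()` in a Colab cell with a GPU runtime selected; the resulting `.onnx` file is what gets fed to the Hailo toolchain later.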
I don’t have a PC with a GPU, so I used a cloud computing platform to convert my YOLOv8n model from ONNX format to .hef, so it’ll run on my RPi 5 AI Kit.
After running into issues using the Hailo Docker image with a few different platforms, I got it to work using vast.ai. I created a new Virtual Machine, connected to it from my laptop via SSH, and then copied over the hailo_ai_sw_suite_2024-10_docker.zip file to the VM.
I ran into some issues getting the Docker stuff to work correctly, but eventually got it. I uploaded my ONNX model and a folder with ~1500 calibration images, and then used this command to create my .hef file:
hailomz compile yolov8n --ckpt cats_yolov8n_11-21-v2.onnx --calib-path calibration/ --classes 3
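Gathering the ~1500 calibration images was mostly file shuffling, so I scripted it. A stdlib-only sketch (folder names are hypothetical; point it at your training images):

```python
import random
import shutil
from pathlib import Path

def sample_calibration_images(src_dir, dst_dir, n=1500, seed=0):
    """Copy a random sample of images into a flat folder for `hailomz --calib-path`."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    # Collect images recursively, then shuffle deterministically for reproducibility.
    images = sorted(p for p in src.rglob("*")
                    if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
    random.Random(seed).shuffle(images)
    chosen = images[:n]
    for p in chosen:
        shutil.copy2(p, dst / p.name)
    return len(chosen)

# Example: sample_calibration_images("dataset/images/train", "calibration/")
```

Calibration images should resemble what the model will see at inference time, so sampling from the training set worked well for me.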