Hello,
I have developed a Python project that uses the YOLOv8n.pt model to detect people and, based on their screen coordinates, computes their real-world coordinates relative to a calibrated reference point.
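For context, the pixel-to-world mapping works along these lines (a minimal sketch of one common approach, a ground-plane homography; the reference points and values here are placeholders, not my actual calibration):

```python
import numpy as np
import cv2

# Four points in the image (pixels) and their known real-world
# positions on the ground plane (e.g. metres). Placeholder values.
img_pts = np.float32([[100, 400], [540, 400], [620, 80], [60, 80]])
world_pts = np.float32([[0, 0], [4, 0], [4, 6], [0, 6]])

# Homography mapping image coordinates to ground-plane coordinates.
H = cv2.getPerspectiveTransform(img_pts, world_pts)

def pixel_to_world(x, y):
    """Map a detected pixel coordinate to real-world coordinates."""
    pt = np.float32([[[x, y]]])  # shape (1, 1, 2), as cv2 expects
    wx, wy = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(wx), float(wy)
```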
I wanted to transfer the entire project to a Raspberry Pi 5, for which I purchased the Hailo 8L AI HAT accelerator.
Since this is my first experience with this type of technology, I have not been able to get it running on the RPi with the Hailo accelerator; it works without Hailo, but with significant delays.
Is there an easy way to convert YOLOv8n.pt so that it uses the Hailo accelerator?
Can someone guide me on what steps I should take to make it work?
To get started, you'll first need to convert your PyTorch (.pt) model to ONNX or TensorFlow format. Once converted, use the Hailo Dataflow Compiler to parse, optimize, and compile the model into the .hef (Hailo Executable File) format.
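For the .pt-to-ONNX step, the Ultralytics package can do the export directly. A minimal sketch (the opset and image size below are typical values; check the Dataflow Compiler docs for what your DFC version expects):

```python
from ultralytics import YOLO

# Load the trained detector and export it to ONNX so the Hailo
# Dataflow Compiler can parse it.
model = YOLO("yolov8n.pt")
model.export(format="onnx", opset=11, imgsz=640)  # writes yolov8n.onnx
```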
However, I encountered an issue when trying to convert to .hef, because I don't have the training dataset images. I attempted to run the conversion with the --mode=raw flag, but it returns an error:
```shell
$ hailomz compile yolov8s --ckpt=yolov8s.onnx --hw-arch=hailo8l --classes=2 --mode=raw
FileNotFoundError: Couldn't find dataset in .hailomz/data/models_files/coco/2021-06-18/coco_calib2017.tfrecord. Please refer to docs/DATA.rst.
```
Does anyone have a .hef file with YOLOv8s that they could share? Or could someone guide me on how to handle the error mentioned above?
Note: Some datasets require manual download from the original source.
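If downloading the COCO calibration set is not an option, the Model Zoo can also calibrate from a folder of your own images via the --calib-path flag (the path below is a placeholder; use images representative of your deployment scene):

```shell
hailomz compile yolov8s --ckpt=yolov8s.onnx --hw-arch=hailo8l --classes=2 \
    --calib-path /path/to/calib_images/
```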
I also recommend running through the built-in tutorials in the Hailo AI Software Suite Docker. Simply call the following command to start the Jupyter notebook server.
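(From what I recall of the Suite docs, the tutorial notebooks are launched with the hailo tutorial command, run inside the Suite container:)

```shell
# Inside the Hailo AI Software Suite Docker container:
hailo tutorial
```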