I am quite new to the topic of AI with Hailo, so this is probably a rookie question.
My project needs to distinguish different bolt heads (Round, Hex, Hexwasher, etc.), so I trained a YOLOv8n model on some images from my application. The trained model performs as expected. Now I would like to run this custom-trained model on a Hailo-8L accelerator. I found this guide online: Tutorial of AI Kit with Raspberry Pi 5 about YOLOv8n object detection | Seeed Studio Wiki
What I don’t understand is why I need the COCO dataset for optimizing and compiling the Hailo binary. Shouldn’t I provide my custom dataset instead?
Yes, for the optimization you need to use a set of images that is representative of your dataset. The number of images depends on the optimization level.
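As a minimal sketch of what that could look like (the paths, the image count, and the .jpg extension are assumptions about your setup), the calibration set can simply be a random sample of your training images:

```python
import random
import shutil
from pathlib import Path

SOURCE_DIR = Path("datasets/bolts/train/images")  # assumed location of your training images
CALIB_DIR = Path("datasets/bolts/calib")          # folder that will hold the calibration images
NUM_CALIB_IMAGES = 64                             # adjust to the optimization level you use

CALIB_DIR.mkdir(parents=True, exist_ok=True)

# Sample randomly so the calibration set covers the same variety as training.
images = sorted(SOURCE_DIR.glob("*.jpg"))
for img in random.sample(images, min(NUM_CALIB_IMAGES, len(images))):
    shutil.copy(img, CALIB_DIR / img.name)

print(f"Copied calibration images to {CALIB_DIR}")
```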
Thanks to KlausK I think I am on the right track. I was able to create a TFRecord from my custom training images using the provided create_coco_tfrecord.py.
However, when I run the optimization with hailomz, I get a ValueError:
ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.
Call arguments received by layer "yolov8_nms_postprocess" (type HailoPostprocess):
• inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 8), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 8), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 8), dtype=float32)']
• training=False
• kwargs=<class 'inspect._empty'>
I already checked my TFRecord and it seems to be fine. Any pointers to the root cause are appreciated.
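For reference, this is roughly how I inspected the file (a minimal sketch; the TFRecord file name is just a placeholder):

```python
import tensorflow as tf

TFRECORD_PATH = "bolts_train.tfrecord"  # placeholder name for the generated TFRecord

# Read back a few records and summarize their features to confirm that
# images and annotations were written as expected.
dataset = tf.data.TFRecordDataset(TFRECORD_PATH)
for raw_record in dataset.take(3):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    for key, feature in example.features.feature.items():
        kind = feature.WhichOneof("kind")  # bytes_list, float_list, or int64_list
        values = getattr(feature, kind).value
        print(f"{key}: {kind} with {len(values)} value(s)")
```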
Hi @Todor_Krastev
I hope you are doing well. I am also trying to run a custom YOLO model that detects whether a person is wearing a mask or not.
What I am thinking is to create a dataset in the same format that the .tfrecord maker code accepts, but with my own data.
Then I would continue with the optimization process.
Is that the correct flow?
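To make it concrete, this is roughly the COCO-style annotation layout I have in mind for my two classes. The class names, file names, and box values are placeholders, and the exact fields the provided create_coco_tfrecord.py expects may differ, so this is only a sketch:

```python
import json

# Minimal COCO-style annotation file for a two-class (mask / no_mask) dataset.
# Field names follow the standard COCO detection format; the values are placeholders.
coco = {
    "images": [
        {"id": 1, "file_name": "person_001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, as in COCO
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [120, 80, 150, 160], "area": 150 * 160, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "mask"},
        {"id": 2, "name": "no_mask"},
    ],
}

with open("instances_train.json", "w") as f:
    json.dump(coco, f, indent=2)
```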