I want to evaluate the accuracy of my custom YOLO model using the Hailo emulator. I’m following the tutorial and trying to get the model’s accuracy after the first .har conversion (before optimization) by calling the native emulator.
I already have the inference results from the native emulator over a test dataset:
from hailo_sdk_client import InferenceContext

# Native (full-precision) emulation of the parsed model
with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    native_res = runner.infer(ctx, image_dataset_normalized[:IMAGES_TO_VISUALIZE, :, :, :])
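(For context, runner here comes from loading the parsed HAR, roughly as in the tutorial; model.har is a placeholder name:)

from hailo_sdk_client import ClientRunner

# Load the parsed, pre-optimization HAR produced by the first conversion step
runner = ClientRunner(har="model.har")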
Does the emulator include a function to directly calculate the model’s accuracy based on ground truth labels, or is there example code for doing so?
Thanks!
Hi @dgarrido,
For evaluation, you can use the Model Zoo tool hailomz eval.
Hi, thanks for the quick response. Could you provide me with an example? I would like to evaluate the accuracy of a custom YOLO model on my evaluation dataset, both before and after quantization, to determine how much the performance has been affected by the quantization process.
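(A minimal sketch of running the same batch through both emulation contexts, assuming runner holds a HAR that has already been optimized — the quantized context needs the optimization step to have run:)

# Native (full-precision) pass
with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    native_res = runner.infer(ctx, image_dataset_normalized[:IMAGES_TO_VISUALIZE, :, :, :])

# Quantized pass on the optimized model
with runner.infer_context(InferenceContext.SDK_QUANTIZED) as ctx:
    quant_res = runner.infer(ctx, image_dataset_normalized[:IMAGES_TO_VISUALIZE, :, :, :])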
The flow would be something like this:
- Install the MZ in your DFC venv (example commands after this list)
- Create a tfrecord of your dataset using this script: https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/datasets/create_coco_tfrecord.py
- Run the evaluation with something like:

hailomz eval NET_NAME --har HAR_PATH --data-path TFRECORD_PATH
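For steps 1 and 2, the commands might look like this (the clone location is arbitrary, val2017 assumes the stock COCO split, and the flags for pointing the script at a custom dataset are best checked via its --help):

# 1) Install the Model Zoo into the active DFC virtualenv
git clone https://github.com/hailo-ai/hailo_model_zoo.git
cd hailo_model_zoo
pip install -e .

# 2) Build a tfrecord from a COCO-format validation set
python hailo_model_zoo/datasets/create_coco_tfrecord.py val2017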
For more info on the args, you can run:
hailomz eval -h
You can select the quantized model with --target emulator, and the native (full-precision) model with --target full_precision.
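Putting it together, a before/after comparison could look like the following (yolov5m and the file paths are placeholders for your own net name, HAR, and tfrecord):

# Accuracy of the native (full-precision) model
hailomz eval yolov5m --har ./yolov5m.har --data-path ./val2017.tfrecord --target full_precision

# Accuracy of the quantized model
hailomz eval yolov5m --har ./yolov5m.har --data-path ./val2017.tfrecord --target emulator

The gap between the two reported mAP scores is the accuracy cost of quantization.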