I ran the commands below to create the datasets required for evaluation:
(hailo_virtualenv) hailo@citi-ai:/local/workspace/hailo_model_zoo/hailo_model_zoo$ python datasets/create_coco_tfrecord.py val2017
(hailo_virtualenv) hailo@citi-ai:/local/workspace/hailo_model_zoo/hailo_model_zoo$ python datasets/create_coco_tfrecord.py calib2017
Now I have this:
(hailo_virtualenv) hailo@citi-ai:/local/workspace/hailo_model_zoo/hailo_model_zoo$ ls /local/shared_with_docker/.hailomz/models_files/coco/2023-08-03/
coco_calib2017.tfrecord coco_val2017.tfrecord instances_val2017.json person_keypoints_val2017.json
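(For anyone who suspects corrupt TFRecords: a quick record count along these lines can rule that out. This is just a sketch; it assumes TensorFlow is importable in the virtualenv, which it should be inside the Hailo container.)

# sanity check (sketch): count records in each generated TFRecord
for f in coco_calib2017 coco_val2017; do
  python -c "import sys, tensorflow as tf; print(sys.argv[1], sum(1 for _ in tf.data.TFRecordDataset(sys.argv[1])), 'records')" \
    /local/shared_with_docker/.hailomz/models_files/coco/2023-08-03/$f.tfrecord
done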
However, when I run the following command, as directed here:
hailomz eval yolov11n
I get this error:
<Hailo Model Zoo INFO> Start run for network yolov11n ...
<Hailo Model Zoo INFO> Initializing the runner...
<Hailo Model Zoo INFO> Chosen target is full_precision
[info] Translation started on ONNX model yolov11n
[info] Restored ONNX model yolov11n (completion time: 00:00:00.03)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.16)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.23/cv3.0/cv3.0.2/Conv /model.23/cv2.0/cv2.0.2/Conv /model.23/cv3.1/cv3.1.2/Conv /model.23/cv2.1/cv2.1.2/Conv /model.23/cv2.2/cv2.2.2/Conv /model.23/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov11n/input_layer1'.
[info] End nodes mapped from original model: '/model.23/cv2.0/cv2.0.2/Conv', '/model.23/cv3.0/cv3.0.2/Conv', '/model.23/cv2.1/cv2.1.2/Conv', '/model.23/cv3.1/cv3.1.2/Conv', '/model.23/cv2.2/cv2.2.2/Conv', '/model.23/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov11n (completion time: 00:00:00.62)
[info] Appending model script commands to yolov11n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_tutorials/notebooks/yolov11n.har
<Hailo Model Zoo INFO> Preparing calibration data...
Traceback (most recent call last):
File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 388, in evaluate
runner = _ensure_runnable_state_tf2(args, logger, network_info, runner, target)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 175, in _ensure_runnable_state_tf2
calib_feed_callback = prepare_calibration_data(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 310, in prepare_calibration_data
calib_feed_callback = make_calibset_callback(network_info, preproc_callback, calib_path)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 271, in make_calibset_callback
data_feed_cb = _make_dataset_callback(network_info, preproc_callback, calib_path, dataset_name)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 248, in _make_dataset_callback
return _make_data_feed_callback(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 227, in _make_data_feed_callback
raise FileNotFoundError(f"Couldn't find dataset in {data_path}. Please refer to docs/DATA.rst.")
FileNotFoundError: Couldn't find dataset in /local/shared_with_docker/.hailomz/models_files/coco/2021-06-18/coco_calib2017.tfrecord. Please refer to docs/DATA.rst.
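The date presumably comes from a dataset path pinned somewhere in the Model Zoo's config tree; a grep along these lines should show exactly where (a sketch, assuming the repo layout from my install):

# find where the 2021-06-18 date is pinned in the Model Zoo configs
grep -rn "2021-06-18" /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/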
OK, not the end of the world. I'll just rename my dataset directory to match the hardcoded 2021-06-18 version-date and move on, right?
mv /local/shared_with_docker/.hailomz/models_files/coco/2023-08-03/ /local/shared_with_docker/.hailomz/models_files/coco/2021-06-18/
hailomz eval yolov11n
Wrong. Same failure, this time due to a different hardcoded version-date directory:
<Hailo Model Zoo INFO> Start run for network yolov11n ...
<Hailo Model Zoo INFO> Initializing the runner...
<Hailo Model Zoo INFO> Chosen target is full_precision
[info] Translation started on ONNX model yolov11n
[info] Restored ONNX model yolov11n (completion time: 00:00:00.03)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.16)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.23/cv3.0/cv3.0.2/Conv /model.23/cv2.0/cv2.0.2/Conv /model.23/cv3.1/cv3.1.2/Conv /model.23/cv2.1/cv2.1.2/Conv /model.23/cv2.2/cv2.2.2/Conv /model.23/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov11n/input_layer1'.
[info] End nodes mapped from original model: '/model.23/cv2.0/cv2.0.2/Conv', '/model.23/cv3.0/cv3.0.2/Conv', '/model.23/cv2.1/cv2.1.2/Conv', '/model.23/cv3.1/cv3.1.2/Conv', '/model.23/cv2.2/cv2.2.2/Conv', '/model.23/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov11n (completion time: 00:00:00.63)
[info] Appending model script commands to yolov11n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_tutorials/notebooks/yolov11n.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to yolov11n from /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov11n.alls
<Hailo Model Zoo INFO> Initializing the dataset ...
Traceback (most recent call last):
File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 403, in evaluate
return infer_model_tf2(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 514, in infer_model_tf2
dataset = make_evalset_callback(network_info, preproc_callback, data_path)
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 256, in make_evalset_callback
data_feed_cb = _make_dataset_callback(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 248, in _make_dataset_callback
return _make_data_feed_callback(
File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 227, in _make_data_feed_callback
raise FileNotFoundError(f"Couldn't find dataset in {data_path}. Please refer to docs/DATA.rst.")
FileNotFoundError: Couldn't find dataset in /local/shared_with_docker/.hailomz/models_files/coco/2023-08-03/coco_val2017.tfrecord. Please refer to docs/DATA.rst.
Why are these version-date directories hardcoded, and why do the calibration and evaluation paths expect two different dates? Is there something I've done incorrectly here that's resulted in the mismatched locations?
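For now I can paper over it with a symlink so both hardcoded dates resolve to the same files (a sketch; after the rename above, the real data lives under 2021-06-18):

# workaround sketch: make the eval-side date resolve to the same dataset directory
DATA=/local/shared_with_docker/.hailomz/models_files/coco
ln -s "$DATA/2021-06-18" "$DATA/2023-08-03"

And if this Model Zoo version's hailomz eval accepts a --data-path override (I haven't verified the flag on this release), pointing it straight at coco_val2017.tfrecord might sidestep the eval-side lookup entirely. But I'd still like to understand the intended layout.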