Hi @kryptonicrevolution
I have compiled the yolov8s .pt file to a HEF.
drone hef
The script I used is straightforward to automate. Here it is:
import argparse

from hailo_sdk_client import ClientRunner

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Compile a model to HEF')
    parser.add_argument('--device-type', choices=["hailo8", "hailo8r", "hailo8l", "hailo15h", "hailo15m", "hailo15l", "hailo10h"], default="hailo8", help='Target device architecture')
    parser.add_argument('--model-path', type=str, help='Path to the ONNX or TFLite model')
    parser.add_argument('--output-path', type=str, default=None, help='Output HEF path (defaults to the model path with a .hef extension)')
    parser.add_argument('--end-node-names', nargs='+', help='Array of output/end node names')
    parser.add_argument('--calibration-npy-path', type=str, default="compiler/runtime/hailo_calibration_data.npy", help='Path to the calibration data (.npy)')
    parser.add_argument('--optimization-level', type=int, default=0, help='Optimization level')
    parser.add_argument('--compression-level', type=int, default=0, help='Compression level')
    parser.add_argument('--compiler-optimization-level', type=int, default=2, help='Compiler optimization level -- max = 2')
    args = parser.parse_args()

    runner = ClientRunner(hw_arch=args.device_type)

    # extract the extension and model name from the model path
    ext = args.model_path.rsplit('.', maxsplit=1)[-1]
    model_name = args.model_path.split('/')[-1].split('.')[0]

    if args.output_path is None:
        # replace the extension with .hef
        args.output_path = args.model_path.replace(f".{ext}", ".hef")

    if ext == "onnx":
        hn, npz = runner.translate_onnx_model(
            model=args.model_path,
            net_name=model_name,
            end_node_names=args.end_node_names,
        )
    elif ext == "tflite":
        hn, npz = runner.translate_tf_model(
            model_path=args.model_path,
            net_name=model_name,
            end_node_names=args.end_node_names,
        )
    else:
        raise ValueError(f"Unsupported model extension: {ext}")

    batch_size = 32
    alls_lines = [
        "normalization_in = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n",
        f"model_optimization_flavor(optimization_level={args.optimization_level}, compression_level={args.compression_level}, batch_size={batch_size})\n",
        f"performance_param(compiler_optimization_level={args.compiler_optimization_level})\n",
    ]
    runner.load_model_script("".join(alls_lines))

    runner.optimize_full_precision(args.calibration_npy_path)
    runner.optimize(args.calibration_npy_path)

    hef = runner.compile()
    with open(args.output_path, "wb") as f:
        f.write(hef)
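For completeness, the calibration .npy the script points at is just a stacked array of preprocessed images. A minimal sketch of how it could be produced (the image folder, the 640x640 input size, and the 256-sample count are assumptions; pixel values are left in [0, 255] because the model script above adds the normalization([0,0,0], [255,255,255]) layer):

import glob

import numpy as np
from PIL import Image

# Stack a few hundred representative images into an (N, H, W, C) float32 array
# and save it at the path the compile script expects.
# Assumption: the image folder, 640x640 input size, and sample count are placeholders.
paths = sorted(glob.glob("calibration_images/*.jpg"))[:256]
samples = [
    np.array(Image.open(p).convert("RGB").resize((640, 640)), dtype=np.float32)
    for p in paths
]
np.save("compiler/runtime/hailo_calibration_data.npy", np.stack(samples))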
However, the YOLOv8 postprocessor is not part of it.
When I tried to compile with the postprocessor, I got this error:
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.AllocatorScriptParserException: Cannot infer bbox conv layers automatically. Please specify the bbox layer in the json configuration file.
This means the NMS postprocessor needs a config JSON similar to the one linked below, which we have not yet integrated into our compile process:
https://github.com/hailo-ai/hailo_model_zoo/blob/9f1bb27570757e6398a6bfe44545e5f02a26e017/hailo_model_zoo/cfg/postprocess_config/yolov8s_bbox_decoding_only_nms_config.json
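If you want to try it yourself, my understanding is that the postprocessor is attached through an nms_postprocess model-script command pointing at such a JSON (this is how the model zoo .alls files do it). A sketch of how it could be added to the alls_lines above; the config filename is a placeholder and I have not verified this in our flow:

# Untested: attach the YOLOv8 bbox decoding + NMS postprocessor via the model script.
# "yolov8s_nms_config.json" is a placeholder path; the bbox/cls layer names inside it
# must match the translated network's conv layers.
alls_lines.append(
    'nms_postprocess("yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)\n'
)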
At this point, I tested the model in our cloud inference and it works.
Here is a link to try the cloud inference.
As compiled, the model provides 6 outputs: 3 bounding-box heads and 3 corresponding class-probability heads. Given these outputs, a YOLOv8 postprocessor is needed to produce the final detection results (a sketch follows below).
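For anyone who wants to use the HEF as-is, here is a rough, untested NumPy sketch of such a postprocessor. It assumes the standard YOLOv8 head layout (strides 8/16/32, DFL with reg_max = 16, raw logits on the class heads) and that the six dequantized outputs are paired per stride as a (H, W, 4*reg_max) bbox head and a (H, W, num_classes) class head; adjust to your actual output order and shapes.

import numpy as np

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolov8(box_heads, cls_heads, strides=(8, 16, 32), reg_max=16,
                  score_th=0.25, iou_th=0.45):
    """box_heads[i]: (H, W, 4*reg_max), cls_heads[i]: (H, W, num_classes)."""
    boxes, scores, classes = [], [], []
    for bh, ch, stride in zip(box_heads, cls_heads, strides):
        h, w, _ = bh.shape
        # DFL decoding: expected value over reg_max bins for each of l, t, r, b
        dist = _softmax(bh.reshape(h, w, 4, reg_max), axis=-1) @ np.arange(reg_max)
        xs, ys = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
        x1 = (xs - dist[..., 0]) * stride
        y1 = (ys - dist[..., 1]) * stride
        x2 = (xs + dist[..., 2]) * stride
        y2 = (ys + dist[..., 3]) * stride
        prob = _sigmoid(ch)
        cls_id = prob.argmax(-1)
        conf = prob.max(-1)
        keep = conf > score_th
        boxes.append(np.stack([x1, y1, x2, y2], -1)[keep])
        scores.append(conf[keep])
        classes.append(cls_id[keep])
    boxes, scores, classes = map(np.concatenate, (boxes, scores, classes))
    return _nms(boxes, scores, classes, iou_th)

def _nms(boxes, scores, classes, iou_th):
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # IoU of the best box against the remaining ones
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        # class-aware NMS: only suppress overlapping boxes of the same class
        order = rest[(iou < iou_th) | (classes[rest] != classes[i])]
    return boxes[keep], scores[keep], classes[keep]

The NMS here is class-aware, so overlapping detections of different classes are not suppressed; boxes are returned in input-image coordinates (e.g. 640x640) and still need to be rescaled to the original frame.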
I hope this will help.