It's right in the Google Colab - after you convert, you can see this code.
This is how you run inference with the Hailo .har model format.
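For reference, running a .har through the Dataflow Compiler's emulator looks roughly like this - a minimal sketch assuming the `hailo_sdk_client` Python API is installed; `model.har` and the 640×640×3 input are placeholders for your own model:

```python
# Sketch: run a parsed .har through the DFC emulator (placeholders throughout).
import numpy as np
from hailo_sdk_client import ClientRunner, InferenceContext

runner = ClientRunner(har='model.har')

# Dummy batch of one image; replace with your real preprocessed data.
dataset = np.random.rand(1, 640, 640, 3).astype(np.float32)

# SDK_NATIVE emulates the float model; use SDK_QUANTIZED after optimization.
with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    outputs = runner.infer(ctx, dataset)
# outputs holds the raw tensors for each end node you specified at parse time.
```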
For this I'm not sure, because that one is pose estimation and we are dealing with OBB for now. Maybe we have to work on the preprocessing. Once the .pt output and our .har output are similar, then we can compile - that is as far as I know about how it works. After that everything got messed up, so I hope the developers or the community can help me with that case.
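One way to check whether the .pt output and the .har output "are similar" is to dump both to numpy arrays and compare them numerically. A minimal sketch (the .npy file names in the comment are hypothetical dumps you'd save from each pipeline):

```python
import numpy as np

def compare_outputs(a: np.ndarray, b: np.ndarray) -> dict:
    """Compare two output tensors: max absolute difference and cosine similarity."""
    a = a.astype(np.float32).ravel()
    b = b.astype(np.float32).ravel()
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return {"max_abs_diff": float(np.max(np.abs(a - b))), "cosine": cos}

# e.g. pt_out = np.load("pt_head0.npy"); har_out = np.load("har_head0.npy")
# compare_outputs(pt_out, har_out)  -> cosine near 1.0 means the heads match
```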
Yes, I managed to convert my OBB model to .hef.
For the optimization, I think you can follow the Hailo tutorial script for the preprocessing step and the output layer selection, but for OBB there will be 9 conv output layers instead of 6 - the 6 conv outputs are for standard YOLO, and the 9 conv output layers are for OBB.
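The 9-vs-6 count follows from the head structure: a standard YOLOv8-style detection head has 2 conv outputs per scale (box regression + class scores) over 3 scales = 6, and the OBB head adds one angle output per scale, giving 3 × 3 = 9. A quick sanity check of the expected shapes for a 640×640 input - the channel counts here assume YOLOv8 defaults (64 DFL box channels, `nc` classes, 1 angle channel), so adjust for your model:

```python
# Expected end-node output shapes for a 640x640 input, YOLOv8-style OBB head.
# Assumes default strides (8, 16, 32); nc = number of classes (15 for DOTA).
def obb_head_shapes(img_size=640, nc=15):
    shapes = []
    for stride in (8, 16, 32):
        g = img_size // stride          # feature-map grid size at this stride
        shapes.append((g, g, 64))       # box regression (DFL, 4 * 16 channels)
        shapes.append((g, g, nc))       # class scores
        shapes.append((g, g, 1))        # rotation angle (the extra OBB branch)
    return shapes

heads = obb_head_shapes()
print(len(heads))  # 9 outputs for OBB; drop the angle branch and you get 6
```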
Once you are done with calibration, the .har file will be embedded with scale and zero-point values. Then what I did was not apply Hailo NMS - just output the raw data and apply your own NMS/post-processing on the CPU, i.e. your host.
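Host-side NMS can then be a plain greedy loop over the decoded boxes. A minimal sketch of that skeleton - note that for OBB you would swap the axis-aligned IoU below for a rotated-box IoU (e.g. probiou), this version only shows the structure:

```python
import numpy as np

def iou_xyxy(box, boxes):
    """Axis-aligned IoU of one (x1, y1, x2, y2) box against many."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def nms(boxes, scores, iou_thres=0.5):
    """Greedy NMS; returns indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Keep only boxes that do not overlap the current winner too much.
        order = rest[iou_xyxy(boxes[i], boxes[rest]) <= iou_thres]
    return keep
```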
Correct me if I'm wrong, @omria.
You’re right @SAN - the main difference between a regular model and an OBB model is the additional output values needed for the rotation angle.
Model Parsing: OBB models typically have more output layers (e.g., 9 conv layers vs. 6 for standard YOLO). Use --end-node-names with the hailo parser onnx command to specify all relevant output layers before NMS.
Post-processing: For now, skip Hailo’s NMS by omitting nms_postprocess from your .alls file and handle NMS on the host with your own implementation. Note that this may introduce format conversion layers, so consider using bbox_decoding_only=True or accessing raw outputs via the Stream interface to maintain performance.
Workflow: The standard Hailo workflow applies - parse with correct end nodes, calibrate (which generates quantization parameters in the .har file), and compile to .hef.
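In CLI terms, the three steps look roughly like this - a sketch assuming the Dataflow Compiler CLI, where the model file names, the end-node names, and the calibration-set path are placeholders for your own:

```
# 1. Parse: translate the ONNX, cutting the graph before the decoding/NMS ops.
#    The end-node names below are placeholders; take the real ones from Netron.
hailo parser onnx yolov8n-obb.onnx \
    --end-node-names conv_a conv_b conv_c  # ...all 9 OBB head convs

# 2. Optimize: calibrate/quantize; writes scale and zero-point into the .har.
hailo optimize yolov8n-obb.har --calib-set-path calib_set.npy

# 3. Compile: produce the deployable .hef.
hailo compiler yolov8n-obb_optimized.har
```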
Since OBB support is still in development, we’re also looking into adding native OBB NMS support in the compiler.
And I think I forgot to dequantize, right @omria? I'm not sure - is there an operator for dequantizing the output?
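Dequantizing raw outputs on the host is just the affine mapping with the per-layer scale and zero-point produced during calibration: real = scale × (quantized − zero_point). A minimal sketch (the scale/zero-point values below are made up for illustration; read the real ones from your model's quantization metadata):

```python
import numpy as np

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    """Map raw quantized output back to float: real = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

# Example with made-up quantization parameters:
raw = np.array([0, 128, 255], dtype=np.uint8)
print(dequantize(raw, scale=0.02, zero_point=128.0))  # ~ [-2.56, 0.0, 2.54]
```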
Thank you for your help!
I tried parsing with the correct end nodes, and I still get this error. Is my syntax incorrect?
I'm not sure about this error, but I believe the ONNX model might have unsupported layers or structure, or the specified end-node names may not match the actual graph nodes. Could you recheck using the Netron model visualizer?
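Besides Netron, you can also dump the graph's node names programmatically to check that the names you pass to --end-node-names exist exactly as spelled. A sketch assuming the `onnx` Python package is installed and `model.onnx` is a placeholder for your exported model:

```python
# List candidate end-node names in an ONNX graph (requires `pip install onnx`).
import onnx

model = onnx.load("model.onnx")  # placeholder path
conv_nodes = [n.name for n in model.graph.node if n.op_type == "Conv"]
print(f"{len(conv_nodes)} Conv nodes; last few:", conv_nodes[-9:])
```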