Hello. Please tell me how to run Darknet YOLO on a Hailo-8.
The Darknet object detection framework lives in the repository GitHub - hank-ai/darknet: Darknet/YOLO object detection framework. Darknet/YOLO does not belong to Ultralytics; it is released under the Apache 2.0 license, which allows use in commercial projects and does not require open-sourcing the resulting code.
I’ve trained this model and successfully run detection on my PC.
For inference I use the Darknet Python API with these files: .cfg (network configuration), .names (class names), and .weights (trained weights).
Please help me launch this model on a Raspberry Pi 5 with a Hailo-8. I don’t understand how the model can be converted to .hef.
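For reference, my PC inference looks roughly like this (a sketch following the classic darknet.py wrapper; file names are placeholders, and note that load_network takes a .data file, which in turn points at the .names file):

# Sketch of Darknet Python inference on the PC (classic darknet.py API).
import darknet

network, class_names, _colors = darknet.load_network(
    "yolo-custom.cfg",      # .cfg: network configuration
    "obj.data",             # .data file referencing the .names file
    "yolo-custom.weights",  # .weights: trained weights
    batch_size=1,
)
width = darknet.network_width(network)
height = darknet.network_height(network)
image = darknet.load_image(b"test.jpg", width, height)
detections = darknet.detect_image(network, class_names, image)
darknet.free_image(image)
for label, confidence, bbox in detections:
    print(label, confidence, bbox)  # bbox = (x_center, y_center, w, h)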
I recommend you download and install the Hailo AI Software Suite Docker from the Hailo Developer Zone.
Once inside the Docker, start the tutorial by running the following command:
hailo tutorial
This will start a Jupyter Notebook server with notebooks for each individual step of the model conversion.
I’d already installed your Hailo AI Software Suite Docker before creating this topic. I’ve trained my own TensorFlow model and converted it to .hef to run on the Hailo-8. But in the Hailo Dataflow Compiler User Guide I did not find any information on converting Darknet YOLO: it is a different kind of model, and I do not understand how to convert it to ONNX or TFLite for the subsequent conversion to the intermediate .har format.
I am also interested in this. Any solutions?
Try asking ChatGPT this kind of question. It’s surprisingly good at this.
Hey @Sergei_Zaharov, @jelena.trajkovic,
Here are the basic Hailo Dataflow Compiler (DFC) commands for converting an ONNX model to a HEF file:
1. Parse ONNX to Hailo Archive (HAR)
hailo parser onnx <model.onnx> --hw-arch hailo8
- This produces the intermediate <model>.har representation.
2. Optimize (Quantize) the Model
hailo optimize <model.har> --calib-set-path <calib_set.npy>
3. Compile to HEF
hailo compiler <model_optimized.har>
4. Run Profiling (Optional)
hailo profiler <model_optimized.har>
(The subcommands are parser / optimize / compiler / profiler; exact flags vary between DFC versions, so check hailo <subcommand> --help in your installation.)
This is the canonical ONNX-to-HEF pipeline. The calibration data in step 2 is crucial for proper quantization, so make sure you use representative samples from your target use case!
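For step 2, here is a minimal sketch of preparing a calibration set as a .npy file. The input size, sample count, and preprocessing are assumptions; match them to your parsed model’s input shape and your training pipeline:

# Minimal sketch: build a calibration set for `hailo optimize`.
# Saves an NHWC float32 array; adjust size and normalization to your model.
import glob
import numpy as np
from PIL import Image

SIZE = (416, 416)  # assumed network input (width, height) from the .cfg
samples = []
for path in sorted(glob.glob("calib_images/*.jpg"))[:64]:  # ~64 representative images
    img = Image.open(path).convert("RGB").resize(SIZE)
    samples.append(np.asarray(img, dtype=np.float32))  # HWC, values 0-255
calib = np.stack(samples)  # shape: (N, H, W, C)
np.save("calib_set.npy", calib)

Then pass calib_set.npy to --calib-set-path in step 2.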
Feel free to reach out if you encounter any issues or need additional parameters for specific model architectures.
Hi @Sergei_Zaharov
I took a look at the repo, and the main issue is that there is no easy way to get an ONNX file for these models. The repo has no native ONNX export, and since the model definition is not in PyTorch or TensorFlow, producing an ONNX file takes quite a bit of effort.
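To illustrate where the effort goes: once the network exists as a PyTorch module with the Darknet weights loaded into it, the ONNX export itself is a one-liner. In this sketch, DarknetAsTorch and load_darknet_weights are hypothetical placeholders for that porting work (several third-party Darknet-to-PyTorch reimplementations provide equivalents); torch.onnx.export is the real API:

# Hedged sketch: ONNX export, assuming a PyTorch port of the network.
# DarknetAsTorch and load_darknet_weights are hypothetical placeholders.
import torch

model = DarknetAsTorch("yolo-custom.cfg")           # hypothetical: rebuild the net from the .cfg
load_darknet_weights(model, "yolo-custom.weights")  # hypothetical: parse the binary .weights
model.eval()
dummy = torch.randn(1, 3, 416, 416)  # NCHW; match the width/height in your .cfg
torch.onnx.export(
    model, dummy, "yolo-custom.onnx",
    opset_version=11,                # a conservative opset choice
    input_names=["images"],
    output_names=["output"],
)

The resulting .onnx can then go through the hailo parser / optimize / compiler flow described above.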