According to infer_model.hpp, the "run" function seems to launch a synchronous inference operation rather than an asynchronous one.
I would like to build a sample using this function.
Is there any example of that, or any source code?
TIA
Hey @Koch ,
Please check out the C++ examples here: GitHub - hailo-ai/Hailo-Application-Code-Examples
They include everything you need to run C++!
Thank you for your reply. But, …
It seems that the 'run_async' function is used in the cpp files shown below, but the 'run' function is not used anywhere.
Is there any cpp file in Hailo-Application-Code-Examples that uses the 'run' function?
./object_detection/utils/async_inference.cpp: wait_and_run_async(output_data_and_infos, output_guards, input_guards, callback);
./object_detection/utils/async_inference.cpp:void AsyncModelInfer::wait_and_run_async(
./object_detection/utils/async_inference.cpp: auto job = configured_infer_model.run_async(
./zero_shot_classification/clip_example.cpp: auto job = configured_infer_model->run_async(bindings.value(),
./zero_shot_classification/clip_example.cpp: auto job = configured_infer_model->run_async(bindings.value());
Hi @Koch, run and run_async are very similar. The only difference is that run waits for the inference results of the previous call to be ready before writing to the device again. This is less efficient, so our recommendation is to use run_async.
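For reference, a synchronous call could look roughly like this. This is only a minimal sketch based on the infer_model.hpp API described above, not an official example: the HEF path "model.hef" and the single input/output assumption are placeholders, and error handling is reduced to early returns.

```cpp
// Minimal sketch of synchronous inference with HailoRT's InferModel API.
// Assumptions (not from this thread): a HEF file named "model.hef" with a
// single input and a single output stream.
#include <chrono>
#include <cstdint>
#include <vector>
#include "hailo/hailort.hpp"

int main()
{
    using namespace hailort;

    auto vdevice = VDevice::create();
    if (!vdevice) { return vdevice.status(); }

    auto infer_model = vdevice.value()->create_infer_model("model.hef");
    if (!infer_model) { return infer_model.status(); }

    auto configured = infer_model.value()->configure();
    if (!configured) { return configured.status(); }

    auto bindings = configured->create_bindings();
    if (!bindings) { return bindings.status(); }

    // Allocate host buffers sized from the model's I/O descriptions.
    std::vector<uint8_t> input(infer_model.value()->input()->get_frame_size());
    std::vector<uint8_t> output(infer_model.value()->output()->get_frame_size());

    bindings->input()->set_buffer(MemoryView(input.data(), input.size()));
    bindings->output()->set_buffer(MemoryView(output.data(), output.size()));

    // run() blocks until this frame's results are written into `output`,
    // whereas run_async() returns an AsyncInferJob you can wait on later.
    auto status = configured->run(bindings.value(), std::chrono::seconds(1));
    return status;
}
```

Note that this requires the HailoRT SDK and an attached Hailo device to build and run; for a pipelined, higher-throughput variant, see the run_async usage in the object_detection and zero_shot_classification examples quoted above.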
Thank you for your reply.
I see, so the "run" function actually performs inference through the asynchronous pipeline, just with a blocking wait.