Hi @An_ti11,
You can use model.predict_batch() instead of model.predict() to efficiently pipeline a sequence of frames; see the detailed description here: Running AI Model Inference | DeGirum Docs
In short, you pass a frame iterator as the method parameter; the method in turn returns an iterator over the results, which you can consume in a for loop, e.g. for result in model.predict_batch(["image1.jpg", "image2.jpg"]): ... (there is a fuller sketch after the list below).
Your input iterator may yield various frame types:
- strings containing image filenames
- numpy arrays with image bitmaps
- PIL image objects
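For example, a minimal sketch (the model name, zoo URL, and token below are placeholders; substitute whatever you already use with model.predict()):

```python
import degirum as dg

# Placeholder connection settings -- use your own zoo, model name, and token
model = dg.load_model(
    model_name="yolov8n_coco--640x640_quant_hailort_hailo8_1",  # hypothetical model name
    inference_host_address="@local",
    zoo_url="degirum/models_hailort",
    token="<your cloud API token, if the zoo requires one>",
)

# Any iterable of frames works: filenames, numpy arrays, or PIL Image objects
frames = ["image1.jpg", "image2.jpg", "image3.jpg"]

# predict_batch() pulls frames from the iterator and yields result objects
# in the same order, keeping the inference pipeline busy instead of
# blocking on every single predict() call
for result in model.predict_batch(frames):
    print(result)                      # detection results as text
    annotated = result.image_overlay   # frame with detections drawn on it
```

Because predict_batch() pulls frames lazily from the iterator, it can overlap frame delivery, inference, and postprocessing, which is where the pipelining speedup comes from.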
If you want to process a camera stream, the degirum_tools package provides convenient wrappers like degirum_tools.predict_stream(model, video_source); see the example here: hailo_examples/examples/004_rtsp.ipynb at main · DeGirum/hailo_examples
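Roughly, it can look like this (again a sketch: the model name, zoo, token, and camera index are placeholders, and degirum_tools.Display is just one convenient way to view the annotated frames):

```python
import degirum as dg
import degirum_tools

# Placeholder settings -- use your own model name, zoo, token, and video source
model = dg.load_model(
    model_name="yolov8n_coco--640x640_quant_hailort_hailo8_1",  # hypothetical model name
    inference_host_address="@local",
    zoo_url="degirum/models_hailort",
    token=degirum_tools.get_token(),  # or pass your token string directly
)

video_source = 0  # webcam index; an RTSP URL or a video file path also works

# predict_stream() wraps frame capture and batch prediction into one generator
with degirum_tools.Display("AI Camera") as display:
    for result in degirum_tools.predict_stream(model, video_source):
        display.show(result)  # show each annotated frame in a window
```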