Using two models on a single device

Hi, could you please tell me whether it's possible to run two models on a single device simultaneously? At the moment, I can only run them sequentially.

Hi @An_ti11
You can use DeGirum PySDK for this use case: Running multiple models independently

If you have two small models that fit into a single Hailo device, you can use the join API to allocate them together. However, this should only be done when these two models are the only models your application will ever use.

If you have more models, or may want to add others in the future, it is better to allocate each model individually and let the HailoRT scheduler switch between them.
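To illustrate the "allocate each model individually" approach, here is a minimal sketch of running two independently loaded models concurrently from Python. The `dg.load_model()` calls shown in the comments match the snippet later in this thread; the model names and zoo paths are placeholders, and stand-in callables are used below so the concurrency pattern itself is runnable without a Hailo device.

```python
# Sketch: two models allocated separately, each driven from its own thread.
# On real hardware you would create each model with DeGirum PySDK, e.g.:
#   import degirum as dg
#   det_model = dg.load_model(model_name='yolov11n_5',
#                             inference_host_address='@local',
#                             zoo_url='/home/zoo_url/yolov11n_5')
#   other_model = dg.load_model(...)  # second model, allocated individually
# The HailoRT scheduler then time-slices the device between the two models.
from concurrent.futures import ThreadPoolExecutor

def run_inference(model, frames):
    """Run one model over its own frame sequence."""
    return [model(f) for f in frames]

# Stand-ins so the pattern runs without a device (placeholders, not PySDK):
fake_det = lambda f: {"model": "det", "frame": f}
fake_cls = lambda f: {"model": "cls", "frame": f}

with ThreadPoolExecutor(max_workers=2) as pool:
    det_future = pool.submit(run_inference, fake_det, [0, 1])
    cls_future = pool.submit(run_inference, fake_cls, [0, 1])
    det_results = det_future.result()
    cls_results = cls_future.result()
```

Each thread simply treats its model as an independent callable; no explicit coordination between the two is needed, since the scheduler arbitrates device access.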

Thank you very much for your reply. How can I set the batch size to 2? For example, I would like to simply pass two identical images. I would be very grateful for your answer.

```python
inference_result = model(frame)
results = inference_result.results
box1 = np.array([det['bbox'] for det in results], dtype=np.float32)
score1 = np.array([det['score'] for det in results], dtype=np.float32)
```

```python
model = dg.load_model(
    model_name='yolov11n_5',
    inference_host_address='@local',
    zoo_url='/home/zoo_url/yolov11n_5'
)
```

Hi, @An_ti11 ,

You can use model.predict_batch() instead of model.predict() to efficiently pipeline a sequence of frames; see the detailed description here: Running AI Model Inference | DeGirum Docs

In short, you pass a frame iterator as the method parameter; the method in turn returns an iterator over results, which you can consume in a for loop: for result in model.predict_batch(["image1.jpg", "image2.jpg"]):

Your input iterator may yield various frame types:

  • strings containing image filenames
  • numpy arrays with image bitmaps
  • PIL image objects
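Since predict_batch() accepts any iterator, a generator works well when frames are produced on the fly. Below is a minimal sketch that yields two identical numpy-array frames, matching the "batch size 2" question above; the actual predict_batch() call is left commented out because it requires a loaded model and a device.

```python
import numpy as np

def frame_source(n, shape=(640, 640, 3)):
    """Generator yielding n dummy frames one at a time; predict_batch()
    accepts any iterable of frames like this (filenames, numpy arrays,
    or PIL images)."""
    for _ in range(n):
        yield np.zeros(shape, dtype=np.uint8)

frames = frame_source(2)  # e.g. two identical images

# With a loaded model (see dg.load_model above), you would run:
# for result in model.predict_batch(frames):
#     boxes = np.array([det['bbox'] for det in result.results],
#                      dtype=np.float32)

collected = list(frames)  # here we just materialize the generator
```

Because the results come back as an iterator, frames are fed to the device while earlier results are still being consumed, which is what gives the pipelining benefit over calling the model once per frame.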

If you want to process a camera stream, the degirum_tools package provides convenient wrappers such as degirum_tools.predict_stream(model, video_source); see the example here: hailo_examples/examples/004_rtsp.ipynb at main · DeGirum/hailo_examples