Select Qwen 2.5 VL in OpenWebUI

Hello, I apologize as I am rather new to this. I have successfully got the Raspberry Pi AI HAT+ 2 operating, and was able to download LLMs and interact with them through OpenWebUI. However, I wasn't able to see the Qwen2.5 VL model that I downloaded when interacting with the VLM chat application. Is there a way to upload a single image via a web app interface, similar to what is demonstrated here? https://www.youtube.com/watch?v=DkGeRaFxRSE

Thank you in advance.

Hi @user574 ,

Are you trying to run our vlm chat app or hailo-ollama + OpenWebUI?

Thanks,

I’ve run your VLM chat app successfully, as well as hailo-ollama + OpenWebUI. I was looking to use hailo-ollama + OpenWebUI as a VLM chat, so I can upload single images from photos taken with my cell phone.

Hi @user574 ,
VLM is currently not supported in hailo-ollama.
Thanks,

Ok thank you. Is there an alternative so that a single image can be uploaded instead of having to take a picture through the VLM chat app? Looking for something similar to what was done in the demonstration video.

Hi @user574 - I would suggest our VLM chat app: it’s possible to integrate a UI similar to the video and use some components from the chat app as the backend.
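For anyone trying the integration suggested above, here is a minimal sketch (stdlib only) of a single-image upload endpoint that a web UI could POST to. The `run_vlm` function is a hypothetical placeholder standing in for the VLM chat app's actual inference code — the Hailo examples don't expose an API by that name:

```python
# Minimal single-image upload endpoint sketch.
# run_vlm() is a hypothetical placeholder for the VLM chat app's
# inference code, NOT a real Hailo API.
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_vlm(image_bytes: bytes, prompt: str) -> str:
    # Placeholder: wire the actual VLM inference in here.
    return f"described {len(image_bytes)} bytes for prompt: {prompt}"


class VLMHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body: {"prompt": "...", "image": "<base64>"}
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        image_bytes = base64.b64decode(body["image"])
        answer = run_vlm(image_bytes, body.get("prompt", "Describe this image"))
        payload = json.dumps({"response": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


# To serve: HTTPServer(("0.0.0.0", 8000), VLMHandler).serve_forever()
```

A browser front end (or a phone) can then send the photo as base64 JSON; this avoids the camera-capture flow entirely.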

Hi !

I was looking for the same feature: using VL models with hailo-ollama (plugged into OpenWebUI or not). I use a service that requires talking to an Ollama-like service to analyse pictures.

Do you have plans to integrate VL models into the hailo-ollama API?

Best

Hi @PSyL,

“Do you have plans to integrate VL models into the hailo-ollama API?”: For the time being, no.
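For readers whose service speaks the upstream Ollama REST API: stock Ollama accepts base64-encoded images in the `images` field of a `POST /api/generate` request, so that is the request shape hailo-ollama would need to support. A minimal payload builder (the model name is illustrative):

```python
# Build the JSON body for Ollama's POST /api/generate with an image.
# This follows the upstream Ollama REST API; the default model name
# here is illustrative, not a model hailo-ollama ships.
import base64
import json


def build_ollama_vision_request(image_bytes: bytes, prompt: str,
                                model: str = "qwen2.5vl") -> str:
    return json.dumps({
        "model": model,
        "prompt": prompt,
        # Ollama expects images as a list of base64 strings.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    })
```

Any Ollama-compatible client that analyses pictures ultimately sends something of this shape, which is why a VLM-less hailo-ollama cannot serve those requests today.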

Thanks,