Llama-3.2 11B Vision Support #9643
Most likely not. I believe its architecture would be closer to Pixtral (which is unsupported) than to Llava.
Currently, none of the new vision+text models are supported. If llama.cpp wants to keep existing in the future, it must implement text+vision models, as they will become more and more common.
True. Is it possible to run it somehow on macOS without using llama.cpp?
Not sure if MLX can run it...?
I got an error that the architecture is unsupported, which was to be expected.
A first milestone could be to ignore the non-text capabilities and simply let people use text input/output to interact with multimodal models. Below is the output from the current (llama.cpp b3827) convert_hf_to_gguf.py, so people searching for this error can find this issue:
[conversion error output omitted]
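For context, a rough sketch of the kind of invocation that produces this error on b3827; the model directory and output file name here are placeholders, not taken from the original report:

```sh
# Attempt to convert the Hugging Face checkpoint to GGUF.
# On llama.cpp b3827 this fails because the Llama-3.2 Vision
# architecture is not recognized by the converter yet.
python convert_hf_to_gguf.py /path/to/Llama-3.2-11B-Vision-Instruct \
    --outfile llama-3.2-11b-vision.gguf
```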
FYI the repo owner has a related statement here:
So if you know anyone with strong skills, or have influence over big tech and can motivate them, please do!
It may be helpful to draw a distinction between multimodal / vision support in the core llama.cpp library vs. multimodal / vision support in llama.cpp's server. Multimodal support was removed from the server in #5882, but it was not removed from the core library / command-line. I believe that ggerganov's comments re: looking for new developers to support vision models in the API are about the server -- not the core library. This is how wrappers (such as ollama) are still able to provide an API for serving multimodal models with llama.cpp as a back-end: ollama provides the HTTP server, and llama.cpp still does the core processing (including multimodal support). Long-term, I kinda' wonder if it isn't in llama.cpp's interests to stop supporting the HTTP server altogether, and instead farm that out to other wrapper projects (such as ollama), while we focus on enhancing the capabilities of the core API.
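As an illustration of the core-library / command-line path mentioned above, here is a rough sketch using the LLaVA example that ships with llama.cpp; the model and image file names are placeholders:

```sh
# Run a LLaVA-style model entirely through llama.cpp's own CLI example:
# the text model (-m) and the multimodal projector (--mmproj) are loaded
# separately, without any HTTP server involved.
./llama-llava-cli \
    -m models/llava-v1.6-7b.Q4_K_M.gguf \
    --mmproj models/mmproj-llava-v1.6-7b-f16.gguf \
    --image photo.jpg \
    -p "Describe this image."
```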
Oh, thank you immensely for that clarification. I wasn't even aware that llama.cpp had a server; it seems redundant given other efforts such as ollama, so I agree with you. Thanks a lot!
I may be misremembering, but I think ollama internally forwards its calls to a llama.cpp HTTP server. In any case, my personal opinion is that I would rather have a server be part of the project instead of having to rely on a third party.
Sorry to take this issue off topic, but hopefully it's relevant enough to warrant continuing it here. Thank you for the correction -- it looks like you are right! Best I can tell, ollama maintains a fork of llama.cpp's server that branched off at b2356 (our last release that supported multimodal). Since then, they have continued updating that branch of server.cpp to add new features, remove the web front-end that we include in ours, maintain multimodal support, etc. I'm going through the diff to try to parse what's being brought over and what's not (I'm not fully clear on their update strategy / method), but it seems to be a full-on fork at this point. I haven't yet figured out how much their server maintains the spirit of the refactoring from #5882, or whether merging their version of server.cpp into ours would be too much of a regression. If we're going to continue this discussion much further, perhaps opening a new issue to discuss sync'ing our version of server.cpp with ollama's would be useful?
Honestly, I agree with that last point: maybe the server should be deprecated in favour of an API that allows others to wrap a server around it, so that llama.cpp can focus on its core competency whilst others handle the specifics of actually serving inference. Having an actual server included as part of the program has been good, but are the program and the community well served by clinging to it now and into the future, when it's very apparent that others are perfectly happy to address the problem of serving models, as is happening with ollama? The same might even be said of the number of inference backends (CUDA, ROCm, various CPUs of both x86 and ARM flavours, Vulkan, and SYCL), as there is technically a great deal of duplication of effort in that regard. It'd be interesting to see what the technical feasibility of, for example, Vulkan reaching feature parity with CUDA, ROCm, and CPU would be. Of course, at the end of the day, I'm basically all talk on this matter, as it'd be years before I'd have the competence to contribute :/
Open source projects aren't run like a company. There isn't a boss at the top directing people to work on specific things; people are choosing what to work on of their own volition. Removing the server isn't going to result in more resources going towards other aspects of the project; it's going to result in the people who are currently choosing to contribute to and maintain the server being frustrated that their efforts were "wasted". The only aspect where I think there is a real zero-sum game is code review, since the vast majority of that work is done by Georgi and slaren.
Yeah... it kind of just accentuates how frustrating it is to want to help and contribute, yet know it'd be years before I could do so in a way that is useful :/ Basically just a plain bummer.
It works OK on a MacBook Pro using the example code on the Hugging Face page. I tried to get it to caption and keyword some pics. It doesn't seem to understand what a keyword is and instead produces a good deal of prose. I preferred mlx-vlm + LLaVA 1.6, which was also a bit faster to run.
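For anyone wanting to try the mlx-vlm route mentioned above, a minimal sketch of its generate entry point follows; the model repo, image path, and prompt are illustrative assumptions rather than what the commenter used:

```sh
# Caption an image with a LLaVA 1.6 model via mlx-vlm on Apple silicon
# (assumed usage of mlx-vlm's command-line generate module).
python -m mlx_vlm.generate \
    --model mlx-community/llava-v1.6-mistral-7b-4bit \
    --image photo.jpg \
    --prompt "Describe this image in a few keywords." \
    --max-tokens 100
```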
Is it working right now in any way?