server: Bring back multimodal support #8010
Comments
Can someone advise me on what is the latest (working) commit before multimodal was removed?
It is rather annoying that multimodal support was removed from the server and has not been re-implemented for such a long time now (4 months?). Multimodal LLMs and interleaved image-and-text models have been growing in capability recently, and not being able to run models that used to work before is unfortunate. Seemingly, the only way to restore this functionality is to downgrade to a version that loses support for most new models and improvements. I am not trying to demand that multimodal/llava support return, but to show that this feature on the server is missed.
Hello, is there still no multimodal support in llama-server? According to the README for the LLaMA.cpp HTTP Server, it should be supported. How can it be used with the OpenAI API format?
Have there been any updates on this?
So far, it appears that there haven't been any updates. This really stinks, because there were updates to llava recently to support new models.
So, this functionality has been unavailable for months and there is no hope of getting it running? With all the amazing new models we could work with, such as MiniCPM, even Pixtral, etc. Can someone point us to working server software that can run the newer multimodal models? We just need something like llama-server that can serve these multimodal GGUFs. Perhaps one of the llama.cpp files allows running a server? It's also important to have a standard (OpenAI) API to support standard interactions. It's so frustrating to wait months and months for such an important feature with no one even bothering to reply!
Not much has changed since the issue was created. We need contributions to improve the existing vision code and people to maintain it. There is interest in reintroducing full multimodal support, but there are other things with higher priority that the core maintainers of the project are currently working on.
Just a reminder: currently, llama-cpp-python has a server implementation that supports vision models (with an OpenAI-compatible API). You can use it as an alternative. Of course, it's much better to bring vision support into llama.cpp itself (instead of staying as …)
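For anyone looking for a stopgap, a request against such an OpenAI-compatible vision endpoint typically looks like the sketch below. The file names, port, and launch flags are illustrative assumptions; check the llama-cpp-python docs for the exact invocation.

```python
# Rough sketch of querying an OpenAI-compatible vision endpoint, e.g. the
# llama-cpp-python server mentioned above. Names, port and flags are
# assumptions, something along the lines of:
#   python -m llama_cpp.server --model llava.gguf \
#       --clip_model_path mmproj.gguf --chat_format llava-1-5
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="local-model",  # typically ignored by local servers
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```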
@ggerganov, Meta released Llama-3.2 with multimodal capabilities. Does this affect the priority for core maintainers? I hope this question doesn’t come across as entitled...
@chigkim My PoV is that adding multimodal support is a great opportunity for new people with good software architecture skills to get involved in the project. The general low- to mid-level patterns and details needed for the implementation are already available in the codebase - from model conversion to data loading, backend usage and inference. It would take some high-level understanding of the project architecture in order to implement support for the vision models and extend the API in the correct way. We really need more people with this sort of skillset, so at this point I feel it is better to wait and see if somebody will show up and take the opportunity to help out with the project long-term. Otherwise, I'm afraid we won't be able to sustain the quality of the project.
Great! A good opportunity; from a developer's perspective, everyone loves to dive into the code. I would love to help but don't know where to start. Is there a list of requirements for the implementation, or should we just make something work for now? What would the finished implementation look like?
Correct me if I'm wrong, but current open-source multimodal models are essentially just a usual LLM plus the ability to accept images as input.
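One minimal way to picture that claim (a toy NumPy sketch, not the llama.cpp API; all sizes are made up):

```python
import numpy as np

# Toy illustration: conceptually, a multimodal model is a regular LLM whose
# input embedding sequence gets extra rows for the image.
d_model = 16                                # hypothetical embedding size
text_before = np.random.rand(5, d_model)    # embeddings of tokens before the image
image_embd  = np.random.rand(32, d_model)   # output of a CLIP-like vision encoder
text_after  = np.random.rand(7, d_model)    # embeddings of tokens after the image

# The decoder attends over this combined sequence exactly as it would over a
# text-only prompt.
inputs_embd = np.concatenate([text_before, image_embd, text_after], axis=0)
print(inputs_embd.shape)                    # (44, 16)
```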
IMO the …
It's hard to make a list of requirements - I personally don't have the expertise and experience needed to decide what is the best way to integrate multimodal. It mainly depends on the ways that the functionality is used - what is the input and output. The core implementation should be 99% the same as every transformer-based model.
Likely, …
That's my understanding as well. In a similar way, Whisper is an LLM that, instead of text tokens, accepts raw audio, which is encoded and passed to the decoder.
Yes, I agree.
Yes, something along these lines, though I don't really have a good picture. Maybe even consider reusing …
CLIP is quite different from Whisper because it doesn't use cross-attention. Instead, the vision model outputs embeddings that can be taken as input by the language model. It also depends on the chat template to know where to put the embeddings. A common pattern that I observe looks like this:
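A hypothetical illustration of that pattern (the special token names below are made up and differ per model) is a prompt where a placeholder marks the position at which the image embeddings are inserted before decoding:

```python
# Illustrative only: a chat-template layout with an image placeholder.
# The markers are made-up names, not from a specific model.
prompt = (
    "<|user|>\n"
    "<image>\n"                 # replaced by N image-embedding vectors
    "What do you see in this picture?\n"
    "<|assistant|>\n"
)
```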
So, my current solution on #9687 is to firstly call … Not sure if this is the best way to do it though, as I'm still new to vision models. Feedback is welcome on this subject.
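A rough sketch of that kind of two-step flow, with hypothetical helper names rather than the actual #9687 API: encode the image first, then split the templated prompt at the image marker and evaluate the pieces in order.

```python
from dataclasses import dataclass
from typing import Callable, List, Union
import numpy as np

@dataclass
class TextChunk:
    tokens: List[int]          # token ids for a text segment

@dataclass
class ImageChunk:
    embeddings: np.ndarray     # (n_image_tokens, d_model) from the vision encoder

Chunk = Union[TextChunk, ImageChunk]

def split_prompt(prompt: str,
                 image: bytes,
                 tokenize: Callable[[str], List[int]],
                 encode_image: Callable[[bytes], np.ndarray],
                 marker: str = "<image>") -> List[Chunk]:
    """Toy illustration (assumed names): produce the chunks a server could
    evaluate in order - text tokens, image embeddings, remaining text."""
    before, after = prompt.split(marker, maxsplit=1)
    return [TextChunk(tokenize(before)),
            ImageChunk(encode_image(image)),
            TextChunk(tokenize(after))]
```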
Multimodal has been removed since #5882.
Depending on the refactoring of llava, we will be able to bring back the support: #6027.
This issue is created mostly for tracking purposes. If someone wants to take this task, feel free to comment below.
Currently, there is not yet any plan for this task.