Feature Request: Add Paligemma support #7875
Comments
Yup! Work in progress at #7553.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Hey there, I hope all is well. Any luck getting the model to run? I'm quite curious: I managed to use clip.cpp quantization to quantize Phi-3 Vision's projector, and llama.cpp quantization to quantize the language-model component. The result was a pretty useful VLM with a total size of 1.5 GB.
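The two-part workflow described in that comment can be sketched as a shell script. This is a hedged sketch, not a verified recipe: it assumes llama.cpp's `llama-quantize` binary for the language-model half, and the clip.cpp quantize tool path, the file names, and the projector quantization type are all hypothetical placeholders. The script only prints the commands it would run.

```shell
#!/bin/sh
# Sketch: quantize the two halves of a VLM separately.
# All file names and the clip.cpp tool path below are hypothetical.

LM_F16="phi3-vision-lm-f16.gguf"        # hypothetical f16 export of the LM weights
PROJ_F16="phi3-vision-mmproj-f16.gguf"  # hypothetical f16 export of the vision projector

# Quantize the language model with llama.cpp's llama-quantize (Q4_K_M is a
# common quality/size trade-off):
LM_CMD="./llama-quantize $LM_F16 phi3-vision-lm-Q4_K_M.gguf Q4_K_M"

# Quantize the vision projector with clip.cpp (binary location and q4_0
# argument are assumptions, not confirmed against the clip.cpp docs):
PROJ_CMD="./clip.cpp/build/bin/quantize $PROJ_F16 phi3-vision-mmproj-q4_0.gguf q4_0"

# Dry run: print the commands instead of executing them, since the model
# files above do not exist in this sketch.
echo "$LM_CMD"
echo "$PROJ_CMD"
```

At inference time the two quantized files would then be loaded together, the projector as the `mmproj` file and the quantized GGUF as the language model, which is how the ~1.5 GB total in the comment breaks down across the two components.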
Prerequisites
Feature Description
It's a really solid model and has a lot of requests in the discussions.
Motivation
Punches well above its weight and has really good OCR capabilities.
Possible Implementation
No response