Does the LLaVA finetune option support finetuning only the LLaMA model? #4669
-
You can use this config to finetune only the LLaMA model with LoRA (roughly 20 GB of VRAM): https://github.com/hiyouga/LLaMA-Factory/blob/main/examples/train_lora/llava1_5_lora_sft.yaml
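Roughly, that config looks like the sketch below. The exact field names and defaults may differ from the current file in the repo, so treat this as an outline rather than a copy of it:

```yaml
### model
model_name_or_path: llava-hf/llava-1.5-7b-hf
visual_inputs: true            # enable image inputs (field name may differ by version)

### method
stage: sft
do_train: true
finetuning_type: lora          # LoRA adapters only, base weights stay frozen
lora_target: all               # the repo's example may use a narrower module list

### dataset
dataset: mllm_demo             # multimodal demo dataset shipped with the repo
template: vicuna               # or whatever chat template the repo uses for LLaVA-1.5
cutoff_len: 1024

### output
output_dir: saves/llava1_5-7b/lora/sft
logging_steps: 10
save_steps: 500

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
```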
-
Some users online claim you need a lot of hardware to finetune LLaVA, e.g. 8×A100 GPUs according to one Reddit comment. I am wondering whether the LLaVA finetune setting in LLaMA-Factory finetunes only the LLaMA model, using about the same amount of VRAM as a regular LLaMA finetune. If so, I could finetune it on my consumer 24 GB VRAM graphics card.
(I assume it can, after briefly looking at https://github.com/hiyouga/LLaMA-Factory/pull/3454/files.)
Personally, I probably do not need to finetune the projection matrix for my application.
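If that is the case, I imagine restricting the adapters to the language model would just mean narrowing the LoRA targets, something like the snippet below (the field names are guesses based on the linked PR, not something I have verified):

```yaml
### method
finetuning_type: lora
# Only attach LoRA adapters to the language model's attention projections;
# the vision tower and the multimodal projector should then stay frozen,
# assuming anything without an adapter is left untrained during LoRA SFT.
lora_target: q_proj,v_proj
```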