Hello,
I am trying to convert a LoRA adapter based on Llama-3.1-8B to GGUF, and I am getting this error:
ERROR:lora-to-gguf:Unexpected name 'base_model.model.lm_head.weight': Not a lora_A or lora_B tensor
ERROR:lora-to-gguf:Embeddings is present in the adapter. This can be due to new tokens added during fine tuning
ERROR:lora-to-gguf:Hint: if you are using TRL, make sure not to call setup_chat_format()
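
A quick way to see exactly what the converter is tripping on is to list the tensor names stored in the adapter file. A minimal sketch, assuming the adapter was saved in safetensors format; the path is a placeholder, not from this thread:

```python
# Minimal sketch: list the tensor names stored in the LoRA adapter to
# confirm that full weights (e.g. base_model.model.lm_head.weight) are
# present alongside the usual lora_A/lora_B pairs.
# "path/to/adapter" is a placeholder, not from this thread.
from safetensors import safe_open

with safe_open("path/to/adapter/adapter_model.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name)
```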
We did indeed add new tokens during the pre-training phase, and lm_head was trained as well.
Any ideas how to solve the problem?
Thank you
llama.cpp/convert_lora_to_gguf.py, line 350 in c421ac0
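
A commonly suggested workaround, sketched below under assumptions not confirmed in this thread: since the adapter carries full embed_tokens/lm_head weights (typically from `modules_to_save` plus a resized vocabulary), merge the adapter into the base model with PEFT and convert the merged model with convert_hf_to_gguf.py, instead of converting the adapter on its own. The model and adapter paths are placeholders.

```python
# Sketch of the merge-then-convert workaround (paths are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# The tokenizer saved with the adapter should contain the added tokens.
tokenizer = AutoTokenizer.from_pretrained("path/to/adapter")

# Base model id as named in the question; adjust to your actual base.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
# Grow the base embeddings/lm_head to match the fine-tuned vocabulary,
# otherwise loading the adapter's saved lm_head/embed_tokens can fail.
base.resize_token_embeddings(len(tokenizer))

model = PeftModel.from_pretrained(base, "path/to/adapter")
merged = model.merge_and_unload()  # bake the LoRA deltas into the weights

merged.save_pretrained("merged-model")
tokenizer.save_pretrained("merged-model")
# Then convert the merged checkpoint instead of the adapter:
#   python convert_hf_to_gguf.py merged-model --outfile merged.gguf
```

The resulting GGUF then carries the merged weights and the expanded vocabulary, so no separate adapter GGUF is needed at inference time.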
Replies: 2 comments

- Any ideas?

- I am having the same problem.