Replies: 2 comments 1 reply
-
We are unable to merge LoRA weights into a GPTQ-quantized model because quantizing the merged model requires a calibration dataset. See casper-hansen/AutoAWQ#85 (comment).
-
Your answer doesn't make sense, because we can gather a calibration dataset to quantize the model anyway (or there might be a quantized version already available, produced with either PTQ or AWQ). There must be some other reason the quantized version is not compatible.
-
Hello, I noticed that merging LoRA weights into a quantized model is not supported. How would one go about adding this support? I can help if it's not too difficult to do. Thanks!
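One way to see why direct merging is unsupported, and what adding support would involve: the quantized layer stores integers plus scales, so the LoRA delta `B @ A` cannot simply be added to the integer weights. A workable route is dequantize → merge in float → re-quantize, and the last step is where real schemes like GPTQ/AWQ need calibration data to pick good scales. A minimal sketch with toy per-tensor symmetric quantization (all shapes and the `quantize`/`dequantize` helpers are illustrative, not the library's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for illustration only.
d, r = 8, 2
W = rng.normal(size=(d, d)).astype(np.float32)   # base weight
A = rng.normal(size=(r, d)).astype(np.float32)   # LoRA down-projection
B = rng.normal(size=(d, r)).astype(np.float32)   # LoRA up-projection

def quantize(w, bits=4):
    """Toy symmetric per-tensor quantization. Real GPTQ/AWQ instead
    use calibration data to choose per-group scales, which is why a
    proper re-quantization after merging needs a dataset."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Adding B @ A straight onto the int weights would be wrong: the
# integers live on a different scale than the float LoRA delta.
q, scale = quantize(W)

# Workable route: dequantize, merge in float, then re-quantize.
W_merged = dequantize(q, scale) + B @ A
q2, scale2 = quantize(W_merged)

# Round-trip error of the re-quantized weights is bounded by the scale.
err = np.abs(dequantize(q2, scale2) - W_merged).max()
print(err < scale2)
```

The sketch shows the mechanics are easy in float; the hard part upstream libraries face is that re-quantizing the merged weights well (per-group scales, activation-aware choices) brings back the calibration-dataset requirement mentioned above.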