hiyouga LLaMA-Factory Discussions
Discussions
- After training ChatGLM4 with LoRA, inference raises an error. [solved]
- Please give me a sample of the Tools structure. [solved]
- During GaLore training the reported grad_norm is always 0; is this normal? [solved]
- When fine-tuning Llama-3-8B, eval_loss keeps rising; mixing in multiple datasets did not help. How can this be resolved? [pending]
- Can we fine-tune a text-embedding model using LLaMA Factory? [wontfix]
- How to do ORPO with the DPO template? [solved]
- During pre-training, is the dataset concatenated according to the specified template? [solved]
- Is lora_magnitude_vector needed in lora_target for DoRA? [solved]
- Issue: unusually high training loss. [solved]
- After the update, is fine-tuning of chatglm4-9b-chat not supported? Why hasn't this model appeared yet? [solved]
- Fine-tuning (sft) a model after pre-training (pt). [solved]