Reminder
System Info
llamafactory version: 0.9.1.dev0

Reproduction
In qwen2_vl.yaml, I wrote:

do_sample: false
max_new_tokens: 512

But when I inspected the arguments actually passed in with py-spy, it showed do_sample=true, max_new_tokens was missing, and max_len was set to my cutoff_len. What I actually want is to limit the generation length.

According to the transformers source code
https://github.com/huggingface/transformers/blob/8bd2b1e8c23234cd607ca8d63f53c1edfea27462/src/transformers/generation/utils.py#L2967
do_sample should already be false by the time `_sample` is reached. After repeated experiments, I found that these parameters are only passed correctly in the final eval after training completes; none of the evals during training receive them.
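For reference, a minimal sketch (using only the public GenerationConfig API, independent of llamafactory) of what transformers itself resolves from these values, which is what I expected the eval-time generate() calls to see:

```python
from transformers import GenerationConfig

# Values taken from qwen2_vl.yaml
cfg = GenerationConfig(do_sample=False, max_new_tokens=512)

print(cfg.do_sample)       # False -> greedy decoding, no sampling
print(cfg.max_new_tokens)  # 512   -> cap on the number of generated tokens
# In recent transformers versions, cfg.get_generation_mode() should
# additionally report GenerationMode.GREEDY_SEARCH for this config.
```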
Expected behavior
The parameters should be passed to model.generate correctly.
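A possible workaround, which I have not verified (model below stands for the loaded Qwen2-VL model): overriding the model's default generation config before training, so that any eval whose generate() call falls back to model.generation_config still picks up the intended values.

```python
# Untested workaround sketch: force the intended defaults onto the model
# itself, so generate() calls made without explicit kwargs inherit them.
model.generation_config.do_sample = False
model.generation_config.max_new_tokens = 512
```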
Others
No response