
A small question about the Qwen2.5-Coder model #206

Open
jsuper opened this issue Dec 12, 2024 · 3 comments

Comments

@jsuper

jsuper commented Dec 12, 2024

Can the qwen2.5-coder-32b-instruct model be used for both chat Q&A and FIM code completion? Or does FIM completion only work with the qwen2.5-coder-32b-base model?

Also, I see that only quantized versions of qwen2.5-coder-32b-instruct have been released officially. Is there an official quantized model for the 32b-base?

@cyente
Collaborator

cyente commented Dec 16, 2024

Hi, you can use Qwen2.5-Coder-32B-Instruct in either chat or FIM format.

However, for FIM completion Qwen2.5-Coder-32B-Instruct may perform slightly worse than the base model.

We have not released a quantized version of the 32B base model.
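
For reference, a minimal sketch of what "FIM format" means here, using the <|fim_prefix|> / <|fim_suffix|> / <|fim_middle|> special tokens documented for Qwen2.5-Coder; the snippet being completed is made up for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical snippet: the model is asked to fill in the text between prefix and suffix.
prefix = "def binary_search(items, target):\n    lo, hi = 0, len(items) - 1\n    while lo <= hi:\n"
suffix = "\n    return -1\n"
fim_prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

model_name = "Qwen/Qwen2.5-Coder-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

inputs = tokenizer([fim_prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the generated middle part.
completion = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(completion)
```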

@jsuper
Author

jsuper commented Dec 16, 2024

> Hi, you can use Qwen2.5-Coder-32B-Instruct in either chat or FIM format.
>
> However, for FIM completion Qwen2.5-Coder-32B-Instruct may perform slightly worse than the base model.
>
> We have not released a quantized version of the 32B base model.

Yes, the instruct model can do some completion, but its FIM completion is not as good as the base model's. If I don't add markers like <|repo_name|> and the prompt is just a comment, it outputs natural-language answers instead.
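
For context, a sketch of the repository-level prompt layout that uses <|repo_name|> and <|file_sep|>, as documented for Qwen2.5-Coder; the repository name, file paths, and contents below are made up:

```python
# Repository-level completion prompt sketch: <|repo_name|> gives the repo name and
# <|file_sep|> marks file boundaries; the last file is left unfinished so the model continues it.
files = {
    "utils/math_ops.py": "def add(a, b):\n    return a + b\n",
    "main.py": "from utils.math_ops import add\n\ndef main():\n",  # file to be continued
}

prompt = "<|repo_name|>example_repo\n"
for path, content in files.items():
    prompt += f"<|file_sep|>{path}\n{content}"
print(prompt)
```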

@cyente
Collaborator

cyente commented Dec 16, 2024

Maybe you can try using the instruct model while keeping the same FIM prompt structure as the base model, rather than converting it to the ChatML format.
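
A minimal sketch of that suggestion, assuming the same FIM special tokens as the base model: the FIM string is tokenized directly as a plain completion prompt instead of being wrapped with tokenizer.apply_chat_template (the prefix/suffix code is hypothetical):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-32B-Instruct")

# Hypothetical prefix/suffix around the gap to fill.
prefix = "def fib(n):\n    a, b = 0, 1\n"
suffix = "\n    return a\n"
fim_prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# Suggested path: tokenize the FIM prompt directly, exactly as you would for the base model.
inputs = tokenizer([fim_prompt], return_tensors="pt")

# Not this: wrapping the same prompt in ChatML is what tends to produce natural-language replies.
# chat_prompt = tokenizer.apply_chat_template(
#     [{"role": "user", "content": fim_prompt}], tokenize=False, add_generation_prompt=True
# )
```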
