
Adding an assistant_model Argument for Speculative Decoding #288

Open
ilyasoulk opened this issue Nov 23, 2024 · 0 comments

Hello,

I am distilling language models for coding tasks, and I would like to benchmark a distilled model as the assistant (draft) model for speculative decoding. Currently there does not seem to be a way to specify a custom assistant model.

I propose adding an assistant_model argument that lets users supply a path to their assistant model, which would then be passed through gen_kwargs to generation. I could make the change locally for my own tests, but I wanted to ask whether this feature would be useful enough to integrate into the project.
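As a rough illustration of what I have in mind, here is a minimal sketch of how the argument could be merged into the generation kwargs. The function and parameter names (`build_gen_kwargs`, `assistant_model`) are placeholders, not part of the current codebase; the only assumption is that the underlying `generate()` call accepts an `assistant_model` keyword, as Hugging Face transformers does for assisted/speculative decoding.

```python
from typing import Any, Optional


def build_gen_kwargs(
    base_kwargs: dict[str, Any],
    assistant_model: Optional[Any] = None,
) -> dict[str, Any]:
    """Return generation kwargs, adding the assistant model if one was given.

    `assistant_model` would typically be a model loaded with
    AutoModelForCausalLM.from_pretrained(path); transformers' generate()
    then uses it as the draft model for speculative (assisted) decoding.
    """
    gen_kwargs = dict(base_kwargs)  # copy so the caller's dict is not mutated
    if assistant_model is not None:
        gen_kwargs["assistant_model"] = assistant_model
    return gen_kwargs
```

The assistant model would need to share the main model's tokenizer for assisted decoding to work, so the loader should probably validate that as well.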

@ilyasoulk ilyasoulk changed the title Add assistant_model Argument for Speculative Decoding Adding an assistant_model Argument for Speculative Decoding Nov 23, 2024