I am working on distilling language models for coding tasks and want to benchmark my distilled model as the assistant model for speculative decoding. Currently, there doesn't seem to be an option to specify a custom assistant model.
I think we could add an assistant_model argument that lets users provide the path to their assistant model, which would then be passed through to the gen_kwargs. I could make the change for my own local tests, but I was wondering whether this feature would be useful enough to integrate into the project.
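To illustrate the idea, here is a minimal sketch of how such an argument could be threaded into the generation kwargs. The helper name `build_gen_kwargs` is hypothetical (not from the project); the only real API assumed is that Hugging Face transformers' `model.generate(..., assistant_model=draft)` enables assisted (speculative) decoding when given a loaded draft model.

```python
def build_gen_kwargs(assistant_model=None, **gen_kwargs):
    """Hypothetical helper: merge an optional assistant (draft) model into
    the generation kwargs, leaving them untouched when none is given."""
    if assistant_model is not None:
        # transformers' generate() accepts a loaded draft model under this key
        # to enable assisted/speculative decoding.
        gen_kwargs["assistant_model"] = assistant_model
    return gen_kwargs


# Usage sketch (model loading elided; any checkpoint pair sharing a
# tokenizer would work):
#   draft = AutoModelForCausalLM.from_pretrained(args.assistant_model)
#   outputs = model.generate(**inputs,
#                            **build_gen_kwargs(assistant_model=draft,
#                                               max_new_tokens=128))
```

The benefit of a single merge point is that benchmarking code paths that don't use speculative decoding stay unchanged: when no `assistant_model` is supplied, the kwargs are identical to today's.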
ilyasoulk changed the title from "Add assistant_model Argument for Speculative Decoding" to "Adding an assistant_model Argument for Speculative Decoding" on Nov 23, 2024.