In NLP, take BERT as an example: when the authors pretrained the model, the tasks were masked language modelling and next sentence prediction. The task for finetuning then depends on the dataset and goal, e.g. text classification, entity extraction, etc. Does Chronos have different task(s) during the pretraining and finetuning stages? From reading the paper and code, it seems that in both stages the task is the same, i.e. to generate a sequence of predictions that is as close as possible to the given labels. Am I understanding this right?
Replies: 1 comment
@hsm207 yes, you're right: the task we consider for pre-training and fine-tuning is the same (next token prediction).
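To make the answer concrete, here is a minimal, self-contained PyTorch sketch of the idea, not the actual Chronos implementation: a single next-token cross-entropy objective is used for both stages, and only the data changes between "pretraining" and "fine-tuning". All names here (`SimpleTokenModel`, `next_token_loss`, `VOCAB_SIZE`, the toy GRU backbone) are hypothetical stand-ins chosen for illustration; Chronos itself uses a T5-style transformer over tokens obtained by scaling and quantizing the real-valued series.

```python
import torch
import torch.nn as nn

# Assumption: the real-valued series has already been scaled and quantized
# into discrete tokens from a vocabulary of this (illustrative) size.
VOCAB_SIZE = 4096


class SimpleTokenModel(nn.Module):
    """A toy causal token model standing in for a Chronos-style LM."""

    def __init__(self, vocab_size: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # (batch, seq_len, vocab_size) logits


def next_token_loss(model: nn.Module, tokens: torch.Tensor) -> torch.Tensor:
    """Cross-entropy of predicting token t+1 from tokens up to t.

    This one objective is shared by both pretraining and fine-tuning.
    """
    logits = model(tokens[:, :-1])  # predict from the prefix
    targets = tokens[:, 1:]         # shifted-by-one labels
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
    )


model = SimpleTokenModel(VOCAB_SIZE)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# "Pretraining": a large, diverse corpus of tokenized series (random here).
pretrain_batch = torch.randint(0, VOCAB_SIZE, (8, 128))
# "Fine-tuning": a smaller, task-specific dataset with the SAME objective.
finetune_batch = torch.randint(0, VOCAB_SIZE, (8, 128))

for batch in (pretrain_batch, finetune_batch):
    opt.zero_grad()
    loss = next_token_loss(model, batch)  # identical loss in both stages
    loss.backward()
    opt.step()
```

This is the contrast with BERT from the question: there, the objective itself changes between stages (masked language modelling plus next sentence prediction, then a task-specific head), whereas in this setup the loss function is untouched and only the training data differs.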