- Use environment.yml or requirements.txt to set up dependencies.
- Download the pretrained GPT-2 model and tokenizer, save them to a local folder, and load them from that location (see the first sketch after this list).
- Split data into train/val/test.
- Input to the model: a start token, the article text, a separator token, the summary, and an end token. Truncate the text and summary so the full sequence fits the total length of 768 or 1024 tokens (see the dataset sketch after this list).
- Create DataLoaders for the train and val sets.
- Add the special tokens to the GPT-2 tokenizer (tokenizer sketch below).
- Resize the model's token embeddings to match the new tokenizer length.
- Fine-tune the model on the train data, evaluating on the val data during training (training-loop sketch below).
- Store the tokenizer and fine-tuned model.
- Generate summaries for the test set, which is not used during fine-tuning.
- Simple top-k sampling and beam search are used for generation (generation sketch below).
- Compute ROUGE scores for the test outputs and store them (ROUGE sketch below).
- Add an argument parser (currently all hyperparameters are stored in config.py).
- Batch processing (the code currently works only with batch_size = 1).
- This code contains some parts from the following code/post: https://blog.paperspace.com/generating-text-summaries-gpt-2/
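A minimal sketch of downloading GPT-2 once and reloading it from a local folder, assuming the Hugging Face `transformers` library is used; the folder name `./gpt2_local` is only an example, not the path used in this repo.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

MODEL_DIR = "./gpt2_local"  # example path; the repo may use a different folder

# Download once from the Hugging Face hub and save locally.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer.save_pretrained(MODEL_DIR)
model.save_pretrained(MODEL_DIR)

# Later runs load from the local folder instead of the hub.
tokenizer = GPT2Tokenizer.from_pretrained(MODEL_DIR)
model = GPT2LMHeadModel.from_pretrained(MODEL_DIR)
```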
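Adding the special tokens and resizing the embeddings could look like the following (continuing from the sketch above). The exact token strings `<|sep|>` and `<|pad|>` are assumptions for illustration, not necessarily the ones defined in this repo.

```python
# Hypothetical special tokens; the repo's actual token strings may differ.
special_tokens = {"sep_token": "<|sep|>", "pad_token": "<|pad|>"}
tokenizer.add_special_tokens(special_tokens)

# Grow the embedding matrix so the newly added token ids have embeddings.
model.resize_token_embeddings(len(tokenizer))
```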
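A rough sketch of building the model input (article, separator, summary, padding) and the train/val DataLoaders. The class name `SummaryDataset`, the `(text, summary)` pair format, and `max_summary_len` are assumptions made for this example.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SummaryDataset(Dataset):
    """Concatenates article and summary ids around a separator token."""

    def __init__(self, pairs, tokenizer, max_len=1024, max_summary_len=100):
        self.pairs = pairs                  # list of (text, summary) strings
        self.tokenizer = tokenizer
        self.max_len = max_len              # 768 or 1024
        self.max_summary_len = max_summary_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        text, summary = self.pairs[idx]
        sum_ids = self.tokenizer.encode(summary)[: self.max_summary_len]
        # Truncate the article so article + <sep> + summary fits in max_len.
        text_ids = self.tokenizer.encode(text)[: self.max_len - len(sum_ids) - 1]
        ids = text_ids + [self.tokenizer.sep_token_id] + sum_ids
        ids += [self.tokenizer.pad_token_id] * (self.max_len - len(ids))  # pad to max_len
        return {"input_ids": torch.tensor(ids), "sum_idx": len(text_ids) + 1}

# Placeholder data; real code would load the train/val splits from disk.
train_pairs = [("a long article ...", "its short summary")]
val_pairs = [("another article ...", "another summary")]

train_loader = DataLoader(SummaryDataset(train_pairs, tokenizer), batch_size=1, shuffle=True)
val_loader = DataLoader(SummaryDataset(val_pairs, tokenizer), batch_size=1)
```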
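The fine-tuning step could be a plain language-modelling loop like the one below (continuing from the sketches above). For simplicity the loss here is computed over the whole padded sequence except padding; the repo may instead mask the article tokens and train only on the summary portion. The learning rate, epoch count, and output folder are placeholders.

```python
import torch
from torch.optim import AdamW

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = AdamW(model.parameters(), lr=5e-5)   # placeholder learning rate

for epoch in range(3):                           # placeholder epoch count
    model.train()
    for batch in train_loader:
        input_ids = batch["input_ids"].to(device)
        labels = input_ids.clone()
        labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
        loss = model(input_ids, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Evaluate on the validation set after each epoch.
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for batch in val_loader:
            input_ids = batch["input_ids"].to(device)
            labels = input_ids.clone()
            labels[labels == tokenizer.pad_token_id] = -100
            val_loss += model(input_ids, labels=labels).loss.item()
    print(f"epoch {epoch}: val loss {val_loss / len(val_loader):.4f}")

# Store the tokenizer and fine-tuned model; the folder name is a placeholder.
model.save_pretrained("./gpt2_summarizer")
tokenizer.save_pretrained("./gpt2_summarizer")
```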
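For generation on the unseen test set, the repo may implement its own top-k sampling and beam search; the sketch below uses `transformers`' built-in `generate` as a stand-in to show both decoding modes. The test article and `max_summary_len` are placeholders.

```python
import torch

model.eval()

test_text = "an unseen article from the test set ..."  # placeholder
max_summary_len = 100

text_ids = tokenizer.encode(test_text)[: 1024 - max_summary_len - 1]
input_ids = torch.tensor([text_ids + [tokenizer.sep_token_id]]).to(device)

# Simple top-k sampling.
topk_out = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + max_summary_len,
    do_sample=True,
    top_k=10,
    pad_token_id=tokenizer.pad_token_id,
)

# Beam search.
beam_out = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + max_summary_len,
    num_beams=5,
    early_stopping=True,
    pad_token_id=tokenizer.pad_token_id,
)

# Decode only the tokens generated after the separator.
topk_summary = tokenizer.decode(topk_out[0, input_ids.shape[1]:], skip_special_tokens=True)
beam_summary = tokenizer.decode(beam_out[0, input_ids.shape[1]:], skip_special_tokens=True)
```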
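Computing and storing ROUGE scores might look like this; the sketch assumes the `rouge-score` package (`pip install rouge-score`) and a JSON output file, both of which may differ from what the repo actually uses.

```python
import json
from rouge_score import rouge_scorer

# Placeholder lists; real code would collect these while generating on the test set.
reference_summaries = ["the gold summary ..."]
generated_summaries = ["the model's summary ..."]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

scores = []
for ref, hyp in zip(reference_summaries, generated_summaries):
    result = scorer.score(ref, hyp)
    scores.append({name: s.fmeasure for name, s in result.items()})

# Store the per-example scores to disk.
with open("rouge_scores.json", "w") as f:
    json.dump(scores, f, indent=2)
```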