This repository serves as practice for fine-tuning a pre-trained Transformers model from Hugging Face using various techniques.
- Practice on how to prepare the foundations of the training environment, such as the tokenizer, model, dataset, and hyperparameters.
- Practice on how to fine-tune a model using the `Trainer` high-level API.
- Practice on how to evaluate the performance of the model after training, using the `evaluate` library.
- Practice on how to fine-tune a model using a low-level training & evaluation loop.
- Practice on how to fine-tune a model using the `Accelerator` to enable distributed training on multiple GPUs or TPUs.
- Clone this repository to your local machine:

```bash
git clone git@github.com:IsmaelMousa/playing-with-finetuning.git
```

- Navigate to the `playing-with-finetuning` directory:

```bash
cd playing-with-finetuning
```

- Set up a virtual environment:

```bash
python3 -m venv .venv
```

- Activate the virtual environment:

```bash
source .venv/bin/activate
```

- Install the required dependencies:

```bash
pip install -r requirements.txt
```
Important

To start training from the `train.py` file directly:

- Configure Accelerate (this command will prompt you to answer a few questions and dump your answers into the configuration file):

```bash
accelerate config
```

- Launch the training:

```bash
accelerate launch train.py
```