Fine-Tuning a Pre-trained Model

Overview

This repository is a hands-on practice in fine-tuning a pre-trained Transformers model from Hugging Face using various techniques.

Points

  • Practice how to prepare the foundations of the training environment: the tokenizer, model, dataset, and hyperparameters.
  • Practice how to fine-tune a model using the high-level Trainer API.
  • Practice how to evaluate the model's performance after training using the evaluate library.
  • Practice how to fine-tune a model with a low-level training and evaluation loop.
  • Practice how to fine-tune a model using the Accelerator to enable distributed training on multiple GPUs or TPUs.

Minimal sketches of each of these points are shown below.
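
A minimal sketch of the preparation step. The checkpoint, dataset, and hyperparameter values here are illustrative assumptions; train.py may use different ones:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint and dataset; the repository's train.py may differ.
checkpoint = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

raw_datasets = load_dataset("glue", "mrpc")

def tokenize(batch):
    # MRPC pairs two sentences; truncate to the model's maximum length.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

tokenized_datasets = raw_datasets.map(tokenize, batched=True)

# Illustrative hyperparameters; tune these for your own task.
hyperparameters = {"learning_rate": 2e-5, "num_train_epochs": 3, "batch_size": 8}
```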
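
Building on the preparation sketch, one way to fine-tune with the high-level Trainer API. The output directory name and hyperparameter values are assumptions, not the repository's actual settings:

```python
from transformers import DataCollatorWithPadding, Trainer, TrainingArguments

# Dynamic padding: pad each batch to its longest sequence instead of a global max.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

training_args = TrainingArguments(
    output_dir="test-trainer",  # assumed name; checkpoints are written here
    learning_rate=2e-5,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)

trainer.train()
```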
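
Continuing from the Trainer sketch, an evaluation pass with the evaluate library, assuming the GLUE MRPC metric that matches the dataset above:

```python
import numpy as np
import evaluate

# Load the metric matching the (assumed) GLUE MRPC task.
metric = evaluate.load("glue", "mrpc")

# Run the fine-tuned model over the validation split.
predictions = trainer.predict(tokenized_datasets["validation"])
preds = np.argmax(predictions.predictions, axis=-1)

# For MRPC this reports accuracy and F1.
print(metric.compute(predictions=preds, references=predictions.label_ids))
```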
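
The same fine-tuning written as a low-level training loop in plain PyTorch, reusing the tokenized dataset and data collator from the sketches above (the column names assume GLUE MRPC):

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import get_scheduler

# Keep only the columns the model's forward() expects.
train_dataset = tokenized_datasets["train"].remove_columns(["sentence1", "sentence2", "idx"])
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch")

train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=8,
                              collate_fn=data_collator)

optimizer = AdamW(model.parameters(), lr=2e-5)
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler("linear", optimizer=optimizer,
                             num_warmup_steps=0, num_training_steps=num_training_steps)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()

for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)  # forward pass returns the loss when labels are given
        outputs.loss.backward()   # backward pass
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```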
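
Finally, a sketch of adapting that loop for Accelerate, reusing the model, optimizer, scheduler, and dataloader from the previous sketch. Accelerate handles device placement, so the manual .to(device) calls drop out:

```python
from accelerate import Accelerator

accelerator = Accelerator()

# Wrap the model, optimizer, and dataloader for whatever devices are available.
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)            # no manual .to(device): Accelerate places tensors
        accelerator.backward(outputs.loss)  # replaces loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```

A loop written this way runs unchanged on a single GPU or across several via accelerate launch, which is how the Important section below starts training.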

Usage

  1. Clone this repository to your local machine:

     git clone git@github.com:IsmaelMousa/playing-with-finetuning.git

  2. Navigate to the playing-with-finetuning directory:

     cd playing-with-finetuning

  3. Set up a virtual environment:

     python3 -m venv .venv

  4. Activate the virtual environment:

     source .venv/bin/activate

  5. Install the required dependencies:

     pip install -r requirements.txt

Important

To start training from the train.py file directly:

  1. Configure Accelerate; this command will prompt you to answer a few questions and save your answers in the configuration file:

     accelerate config

  2. Launch the training:

     accelerate launch train.py
