Run the training script with:

$ torchrun --standalone --nproc_per_node=2 train.py config/train_gpt2.py

Adjust the `--nproc_per_node` parameter to match the number of GPUs you have (for training GPT-2, pass `config/train_gpt2.py` as shown). The code uses DDP (DistributedDataParallel) to split the training across the GPUs, assuming a single node.
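For context on what `torchrun` does under the hood: it spawns one process per GPU and sets environment variables such as `RANK`, `WORLD_SIZE`, and `LOCAL_RANK`, which the training script reads to decide which GPU each process uses and how to shard work. A minimal sketch (the values below are simulated for illustration; in a real run torchrun sets them for you):

```python
import os

# Simulated values for illustration only; with
# `torchrun --standalone --nproc_per_node=2` each of the two
# spawned processes gets its own RANK (0 or 1) and WORLD_SIZE=2.
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "2")
os.environ.setdefault("LOCAL_RANK", "0")

rank = int(os.environ["RANK"])          # global process index
world_size = int(os.environ["WORLD_SIZE"])  # total number of processes
local_rank = int(os.environ["LOCAL_RANK"])  # GPU index on this node

# In the training script, each process would pin itself to one GPU,
# e.g. device = f"cuda:{local_rank}", and DDP averages gradients
# across all world_size processes each step.
print(f"process {rank} of {world_size} on GPU {local_rank}")
```

Each process trains on its own shard of the data; DDP synchronizes gradients across processes every step, so the effective batch size scales with the number of GPUs.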
Hi, I'm new to transformers and CUDA. I have two T4 GPUs; how can I split the training across both of them?