"Attention Is All You Need" implemented from scratch in PyTorch with support for distributed training, in both FSDP and DDP.
Torch.distributed is all you need :)
I recommend using Paperspace to run this model on multiple GPUs. It's pretty easy to get started with and doesn't break the bank (under $5).
Make sure to create everything in the same region. I used East Coast (NY2).
- Create 1x Private network. Assign both machines to the private network when creating them.
- Create 2x nodes of P4000x2 (multi-GPU) with ML-in-a-Box as the operating system.
- Create 1x Network drive (250 GB).
Log in to each machine and perform the following operations:
```bash
sudo apt-get update
sudo apt-get install net-tools
```
- If you get an error about `seahorse` while installing `net-tools`, do the following:
```bash
sudo rm /var/lib/dpkg/info/seahorse.list
sudo apt-get install seahorse --reinstall
```
- Get each machine's private IP address using `ifconfig`.
- Add the IP and hostname mapping of all the worker nodes to the `/etc/hosts` file of the master node.
- Mount the network drive:
```bash
sudo apt-get install smbclient
sudo apt-get install cifs-utils
sudo mkdir /mnt/training-data
```
- Replace the following values in the command below:
  - `NETWORK_DRIVE_IP` with the IP address of the network drive
  - `NETWORK_SHARE_NAME` with the name of the network share
  - `NETWORK_DRIVE_USERNAME` with the username of the network drive
```bash
sudo mount -t cifs //NETWORK_DRIVE_IP/NETWORK_SHARE_NAME /mnt/training-data -o uid=1000,gid=1000,rw,user,username=NETWORK_DRIVE_USERNAME
```
- Type the drive's password when prompted.
```bash
git clone https://github.com/codingwithsurya/distributed-transformer
cd distributed-transformer
pip install -r requirements.txt
```
- Log in to Weights & Biases. This is a platform that makes it easy to track, visualize, and reproduce our model runs.
```bash
wandb login
```
- Copy the API key from the browser and paste it into the terminal.
- Run the training command below.
Run the following command on any one machine. Make sure not to run it on both machines, otherwise they will overwrite each other's checkpoints.
```bash
torchrun --nproc_per_node=2 --nnodes=1 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:48123 train.py --batch_size 8 --model_folder "/mnt/training-data/weights"
```
FSDP (Fully Sharded Data Parallel) and DDP (Distributed Data Parallel) are both methods for parallelizing training of models like Transformers across multiple GPUs.
- FSDP shards the model weights and optimizer states across GPUs, reducing memory usage, which allows training larger models.
- DDP, on the other hand, replicates the model across GPUs and averages gradients during training, leading to higher memory consumption but simpler synchronization.
FSDP is better suited to very large models, while DDP is often used for standard-sized models where memory isn't a limiting factor. I've also included gradient accumulation in the DDP implementation to handle larger effective batch sizes and reduce synchronization overhead; this is less necessary in FSDP because of its lower per-GPU memory footprint.
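In code, the main difference between the two training scripts comes down to how the model is wrapped, and how gradient accumulation interacts with DDP's gradient all-reduce. The sketch below is a minimal illustration of those two patterns, not the exact code in `train/`; the function names `wrap_model` and `train_epoch_ddp` are made up for this example, and it assumes the script was launched with `torchrun` so `LOCAL_RANK` is set.

```python
import os
import contextlib

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def wrap_model(model: torch.nn.Module, use_fsdp: bool) -> torch.nn.Module:
    """Place the model on this process's GPU and wrap it for distributed training."""
    if not dist.is_initialized():
        dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    if use_fsdp:
        # FSDP shards parameters, gradients, and optimizer state across ranks,
        # so each GPU only holds a slice of the full model at any time.
        return FSDP(model)
    # DDP keeps a full replica on every GPU and all-reduces gradients each step.
    return DDP(model, device_ids=[local_rank])


def train_epoch_ddp(model, loader, optimizer, loss_fn, accum_steps=4):
    """Gradient accumulation with DDP: skip the gradient all-reduce on the
    intermediate micro-batches and synchronize only on the last one."""
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        sync_now = (step + 1) % accum_steps == 0
        ctx = contextlib.nullcontext() if sync_now else model.no_sync()
        with ctx:
            loss = loss_fn(model(x), y) / accum_steps
            loss.backward()
        if sync_now:
            optimizer.step()
            optimizer.zero_grad()
```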
Run the following command on each machine (replace IP_ADDR_MASTER_NODE with the IP address of the master node). You have two options under the train/ directory: train_ddp.py and train_fsdp.py:
- For train_ddp.py:
```bash
torchrun --nproc_per_node=2 --nnodes=2 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=IP_ADDR_MASTER_NODE:48123 train/train_ddp.py --batch_size 8 --model_folder "/mnt/training-data/weights"
```
- For train_fsdp.py:
```bash
torchrun --nproc_per_node=2 --nnodes=2 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=IP_ADDR_MASTER_NODE:48123 train/train_fsdp.py --batch_size 8 --model_folder "/mnt/training-data/weights"
```
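Because both nodes mount the same network drive, checkpoints within a run are normally written from global rank 0 only, so the machines don't race or clobber each other's files. A minimal sketch of that pattern (the `save_checkpoint` helper and filename scheme here are illustrative, not the repo's exact code):

```python
import os

import torch
import torch.distributed as dist


def save_checkpoint(model, optimizer, epoch, folder="/mnt/training-data/weights"):
    # Only global rank 0 writes to the shared network drive.
    if dist.get_rank() != 0:
        return
    os.makedirs(folder, exist_ok=True)
    state = {
        "epoch": epoch,
        # For a DDP-wrapped model this includes the "module." prefix; FSDP
        # checkpointing usually needs an extra state-dict gathering step.
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    }
    torch.save(state, os.path.join(folder, f"checkpoint_{epoch:02d}.pt"))
```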
Log in to Weights & Biases to monitor the training progress: https://app.wandb.ai/
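The metrics on that dashboard come from `wandb.log` calls inside the training loop. A minimal, self-contained sketch of the logging pattern (the project name and metric keys are illustrative, and in a distributed run you would typically log from rank 0 only):

```python
import math

import wandb

# Project name and config values here are illustrative.
run = wandb.init(project="distributed-transformer", config={"batch_size": 8})

for step in range(100):
    fake_loss = math.exp(-step / 50)  # stand-in for the real training loss
    wandb.log({"train/loss": fake_loss}, step=step)

run.finish()
```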
Credit: hkproj/pytorch-transformer