PyTorch Single-Machine Multi-GPU Distributed Training
Data parallelism:
DP and DDP
Both are PyTorch's mechanisms for multi-GPU training. DP is the older implementation; the official recommendation is now DDP, which is faster than DP even when training on a single machine.
- DataParallel (DP)
  - Single-process, multi-threaded; only supports training on a single machine.
  - When training starts, the model is replicated onto each GPU (four in this example), and the input batch is split across the four GPUs for the forward pass. The forward outputs are then gathered onto GPU 0, where the backward pass and parameter update run, after which the updated model is copied back to the other cards. A minimal usage sketch follows this list item.
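For comparison, here is a minimal, hedged DP sketch; the toy linear model, sizes, and batch are illustrative and not from the original post:
import torch
import torch.nn as nn

# Minimal DataParallel sketch (illustrative model, not the post's ResNet example).
model = nn.Linear(128, 10)
if torch.cuda.device_count() > 1:
    # DP replicates the model onto all visible GPUs on each forward call,
    # scatters the input batch across them, and gathers the outputs on cuda:0.
    model = nn.DataParallel(model)
model = model.to('cuda')

x = torch.randn(32, 128, device='cuda')
out = model(x)    # outputs are gathered on cuda:0
print(out.shape)  # torch.Size([32, 10])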
- DistributedDataParallel (DDP)
  - Multi-process (typically one process per GPU); supports training on a single machine or across multiple machines. DDP is usually faster than DP.
  - The model is first loaded onto all four cards, and each GPU is given its own mini-batch for the forward and backward pass. Once the gradients have been computed, they are averaged across all the cards (conceptually: gathered onto GPU 0, averaged there, and broadcast back to every card; in practice DDP does this with an all-reduce during the backward pass), and each card then updates its own copy of the model. Since only gradients are exchanged, far less data is transferred than with DP. A small sketch of this gradient averaging follows below.
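To make the gradient-averaging step concrete, here is a small hedged sketch of the collective that DDP relies on: an all-reduce followed by division by the world size. It assumes the process group has already been initialized as shown in the next section.
import torch
import torch.distributed as dist

def average_gradients(model):
    # Hand-rolled version of what DDP does automatically during backward():
    # sum each gradient across all processes, then divide by the world size.
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size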
- Writing DDP code
  - Initialization
import torch.distributed as dist
import torch.utils.data.distributed

# Initialize the process group. `backend` selects the communication library:
#   nccl — NVIDIA's GPU-to-GPU communication library, for distributed training on NVIDIA GPUs
#   gloo — TCP/IP-based backend that can communicate across machines, typically used without NVIDIA GPUs
#   mpi  — for cluster environments with MPI support
# init_method tells each process how to discover the others; it defaults to env://
dist.init_process_group(backend='nccl', init_method="env://")
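With init_method="env://", rendezvous relies on environment variables that the launcher sets for every process (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE). A small hedged sanity check, assuming the group has just been initialized:
import os
import torch.distributed as dist

# These are set by torch.distributed.launch / torchrun before main() runs.
print(os.environ.get("MASTER_ADDR"), os.environ.get("MASTER_PORT"))
# After init_process_group() every process can query its global rank and the world size.
print(f"rank {dist.get_rank()} of {dist.get_world_size()}")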
  - Set the device
# local_rank is the index of this process on the current machine; with one process per GPU
# it doubles as the GPU index.
device = torch.device(f'cuda:{args.local_rank}')
torch.cuda.set_device(device)
  - Add a DistributedSampler when creating the DataLoader
trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
data_set = torchvision.datasets.MNIST("./", train=True, transform=trans, target_transform=None, download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(data_set)  # add a sampler
data_loader_train = torch.utils.data.DataLoader(dataset=data_set, batch_size=256, sampler=train_sampler)
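The sampler shards the dataset so that each process iterates over a disjoint subset of roughly len(data_set) / world_size samples; batch_size=256 is therefore the per-process batch size, and the effective global batch is 256 × world_size. A quick hedged way to inspect the shard on each rank:
# Each rank sees about ceil(len(data_set) / world_size) samples per epoch
# (the sampler pads so every rank gets an equally sized shard).
print(f"rank {dist.get_rank()}: "
      f"{len(train_sampler)} samples, {len(data_loader_train)} batches")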
  - Wrap the model with torch.nn.parallel.DistributedDataParallel (move it to the device first, then wrap)
net = torchvision.models.resnet101(num_classes=10)
net.conv1 = torch.nn.Conv2d(1, 64, (7, 7), (2, 2), (3, 3), bias=False)  # MNIST images are single-channel
net = net.to(device)
net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[device], output_device=device)  # wrap the model
  - Call set_epoch() before each epoch, otherwise the data will not be re-shuffled
for epoch in range(10):
    train_sampler.set_epoch(epoch)  # seeds the sampler's shuffle for this epoch
    for step, data in enumerate(data_loader_train):
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        outputs = net(images)
        loss = criterion(outputs, labels)
        loss.backward()
        opt.step()
        if step % 10 == 0:
            print("loss: {}".format(loss.item()))
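set_epoch(epoch) is what gives each epoch a different shuffle: the sampler seeds its random permutation with the epoch number, so skipping the call makes every epoch replay the same order. A hedged way to see this outside the training loop:
# Two different epochs produce two different index orders. The full permutation is
# shared across ranks; each rank then takes its own slice of it.
train_sampler.set_epoch(0)
order_epoch0 = list(train_sampler)
train_sampler.set_epoch(1)
order_epoch1 = list(train_sampler)
print(order_epoch0[:5], order_epoch1[:5])  # the index sequences differ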
  - Saving the model
if args.local_rank == 0:  # local_rank 0 is the master process; save from only one process
    torch.save(net, "my_net.pth")
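Note that net is the DDP wrapper at this point, so the snippet above pickles the wrapper itself. A common variant, sketched here as an assumption rather than something from the original post (the filename is illustrative), is to save only the underlying model's state_dict via net.module:
# Save the plain (unwrapped) weights so they can be loaded later without DDP.
if args.local_rank == 0:
    torch.save(net.module.state_dict(), "my_net_state.pth")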
  - Entry point
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # --local_rank is required, but you do not pass it yourself; the DDP launcher supplies it
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()
    main(args)
  - Launch command
python -m torch.distributed.launch --nproc_per_node=2 多卡訓(xùn)練.py  # --nproc_per_node=2 means this machine has two GPUs to use
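On recent PyTorch releases, torch.distributed.launch is deprecated in favour of torchrun, which sets the LOCAL_RANK environment variable instead of passing a --local_rank argument (the script would then read os.environ["LOCAL_RANK"] rather than use argparse). Assuming such a version, the roughly equivalent command would be:
torchrun --nproc_per_node=2 多卡訓(xùn)練.py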
Complete code
import os
import argparse
import torch
import torchvision
import torch.distributed as dist
import torch.utils.data.distributed
from torchvision import transforms


def main(args):
    # nccl: NVIDIA's GPU-to-GPU communication library, for distributed training on NVIDIA GPUs
    # gloo: TCP/IP-based backend that can communicate across machines, typically used without NVIDIA GPUs
    # mpi:  uses an MPI implementation, for cluster environments with MPI support
    # init_method: tells each process how to discover the others and initialize the process group.
    #              If init_method is not specified, PyTorch defaults to the environment-variable method (env://).
    dist.init_process_group(backend='nccl', init_method="env://")  # nccl is the recommended choice for GPUs
    device = torch.device(f'cuda:{args.local_rank}')
    torch.cuda.set_device(device)

    trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
    data_set = torchvision.datasets.MNIST("./", train=True, transform=trans, target_transform=None, download=True)
    train_sampler = torch.utils.data.distributed.DistributedSampler(data_set)
    data_loader_train = torch.utils.data.DataLoader(dataset=data_set, batch_size=256, sampler=train_sampler)

    net = torchvision.models.resnet101(num_classes=10)
    net.conv1 = torch.nn.Conv2d(1, 64, (7, 7), (2, 2), (3, 3), bias=False)  # MNIST input is single-channel
    net = net.to(device)
    net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[device], output_device=device)

    criterion = torch.nn.CrossEntropyLoss()
    opt = torch.optim.Adam(params=net.parameters(), lr=0.001)

    for epoch in range(10):
        train_sampler.set_epoch(epoch)
        for step, data in enumerate(data_loader_train):
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            outputs = net(images)
            loss = criterion(outputs, labels)
            loss.backward()
            opt.step()
            if step % 10 == 0:
                print("loss: {}".format(loss.item()))

    if args.local_rank == 0:
        torch.save(net, "my_net.pth")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # must parse the command-line argument ``--local_rank=LOCAL_PROCESS_RANK``, which is provided by the DDP launcher
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()
    main(args)