Optimizer
- Official documentation
- How to construct an optimizer
- The optimizer's step method
- code
- running log
- What to do when optimization goes the wrong way, as in the log below?
Official documentation
https://pytorch.org/docs/stable/optim.html
Questions to keep in mind: what is an optimizer, what does it optimize, what does optimizing achieve, and what problem is it meant to solve?
It optimizes the model's parameters.
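As a minimal sketch of what "optimizing parameters" means (the tensor w and the toy loss are made up for illustration), a single optimizer step moves a parameter in the direction that reduces the loss:

import torch

w = torch.tensor([2.0, -3.0], requires_grad=True)  # stands in for a model parameter
opt = torch.optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()  # toy loss that is smallest at w = 0
loss.backward()        # computes dloss/dw and stores it in w.grad
opt.step()             # updates w in place: w <- w - lr * w.grad
print(w)               # values have moved toward 0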
How to construct an optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum is a parameter specific to the SGD-with-momentum algorithm
optimizer = optim.Adam([var1, var2], lr=0.0001)
- Choose an optimizer algorithm, e.g. SGD or Adam as above
- The first argument is the model's parameters
- The remaining arguments are specific to the chosen algorithm; the learning rate lr is used by almost every optimizer (a per-parameter-group variant is sketched after this list)
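The same constructors also accept parameter groups with per-group options; a minimal sketch following the pattern in the official docs (the model.base / model.classifier attribute names are placeholders for illustration):

optimizer = optim.SGD(
    [
        {"params": model.base.parameters()},                    # uses the default lr=1e-2 below
        {"params": model.classifier.parameters(), "lr": 1e-3},  # overrides lr for this group
    ],
    lr=1e-2,
    momentum=0.9,
)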
The optimizer's step method
step() uses the gradients stored on the model's parameters and updates the parameters once each iteration according to those gradients.
optimizer.zero_grad()  # required: clears the gradients accumulated in the previous iteration; otherwise they keep accumulating and corrupt the update
for input, target in dataset:
    optimizer.zero_grad()  # required: clear the gradients from the previous iteration
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
or wrap the forward and backward pass in a closure and pass it to step:
for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
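The closure form is what optimizers that re-evaluate the loss several times per step, such as torch.optim.LBFGS, expect; a minimal sketch on a toy problem (the tensor x and the target values are made up for illustration):

import torch

x = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.LBFGS([x], lr=0.5)

def closure():
    optimizer.zero_grad()
    loss = ((x - torch.tensor([1.0, -2.0])) ** 2).sum()
    loss.backward()
    return loss

for _ in range(10):
    optimizer.step(closure)  # LBFGS may call closure() several times inside one step
print(x)                     # converges toward [1, -2]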
code
import torch
import torchvision
from torch import nn, optim
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

test_set = torchvision.datasets.CIFAR10("./dataset", train=False,
                                         transform=torchvision.transforms.ToTensor(),
                                         download=True)
dataloader = DataLoader(test_set, batch_size=1)

class MySeq(nn.Module):
    def __init__(self):
        super(MySeq, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, kernel_size=5, stride=1, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, kernel_size=5, stride=1, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# define the loss
loss = nn.CrossEntropyLoss()
# build the network
myseq = MySeq()
print(myseq)
# define the optimizer
optimizer = optim.SGD(myseq.parameters(), lr=0.001, momentum=0.9)
for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        output = myseq(imgs)
        optimizer.zero_grad()    # reset the gradients to zero each iteration; the previous iteration's gradients are of no use for this update
        result_loss = loss(output, targets)
        result_loss.backward()   # the optimizer needs each parameter's gradient, so step() must run after backward()
        optimizer.step()         # adjust each parameter according to its gradient
        running_loss += result_loss
    print(running_loss)
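One side note on the loop above: running_loss accumulates loss tensors that still carry autograd history, which is why the log below prints grad_fn=<AddBackward0>. A common variant (shown as a sketch, not what the logged run used) accumulates a plain Python number instead:

running_loss += result_loss.item()  # .item() extracts the scalar, detached from the autograd graph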
running log
What to do when the loss shrinks at first, then grows, and finally becomes nan (the first two fixes are sketched after this list):
- Lower the learning rate
- Use regularization (e.g. weight decay)
- Add more training data
- Check the network architecture and the activation functions
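A minimal sketch of the first two fixes, reusing the myseq model from the code above (the concrete lr and weight_decay values are illustrative starting points, not tuned results):

optimizer = optim.SGD(myseq.parameters(),
                      lr=1e-4,            # lower than the 0.001 used in the run below
                      momentum=0.9,
                      weight_decay=1e-4)  # L2 regularization via weight decay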
What to do when optimization goes the wrong way, as in the log below?
Files already downloaded and verified
MySeq(
  (model1): Sequential(
    (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Flatten(start_dim=1, end_dim=-1)
    (7): Linear(in_features=1024, out_features=64, bias=True)
    (8): Linear(in_features=64, out_features=10, bias=True)
  )
)
tensor(18622.4551, grad_fn=<AddBackward0>)
tensor(16121.4092, grad_fn=<AddBackward0>)
tensor(15442.6416, grad_fn=<AddBackward0>)
tensor(16387.4531, grad_fn=<AddBackward0>)
tensor(18351.6152, grad_fn=<AddBackward0>)
tensor(20915.9785, grad_fn=<AddBackward0>)
tensor(23081.5254, grad_fn=<AddBackward0>)
tensor(24841.8359, grad_fn=<AddBackward0>)
tensor(25401.1602, grad_fn=<AddBackward0>)
tensor(26187.4961, grad_fn=<AddBackward0>)
tensor(28283.8633, grad_fn=<AddBackward0>)
tensor(30156.9316, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)
tensor(nan, grad_fn=<AddBackward0>)