Implementing Linear Regression from Scratch
0. Background
We construct a simple artificial training dataset so that we can directly compare the learned parameters with the true model parameters.
Let the training set contain 1000 samples, each with 2 inputs (features). Given randomly generated batch features $X \in \mathbb{R}^{1000 \times 2}$, we use the true weights $w = [2, -3.4]^\top$ and bias $b = 4.2$ of a linear regression model, plus a random noise term $\epsilon$, to generate the labels: $y = Xw + b + \epsilon$.
# Required packages
import numpy as np
import torch
import random
from d2l import torch as d2l
from IPython import display
from matplotlib import pyplot as plt
1. Generate the dataset (to be fitted)
Generate the data to be fitted with Python:
num_input = 2
num_example = 1000
w_true = [2, -3.4]
b_true = 4.2

# Features drawn from a standard normal distribution
features = torch.randn(num_example, num_input)
print('features.shape = ' + str(features.shape))

# Labels from the true linear model, plus Gaussian noise with std 0.01
labels = w_true[0] * features[:, 0] + w_true[1] * features[:, 1] + b_true
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float32)
print(features[0], labels[0])
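Since matplotlib was imported above, a quick scatter plot can confirm the linear relationship in the generated data; a minimal sketch (not in the original) plotting the second feature against the labels:

# Scatter plot of the second feature vs. the labels; the linear trend
# (slope close to -3.4) should be visible through the noise
plt.scatter(features[:, 1].numpy(), labels.numpy(), s=1)
plt.xlabel('features[:, 1]')
plt.ylabel('labels')
plt.show()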
2. Mini-batch data iteration
def data_iter(batch_size, features, labels):
    num_example = len(labels)
    indices = list(range(num_example))
    random.shuffle(indices)  # read the samples in random order
    for i in range(0, num_example, batch_size):
        # The last batch may be smaller than batch_size
        j = torch.tensor(indices[i:min(i + batch_size, num_example)])
        yield features.index_select(0, j), labels.index_select(0, j)
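As a quick sanity check (an added sketch, not part of the original), we can pull a single mini-batch and inspect its shape:

# Fetch one mini-batch to verify the iterator works as expected
for X, y in data_iter(10, features, labels):
    print(X.shape, y.shape)  # torch.Size([10, 2]) torch.Size([10])
    break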
3. Building and training the model
3.1 Define the model:
def linreg(X, w, b):
    # Linear regression: (n, num_input) @ (num_input, 1) + b -> (n, 1)
    return torch.mm(X, w) + b
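Note that torch.mm requires w to be a column vector of shape (num_input, 1); a small shape check (the _demo names are hypothetical, for illustration only):

# linreg maps a (n, num_input) batch to an (n, 1) column of predictions
X_demo = torch.randn(4, num_input)
w_demo = torch.zeros(num_input, 1)
b_demo = torch.zeros(1)
print(linreg(X_demo, w_demo, b_demo).shape)  # torch.Size([4, 1])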
3.2 Define the loss function
def square_loss(y_hat, y):
    # Squared loss; reshape y to match y_hat's shape before subtracting
    return (y_hat - y.view(y_hat.size())) ** 2 / 2
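The y.view(y_hat.size()) reshape matters: the labels have shape (batch,) while the model output has shape (batch, 1), and subtracting the two directly would broadcast to a (batch, batch) matrix. A small demonstration of the pitfall:

# Without the reshape, broadcasting silently produces the wrong shape
y_hat = torch.zeros(3, 1)
y = torch.ones(3)
print((y_hat - y).shape)                     # torch.Size([3, 3]) -- wrong
print((y_hat - y.view(y_hat.size())).shape)  # torch.Size([3, 1]) -- correct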
3.3 Define the optimization algorithm
def sgd(params, lr, batch_size):
    # Mini-batch SGD: update each parameter in place using its gradient.
    # The loss is summed over the batch, so divide by batch_size here.
    for param in params:
        param.data -= lr * param.grad / batch_size
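Writing to param.data bypasses autograd so the update itself is not tracked; an equivalent and arguably more modern idiom (an alternative sketch, not the original code) uses torch.no_grad():

# Equivalent mini-batch SGD update under torch.no_grad()
def sgd_no_grad(params, lr, batch_size):
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size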
3.4 Train the model
# Hyperparameters
lr = 0.03
num_epochs = 5
net = linreg
loss = square_loss
batch_size = 10

# Initialize the learnable parameters: small random weights, zero bias;
# requires_grad=True enables autograd for the backward pass below
w = torch.tensor(np.random.normal(0, 0.01, (num_input, 1)), dtype=torch.float32, requires_grad=True)
b = torch.zeros(1, dtype=torch.float32, requires_grad=True)

for epoch in range(num_epochs):
    for X, y in data_iter(batch_size=batch_size, features=features, labels=labels):
        l = loss(net(X, w, b), y).sum()
        l.backward()
        sgd([w, b], lr, batch_size=batch_size)
        # Zero the gradients to avoid accumulation across batches
        w.grad.data.zero_()
        b.grad.data.zero_()
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().item()))
epoch 1, loss 0.032550
epoch 2, loss 0.000133
epoch 3, loss 0.000053
epoch 4, loss 0.000053
epoch 5, loss 0.000053
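As promised in the background section, the learned parameters can now be compared with the true ones (a small added check):

# The learned parameters should be close to the generating ones
print(w_true, '\n', w)  # true weights: [2, -3.4]
print(b_true, '\n', b)  # true bias: 4.2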
Implementing the linear model with PyTorch's built-in modules
- The data and initialization are the same as above
- Define the model
import torch
from torch import nn
class LinearNet(nn.Module):
    def __init__(self, n_feature):
        # Call the parent class constructor
        super(LinearNet, self).__init__()
        # Linear(number of input features, number of output features, bias=True by default)
        self.linear = nn.Linear(n_feature, 1)

    def forward(self, x):
        y = self.linear(x)
        return y
# Print the model structure:
net = LinearNet(num_input)
print(net)
# LinearNet(
#   (linear): Linear(in_features=2, out_features=1, bias=True)
# )
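As an aside (not part of the original code), the same one-layer model can also be written with nn.Sequential, which is often more convenient for simple stacks of layers:

# Equivalent model built with nn.Sequential
net_seq = nn.Sequential(nn.Linear(num_input, 1))
print(net_seq)
# Sequential(
#   (0): Linear(in_features=2, out_features=1, bias=True)
# )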
- Initialize the model parameters
from torch.nn import init
init.normal_(net.linear.weight, mean=0, std=0.1)
init.constant_(net.linear.bias, val=0)
- Define the loss function
loss = nn.MSELoss()
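Note that nn.MSELoss averages over all elements by default (reduction='mean'), so unlike the hand-written sgd above there is no need to divide by the batch size manually; it also omits the 1/2 factor of square_loss. A tiny demonstration:

# MSELoss with the default reduction='mean' returns the averaged squared error
pred = torch.tensor([1.0, 2.0])
target = torch.tensor([0.0, 0.0])
print(nn.MSELoss()(pred, target))                 # tensor(2.5000) = (1 + 4) / 2
print(nn.MSELoss(reduction='sum')(pred, target))  # tensor(5.)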
- Define the optimization algorithm
import torch.optim as optim
optimizer = optim.SGD(net.parameters(),lr = 0.03)
print(optimizer)
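As an optional aside (not part of this training run), the learning rate can also be adjusted on the fly through the optimizer's param_groups:

# Example only: decay the learning rate of every parameter group to 10%
for param_group in optimizer.param_groups:
    param_group['lr'] *= 0.1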
- Train the model:
num_epochs = 3
for epoch in range(1, num_epochs + 1):
    for X, y in data_iter(batch_size=batch_size, features=features, labels=labels):
        output = net(X)
        # Reshape y to a column vector to match the output's shape
        l = loss(output, y.view(-1, 1))
        optimizer.zero_grad()  # zero the gradients before backward
        l.backward()
        optimizer.step()
    print('epoch %d, loss: %f' % (epoch, l.item()))
epoch 1, loss: 0.000159
epoch 2, loss: 0.000089
epoch 3, loss: 0.000066
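Finally, the fitted parameters can again be checked against the true values:

# Compare the learned weight and bias with the generating parameters
print(w_true, net.linear.weight.data)  # true weights: [2, -3.4]
print(b_true, net.linear.bias.data)    # true bias: 4.2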