
Task 3.3 Course experiment requirements

(1) Manually implement a feedforward neural network to solve the regression, binary classification, and multi-class classification tasks described above.
    - Analyze the results in terms of training time, prediction accuracy, loss curves, etc. (preferably with charts).
(2) Implement a feedforward neural network with torch.nn to solve the same regression, binary classification, and multi-class classification tasks.
    - Analyze the results in terms of training time, prediction accuracy, loss curves, etc. (preferably with charts).
(3) Use at least three different activation functions in the multi-class task.
    - Run comparison experiments with the different activation functions and analyze the results.
(4) In the multi-class task, evaluate the effect of the number of hidden layers and the number of hidden units on the results.
    - Run comparison experiments with different numbers of hidden layers and hidden units and analyze the results.

1. Manually implementing a feedforward neural network for the regression, binary classification, and multi-class classification tasks

1.1 Task description

Analyze the experimental results and plot the loss curves on the training and test sets.

1.2 Approach and code

# Import modules
import torch
import torch.nn as nn
import numpy as np
import torchvision
from torchvision import transforms
import time
import matplotlib.pyplot as plt

# Plotting helper: draw the train/test loss curves over epochs
def draw_loss(train_loss, test_loss):
    x = np.linspace(0, len(train_loss), len(train_loss))
    plt.plot(x, train_loss, label="Train Loss", linewidth=1.5)
    plt.plot(x, test_loss, label="Test Loss", linewidth=1.5)
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()

# Evaluation helper: average accuracy and loss over a data iterator
def evaluate_accuracy(data_iter, model, loss_func):
    acc_sum, test_l_sum, n, c = 0.0, 0.0, 0, 0
    for X, y in data_iter:
        result = model.forward(X)
        acc_sum += (result.argmax(dim=1) == y).float().sum().item()
        test_l_sum += loss_func(result, y).item()
        n += y.shape[0]
        c += 1
    return acc_sum / n, test_l_sum / c
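The evaluation helper above is defined for reuse but is not actually called later in this write-up; a hypothetical usage on the multi-class model and test loader defined further below would look like this:

# Hypothetical usage sketch: average accuracy and loss on the multi-class test split
test_acc, test_loss = evaluate_accuracy(testdataloader3, model3, my_cross_entropy_loss)
print('acc: %.4f | loss: %.4f' % (test_acc, test_loss))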
# Regression task
n_train = 7000
n_test = 3000
num_inputs = 500
true_w, true_b = torch.ones(num_inputs, 1) * 0.0056, 0.028

# Generate the dataset
features = torch.randn((n_train + n_test, num_inputs))
labels = torch.matmul(features, true_w) + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)

# Split into training and test sets
train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]

# Wrap in datasets and data loaders
batch_size1 = 128
traindataset1 = torch.utils.data.TensorDataset(train_features, train_labels)
testdataset1 = torch.utils.data.TensorDataset(test_features, test_labels)
traindataloader1 = torch.utils.data.DataLoader(dataset=traindataset1, batch_size=batch_size1, shuffle=True)
testdataloader1 = torch.utils.data.DataLoader(dataset=testdataset1, batch_size=batch_size1, shuffle=False)
# Loss function: numerically stable cross-entropy (log-softmax with the max-shift trick)
def my_cross_entropy_loss(y_hat, labels):
    def log_softmax(y_hat):
        max_v = torch.max(y_hat, dim=1).values.unsqueeze(dim=1)
        return y_hat - max_v - torch.log(torch.exp(y_hat - max_v).sum(dim=1).unsqueeze(dim=1))
    return (-log_softmax(y_hat))[range(len(y_hat)), labels].mean()

# Optimization algorithm: plain SGD parameter update
def SGD(params, lr):
    for param in params:
        param.data -= lr * param.grad

# Mean squared error for the regression task
def mse(pred, true):
    return torch.sum((true - pred) ** 2) / len(pred)
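As a quick sanity check (not part of the original write-up), the hand-written loss can be compared against PyTorch's built-in torch.nn.functional.cross_entropy on random logits; the two should agree to within floating-point tolerance:

# Sanity-check sketch: compare the hand-written loss with the built-in one
import torch.nn.functional as F

logits = torch.randn(4, 10)           # 4 samples, 10 classes (arbitrary values)
targets = torch.randint(0, 10, (4,))  # random integer class labels
print(torch.allclose(my_cross_entropy_loss(logits, targets),
                     F.cross_entropy(logits, targets), atol=1e-6))  # expected: True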
# Manually built two-layer network for the regression task
class Net1():
    def __init__(self):
        # Sizes of the input, hidden and output layers
        num_inputs, num_hiddens, num_outputs = 500, 256, 1
        w_1 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_inputs)), dtype=torch.float32, requires_grad=True)
        b_1 = torch.zeros(num_hiddens, dtype=torch.float32, requires_grad=True)
        w_2 = torch.tensor(np.random.normal(0, 0.01, (num_outputs, num_hiddens)), dtype=torch.float32, requires_grad=True)
        b_2 = torch.zeros(num_outputs, dtype=torch.float32, requires_grad=True)
        self.params = [w_1, b_1, w_2, b_2]
        # Model structure
        self.input_layer = lambda x: x.view(x.shape[0], -1)
        self.hidden_layer = lambda x: self.my_relu(torch.matmul(x, w_1.t()) + b_1)
        self.output_layer = lambda x: torch.matmul(x, w_2.t()) + b_2

    def my_relu(self, x):
        return torch.max(input=x, other=torch.tensor(0.0))

    # Forward pass
    def forward(self, x):
        flatten_input = self.input_layer(x)
        hidden_output = self.hidden_layer(flatten_input)
        final_output = self.output_layer(hidden_output)
        return final_output
# Training
model1 = Net1()
lr = 0.01  # learning rate
batchsize = 128
epochs = 40  # number of training epochs
train_all_loss1 = []  # training-set loss per epoch
test_all_loss1 = []   # test-set loss per epoch
begintime1 = time.time()
for epoch in range(epochs):
    train_l = 0
    for data, labels in traindataloader1:
        pred = model1.forward(data)
        train_each_loss = mse(pred.view(-1, 1), labels.view(-1, 1))  # loss for this batch
        train_each_loss.backward()  # backpropagation
        SGD(model1.params, lr)      # mini-batch SGD update
        train_l += train_each_loss.item()
        # zero the gradients
        for param in model1.params:
            param.grad.data.zero_()
    train_all_loss1.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_loss = 0
        for data, labels in testdataloader1:  # fixed: evaluate on the test loader, not the training loader
            pred = model1.forward(data)
            test_each_loss = mse(pred, labels)
            test_loss += test_each_loss.item()
        test_all_loss1.append(test_loss)
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f' % (epoch + 1, train_all_loss1[-1], test_all_loss1[-1]))
endtime1 = time.time()
print("Manual feedforward network - regression: %d epochs, total time: %.3fs" % (epochs, endtime1 - begintime1))
# Binary classification task
data_num2, train_num2, test_num2 = 10000, 7000, 3000
# Class A: features drawn from a Gaussian with mean 0.5 and std 1
featuresA = torch.normal(mean=0.5, std=1, size=(data_num2, 200), dtype=torch.float32)
labelsA = torch.ones(data_num2)
# Class B: features drawn from a Gaussian with mean -0.5 and std 1
featuresB = torch.normal(mean=-0.5, std=1, size=(data_num2, 200), dtype=torch.float32)
labelsB = torch.zeros(data_num2)

# Build the training set
train_features2 = torch.cat((featuresA[:train_num2], featuresB[:train_num2]), dim=0)
train_labels2 = torch.cat((labelsA[:train_num2], labelsB[:train_num2]), dim=-1)
# Build the test set
test_features2 = torch.cat((featuresA[train_num2:], featuresB[train_num2:]), dim=0)
test_labels2 = torch.cat((labelsA[train_num2:], labelsB[train_num2:]), dim=-1)  # fixed: originally concatenated labelsB twice
batch_size = 128
# Build the training and testing dataset
traindataset2 = torch.utils.data.TensorDataset(train_features2, train_labels2)
testdataset2 = torch.utils.data.TensorDataset(test_features2, test_labels2)
traindataloader2 = torch.utils.data.DataLoader(dataset=traindataset2,batch_size=batch_size,shuffle=True)
testdataloader2 = torch.utils.data.DataLoader(dataset=testdataset2,batch_size=batch_size,shuffle=True)
from torch.nn.functional import binary_cross_entropy
from torch.nn import CrossEntropyLoss
# Manually built two-layer network for the binary classification task
class Net2():
    def __init__(self):
        # Sizes of the input, hidden and output layers
        num_inputs, num_hiddens, num_outputs = 200, 256, 1
        w_1 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_inputs)), dtype=torch.float32, requires_grad=True)
        b_1 = torch.zeros(num_hiddens, dtype=torch.float32, requires_grad=True)
        w_2 = torch.tensor(np.random.normal(0, 0.01, (num_outputs, num_hiddens)), dtype=torch.float32, requires_grad=True)
        b_2 = torch.zeros(num_outputs, dtype=torch.float32, requires_grad=True)
        self.params = [w_1, b_1, w_2, b_2]
        # Model structure
        self.input_layer = lambda x: x.view(x.shape[0], -1)
        self.hidden_layer = lambda x: self.my_relu(torch.matmul(x, w_1.t()) + b_1)
        self.output_layer = lambda x: torch.matmul(x, w_2.t()) + b_2
        self.fn_logistic = self.logistic

    def my_relu(self, x):
        return torch.max(input=x, other=torch.tensor(0.0))

    def logistic(self, x):  # sigmoid activation for the output
        return 1.0 / (1.0 + torch.exp(-x))

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.hidden_layer(x)  # fixed: hidden_layer already applies ReLU, so no second ReLU here
        x = self.fn_logistic(self.output_layer(x))
        return x

# Training
model2 = Net2()
lr = 0.005  # learning rate
epochs = 40  # number of training epochs
train_all_loss2 = []  # training-set loss per epoch
test_all_loss2 = []   # test-set loss per epoch
train_Acc12, test_Acc12 = [], []
begintime2 = time.time()
for epoch in range(epochs):
    train_l, train_epoch_count = 0, 0
    for data, labels in traindataloader2:
        pred = model2.forward(data)
        train_each_loss = binary_cross_entropy(pred.view(-1), labels.view(-1))  # loss for this batch
        train_l += train_each_loss.item()
        train_each_loss.backward()  # backpropagation
        SGD(model2.params, lr)      # SGD update
        # zero the gradients
        for param in model2.params:
            param.grad.data.zero_()
        # fixed: threshold the sigmoid output at 0.5; argmax over a single output column is always 0
        train_epoch_count += ((pred.view(-1) > 0.5).float() == labels.view(-1)).sum()
    train_Acc12.append((train_epoch_count / len(traindataset2)).item())
    train_all_loss2.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_l, test_epoch_count = 0, 0
        for data, labels in testdataloader2:
            pred = model2.forward(data)
            test_each_loss = binary_cross_entropy(pred.view(-1), labels.view(-1))
            test_l += test_each_loss.item()
            # fixed: accumulate into test_epoch_count (was train_epoch_count) with the same 0.5 threshold
            test_epoch_count += ((pred.view(-1) > 0.5).float() == labels.view(-1)).sum()
        test_Acc12.append((test_epoch_count / len(testdataset2)).item())
        test_all_loss2.append(test_l)
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc:%.5f | test acc:%.5f'
              % (epoch + 1, train_all_loss2[-1], test_all_loss2[-1], train_Acc12[-1], test_Acc12[-1]))
endtime2 = time.time()
print("Manual feedforward network - binary classification: %d epochs, total time: %.3fs" % (epochs, endtime2 - begintime2))
# Multi-class classification task: Fashion-MNIST (10 classes)
batch_size = 256
traindataset3 = torchvision.datasets.FashionMNIST(root='./FashionMNIST', train=True, download=True, transform=transforms.ToTensor())
testdataset3 = torchvision.datasets.FashionMNIST(root='./FashionMNIST', train=False, download=True, transform=transforms.ToTensor())
traindataloader3 = torch.utils.data.DataLoader(traindataset3, batch_size=batch_size, shuffle=True)
testdataloader3 = torch.utils.data.DataLoader(testdataset3, batch_size=batch_size, shuffle=False)
# Manually built feedforward network for the multi-class task
class MyNet3():
    def __init__(self):
        # Sizes of the input, hidden and output layers (10-class problem)
        num_inputs, num_hiddens, num_outputs = 28 * 28, 256, 10
        w_1 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_inputs)), dtype=torch.float32, requires_grad=True)
        b_1 = torch.zeros(num_hiddens, dtype=torch.float32, requires_grad=True)
        w_2 = torch.tensor(np.random.normal(0, 0.01, (num_outputs, num_hiddens)), dtype=torch.float32, requires_grad=True)
        b_2 = torch.zeros(num_outputs, dtype=torch.float32, requires_grad=True)
        self.params = [w_1, b_1, w_2, b_2]
        # Model structure
        self.input_layer = lambda x: x.view(x.shape[0], -1)
        self.hidden_layer = lambda x: self.my_relu(torch.matmul(x, w_1.t()) + b_1)
        self.output_layer = lambda x: torch.matmul(x, w_2.t()) + b_2

    def my_relu(self, x):
        return torch.max(input=x, other=torch.tensor(0.0))

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.hidden_layer(x)
        x = self.output_layer(x)
        return x

# SGD variant that averages the gradient over the batch size
def mySGD(params, lr, batchsize):
    for param in params:
        param.data -= lr * param.grad / batchsize

# Training
model3 = MyNet3()
criterion = my_cross_entropy_loss  # loss function
lr = 0.15  # learning rate
epochs = 40  # number of training epochs
train_all_loss3 = []  # training-set loss per epoch
test_all_loss3 = []   # test-set loss per epoch
train_ACC13, test_ACC13 = [], []  # accuracy per epoch
begintime3 = time.time()
for epoch in range(epochs):
    train_l, train_acc_num = 0, 0
    for data, labels in traindataloader3:
        pred = model3.forward(data)
        train_each_loss = criterion(pred, labels)  # loss for this batch
        train_l += train_each_loss.item()
        train_each_loss.backward()  # backpropagation
        mySGD(model3.params, lr, 128)  # mini-batch SGD update
        train_acc_num += (pred.argmax(dim=1) == labels).sum().item()
        # zero the gradients
        for param in model3.params:
            param.grad.data.zero_()
    train_all_loss3.append(train_l)          # record the epoch loss
    train_ACC13.append(train_acc_num / len(traindataset3))  # record the epoch accuracy
    with torch.no_grad():
        test_l, test_acc_num = 0, 0
        for data, labels in testdataloader3:
            pred = model3.forward(data)
            test_each_loss = criterion(pred, labels)
            test_l += test_each_loss.item()
            test_acc_num += (pred.argmax(dim=1) == labels).sum().item()
        test_all_loss3.append(test_l)
        test_ACC13.append(test_acc_num / len(testdataset3))
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc: %.2f | test acc: %.2f'
              % (epoch + 1, train_l, test_l, train_ACC13[-1], test_ACC13[-1]))
endtime3 = time.time()
print("Manual feedforward network - multi-class classification: %d epochs, total time: %.3fs" % (epochs, endtime3 - begintime3))
# Result analysis: plotting helper
def picture(name, trainl, testl, type='Loss'):
    plt.rcParams["font.sans-serif"] = ["SimHei"]  # font setting kept from the original
    plt.rcParams["axes.unicode_minus"] = False    # render the minus sign correctly
    plt.title(name)
    plt.plot(trainl, c='g', label='Train ' + type)
    plt.plot(testl, c='r', label='Test ' + type)
    plt.xlabel('Epoch')
    plt.ylabel(type)  # fixed: was hardcoded to 'Loss' even for accuracy plots
    plt.legend()
    plt.grid(True)

plt.figure(figsize=(12, 3))
plt.title('Loss')
plt.subplot(131)
picture('Feedforward network - regression - loss', train_all_loss1, test_all_loss1)
plt.subplot(132)
picture('Feedforward network - binary classification - loss', train_all_loss2, test_all_loss2)
plt.subplot(133)
picture('Feedforward network - multi-class - loss', train_all_loss3, test_all_loss3)
plt.show()

# Accuracy plots
plt.figure(figsize=(8, 3))
plt.subplot(121)
picture('Feedforward network - binary classification - accuracy', train_Acc12, test_Acc12, type='ACC')
plt.subplot(122)
picture('Feedforward network - multi-class - accuracy', train_ACC13, test_ACC13, type='ACC')
plt.show()

2. Implementing a feedforward neural network with torch.nn for the regression, binary classification, and multi-class classification tasks

2.1 Task description

Analyze the experimental results in terms of training time, prediction accuracy, loss curves, etc. (preferably with charts).

2.2 Approach and code

from torch.nn import MSELoss
from torch.optim import SGD
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Regression task
class MyNet21(nn.Module):
    def __init__(self):
        super(MyNet21, self).__init__()
        # Sizes of the input, hidden and output layers
        num_inputs, num_hiddens, num_outputs = 500, 256, 1
        # Model structure
        self.input_layer = nn.Flatten()
        self.hidden_layer = nn.Linear(num_inputs, num_hiddens)
        self.output_layer = nn.Linear(num_hiddens, num_outputs)
        self.relu = nn.ReLU()

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.relu(self.hidden_layer(x))
        x = self.output_layer(x)
        return x

# Training
model21 = MyNet21()
model21 = model21.to(device)
print(model21)
criterion = MSELoss()  # loss function
criterion = criterion.to(device)
optimizer = SGD(model21.parameters(), lr=0.1)  # optimizer
epochs = 40  # number of training epochs
train_all_loss21 = []  # training-set loss per epoch
test_all_loss21 = []   # test-set loss per epoch
begintime21 = time.time()
for epoch in range(epochs):
    train_l = 0
    for data, labels in traindataloader1:
        data, labels = data.to(device=device), labels.to(device)
        pred = model21(data)
        train_each_loss = criterion(pred.view(-1, 1), labels.view(-1, 1))  # loss for this batch
        optimizer.zero_grad()       # zero the gradients
        train_each_loss.backward()  # backpropagation
        optimizer.step()            # parameter update
        train_l += train_each_loss.item()
    train_all_loss21.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_loss = 0
        for data, labels in testdataloader1:
            data, labels = data.to(device), labels.to(device)
            pred = model21(data)
            test_each_loss = criterion(pred, labels)
            test_loss += test_each_loss.item()
        test_all_loss21.append(test_loss)
    if epoch == 0 or (epoch + 1) % 10 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f' % (epoch + 1, train_all_loss21[-1], test_all_loss21[-1]))
endtime21 = time.time()
print("torch.nn feedforward network - regression: %d epochs, total time: %.3fs" % (epochs, endtime21 - begintime21))
# Binary classification task
class MyNet22(nn.Module):
    def __init__(self):
        super(MyNet22, self).__init__()
        # Sizes of the input, hidden and output layers
        num_inputs, num_hiddens, num_outputs = 200, 256, 1
        # Model structure
        self.input_layer = nn.Flatten()
        self.hidden_layer = nn.Linear(num_inputs, num_hiddens)
        self.output_layer = nn.Linear(num_hiddens, num_outputs)
        self.relu = nn.ReLU()

    def logistic(self, x):  # sigmoid activation for the output
        return 1.0 / (1.0 + torch.exp(-x))

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.relu(self.hidden_layer(x))
        x = self.logistic(self.output_layer(x))
        return x

# Training
model22 = MyNet22()
model22 = model22.to(device)
print(model22)
optimizer = SGD(model22.parameters(), lr=0.001)  # optimizer
epochs = 40  # number of training epochs
train_all_loss22 = []  # training-set loss per epoch
test_all_loss22 = []   # test-set loss per epoch
train_ACC22, test_ACC22 = [], []
begintime22 = time.time()
for epoch in range(epochs):
    # per-epoch training loss, training-set correct count, test-set correct count
    train_l, train_epoch_count, test_epoch_count = 0, 0, 0
    for data, labels in traindataloader2:
        data, labels = data.to(device), labels.to(device)
        pred = model22(data)
        train_each_loss = binary_cross_entropy(pred.view(-1), labels.view(-1))  # loss for this batch
        optimizer.zero_grad()       # zero the gradients
        train_each_loss.backward()  # backpropagation
        optimizer.step()            # parameter update
        train_l += train_each_loss.item()
        pred = torch.tensor(np.where(pred.cpu() > 0.5, 1, 0))  # predict 1 when the output exceeds 0.5, else 0
        each_count = (pred.view(-1) == labels.cpu()).sum()     # correct predictions in this batch
        train_epoch_count += each_count                        # accumulate over the epoch
    train_ACC22.append(train_epoch_count / len(traindataset2))
    train_all_loss22.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_loss, each_count = 0, 0
        for data, labels in testdataloader2:
            data, labels = data.to(device), labels.to(device)
            pred = model22(data)
            test_each_loss = binary_cross_entropy(pred.view(-1), labels)
            test_loss += test_each_loss.item()
            # .cpu() moves the tensor back to the CPU for the comparison
            pred = torch.tensor(np.where(pred.cpu() > 0.5, 1, 0))
            each_count = (pred.view(-1) == labels.cpu().view(-1)).sum()
            test_epoch_count += each_count
        test_all_loss22.append(test_loss)
        test_ACC22.append(test_epoch_count / len(testdataset2))
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f test loss:%.5f | train acc:%.5f | test acc:%.5f' %
              (epoch + 1, train_all_loss22[-1], test_all_loss22[-1], train_ACC22[-1], test_ACC22[-1]))
endtime22 = time.time()
print("torch.nn feedforward network - binary classification: %d epochs, total time: %.3fs" % (epochs, endtime22 - begintime22))
from torch.nn import CrossEntropyLoss
# Configurable feedforward network (reused by experiments 3 and 4)
class MyNet23(nn.Module):
    def __init__(self, num_hiddenlayer=1, num_inputs=28*28, num_hiddens=[256], num_outs=10, act='relu'):
        super(MyNet23, self).__init__()
        # Sizes of the input, hidden and output layers (10-class problem)
        self.num_inputs, self.num_hiddens, self.num_outputs = num_inputs, num_hiddens, num_outs
        # Model structure
        self.input_layer = nn.Flatten()
        # a single hidden layer
        if num_hiddenlayer == 1:
            self.hidden_layers = nn.Linear(self.num_inputs, self.num_hiddens[-1])
        else:  # multiple hidden layers
            self.hidden_layers = nn.Sequential()
            self.hidden_layers.add_module("hidden_layer1", nn.Linear(self.num_inputs, self.num_hiddens[0]))
            for i in range(0, num_hiddenlayer - 1):
                name = str('hidden_layer' + str(i + 2))
                self.hidden_layers.add_module(name, nn.Linear(self.num_hiddens[i], self.num_hiddens[i + 1]))
        self.output_layer = nn.Linear(self.num_hiddens[-1], self.num_outputs)
        # select the activation function
        if act == 'relu':
            self.act = nn.ReLU()
        elif act == 'sigmoid':
            self.act = nn.Sigmoid()
        elif act == 'tanh':
            self.act = nn.Tanh()
        elif act == 'elu':
            self.act = nn.ELU()
        print(f'Activation function used in this run: {act}')

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.act(self.hidden_layers(x))  # note: the activation is applied once, after the stacked hidden layers
        x = self.output_layer(x)
        return x

# Training
# Use the default arguments: num_inputs=28*28, num_hiddens=[256], num_outs=10, act='relu'
model23 = MyNet23()
model23 = model23.to(device)

# Wrap the training loop in a function so experiments 3 and 4 can reuse it
def train_and_test(model=model23):
    MyModel = model
    print(MyModel)
    optimizer = SGD(MyModel.parameters(), lr=0.01)  # optimizer
    epochs = 40  # number of training epochs
    criterion = CrossEntropyLoss()  # loss function
    train_all_loss23 = []  # training-set loss per epoch
    test_all_loss23 = []   # test-set loss per epoch
    train_ACC23, test_ACC23 = [], []
    begintime23 = time.time()
    for epoch in range(epochs):
        train_l, train_epoch_count, test_epoch_count = 0, 0, 0
        for data, labels in traindataloader3:
            data, labels = data.to(device), labels.to(device)
            pred = MyModel(data)
            train_each_loss = criterion(pred, labels.view(-1))  # loss for this batch
            optimizer.zero_grad()       # zero the gradients
            train_each_loss.backward()  # backpropagation
            optimizer.step()            # parameter update
            train_l += train_each_loss.item()
            train_epoch_count += (pred.argmax(dim=1) == labels).sum()
        train_ACC23.append(train_epoch_count.cpu() / len(traindataset3))
        train_all_loss23.append(train_l)  # record the epoch loss
        with torch.no_grad():
            test_loss, test_epoch_count = 0, 0
            for data, labels in testdataloader3:
                data, labels = data.to(device), labels.to(device)
                pred = MyModel(data)
                test_each_loss = criterion(pred, labels)
                test_loss += test_each_loss.item()
                test_epoch_count += (pred.argmax(dim=1) == labels).sum()
            test_all_loss23.append(test_loss)
            test_ACC23.append(test_epoch_count.cpu() / len(testdataset3))
        if epoch == 0 or (epoch + 1) % 4 == 0:
            print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc:%.5f test acc:%.5f' %
                  (epoch + 1, train_all_loss23[-1], test_all_loss23[-1], train_ACC23[-1], test_ACC23[-1]))
    endtime23 = time.time()
    print("torch.nn feedforward network - multi-class classification: %d epochs, total time: %.3fs" % (epochs, endtime23 - begintime23))
    # return the loss and accuracy histories on the training and test sets
    return train_all_loss23, test_all_loss23, train_ACC23, test_ACC23

train_all_loss23, test_all_loss23, train_ACC23, test_ACC23 = train_and_test(model=model23)

3. Using at least three different activation functions in the multi-class task

3.1 Task description

Run comparison experiments with different activation functions and analyze the results.

3.2 Approach and code
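For reference before the comparison runs, the four activations involved (ReLU, Tanh, Sigmoid, ELU) can be inspected directly on a few sample points. A minimal sketch using the same torch.nn modules the model selects from (the input values are arbitrary):

# Sketch: evaluate the four candidate activations on sample inputs
import torch
import torch.nn as nn

acts = {'relu': nn.ReLU(), 'tanh': nn.Tanh(), 'sigmoid': nn.Sigmoid(), 'elu': nn.ELU()}
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])  # arbitrary sample points
for name, fn in acts.items():
    print(name, fn(x))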

# Reuse the multi-class model from experiment 2 with Tanh as the activation function
model31 = MyNet23(1, 28*28, [256], 10, act='tanh')
model31 = model31.to(device)
train_all_loss31, test_all_loss31, train_ACC31, test_ACC31 = train_and_test(model=model31)

# Reuse the multi-class model from experiment 2 with Sigmoid as the activation function
model32 = MyNet23(1, 28*28, [256], 10, act='sigmoid')
model32 = model32.to(device)
train_all_loss32, test_all_loss32, train_ACC32, test_ACC32 = train_and_test(model=model32)

# Reuse the multi-class model from experiment 2 with ELU as the activation function
model33 = MyNet23(1, 28*28, [256], 10, act='elu')
model33 = model33.to(device)
train_all_loss33, test_all_loss33, train_ACC33, test_ACC33 = train_and_test(model=model33)
# Plotting helper for the four-way comparisons (activation functions or hidden-layer settings)
def Plot3(datalist, title='1', ylabel='Loss', flag='act'):
    plt.rcParams["font.sans-serif"] = ["SimHei"]  # font setting kept from the original
    plt.rcParams["axes.unicode_minus"] = False    # render the minus sign correctly
    plt.title(title)
    plt.xlabel('Epoch')
    plt.ylabel(ylabel)
    plt.plot(datalist[0], label='Tanh' if flag == 'act' else '[128]')
    plt.plot(datalist[1], label='Sigmoid' if flag == 'act' else '[512 256]')
    plt.plot(datalist[2], label='ELU' if flag == 'act' else '[512 256 128]')  # fixed: label matched to the configuration actually trained
    plt.plot(datalist[3], label='ReLU' if flag == 'act' else '[256]')
    plt.legend()
    plt.grid(True)
plt.figure(figsize=(16,3))
plt.subplot(141)
Plot3([train_all_loss31,train_all_loss32,train_all_loss33,train_all_loss23],title='Train_Loss')
plt.subplot(142)
Plot3([test_all_loss31,test_all_loss32,test_all_loss33,test_all_loss23],title='Test_Loss')
plt.subplot(143)
Plot3([train_ACC31,train_ACC32,train_ACC33,train_ACC23],title='Train_ACC')
plt.subplot(144)
Plot3([test_ACC31,test_ACC32,test_ACC33,test_ACC23],title='Test_ACC')
plt.show()

4. Evaluating the effect of the number of hidden layers and hidden units in the multi-class task

4.1 Task description

Run comparison experiments with different numbers of hidden layers and hidden units and analyze the results.

4.2 Approach and code

# Reuse the multi-class model from experiment 2: one hidden layer with [128] units
model41 = MyNet23(1, 28*28, [128], 10, act='relu')
model41 = model41.to(device)
train_all_loss41, test_all_loss41, train_ACC41, test_ACC41 = train_and_test(model=model41)

# Reuse the multi-class model from experiment 2: two hidden layers with [512, 256] units
model42 = MyNet23(2, 28*28, [512, 256], 10, act='relu')
model42 = model42.to(device)
train_all_loss42, test_all_loss42, train_ACC42, test_ACC42 = train_and_test(model=model42)

# Reuse the multi-class model from experiment 2: three hidden layers with [512, 256, 128] units
model43 = MyNet23(3, 28*28, [512, 256, 128], 10, act='relu')
model43 = model43.to(device)
train_all_loss43, test_all_loss43, train_ACC43, test_ACC43 = train_and_test(model=model43)
plt.figure(figsize=(16,3))
plt.subplot(141)
Plot3([train_all_loss41,train_all_loss42,train_all_loss43,train_all_loss23],title='Train_Loss',flag='hidden')
plt.subplot(142)
Plot3([test_all_loss41,test_all_loss42,test_all_loss43,test_all_loss23],title='Test_Loss',flag='hidden')
plt.subplot(143)
Plot3([train_ACC41,train_ACC42,train_ACC43,train_ACC23],title='Train_ACC',flag='hidden')
plt.subplot(144)
Plot3([test_ACC41,test_ACC42,test_ACC43,test_ACC23],title='Test_ACC', flag='hidden')
plt.show()

Analysis of results

  1. Training time shows that more hidden layers and more hidden units raise the training cost: deeper and wider networks take longer to train.
  2. Accuracy does not simply improve as hidden layers and hidden units are added; beyond a point the effect reverses. Extra depth and width can cause overfitting, where accuracy stays high on the training set but drops on the test set.
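To make point 1 concrete, the cost gap can be estimated by counting trainable parameters per configuration. A minimal sketch reusing MyNet23 from experiment 2 with the configurations tested above (the counting loop itself is new, not part of the original experiment):

# Sketch: count trainable parameters for each hidden-layer configuration
for n_layers, hiddens in [(1, [128]), (1, [256]), (2, [512, 256]), (3, [512, 256, 128])]:
    m = MyNet23(n_layers, 28*28, hiddens, 10, act='relu')
    print(hiddens, sum(p.numel() for p in m.parameters()))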