- 🍨 This post is my learning-record blog entry for lesson R8 of the 🔗 365-day deep learning training camp; I have renamed it R6 for ease of my own organization and review.
- 🍖 Original author: K同學(xué)啊 | tutoring and custom projects available
Table of Contents
- 0. Summary
- 1. Dataset Introduction
- 2. Data Preprocessing
- 3. Model Construction
- 4. Model and Optimizer Initialization
- 5. Training Function
- 6. Test Function
- 7. Model Evaluation
- 8. Saving and Loading the Model
- 9. Making Predictions with the Trained Model
0. Summary
Data import and processing: in PyTorch, we typically convert NumPy arrays to torch.Tensor first, wrap them in a TensorDataset (or a custom Dataset), and then load them in batches with a DataLoader, as sketched below.
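A minimal sketch of this pipeline (standalone, with randomly generated arrays standing in for the real features and labels):

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

features = np.random.rand(100, 32).astype(np.float32)  # placeholder feature matrix
labels = np.random.randint(0, 2, size=100)             # placeholder binary labels

X = torch.tensor(features, dtype=torch.float32)  # NumPy -> Tensor
y = torch.tensor(labels, dtype=torch.int64)

dl = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)  # batched loading
for xb, yb in dl:
    print(xb.shape, yb.shape)  # torch.Size([64, 32]) torch.Size([64])
    break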
Model construction: an RNN (nn.RNN followed by two fully connected layers, as built in section 3).
Hyperparameter setup: beforehand, define the loss function, the learning rate (here with a dynamic learning-rate schedule), and an optimizer derived from that learning rate (e.g. SGD, stochastic gradient descent) to update the parameters during training and minimize the loss.
Defining the training function: it takes four arguments: the prepared DataLoader, the model, the loss function, and the optimizer. Inside, initialize the running loss and accuracy to 0, then loop: fetch a batch from the DataLoader, feed it to the model to obtain predictions, and compute the loss with the loss function. Then back-propagate and let the optimizer update the parameters. Zeroing the gradients may be placed either before the backward pass or after the optimizer step; before the backward pass is the usual default (see the skeleton below).
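A skeleton of one training pass over the loader, showing this ordering (a sketch; model, loss_fn, optimizer, and device are assumed to be defined as in the later sections):

for X, y in dataloader:
    X, y = X.to(device), y.to(device)
    pred = model(X)            # forward pass
    loss = loss_fn(pred, y)    # compute the loss
    optimizer.zero_grad()      # zero gradients (the usual default position)
    loss.backward()            # back-propagation
    optimizer.step()           # parameter update
# zero_grad() may equally well follow optimizer.step(); what matters is that
# gradients are cleared before the next backward() call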
Defining the test function: compared with the training function it drops the optimizer, taking only the prepared DataLoader, the model, and the loss function. Apart from omitting gradient zeroing, back-propagation, and the optimizer step when processing each batch, it mirrors the training function.
Training process: define the number of epochs (each epoch is one full pass over the dataset) and initialize four empty lists to store the per-epoch training and test accuracy and loss. Call model.train() to enable training mode and run the training function to get its accuracy and loss; call model.eval() to switch to evaluation mode and run the test function likewise. Append the results to the corresponding lists and print them together, giving the accuracy and loss after each full epoch.
Result visualization.
Saving, loading, and using the model: in PyTorch, one typically saves the parameters with torch.save(model.state_dict(), 'model.pth') and restores them with model.load_state_dict(torch.load('model.pth')).
Things to improve: keep the model and the data on the same device (both on the GPU or both on the CPU); do not leave num_classes at the default 1000 but set it from the actual dataset, and pass the same num_classes when instantiating the model; also note that test inputs take the shape (3, 224, 224), where 3 is the channel dimension, which differs from TensorFlow's (224, 224, 3) ordering; keep this in mind when porting code (see the sketch below).
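A small sketch of the channel-order difference, using a hypothetical image tensor; permute is the standard way to convert between the two layouts:

import torch

img_hwc = torch.rand(224, 224, 3)   # TensorFlow-style layout: (height, width, channels)
img_chw = img_hwc.permute(2, 0, 1)  # PyTorch-style layout: (channels, height, width)
print(img_chw.shape)                # torch.Size([3, 224, 224])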
On optimization:
Current attempts: applying L2 regularization and dropout (a sketch follows).
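A sketch of both techniques (the network and the dropout placement here are hypothetical illustrations, not the exact model used below; in PyTorch, L2 regularization is usually applied through the optimizer's weight_decay argument, as done in section 4):

import torch
from torch import nn

class RegularizedNet(nn.Module):  # hypothetical model for illustration only
    def __init__(self):
        super().__init__()
        self.fc0 = nn.Linear(32, 50)
        self.drop = nn.Dropout(p=0.5)  # dropout: randomly zeroes activations during training
        self.fc1 = nn.Linear(50, 2)

    def forward(self, x):
        return self.fc1(self.drop(torch.relu(self.fc0(x))))

net = RegularizedNet()
# weight_decay adds an L2 penalty on the parameters during optimization
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)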
1. Dataset Introduction
Dataset fields:
- PatientID: a unique identifier assigned to each patient (4751 to 6900).
- Age: the patient's age, ranging from 60 to 90 years.
- Gender: the patient's gender, where 0 is male and 1 is female.
- Ethnicity: the patient's ethnicity, coded as follows:
  - 0: Caucasian
  - 1: African American
  - 2: Asian
  - 3: Other
- EducationLevel: the patient's education level, coded as follows:
  - 0: None
  - 1: High school
  - 2: Bachelor's degree
  - 3: Higher
- BMI: the patient's body mass index, ranging from 15 to 40.
- Smoking: smoking status, where 0 is no and 1 is yes.
- AlcoholConsumption: weekly alcohol consumption in units, ranging from 0 to 20.
- PhysicalActivity: weekly physical activity in hours, ranging from 0 to 10.
- DietQuality: diet quality score, ranging from 0 to 10.
- SleepQuality: sleep quality score, ranging from 4 to 10.
- FamilyHistoryAlzheimers: family history of Alzheimer's disease, where 0 is no and 1 is yes.
- CardiovascularDisease: presence of cardiovascular disease, where 0 is no and 1 is yes.
- Diabetes: presence of diabetes, where 0 is no and 1 is yes.
- Depression: presence of depression, where 0 is no and 1 is yes.
- HeadInjury: history of head injury, where 0 is no and 1 is yes.
- Hypertension: presence of hypertension, where 0 is no and 1 is yes.
- SystolicBP: systolic blood pressure, ranging from 90 to 180 mmHg.
- DiastolicBP: diastolic blood pressure, ranging from 60 to 120 mmHg.
- CholesterolTotal: total cholesterol level, ranging from 150 to 300 mg/dL.
- CholesterolLDL: LDL cholesterol level, ranging from 50 to 200 mg/dL.
- CholesterolHDL: HDL cholesterol level, ranging from 20 to 100 mg/dL.
- CholesterolTriglycerides: triglyceride level, ranging from 50 to 400 mg/dL.
…
- Diagnosis: Alzheimer's disease diagnosis status, where 0 is no and 1 is yes.
2. Data Preprocessing
import numpy as np
import pandas as pd
import torch
from torch import nn
import torch.nn.functional as F
import seaborn as sns

# Use the GPU for training if available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
device(type='cuda')
# Load the data
df = pd.read_csv("./data/alzheimers_disease_data.csv")

# Drop the first and last columns
df = df.iloc[:, 1:-1]
df
 | Age | Gender | Ethnicity | EducationLevel | BMI | Smoking | AlcoholConsumption | PhysicalActivity | DietQuality | SleepQuality | ... | FunctionalAssessment | MemoryComplaints | BehavioralProblems | ADL | Confusion | Disorientation | PersonalityChanges | DifficultyCompletingTasks | Forgetfulness | Diagnosis
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 73 | 0 | 0 | 2 | 22.927749 | 0 | 13.297218 | 6.327112 | 1.347214 | 9.025679 | ... | 6.518877 | 0 | 0 | 1.725883 | 0 | 0 | 0 | 1 | 0 | 0 |
1 | 89 | 0 | 0 | 0 | 26.827681 | 0 | 4.542524 | 7.619885 | 0.518767 | 7.151293 | ... | 7.118696 | 0 | 0 | 2.592424 | 0 | 0 | 0 | 0 | 1 | 0 |
2 | 73 | 0 | 3 | 1 | 17.795882 | 0 | 19.555085 | 7.844988 | 1.826335 | 9.673574 | ... | 5.895077 | 0 | 0 | 7.119548 | 0 | 1 | 0 | 1 | 0 | 0 |
3 | 74 | 1 | 0 | 1 | 33.800817 | 1 | 12.209266 | 8.428001 | 7.435604 | 8.392554 | ... | 8.965106 | 0 | 1 | 6.481226 | 0 | 0 | 0 | 0 | 0 | 0 |
4 | 89 | 0 | 0 | 0 | 20.716974 | 0 | 18.454356 | 6.310461 | 0.795498 | 5.597238 | ... | 6.045039 | 0 | 0 | 0.014691 | 0 | 0 | 1 | 1 | 0 | 0 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
2144 | 61 | 0 | 0 | 1 | 39.121757 | 0 | 1.561126 | 4.049964 | 6.555306 | 7.535540 | ... | 0.238667 | 0 | 0 | 4.492838 | 1 | 0 | 0 | 0 | 0 | 1 |
2145 | 75 | 0 | 0 | 2 | 17.857903 | 0 | 18.767261 | 1.360667 | 2.904662 | 8.555256 | ... | 8.687480 | 0 | 1 | 9.204952 | 0 | 0 | 0 | 0 | 0 | 1 |
2146 | 77 | 0 | 0 | 1 | 15.476479 | 0 | 4.594670 | 9.886002 | 8.120025 | 5.769464 | ... | 1.972137 | 0 | 0 | 5.036334 | 0 | 0 | 0 | 0 | 0 | 1 |
2147 | 78 | 1 | 3 | 1 | 15.299911 | 0 | 8.674505 | 6.354282 | 1.263427 | 8.322874 | ... | 5.173891 | 0 | 0 | 3.785399 | 0 | 0 | 0 | 0 | 1 | 1 |
2148 | 72 | 0 | 0 | 2 | 33.289738 | 0 | 7.890703 | 6.570993 | 7.941404 | 9.878711 | ... | 6.307543 | 0 | 1 | 8.327563 | 0 | 1 | 0 | 0 | 1 | 0 |
2149 rows × 33 columns
# Standardization
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X = df.iloc[:, :-1]
y = df.iloc[:, -1]

# Standardize each feature column to a standard normal distribution;
# note that standardization is applied per column.
# (The scaler is fit on the full dataset before splitting here; fitting it
# on the training split only is the more common practice.)
sc = StandardScaler()
X = sc.fit_transform(X)

# Split the dataset
X = torch.tensor(np.array(X), dtype=torch.float32)
y = torch.tensor(np.array(y), dtype=torch.int64)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1)
X_train.shape,y_train.shape
(torch.Size([1934, 32]), torch.Size([1934]))
# Build the data loaders
from torch.utils.data import TensorDataset, DataLoader

train_dl = DataLoader(TensorDataset(X_train, y_train), batch_size=64, shuffle=False)  # shuffle=True is more typical for training data
test_dl = DataLoader(TensorDataset(X_test, y_test), batch_size=64, shuffle=False)
3. Model Construction
class model_rnn(nn.Module):
    def __init__(self):
        super(model_rnn, self).__init__()
        self.rnn0 = nn.RNN(input_size=32, hidden_size=200,
                           num_layers=1, batch_first=True)
        self.fc0 = nn.Linear(200, 50)
        self.fc1 = nn.Linear(50, 2)

    def forward(self, x):
        out, hidden1 = self.rnn0(x)
        out = self.fc0(out)
        out = self.fc1(out)
        return out

model = model_rnn().to(device)
model
model_rnn(
  (rnn0): RNN(32, 200, batch_first=True)
  (fc0): Linear(in_features=200, out_features=50, bias=True)
  (fc1): Linear(in_features=50, out_features=2, bias=True)
)
model(torch.rand(30,32).to(device)).shape
torch.Size([30, 2])
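Note that the test input here is 2-D with shape (30, 32). nn.RNN also accepts unbatched input of shape (seq_len, input_size), so the 30 rows are treated as one 30-step sequence of 32-dimensional inputs rather than as a batch of independent samples. A common alternative (a sketch of a different design, not what the model above does) is to give each sample its own length-1 sequence:

x = torch.rand(30, 32).to(device)
x = x.unsqueeze(1)                         # (30, 1, 32): batch=30, seq_len=1, features=32
out, hidden = model.rnn0(x)                # out: (30, 1, 200)
out = model.fc1(model.fc0(out[:, -1, :]))  # take the last timestep -> (30, 2)
print(out.shape)                           # torch.Size([30, 2])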
4. Model and Optimizer Initialization
model = model_rnn().to(device)
print(model)

loss_fn = nn.CrossEntropyLoss()  # loss function
weight_decay = 1e-4  # try weight decay; small values (around 1e-5 to 1e-4) usually provide some regularization
# weight_decay = 1e-3
learn_rate = 1e-3  # learning rate
# learn_rate = 3e-4
lambda1 = lambda epoch: (0.92 ** (epoch // 2))

optimizer = torch.optim.Adam(model.parameters(), lr=learn_rate, weight_decay=weight_decay)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1)  # learning-rate schedule
epochs = 50
model_rnn(
  (rnn0): RNN(32, 200, batch_first=True)
  (fc0): Linear(in_features=200, out_features=50, bias=True)
  (fc1): Linear(in_features=50, out_features=2, bias=True)
)
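LambdaLR multiplies the base learning rate by lambda1(epoch), so here the rate decays by a factor of 0.92 every two epochs; a quick check that reproduces the Lr column in the training log below:

for epoch in range(1, 6):
    print(epoch, 1e-3 * 0.92 ** (epoch // 2))
# 1 0.001
# 2 0.00092
# 3 0.00092
# 4 0.0008464
# 5 0.0008464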
5. Training Function
# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # size of the training set
    num_batches = len(dataloader)    # number of batches (ceil(size / batch_size))

    train_loss, train_acc = 0, 0     # initialize running loss and accuracy

    for X, y in dataloader:          # fetch a batch of data and labels
        X, y = X.to(device), y.to(device)

        # compute the prediction error
        pred = model(X)              # network output
        loss = loss_fn(pred, y)      # loss between predictions and targets

        # back-propagation
        optimizer.zero_grad()        # zero the gradients
        loss.backward()              # backward pass
        optimizer.step()             # parameter update

        # accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss
6. Test Function
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   # size of the test set
    num_batches = len(dataloader)    # number of batches (ceil(size / batch_size))
    test_loss, test_acc = 0, 0

    # stop gradient tracking when not training, saving compute and memory
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)

            # compute the loss
            target_pred = model(X)
            loss = loss_fn(target_pred, y)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == y).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches

    return test_acc, test_loss
import copy

train_loss = []
train_acc = []
test_loss = []
test_acc = []

best_acc = 0.0

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    scheduler.step()  # update the learning rate (per the LambdaLR schedule)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # keep a copy of the best model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # read out the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, '
                'Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

print('Done. Best test acc: ', best_acc)
Epoch: 1, Train_acc:71.8%, Train_loss:0.544, Test_acc:79.1%, Test_loss:0.431, Lr:1.00E-03
Epoch: 2, Train_acc:82.8%, Train_loss:0.397, Test_acc:80.0%, Test_loss:0.392, Lr:9.20E-04
Epoch: 3, Train_acc:84.6%, Train_loss:0.362, Test_acc:77.7%, Test_loss:0.400, Lr:9.20E-04
Epoch: 4, Train_acc:85.8%, Train_loss:0.348, Test_acc:77.7%, Test_loss:0.409, Lr:8.46E-04
Epoch: 5, Train_acc:86.3%, Train_loss:0.335, Test_acc:79.5%, Test_loss:0.420, Lr:8.46E-04
Epoch: 6, Train_acc:86.9%, Train_loss:0.324, Test_acc:80.9%, Test_loss:0.431, Lr:7.79E-04
Epoch: 7, Train_acc:87.8%, Train_loss:0.310, Test_acc:80.0%, Test_loss:0.464, Lr:7.79E-04
Epoch: 8, Train_acc:87.9%, Train_loss:0.301, Test_acc:78.1%, Test_loss:0.494, Lr:7.16E-04
Epoch: 9, Train_acc:88.9%, Train_loss:0.286, Test_acc:76.3%, Test_loss:0.536, Lr:7.16E-04
Epoch:10, Train_acc:88.8%, Train_loss:0.283, Test_acc:78.6%, Test_loss:0.536, Lr:6.59E-04
Epoch:11, Train_acc:90.3%, Train_loss:0.268, Test_acc:78.6%, Test_loss:0.541, Lr:6.59E-04
Epoch:12, Train_acc:90.8%, Train_loss:0.256, Test_acc:78.1%, Test_loss:0.566, Lr:6.06E-04
Epoch:13, Train_acc:91.5%, Train_loss:0.238, Test_acc:77.7%, Test_loss:0.622, Lr:6.06E-04
Epoch:14, Train_acc:92.1%, Train_loss:0.227, Test_acc:78.6%, Test_loss:0.641, Lr:5.58E-04
Epoch:15, Train_acc:92.0%, Train_loss:0.220, Test_acc:78.1%, Test_loss:0.659, Lr:5.58E-04
Epoch:16, Train_acc:92.4%, Train_loss:0.204, Test_acc:75.3%, Test_loss:0.716, Lr:5.13E-04
Epoch:17, Train_acc:93.4%, Train_loss:0.181, Test_acc:74.0%, Test_loss:0.815, Lr:5.13E-04
Epoch:18, Train_acc:94.2%, Train_loss:0.165, Test_acc:73.5%, Test_loss:0.891, Lr:4.72E-04
Epoch:19, Train_acc:94.6%, Train_loss:0.145, Test_acc:70.7%, Test_loss:0.975, Lr:4.72E-04
Epoch:20, Train_acc:95.2%, Train_loss:0.147, Test_acc:71.6%, Test_loss:1.088, Lr:4.34E-04
Epoch:21, Train_acc:96.1%, Train_loss:0.121, Test_acc:74.4%, Test_loss:1.075, Lr:4.34E-04
Epoch:22, Train_acc:97.1%, Train_loss:0.101, Test_acc:71.2%, Test_loss:1.114, Lr:4.00E-04
Epoch:23, Train_acc:97.3%, Train_loss:0.087, Test_acc:72.6%, Test_loss:1.214, Lr:4.00E-04
Epoch:24, Train_acc:98.0%, Train_loss:0.070, Test_acc:73.0%, Test_loss:1.246, Lr:3.68E-04
Epoch:25, Train_acc:98.9%, Train_loss:0.056, Test_acc:73.0%, Test_loss:1.352, Lr:3.68E-04
Epoch:26, Train_acc:99.0%, Train_loss:0.047, Test_acc:73.5%, Test_loss:1.370, Lr:3.38E-04
Epoch:27, Train_acc:99.3%, Train_loss:0.037, Test_acc:71.6%, Test_loss:1.401, Lr:3.38E-04
Epoch:28, Train_acc:99.5%, Train_loss:0.032, Test_acc:73.0%, Test_loss:1.535, Lr:3.11E-04
Epoch:29, Train_acc:99.6%, Train_loss:0.025, Test_acc:73.0%, Test_loss:1.533, Lr:3.11E-04
Epoch:30, Train_acc:99.7%, Train_loss:0.021, Test_acc:72.6%, Test_loss:1.596, Lr:2.86E-04
Epoch:31, Train_acc:99.7%, Train_loss:0.016, Test_acc:70.7%, Test_loss:1.698, Lr:2.86E-04
Epoch:32, Train_acc:99.8%, Train_loss:0.013, Test_acc:70.7%, Test_loss:1.744, Lr:2.63E-04
Epoch:33, Train_acc:99.8%, Train_loss:0.012, Test_acc:70.2%, Test_loss:1.792, Lr:2.63E-04
Epoch:34, Train_acc:99.8%, Train_loss:0.011, Test_acc:70.7%, Test_loss:1.836, Lr:2.42E-04
Epoch:35, Train_acc:99.8%, Train_loss:0.010, Test_acc:69.8%, Test_loss:1.875, Lr:2.42E-04
Epoch:36, Train_acc:99.8%, Train_loss:0.009, Test_acc:69.8%, Test_loss:1.914, Lr:2.23E-04
Epoch:37, Train_acc:99.8%, Train_loss:0.009, Test_acc:68.8%, Test_loss:1.949, Lr:2.23E-04
Epoch:38, Train_acc:99.8%, Train_loss:0.008, Test_acc:68.8%, Test_loss:1.985, Lr:2.05E-04
Epoch:39, Train_acc:99.8%, Train_loss:0.008, Test_acc:68.4%, Test_loss:2.017, Lr:2.05E-04
Epoch:40, Train_acc:99.8%, Train_loss:0.007, Test_acc:68.4%, Test_loss:2.050, Lr:1.89E-04
Epoch:41, Train_acc:99.8%, Train_loss:0.007, Test_acc:68.4%, Test_loss:2.080, Lr:1.89E-04
Epoch:42, Train_acc:99.8%, Train_loss:0.006, Test_acc:68.4%, Test_loss:2.111, Lr:1.74E-04
Epoch:43, Train_acc:99.8%, Train_loss:0.006, Test_acc:69.3%, Test_loss:2.139, Lr:1.74E-04
Epoch:44, Train_acc:99.8%, Train_loss:0.005, Test_acc:69.3%, Test_loss:2.168, Lr:1.60E-04
Epoch:45, Train_acc:99.8%, Train_loss:0.005, Test_acc:69.3%, Test_loss:2.194, Lr:1.60E-04
Epoch:46, Train_acc:99.8%, Train_loss:0.005, Test_acc:68.8%, Test_loss:2.221, Lr:1.47E-04
Epoch:47, Train_acc:99.8%, Train_loss:0.005, Test_acc:68.8%, Test_loss:2.245, Lr:1.47E-04
Epoch:48, Train_acc:99.8%, Train_loss:0.004, Test_acc:68.8%, Test_loss:2.270, Lr:1.35E-04
Epoch:49, Train_acc:99.9%, Train_loss:0.004, Test_acc:68.8%, Test_loss:2.293, Lr:1.35E-04
Epoch:50, Train_acc:99.9%, Train_loss:0.004, Test_acc:68.8%, Test_loss:2.316, Lr:1.24E-04
Done. Best test acc: 0.8093023255813954
7. Model Evaluation
import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")             # suppress warnings
plt.rcParams['font.sans-serif'] = ['SimHei']  # display CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display minus signs correctly
plt.rcParams['figure.dpi'] = 100              # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
[Figure: training and validation accuracy (left) and loss (right) curves]
# Confusion matrix
print("============== Input data shapes ==============")
print("X_test.shape:", X_test.shape)
print("y_test.shape:", y_test.shape)

pred = model(X_test.to(device)).argmax(1).cpu().numpy()

print("\n============== Output data shape ==============")
print("pred.shape:", pred.shape)
============== Input data shapes ==============
X_test.shape: torch.Size([215, 32])
y_test.shape: torch.Size([215])

============== Output data shape ==============
pred.shape: (215,)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Compute the confusion matrix
cm = confusion_matrix(y_test, pred)

plt.figure(figsize=(6, 5))
plt.suptitle('')
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues")

# Adjust font sizes
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title("Confusion Matrix", fontsize=12)
plt.xlabel("Predicted Label", fontsize=10)
plt.ylabel("True Label", fontsize=10)

# Show the figure
plt.tight_layout()  # adjust layout to prevent overlap
plt.show()
[Figure: confusion matrix heatmap]
8. Saving and Loading the Model
# Custom model saving
# State-dict saving
torch.save(model.state_dict(), './模型參數(shù)/R7_rnn_model_state_dict.pth')  # save only the state dict

# Instantiate a model to load the parameters into
best_model = model_rnn().to(device)
best_model.load_state_dict(torch.load('./模型參數(shù)/R7_rnn_model_state_dict.pth'))  # load the state dict into the model
<All keys matched successfully>
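Note that the training loop above already kept a deep copy of the best-performing weights in best_model, while this cell saves the final model's weights and then overwrites best_model with them. If the best checkpoint is what should be persisted, a sketch (run before the reload above; the filename is hypothetical):

# save the checkpoint that achieved the best test accuracy, not the final weights
torch.save(best_model.state_dict(), './模型參數(shù)/R7_rnn_best_state_dict.pth')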
9. Making Predictions with the Trained Model
test_X = X_test[0].reshape(1, -1)  # X_test[0] is our input sample

pred = best_model(test_X.to(device)).argmax(1).item()
print("Model prediction:", pred)
print("==" * 20)
print("0: not diseased")
print("1: diseased")
Model prediction: 0
========================================
0: not diseased
1: diseased
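For inference it is good practice to switch the network to evaluation mode and disable gradient tracking; a minimal sketch of the same prediction with those added:

best_model.eval()          # evaluation mode (matters for dropout/batch-norm layers, if any)
with torch.no_grad():      # no gradients are needed for inference
    pred = best_model(test_X.to(device)).argmax(1).item()
print("Model prediction:", pred)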