
10. CNN (Convolutional Neural Network): Basics


First, some concepts to introduce:

  • Two-dimensional convolution: the convolutional layer preserves the original spatial information
  • Key point: working out the input and output dimensions of each layer (see the sketch after this list)
  • Feature extraction: convolutional layers and downsampling
  • Classifier: fully connected layers
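A quick way to check those dimensions is to push a dummy tensor through a layer (a minimal sketch; the sizes here are illustrative):

import torch

# A 5x5 convolution: 3 input channels (e.g. RGB), 10 output channels.
conv = torch.nn.Conv2d(3, 10, kernel_size=5)

x = torch.randn(1, 3, 28, 28)   # (batch, channels, height, width)
y = conv(x)
print(y.shape)                  # torch.Size([1, 10, 24, 24])
# Without padding the spatial size shrinks: 28 - 5 + 1 = 24,
# and the channel count becomes the number of kernels (10).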



Motivating example: RGB images (raster images)

  • First, the lecturer introduced the CCD camera model: a device that captures pixels at different brightness levels using photosensitive resistors, exploiting the effect of light intensity on resistance to modulate the recorded brightness. Three-color images are captured with photoresistors of different sensitivities.
  • He also introduced vector images (images described by data such as center, edges, and fill, as in PowerPoint, rather than sampled from a sensor).
  • Red, green, and blue channels
  • Convolution is applied to one image patch at a time; the channels, height, and width may all change. The whole image is traversed and every patch is convolved (the sketch after this list shows how each kernel spans all input channels).
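A small check on how a multi-channel convolution is parameterized (a minimal sketch): every output channel has one kernel, and that kernel covers all input channels.

import torch

conv = torch.nn.Conv2d(3, 10, kernel_size=5)
print(conv.weight.shape)   # torch.Size([10, 3, 5, 5]) -> (out, in, kH, kW)
print(conv.bias.shape)     # torch.Size([10]), one bias per output channel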



Reference example:

深度學(xué)習(xí) | CNN卷積核與通道-CSDN博客


Implementation: A Simple Convolutional Neural Network


  • One pooling layer instance is enough, because it has no weights; layers that do have weights need a separate instance for each use
  • relu: non-linear activation
  • Cross-entropy loss: do not apply an activation to the last layer! (see the sketch after this list)
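The reason is that torch.nn.CrossEntropyLoss already combines LogSoftmax and NLLLoss internally, so the network should output raw logits (a minimal sketch):

import torch

criterion = torch.nn.CrossEntropyLoss()

logits = torch.randn(4, 10)            # un-activated outputs for 4 samples, 10 classes
labels = torch.tensor([3, 0, 7, 1])    # ground-truth class indices
loss = criterion(logits, labels)       # softmax is applied inside the loss
print(loss.item())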


Code implementation:

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim

# 1. Data preparation
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

train_dataset = datasets.MNIST(root='../dataset/mnist', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)

# 2. Build the model
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        batch_size = x.size(0)
        x = self.pooling(F.relu(self.conv1(x)))   # (n,1,28,28) -> (n,10,12,12)
        x = self.pooling(F.relu(self.conv2(x)))   # -> (n,20,4,4)
        x = x.view(batch_size, -1)                # flatten to (n, 320), since 20*4*4 = 320
        x = self.fc(x)
        return x

model = Net()

# 3. Loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# 4. Training and testing
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:  # report once every 300 iterations
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)  # a matrix; take the index of each row's max as the predicted class
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('Accuracy on test set: %d %%' % (100 * correct / total))

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()

Experimental results:




11. CNN (Convolutional Neural Network): Advanced

The model designed in the basics section is similar to LeNet-5.



Now let's look at some more complex architectures:

11.1 GoogLeNet

GoogLeNet is a complex network with a serial structure. To implement a complex network while reducing code redundancy and avoiding rewriting the same functionality over and over, a procedural language uses functions, and an object-oriented language such as Python uses classes.

In a CNN, code with reuse value is encapsulated into a Module block, a "building brick" that can be plugged in wherever it is needed, as the sketch below illustrates.
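A minimal sketch of that building-block idea (the ConvBlock name and sizes are hypothetical, for illustration only):

import torch
import torch.nn.functional as F

class ConvBlock(torch.nn.Module):
    # A reusable conv + relu "brick".
    def __init__(self, in_channels, out_channels):
        super(ConvBlock, self).__init__()
        self.conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

# The same class is instantiated wherever the pattern is needed:
net = torch.nn.Sequential(ConvBlock(1, 16), ConvBlock(16, 32))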

GoogLeNet names the module that is reused throughout its framework Inception, which is also the English title of the film 盜夢(mèng)空間: a dream within a dream, i.e., nesting.



One way to construct an Inception Module:


  • Why build it this way?
    • When constructing a neural network, some hyperparameters are hard to decide in advance, such as the kernel size. Since you cannot know which will work best, use them all, and let training discover the best combination of convolutions.
    • GoogLeNet's design philosophy: write every variant into the Block, and let the network learn the weighting of each branch on its own during training.
  • Concatenate: the tensors computed by the four paths are stitched together
  • GoogLeNet designs four parallel branch paths and requires that the image width and height (W, H) stay the same across them; only the channel count C may differ, because after each branch's convolution and pooling operations, the W×H faces serve as the gluing surfaces and the outputs are concatenated along the C direction.
  • Average Pooling: mean pooling
  • A 1×1 convolution fuses information across channels: this is also called "network in network"
    • 1×1 kernels: I used to think, superficially, that a kernel of unit pixel size did nothing more than adjust the relationship between the input and output channel counts; an example from Prof. Liu gave me a new view of this kernel:
    • It speeds up computation. The underlying reason: an image processed by a 1×1 kernel has fewer channels, which reduces the input channel count of the subsequent convolutional layers (see the rough operation count after this list).
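A rough multiply count for convolving a 28×28 feature map shows the saving (the 192/16/32 channel numbers are illustrative, in the spirit of the lecture's example):

# ops ~= kH * kW * H * W * C_in * C_out, ignoring bias
direct = 5 * 5 * 28 * 28 * 192 * 32           # 5x5 conv straight from 192 channels
print(direct)                                  # 120,422,400

reduced = (1 * 1 * 28 * 28 * 192 * 16         # 1x1 conv first squeezes 192 -> 16 channels
           + 5 * 5 * 28 * 28 * 16 * 32)       # then the 5x5 conv is much cheaper
print(reduced)                                 # 12,443,648 -- roughly a 10x reduction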

Inception block, code implementation:


然后再沿著通道將他們拼接在一起:


The four branches can be put into a list, and torch's cat function concatenates them along the dim=1 dimension.

Because our dimension order is batch, channel, width, height, the channel dimension is dim=1: indexing starts from zero, so C sits at position 1, as the sketch below shows.
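For a batch of one with 12×12 feature maps, the four branch outputs concatenate like this (a minimal sketch):

import torch

branch1x1   = torch.randn(1, 16, 12, 12)
branch5x5   = torch.randn(1, 24, 12, 12)
branch3x3   = torch.randn(1, 24, 12, 12)
branch_pool = torch.randn(1, 24, 12, 12)

out = torch.cat([branch1x1, branch5x5, branch3x3, branch_pool], dim=1)
print(out.shape)   # torch.Size([1, 88, 12, 12]) -- only the channel dim grows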



MNIST dataset, code implementation:

The initial number of input channels is not hard-coded; it is a parameter of the constructor, so the input channel count can be specified when the module is instantiated.

The network is: first a convolutional layer (conv, maxpooling, relu), then an InceptionA module (whose output channels are 16+24+24+24 = 88), then another convolutional layer (conv, mp, relu), then another InceptionA module, and finally a fully connected layer (fc).

The number 1408 can be obtained by calling x.shape after x = x.view(in_size, -1).


It can also be read off the network structure:

最后一層線性層的輸入尺寸(input size)1408是根據(jù)倒數(shù)第二個(gè)InceptionA模塊的輸出形狀推導(dǎo)出來(lái)的。在該模塊中,輸入形狀為[-1, 88, 4, 4],其中-1表示批量大小(Batch Size)。因此,通過(guò)展平這個(gè)特征圖(Flatten),我們可以將其轉(zhuǎn)換為一維向量,即 [-1, 88 * 4 * 4] = [-1, 1408]。

所以,線性層的輸入尺寸為1408,它接收展平后的特征向量作為輸入,并將其映射到10個(gè)輸出類別的向量。
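A minimal check of that flattening (shapes only, no trained weights needed):

import torch

features = torch.randn(1, 88, 4, 4)   # output shape of the second InceptionA module
flat = features.view(1, -1)
print(flat.shape)                      # torch.Size([1, 1408]) = 88 * 4 * 4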

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
from torchvision import models
from torchsummary import summary

# 1. Data preparation
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

train_dataset = datasets.MNIST(root='../dataset/mnist', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)

# 2. Build the model
class InceptionA(torch.nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch1x1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)

        self.branch5x5_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = torch.nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = torch.nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = torch.nn.Conv2d(24, 24, kernel_size=3, padding=1)

        self.branch_pool = torch.nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)   # concatenate along the channel dim: 16+24+24+24 = 88

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(88, 20, kernel_size=5)
        self.incep1 = InceptionA(in_channels=10)
        self.incep2 = InceptionA(in_channels=20)
        self.mp = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(1408, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = self.incep1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.incep2(x)
        x = x.view(in_size, -1)   # flatten to (n, 1408)
        x = self.fc(x)
        return x

model = Net()
# summary(model, (1, 28, 28), device='cpu')

# 3. Loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# 4. Training and testing
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:  # report once every 300 iterations
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)  # take the index of each row's max as the predicted class
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('Accuracy on test set: %d %%' % (100 * correct / total))

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()

        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 10, 24, 24]             260
         MaxPool2d-2           [-1, 10, 12, 12]               0
            Conv2d-3           [-1, 16, 12, 12]             176
            Conv2d-4           [-1, 16, 12, 12]             176
            Conv2d-5           [-1, 24, 12, 12]           9,624
            Conv2d-6           [-1, 16, 12, 12]             176
            Conv2d-7           [-1, 24, 12, 12]           3,480
            Conv2d-8           [-1, 24, 12, 12]           5,208
            Conv2d-9           [-1, 24, 12, 12]             264
       InceptionA-10           [-1, 88, 12, 12]               0
           Conv2d-11             [-1, 20, 8, 8]          44,020
        MaxPool2d-12             [-1, 20, 4, 4]               0
           Conv2d-13             [-1, 16, 4, 4]             336
           Conv2d-14             [-1, 16, 4, 4]             336
           Conv2d-15             [-1, 24, 4, 4]           9,624
           Conv2d-16             [-1, 16, 4, 4]             336
           Conv2d-17             [-1, 24, 4, 4]           3,480
           Conv2d-18             [-1, 24, 4, 4]           5,208
           Conv2d-19             [-1, 24, 4, 4]             504
       InceptionA-20             [-1, 88, 4, 4]               0
           Linear-21                   [-1, 10]          14,090



11.2 ResNet

GoogLeNet left an open question: testing showed that the number of layers affects model accuracy, but at the time the vanishing gradient problem had not yet been recognized, so GoogLeNet's conclusion was "We Need To Go Deeper".

It was not until Kaiming He's ResNet that it was pointed out that more layers do not necessarily make a better model, and a solution to this problem was proposed: the ResNet architecture.


Residual Net introduces a new kind of block: the skip connection.


Previous network models took this Plain Net form:

the input x passes through a weight layer (a convolutional layer, or a pooling or linear layer), then through an activation function that injects non-linearity, finally producing the output H(x);

this tends to place the value of the partial derivative ∂H/∂x in the interval (0, 1). During backpropagation, where the partial derivatives of the composed functions are multiplied step by step, the derivative of the loss L with respect to x is inevitably driven toward 0, and the deeper the network, the more pronounced the effect; eventually the earliest layers (those closest to the input) receive no effective weight updates, and the model may even fail entirely.

This is gradient vanishing: if every local gradient is smaller than 1, then, because backpropagation multiplies them together, the overall gradient tends toward 0 and the weights stop changing under the update rule $w = w - \sigma g$ (with learning rate $\sigma$ and gradient $g$); the blocks near the input therefore cannot be trained adequately.

One workaround is layer-by-layer training, but with many layers this becomes very difficult.

ResNet resolves the problem of ∂H/∂x lying in (0, 1) in a remarkably clever way:

a skip is added to the existing framework: the input x is added on top of the original network output F(x), so the block outputs H(x) = F(x) + x. Now, when the final output H(x) is differentiated with respect to the input x, the result ∂H/∂x = ∂F/∂x + 1 falls in (1, 2), so the products accumulated while propagating gradients no longer shrink toward 0, avoiding gradient vanishing; the sketch below checks this numerically.
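A minimal autograd sketch, using a scalar "layer" whose own derivative is small (0.1):

import torch

x = torch.tensor(2.0, requires_grad=True)

plain = 0.1 * x          # a layer F with dF/dx = 0.1
plain.backward()
print(x.grad)            # tensor(0.1000) -- shrinks toward 0 when many such layers stack

x.grad = None
residual = 0.1 * x + x   # H(x) = F(x) + x
residual.backward()
print(x.grad)            # tensor(1.1000) -- the skip connection adds 1 to the gradient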

Like GoogLeNet, ResNet's Residual Block leaves room for a constructor argument, namely the channel count. The Residual Block requires the input and output C, W, H to match (B is necessarily the same); in other words, an image processed by a Residual Block keeps its original size and channel count. (TBD)

Note, however, that because the result is added to x, the output of the two layers in the figure must have exactly the same tensor dimensions as the input x: the same channels, height, and width.

If the output and input dimensions differ, a skip connection can still be made: x can be passed through a max pooling layer to convert it to the same size, as the sketch below illustrates.
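A minimal sketch of that mismatched-size case, assuming a stride-2 branch and max pooling on the identity path (the exact layer choices in the slide may differ):

import torch
import torch.nn.functional as F

class DownsampleResidual(torch.nn.Module):
    # Residual block whose branch halves H and W, so x is pooled to match.
    def __init__(self, channels):
        super(DownsampleResidual, self).__init__()
        self.conv1 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2)
        self.pool = torch.nn.MaxPool2d(2)    # shrinks x to the branch's output size

    def forward(self, x):                    # assumes even H and W
        y = F.relu(self.conv1(x))
        y = self.conv2(y)                    # stride 2 halves H and W
        return F.relu(self.pool(x) + y)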



A network built with residual blocks:


First, a look at the residual block's code implementation:

To keep the input and output sizes unchanged, padding is set to 1, and both the input and output channel counts match those of x.

Note that after the second convolution, the sum with x is taken first and only then is the activation applied, as the excerpt below shows.
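The block as it appears in the full listing below:

import torch
import torch.nn.functional as F

class ResidualBlock(torch.nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)   # sum first, then activate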


MNIST dataset, code implementation:

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
from torchvision import models
from torchsummary import summary
from torchviz import make_dot

# 1. Data preparation
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

train_dataset = datasets.MNIST(root='../dataset/mnist', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)

# 2. Build the model
class InceptionA(torch.nn.Module):
    # Same Inception block as in the GoogLeNet section (not used by the Net below).
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch1x1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)

        self.branch5x5_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = torch.nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = torch.nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = torch.nn.Conv2d(24, 24, kernel_size=3, padding=1)

        self.branch_pool = torch.nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)

class ResidualBlock(torch.nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)   # sum first, then activate

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(16, 32, kernel_size=5)
        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)
        self.mp = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(512, 10)   # 32 * 4 * 4 = 512

    def forward(self, x):
        in_size = x.size(0)
        x = self.mp(F.relu(self.conv1(x)))
        x = self.rblock1(x)
        x = self.mp(F.relu(self.conv2(x)))
        x = self.rblock2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x

model = Net()
# x = torch.randn(1, 1, 28, 28)
# y = model(x)
# vise = make_dot(y, params=dict(model.named_parameters()))
# vise.view()
# print(model)
# summary(model, (1, 28, 28), device='cpu')

# 3. Loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# 4. Training and testing
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:  # report once every 300 iterations
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)  # take the index of each row's max as the predicted class
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('Accuracy on test set: %d %%' % (100 * correct / total))

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()

Experimental results:



At the end of the course, Prof. Liu recommended two papers:

Identity Mappings in Deep Residual Networks:

He K, Zhang X, Ren S, et al. Identity Mappings in Deep Residual Networks [C].

It presents the construction of many different variants of the Residual Block.


Densely Connected Convolutional Networks:

Huang G, Liu Z, van der Maaten L, et al. Densely Connected Convolutional Networks [J]. 2016: 2261-2269.


The famous DenseNet. Building on ResNet's idea of passing features across skips, it implements an architecture with many skip connections. Many later networks that extract multi-scale, multi-level features rely on this approach: an Encoder progressively extracts semantic features at different levels and passes them across into the corresponding levels of the Decoder, aiming to fuse features across levels and mine as much of the image's information as possible. A sketch of the dense-connectivity pattern follows.
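A minimal sketch of that dense-connectivity pattern (the layer sizes are illustrative, not the paper's configuration):

import torch
import torch.nn.functional as F

class DenseBlock(torch.nn.Module):
    # Each layer sees the concatenation of ALL earlier feature maps.
    def __init__(self, in_channels, growth_rate, num_layers):
        super(DenseBlock, self).__init__()
        self.layers = torch.nn.ModuleList([
            torch.nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                            kernel_size=3, padding=1)
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = F.relu(layer(torch.cat(features, dim=1)))
            features.append(out)          # every output is forwarded to all later layers
        return torch.cat(features, dim=1)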



The material and some of the text in this article come from:

【Pytorch深度學(xué)習(xí)實(shí)踐】B站up劉二大人之BasicCNN & Advanced CNN -代碼理解與實(shí)現(xiàn)(9/9)_b站講神經(jīng)網(wǎng)絡(luò)的up土堆-CSDN博客

11.卷積神經(jīng)網(wǎng)絡(luò)(高級(jí)篇)_嗶哩嗶哩_bilibili

