Building the UNet Network
The code follows the Bilibili video "UNet數(shù)據(jù)集制作及代碼實現(xiàn)_嗶哩嗶哩_bilibili", with my own reorganization and notes added. (The uploader's code is clean and well structured.)
The UNet network is built from three components: a convolution block, a down-sampling layer, and an up-sampling layer.
Convolution block
The convolution block in UNet applies two successive convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv_Block(nn.Module):
    def __init__(self, in_channel, out_channel):
        super(Conv_Block, self).__init__()
        self.layer = nn.Sequential(
            # padding_mode="reflect" mirrors border pixels, which helps feature extraction at the edges
            nn.Conv2d(in_channel, out_channel, 3, 1, 1, padding_mode="reflect", bias=False),
            nn.BatchNorm2d(out_channel),  # 2D batch normalization of the conv output; speeds up training and improves generalization
            nn.Dropout2d(0.3),            # 2D dropout: randomly suppresses features with probability 0.3 to reduce overfitting
            nn.LeakyReLU(),               # leaky ReLU activation (non-zero negative slope) introduces non-linearity
            nn.Conv2d(out_channel, out_channel, 3, 1, 1, padding_mode="reflect", bias=False),
            nn.BatchNorm2d(out_channel),
            nn.Dropout2d(0.3),
            nn.LeakyReLU(),
        )

    def forward(self, x):
        return self.layer(x)
```
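As a standalone sanity check, the same two-convolution stack can be written inline as a plain `nn.Sequential` (channel sizes 3 → 64 and the 256×256 input are chosen for illustration); the 3×3 convolutions with padding 1 and stride 1 leave the spatial size unchanged:

```python
import torch
import torch.nn as nn

# Inline mirror of the two-convolution block above (illustrative channel sizes).
block = nn.Sequential(
    nn.Conv2d(3, 64, 3, 1, 1, padding_mode="reflect", bias=False),
    nn.BatchNorm2d(64),
    nn.Dropout2d(0.3),
    nn.LeakyReLU(),
    nn.Conv2d(64, 64, 3, 1, 1, padding_mode="reflect", bias=False),
    nn.BatchNorm2d(64),
    nn.Dropout2d(0.3),
    nn.LeakyReLU(),
)
x = torch.randn(2, 3, 256, 256)
print(block(x).shape)  # torch.Size([2, 64, 256, 256]) -- H and W preserved, channels changed
```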
Down-sampling layer
The down-sampling layer performs a single strided convolution. It halves the spatial size while keeping the channel count unchanged, and (unlike pooling) lets the network learn which features to keep.

```python
class DownSample(nn.Module):
    def __init__(self, channel):
        super(DownSample, self).__init__()
        self.layer = nn.Sequential(
            # stride-2 3x3 convolution with padding 1 halves H and W; channels are unchanged
            nn.Conv2d(channel, channel, 3, 2, 1, padding_mode="reflect", bias=False),
            nn.BatchNorm2d(channel),
            nn.LeakyReLU(),
        )

    def forward(self, x):
        return self.layer(x)
```
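A quick check of the halving behavior (input size chosen for illustration) — a 3×3 convolution with stride 2 and padding 1 maps H to floor((H + 2 − 3) / 2) + 1, so 128 → 64:

```python
import torch
import torch.nn as nn

# Stride-2, 3x3, padding-1 convolution: halves H and W, keeps channel count.
conv = nn.Conv2d(64, 64, 3, 2, 1, padding_mode="reflect", bias=False)
x = torch.randn(1, 64, 128, 128)
print(conv(x).shape)  # torch.Size([1, 64, 64, 64])
```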
Up-sampling layer
The up-sampling layer performs nearest-neighbor interpolation followed by a 1×1 convolution. Interpolation doubles the spatial size, the convolution halves the channel count, and the result is concatenated with the matching feature map from the encoder. This restores image resolution while fusing in the encoder's finer-grained features.

```python
class UpSample(nn.Module):
    def __init__(self, channel):
        super(UpSample, self).__init__()
        # 1x1 convolution halves the channel count after upsampling
        self.layer = nn.Conv2d(channel, channel // 2, 1, 1)

    def forward(self, x, feature_map):
        # nearest-neighbor interpolation doubles H and W
        up = F.interpolate(x, scale_factor=2, mode="nearest")
        out = self.layer(up)
        # concatenate with the encoder's skip-connection feature map along the channel dimension
        return torch.cat((out, feature_map), dim=1)
```
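The shape bookkeeping can be traced step by step with the raw operations (tensor sizes chosen to match the bottleneck-to-first-decoder step, 1024 channels at 16×16 with a 512-channel skip at 32×32):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1024, 16, 16)        # decoder input (e.g. bottleneck output)
skip = torch.randn(1, 512, 32, 32)      # skip connection from the encoder

up = F.interpolate(x, scale_factor=2, mode="nearest")  # -> (1, 1024, 32, 32)
out = nn.Conv2d(1024, 512, 1, 1)(up)                   # 1x1 conv halves channels -> (1, 512, 32, 32)
merged = torch.cat((out, skip), dim=1)                 # concat -> (1, 1024, 32, 32)
print(merged.shape)  # torch.Size([1, 1024, 32, 32])
```

Concatenation restores the channel count to 1024, which is exactly what the following `Conv_Block(1024, 512)` in the decoder expects.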
Network model
The UNet model consists of an encoder and a decoder. The encoder contains four Conv_Block and four DownSample layers, progressively extracting higher-level image features. The decoder contains four UpSample and four Conv_Block layers, recovering image detail through up-sampling and feature fusion with the encoder's skip connections. A final convolution followed by a Sigmoid activation produces the per-pixel classification output used for segmentation.

```python
class UNet(nn.Module):
    def __init__(self):
        super(UNet, self).__init__()
        # encoder: four Conv_Block + DownSample stages, plus a bottleneck block
        self.c1 = Conv_Block(3, 64)
        self.d1 = DownSample(64)
        self.c2 = Conv_Block(64, 128)
        self.d2 = DownSample(128)
        self.c3 = Conv_Block(128, 256)
        self.d3 = DownSample(256)
        self.c4 = Conv_Block(256, 512)
        self.d4 = DownSample(512)
        self.c5 = Conv_Block(512, 1024)
        # decoder: four UpSample + Conv_Block stages
        self.u1 = UpSample(1024)
        self.c6 = Conv_Block(1024, 512)
        self.u2 = UpSample(512)
        self.c7 = Conv_Block(512, 256)
        self.u3 = UpSample(256)
        self.c8 = Conv_Block(256, 128)
        self.u4 = UpSample(128)
        self.c9 = Conv_Block(128, 64)
        self.out = nn.Conv2d(64, 3, 3, 1, 1)
        self.Th = nn.Sigmoid()  # squashes each output channel to [0, 1] for per-pixel (binary) classification

    def forward(self, x):
        R1 = self.c1(x)
        R2 = self.c2(self.d1(R1))
        R3 = self.c3(self.d2(R2))
        R4 = self.c4(self.d3(R3))
        R5 = self.c5(self.d4(R4))
        O1 = self.c6(self.u1(R5, R4))
        O2 = self.c7(self.u2(O1, R3))
        O3 = self.c8(self.u3(O2, R2))
        O4 = self.c9(self.u4(O3, R1))
        return self.Th(self.out(O4))
```
Test
If the printed output shape matches the input shape, the network is wired correctly.

```python
if __name__ == "__main__":
    x = torch.randn(2, 3, 256, 256)
    net = UNet()
    print(net(x).shape)  # expected: torch.Size([2, 3, 256, 256])
```