
This article covers implementations of several attention mechanisms: EA, MHSA, SK, DA, and EPSA.

[Deep Learning] Attention Mechanisms (Part 1)

[Deep Learning] Attention Mechanisms (Part 3)

Contents

1. EA (External Attention)

2. Multi-Head Self-Attention

3. SK (Selective Kernel Networks)

4. DA (Dual Attention)

5. EPSA (Efficient Pyramid Squeeze Attention)


1. EA (External Attention)

EA attends to global spatial information; it replaces self-attention with two lightweight linear layers that act as shared external memory units. Paper: paper link

The structure is shown in the figure below:

The code is as follows (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init
import math
import torch.nn.functional as F
from torch.nn.modules.batchnorm import _BatchNorm

norm_layer = nn.BatchNorm2d  # the original snippet uses norm_layer without defining it


class External_attention(nn.Module):
    '''
    Arguments:
        c (int): The input and output channel number.
    '''
    def __init__(self, c):
        super(External_attention, self).__init__()
        self.conv1 = nn.Conv2d(c, c, 1)
        self.k = 64
        self.linear_0 = nn.Conv1d(c, self.k, 1, bias=False)
        self.linear_1 = nn.Conv1d(self.k, c, 1, bias=False)
        self.linear_1.weight.data = self.linear_0.weight.data.permute(1, 0, 2)

        self.conv2 = nn.Sequential(
            nn.Conv2d(c, c, 1, bias=False),
            norm_layer(c))

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.Conv1d):
                n = m.kernel_size[0] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, _BatchNorm):
                m.weight.data.fill_(1)
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x):
        idn = x
        x = self.conv1(x)
        b, c, h, w = x.size()
        n = h * w
        x = x.view(b, c, h * w)                               # b, c, n
        attn = self.linear_0(x)                               # b, k, n
        attn = F.softmax(attn, dim=-1)                        # b, k, n
        attn = attn / (1e-9 + attn.sum(dim=1, keepdim=True))  # b, k, n
        x = self.linear_1(attn)                               # b, c, n
        x = x.view(b, c, h, w)
        x = self.conv2(x)
        x = x + idn
        x = F.relu(x)
        return x
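
A minimal usage sketch with arbitrary example sizes: a random feature map is passed through External_attention, and the block preserves the input shape.

if __name__ == '__main__':
    x = torch.randn(2, 64, 32, 32)        # example input: batch=2, c=64, 32x32 feature map
    ea = External_attention(c=64)
    y = ea(x)
    print(y.shape)                        # torch.Size([2, 64, 32, 32]) -- same shape as the input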

2. Multi-Head Self-Attention

The classic attention mechanism and the cornerstone of the Transformer. Paper: paper link
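
For reference, each of the h heads computes the standard scaled dot-product attention

    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V

and the per-head outputs are concatenated and projected back to d_model by fc_o in the code below.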

The structure is shown in the figure below:

The code is as follows (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init


class ScaledDotProductAttention(nn.Module):
    '''
    Scaled dot-product attention
    '''
    def __init__(self, d_model, d_k, d_v, h, dropout=.1):
        '''
        :param d_model: Output dimensionality of the model
        :param d_k: Dimensionality of queries and keys
        :param d_v: Dimensionality of values
        :param h: Number of heads
        '''
        super(ScaledDotProductAttention, self).__init__()
        self.fc_q = nn.Linear(d_model, h * d_k)
        self.fc_k = nn.Linear(d_model, h * d_k)
        self.fc_v = nn.Linear(d_model, h * d_v)
        self.fc_o = nn.Linear(h * d_v, d_model)
        self.dropout = nn.Dropout(dropout)

        self.d_model = d_model
        self.d_k = d_k
        self.d_v = d_v
        self.h = h

        self.init_weights()

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, queries, keys, values, attention_mask=None, attention_weights=None):
        '''
        Computes
        :param queries: Queries (b_s, nq, d_model)
        :param keys: Keys (b_s, nk, d_model)
        :param values: Values (b_s, nk, d_model)
        :param attention_mask: Mask over attention values (b_s, h, nq, nk). True indicates masking.
        :param attention_weights: Multiplicative weights for attention values (b_s, h, nq, nk).
        :return:
        '''
        b_s, nq = queries.shape[:2]
        nk = keys.shape[1]

        q = self.fc_q(queries).view(b_s, nq, self.h, self.d_k).permute(0, 2, 1, 3)  # (b_s, h, nq, d_k)
        k = self.fc_k(keys).view(b_s, nk, self.h, self.d_k).permute(0, 2, 3, 1)     # (b_s, h, d_k, nk)
        v = self.fc_v(values).view(b_s, nk, self.h, self.d_v).permute(0, 2, 1, 3)   # (b_s, h, nk, d_v)

        att = torch.matmul(q, k) / np.sqrt(self.d_k)  # (b_s, h, nq, nk)
        if attention_weights is not None:
            att = att * attention_weights
        if attention_mask is not None:
            att = att.masked_fill(attention_mask, -np.inf)
        att = torch.softmax(att, -1)
        att = self.dropout(att)

        out = torch.matmul(att, v).permute(0, 2, 1, 3).contiguous().view(b_s, nq, self.h * self.d_v)  # (b_s, nq, h*d_v)
        out = self.fc_o(out)  # (b_s, nq, d_model)
        return out
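
A quick sanity-check sketch with example sizes: for self-attention the same tensor is passed as queries, keys, and values, and the output keeps the (batch, sequence length, d_model) shape.

if __name__ == '__main__':
    inp = torch.randn(2, 50, 512)         # example input: (b_s, n, d_model)
    mhsa = ScaledDotProductAttention(d_model=512, d_k=64, d_v=64, h=8)
    out = mhsa(inp, inp, inp)             # self-attention: queries = keys = values
    print(out.shape)                      # torch.Size([2, 50, 512])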

3. SK (Selective Kernel Networks)

SK is a channel attention mechanism that adaptively selects among convolution branches with different kernel sizes. Paper: paper link

The structure is shown in the figure below:

The code is as follows (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init
from collections import OrderedDict


class SKAttention(nn.Module):

    def __init__(self, channel=512, kernels=[1, 3, 5, 7], reduction=16, group=1, L=32):
        super().__init__()
        self.d = max(L, channel // reduction)
        self.convs = nn.ModuleList([])
        for k in kernels:
            self.convs.append(
                nn.Sequential(OrderedDict([
                    ('conv', nn.Conv2d(channel, channel, kernel_size=k, padding=k // 2, groups=group)),
                    ('bn', nn.BatchNorm2d(channel)),
                    ('relu', nn.ReLU())
                ]))
            )
        self.fc = nn.Linear(channel, self.d)
        self.fcs = nn.ModuleList([])
        for i in range(len(kernels)):
            self.fcs.append(nn.Linear(self.d, channel))
        self.softmax = nn.Softmax(dim=0)

    def forward(self, x):
        bs, c, _, _ = x.size()
        conv_outs = []
        ### split
        for conv in self.convs:
            conv_outs.append(conv(x))
        feats = torch.stack(conv_outs, 0)  # k, bs, channel, h, w

        ### fuse
        U = sum(conv_outs)  # bs, c, h, w

        ### reduction channel
        S = U.mean(-1).mean(-1)  # bs, c
        Z = self.fc(S)  # bs, d

        ### calculate attention weight
        weights = []
        for fc in self.fcs:
            weight = fc(Z)
            weights.append(weight.view(bs, c, 1, 1))  # bs, channel
        attention_weights = torch.stack(weights, 0)   # k, bs, channel, 1, 1
        attention_weights = self.softmax(attention_weights)  # k, bs, channel, 1, 1

        ### fuse
        V = (attention_weights * feats).sum(0)
        return V
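
A minimal usage sketch with example sizes: SKAttention is a drop-in block that keeps the (batch, channel, H, W) shape while reweighting channels across the different kernel branches.

if __name__ == '__main__':
    x = torch.randn(2, 512, 7, 7)         # example input: (batch, channel, H, W)
    sk = SKAttention(channel=512, reduction=16)
    y = sk(x)
    print(y.shape)                        # torch.Size([2, 512, 7, 7])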

4. DA (Dual Attention)

DA combines channel attention and spatial (position) attention. Paper: paper link

The structure is shown in the figure below:

Code (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init
from model.attention.SelfAttention import ScaledDotProductAttention
from model.attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention


class PositionAttentionModule(nn.Module):

    def __init__(self, d_model=512, kernel_size=3, H=7, W=7):
        super().__init__()
        self.cnn = nn.Conv2d(d_model, d_model, kernel_size=kernel_size, padding=(kernel_size - 1) // 2)
        self.pa = ScaledDotProductAttention(d_model, d_k=d_model, d_v=d_model, h=1)

    def forward(self, x):
        bs, c, h, w = x.shape
        y = self.cnn(x)
        y = y.view(bs, c, -1).permute(0, 2, 1)  # bs, h*w, c
        y = self.pa(y, y, y)  # bs, h*w, c
        return y


class ChannelAttentionModule(nn.Module):

    def __init__(self, d_model=512, kernel_size=3, H=7, W=7):
        super().__init__()
        self.cnn = nn.Conv2d(d_model, d_model, kernel_size=kernel_size, padding=(kernel_size - 1) // 2)
        self.pa = SimplifiedScaledDotProductAttention(H * W, h=1)

    def forward(self, x):
        bs, c, h, w = x.shape
        y = self.cnn(x)
        y = y.view(bs, c, -1)  # bs, c, h*w
        y = self.pa(y, y, y)  # bs, c, h*w
        return y


class DAModule(nn.Module):

    def __init__(self, d_model=512, kernel_size=3, H=7, W=7):
        super().__init__()
        self.position_attention_module = PositionAttentionModule(d_model=512, kernel_size=3, H=7, W=7)
        self.channel_attention_module = ChannelAttentionModule(d_model=512, kernel_size=3, H=7, W=7)

    def forward(self, input):
        bs, c, h, w = input.shape
        p_out = self.position_attention_module(input)
        c_out = self.channel_attention_module(input)
        p_out = p_out.permute(0, 2, 1).view(bs, c, h, w)
        c_out = c_out.view(bs, c, h, w)
        return p_out + c_out
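
A usage sketch, assuming the two model.attention imports above resolve (i.e. the snippet runs inside a project layout that provides ScaledDotProductAttention and SimplifiedScaledDotProductAttention). Note that the H=7, W=7 defaults tie ChannelAttentionModule to a 7x7 feature map.

if __name__ == '__main__':
    x = torch.randn(2, 512, 7, 7)         # spatial size must match the H=7, W=7 defaults
    da = DAModule(d_model=512, kernel_size=3, H=7, W=7)
    y = da(x)
    print(y.shape)                        # torch.Size([2, 512, 7, 7])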

5. EPSA (Efficient Pyramid Squeeze Attention)

EPSA extracts multi-scale features with grouped convolutions of different kernel sizes and reweights each branch with SE-style channel attention. Paper: paper link

The structure is shown in the figure below:

The code is as follows (code link):

import torch
import torch.nn as nn


class SEWeightModule(nn.Module):

    def __init__(self, channels, reduction=16):
        super(SEWeightModule, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1, padding=0)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1, padding=0)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        out = self.avg_pool(x)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.fc2(out)
        weight = self.sigmoid(out)
        return weight


def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1, groups=1):
    """standard convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
                     padding=padding, dilation=dilation, groups=groups, bias=False)


def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)


class PSAModule(nn.Module):

    def __init__(self, inplans, planes, conv_kernels=[3, 5, 7, 9], stride=1, conv_groups=[1, 4, 8, 16]):
        super(PSAModule, self).__init__()
        self.conv_1 = conv(inplans, planes // 4, kernel_size=conv_kernels[0], padding=conv_kernels[0] // 2,
                           stride=stride, groups=conv_groups[0])
        self.conv_2 = conv(inplans, planes // 4, kernel_size=conv_kernels[1], padding=conv_kernels[1] // 2,
                           stride=stride, groups=conv_groups[1])
        self.conv_3 = conv(inplans, planes // 4, kernel_size=conv_kernels[2], padding=conv_kernels[2] // 2,
                           stride=stride, groups=conv_groups[2])
        self.conv_4 = conv(inplans, planes // 4, kernel_size=conv_kernels[3], padding=conv_kernels[3] // 2,
                           stride=stride, groups=conv_groups[3])
        self.se = SEWeightModule(planes // 4)
        self.split_channel = planes // 4
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        batch_size = x.shape[0]
        x1 = self.conv_1(x)
        x2 = self.conv_2(x)
        x3 = self.conv_3(x)
        x4 = self.conv_4(x)

        feats = torch.cat((x1, x2, x3, x4), dim=1)
        feats = feats.view(batch_size, 4, self.split_channel, feats.shape[2], feats.shape[3])

        x1_se = self.se(x1)
        x2_se = self.se(x2)
        x3_se = self.se(x3)
        x4_se = self.se(x4)

        x_se = torch.cat((x1_se, x2_se, x3_se, x4_se), dim=1)
        attention_vectors = x_se.view(batch_size, 4, self.split_channel, 1, 1)
        attention_vectors = self.softmax(attention_vectors)
        feats_weight = feats * attention_vectors
        for i in range(4):
            x_se_weight_fp = feats_weight[:, i, :, :]
            if i == 0:
                out = x_se_weight_fp
            else:
                out = torch.cat((x_se_weight_fp, out), 1)

        return out
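
A short usage sketch with example sizes: PSAModule splits the output channels into four branches with different kernel sizes and group counts, so with the default conv_groups the channel counts have to line up (inplans and planes // 4 divisible by each group count; 64 works).

if __name__ == '__main__':
    x = torch.randn(2, 64, 56, 56)         # example input: (batch, channel, H, W)
    psa = PSAModule(inplans=64, planes=64)  # each branch produces 64 // 4 = 16 channels
    y = psa(x)
    print(y.shape)                         # torch.Size([2, 64, 56, 56])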
