Regression Prediction | MATLAB Implementation of DBN-ELM: A Deep Belief Network Combined with an Extreme Learning Machine for Multi-Input Single-Output Regression Prediction
Contents
- Regression Prediction | MATLAB Implementation of DBN-ELM: A Deep Belief Network Combined with an Extreme Learning Machine for Multi-Input Single-Output Regression Prediction
- Prediction Results
- Basic Introduction
- Model Description
- Program Design
- References
Prediction Results
Basic Introduction
1. MATLAB implementation of DBN-ELM: a deep belief network combined with an extreme learning machine for multi-input single-output regression prediction.
2. Multi-input single-output regression prediction.
3. The deep belief network (DBN, Deep Belief Nets) is a type of neural network. It can be used for unsupervised learning, where it behaves much like an autoencoder, or for supervised learning, where it serves as a classifier. A DBN consists of several layers of neurons, and its building block is the restricted Boltzmann machine (RBM).
An RBM is a neural perceptron made up of one visible layer and one hidden layer, with full bidirectional connections between the visible and hidden neurons. Compared with the general Boltzmann machine, the restricted Boltzmann machine adds the "restriction" that there are no connections within a layer. RBMs can be used for dimensionality reduction (with a small hidden layer), feature learning (the hidden-layer output is the feature), deep belief networks (several RBMs stacked together), and so on; a minimal stacking sketch follows this list.
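To make the stacking idea concrete, here is a minimal MATLAB sketch of greedy layer-wise DBN pretraining. The toy data, layer sizes, learning rate, and epoch count are illustrative assumptions, not values from the source; each layer is a small RBM trained with CD-1, and its hidden probabilities become the input of the next layer.

% Hypothetical greedy layer-wise stacking sketch (illustrative values, not the source code)
rng(0);
X = rand(200, 8);                        % toy data: 200 samples, 8 features
layersizes = [20 10];                    % assumed hidden-layer sizes of two stacked RBMs
W = cell(1, numel(layersizes));
B = cell(1, numel(layersizes));
input = X;
for k = 1:numel(layersizes)
    [n, d] = size(input);
    nh = layersizes(k);
    w = 0.1*randn(d, nh); hb = zeros(1, nh); vb = zeros(1, d);
    for epoch = 1:10                     % short CD-1 pretraining per layer
        hp = 1./(1 + exp(-input*w - repmat(hb, n, 1)));  % positive-phase hidden probabilities
        hs = hp > rand(n, nh);                           % sample binary hidden states
        vr = 1./(1 + exp(-hs*w' - repmat(vb, n, 1)));    % reconstruct the visible units
        hr = 1./(1 + exp(-vr*w - repmat(hb, n, 1)));     % negative-phase hidden probabilities
        w  = w  + 0.05*(input'*hp - vr'*hr)/n;           % CD-1 weight update
        hb = hb + 0.05*(sum(hp) - sum(hr))/n;            % hidden-bias update
        vb = vb + 0.05*(sum(input) - sum(vr))/n;         % visible-bias update
    end
    W{k} = w; B{k} = hb;
    input = 1./(1 + exp(-input*w - repmat(hb, n, 1)));   % features fed to the next layer
end
% "input" now holds the top-layer DBN features for the whole training set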
Model Description
The restricted Boltzmann machine (RBM) is a stochastic generative neural network. It is essentially an undirected graphical model consisting of one layer of stochastic visible neurons and one layer of stochastic hidden neurons. Connections exist only between the hidden and visible layers; there are no connections among the visible neurons or among the hidden neurons. The hidden neurons are usually binary and follow a Bernoulli distribution, while the visible neurons may take binary or real values depending on the type of input.
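Because connections exist only between the two layers, the conditional distributions factorize over units. For reference, the standard conditionals of a binary RBM (a textbook result, not quoted from the source) are

$$P(h_j = 1 \mid v) = \sigma\Bigl(b_j + \sum_i v_i w_{ij}\Bigr), \qquad P(v_i = 1 \mid h) = \sigma\Bigl(a_i + \sum_j h_j w_{ij}\Bigr), \qquad \sigma(x) = \frac{1}{1 + e^{-x}},$$

where $w_{ij}$ is the weight between visible unit $i$ and hidden unit $j$, and $a_i$, $b_j$ are the visible and hidden biases. The 1./(1 + exp(...)) expressions in the pretraining code below evaluate exactly these sigmoids.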
Program Design
- Full source code and data: send the blogger a private message with the reply "MATLAB實現(xiàn)DBN-ELM深度置信網(wǎng)絡(luò)結(jié)合極限學習機多輸入單輸出回歸預測".
% RBM pretraining program
% This program trains a Restricted Boltzmann Machine in which
% visible, binary, stochastic pixels are connected to
% hidden, binary, stochastic feature detectors using symmetrically
% weighted connections. Learning is done with 1-step Contrastive Divergence.
% The program assumes that the following variables are set externally:
% maxepoch  -- maximum number of epochs
% numhid    -- number of hidden units
% batchdata -- the data that is divided into batches (numcases numdims numbatches)
% restart   -- set to 1 if learning starts from beginning

% Parameter settings
epsilonw  = 0.01;      % learning rate for weights
epsilonvb = 0.01;      % learning rate for biases of visible units
epsilonhb = 0.01;      % learning rate for biases of hidden units
weightcost = 0.0008;   % weight-decay coefficient
initialmomentum = 0.5; % initial momentum term
finalmomentum   = 0.9; % final momentum term

[numcases, numdims, numbatches] = size(batchdata);

if restart == 1
  restart = 0;
  epoch = 1;
  % Initializing symmetric weights and biases
  vishid = 0.1*randn(numdims, numhid);   % visible-to-hidden weights
  hidbiases = zeros(1, numhid);          % hidden biases initialized to 0
  visbiases = zeros(1, numdims);         % visible biases initialized to 0
  poshidprobs = zeros(numcases, numhid); % hidden-layer output probabilities for one mini-batch (positive phase)
  neghidprobs = zeros(numcases, numhid);
  posprods = zeros(numdims, numhid);
  negprods = zeros(numdims, numhid);
  vishidinc = zeros(numdims, numhid);
  hidbiasinc = zeros(1, numhid);
  visbiasinc = zeros(1, numdims);
  batchposhidprobs = zeros(numcases, numhid, numbatches); % hidden-layer probabilities for the whole data set
end

for epoch = epoch:maxepoch
  %fprintf(1, 'epoch %d\r', epoch);
  errsum = 0;
  for batch = 1:numbatches % every epoch sweeps over all mini-batches
    %fprintf(1, 'epoch %d batch %d\r', epoch, batch);

    %%%%%%%%% START OF POSITIVE PHASE %%%%%%%%%
    data = batchdata(:,:,batch); % one mini-batch per step; each row is one sample
    % (the data here are not binary; strictly speaking they should be binarized)
    poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases, numcases, 1))); % hidden-unit probabilities via the sigmoid
    % positive-phase statistics
    batchposhidprobs(:,:,batch) = poshidprobs;
    posprods = data' * poshidprobs; % product of visible and hidden vectors: positive divergence statistic
    poshidact = sum(poshidprobs);   % summed over samples, used for the hidden-bias update
    posvisact = sum(data);          % summed over samples, used for the visible-bias update
    % (if a mini-batch contains only one sample, these bias statistics degenerate and can affect the pretraining result)
    %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%
    poshidstates = poshidprobs > rand(numcases, numhid);
    % binarize the hidden activations poshidprobs to 0/1 by sampling: rand(m,n) produces an
    % m-by-n matrix of uniform random numbers; where poshidprobs exceeds the corresponding
    % random value the state is set to 1, otherwise 0

    %%%%%%%%% START OF NEGATIVE PHASE %%%%%%%%%
    negdata = 1./(1 + exp(-poshidstates*vishid' - repmat(visbiases, numcases, 1))); % reconstruct the visible units
    neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases, numcases, 1)));   % hidden-unit probabilities from the reconstruction
    negprods = negdata' * neghidprobs; % negative divergence statistic
    neghidact = sum(neghidprobs);
    negvisact = sum(negdata);
    %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%
    err = sum(sum((data - negdata).^2)); % reconstruction error between the original and reconstructed data
    errsum = err + errsum;

    if epoch > 5
      momentum = finalmomentum; % the first five epochs use the initial momentum, later epochs the final momentum
    else
      momentum = initialmomentum;
    end

    %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%
    vishidinc = momentum*vishidinc + ...
        epsilonw*((posprods - negprods)/numcases - weightcost*vishid);               % weight increment
    visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact - negvisact); % visible-bias increment
    hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact - neghidact); % hidden-bias increment
    vishid = vishid + vishidinc;        % update weights
    visbiases = visbiases + visbiasinc; % update visible biases
    hidbiases = hidbiases + hidbiasinc; % update hidden biases
    %%%%%%%%% END OF UPDATES %%%%%%%%%
  end
  % display the training-set reconstruction error after each epoch
  %fprintf(1, 'epoch %4i error %6.1f \n', epoch, errsum);
end
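The excerpt above covers only the RBM pretraining stage. For the ELM stage named in the title, a minimal hedged sketch follows (all names, sizes, and the wiring of DBN features into the ELM are illustrative assumptions, not the blogger's code): the ELM input weights and biases are drawn at random and left untrained, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse.

% Hypothetical ELM readout on top of DBN features (a sketch, not the source code)
rng(0);
F = rand(200, 10);          % stand-in for the top-layer DBN features of 200 training samples
T = rand(200, 1);           % single-output regression targets
nHidden = 50;               % assumed number of ELM hidden nodes
Win = rand(size(F,2), nHidden)*2 - 1;  % random input weights, fixed (not trained)
bias = rand(1, nHidden);               % random hidden biases, fixed
H = 1./(1 + exp(-(F*Win + repmat(bias, size(F,1), 1)))); % ELM hidden-layer output matrix
beta = pinv(H) * T;         % output weights in closed form via the Moore-Penrose pseudoinverse
Ypred = H * beta;           % fitted outputs on the training features
fprintf('training RMSE: %.4f\n', sqrt(mean((Ypred - T).^2)));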
References
[1] https://blog.csdn.net/article/details/126195343?spm=1001.2014.3001.5501
[2] https://blog.csdn.net/article/details/126189867?spm=1001.2014.3001.5501