Cat vs. Dog Image Classification Based on Deep Learning
Table of Contents
- 0 Preface
- 1 Project Background
- 2 Cat vs. Dog Classification with a CNN
- 3 Dataset Preparation
- 4 Building the Neural Network
- 5 Building the TensorFlow Computation Graph
- 6 Training and Testing the Model
- 7 Prediction Results
- 8 Final Notes
0 Preface
🔥 From our series of high-quality competition projects, today we are sharing:
🚩 **Cat vs. Dog Classification Based on Deep Learning**
This project is fairly novel and well suited as a competition topic; your senior classmate highly recommends it!
🥇 Here is an overall rating for the topic (each item out of 5):
- Difficulty: 3
- Workload: 3
- Novelty: 3
🧿 More materials and project sharing:
https://gitee.com/dancheng-senior/postgraduate
1 Project Background
One of the classic examples of deep-learning image classification is the "cats vs. dogs" problem. Cats and dogs differ visibly in build, limbs, face, fur, and so on, so the human eye tells them apart easily. How, then, can a machine learn to do the same? This is where convolutional neural networks come in.
The main goal of this project is to build a system that can recognize images of cats and dogs: it analyzes an input image and predicts its class. The resulting model can be deployed to a website or a mobile device as needed. Our central aim is for the model to learn the distinctive features of cats and dogs; once training is complete, it can distinguish between cat and dog images.
2 Cat vs. Dog Classification with a CNN
A convolutional neural network (CNN) is an algorithm that takes an image as input and learns weights and biases over the image's features so that different classes can be told apart. The network is trained on batches of images, each carrying a label that identifies the image's true class (here, cat or dog). A batch may contain anywhere from a few dozen to several hundred images.
For each image, the network's prediction is compared with the corresponding label, and the distance between the predictions and the ground truth is evaluated over the whole batch. The network parameters are then adjusted to minimize this distance, improving the network's predictive power. The same procedure is repeated batch after batch.
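This batch-wise compare-and-update cycle can be sketched in NumPy. This is a toy illustration: the logits, labels, and batch size below are made up, standing in for the CNN's real outputs; the loss is the same sparse softmax cross-entropy the project uses later in its graph.

```python
import numpy as np

# Hypothetical toy batch: 4 images, 2 classes (0 = cat, 1 = dog).
# These logits are made up; in the real project they come from the CNN.
logits = np.array([[2.0, 0.5],
                   [0.3, 1.8],
                   [1.2, 1.0],
                   [0.1, 2.5]])
labels = np.array([0, 1, 0, 1])

# Softmax turns logits into per-class probabilities
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

# Cross-entropy measures the "distance" between predictions and true labels
batch_loss = -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Gradient of the loss w.r.t. the logits is probs - one_hot(labels);
# training nudges the parameters opposite this direction to shrink the loss
one_hot = np.eye(2)[labels]
grad = (probs - one_hot) / len(labels)

print('batch loss: {:.4f}'.format(batch_loss))
print('batch accuracy:', np.mean(probs.argmax(axis=1) == labels))
```

Repeating this evaluate-then-update step over many batches is exactly what the training loop in section 6 does, with the optimizer handling the parameter updates.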
3 Dataset Preparation
The cat and dog photos can be downloaded directly from the Kaggle website; unzip the archive after downloading. This is the data I downloaded:
Code:
import os
import shutil

original_data_dir = "G:/Data/Kaggle/dogcat/train"
base_dir = "G:/Data/Kaggle/dogcat/smallData"
os.makedirs(base_dir, exist_ok=True)

# Create three folders for the splits: train, validation, test
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')

# Inside train/validation/test, create cats/ and dogs/ subfolders
# for the corresponding images
train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
test_cats_dir = os.path.join(test_dir, 'cats')
test_dogs_dir = os.path.join(test_dir, 'dogs')
for d in (train_dir, validation_dir, test_dir,
          train_cats_dir, train_dogs_dir,
          validation_cats_dir, validation_dogs_dir,
          test_cats_dir, test_dogs_dir):
    os.makedirs(d, exist_ok=True)

# Copy the original cat images into the corresponding folders
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    shutil.copyfile(os.path.join(original_data_dir, fname),
                    os.path.join(train_cats_dir, fname))
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    shutil.copyfile(os.path.join(original_data_dir, fname),
                    os.path.join(validation_cats_dir, fname))
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    shutil.copyfile(os.path.join(original_data_dir, fname),
                    os.path.join(test_cats_dir, fname))

# Copy the original dog images into the corresponding folders
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    shutil.copyfile(os.path.join(original_data_dir, fname),
                    os.path.join(train_dogs_dir, fname))
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    shutil.copyfile(os.path.join(original_data_dir, fname),
                    os.path.join(validation_dogs_dir, fname))
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    shutil.copyfile(os.path.join(original_data_dir, fname),
                    os.path.join(test_dogs_dir, fname))
print('train cat images:', len(os.listdir(train_cats_dir)))
print('train dog images:', len(os.listdir(train_dogs_dir)))
print('validation cat images:', len(os.listdir(validation_cats_dir)))
print('validation dog images:', len(os.listdir(validation_dogs_dir)))
print('test cat images:', len(os.listdir(test_cats_dir)))
print('test dog images:', len(os.listdir(test_dogs_dir)))
train cat images: 1000
train dog images: 1000
validation cat images: 500
validation dog images: 500
test cat images: 500
test dog images: 500
4 Building the Neural Network
The CNN is written as follows, with code for the convolutional layers, pooling layers, and fully connected layers:
# Convolutional blocks: two 3x3 'same'-padded conv layers, then 2x2 stride-2 max pooling
conv1_1 = tf.layers.conv2d(x, 16, (3, 3), padding='same', activation=tf.nn.relu, name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1, 16, (3, 3), padding='same', activation=tf.nn.relu, name='conv1_2')
pool1 = tf.layers.max_pooling2d(conv1_2, (2, 2), (2, 2), name='pool1')

conv2_1 = tf.layers.conv2d(pool1, 32, (3, 3), padding='same', activation=tf.nn.relu, name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3), padding='same', activation=tf.nn.relu, name='conv2_2')
pool2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')

conv3_1 = tf.layers.conv2d(pool2, 64, (3, 3), padding='same', activation=tf.nn.relu, name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1, 64, (3, 3), padding='same', activation=tf.nn.relu, name='conv3_2')
pool3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')

conv4_1 = tf.layers.conv2d(pool3, 128, (3, 3), padding='same', activation=tf.nn.relu, name='conv4_1')
conv4_2 = tf.layers.conv2d(conv4_1, 128, (3, 3), padding='same', activation=tf.nn.relu, name='conv4_2')
pool4 = tf.layers.max_pooling2d(conv4_2, (2, 2), (2, 2), name='pool4')

# Fully connected head with dropout; each dense layer consumes the
# dropout output of the previous one
flatten = tf.layers.flatten(pool4)
fc1 = tf.layers.dense(flatten, 512, tf.nn.relu)
fc1_dropout = tf.nn.dropout(fc1, keep_prob=keep_prob)
fc2 = tf.layers.dense(fc1_dropout, 256, tf.nn.relu)
fc2_dropout = tf.nn.dropout(fc2, keep_prob=keep_prob)
fc3 = tf.layers.dense(fc2_dropout, 2, None)  # 2 logits: cat vs. dog
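To see what size the flatten layer hands to fc1, it helps to trace the tensor shape through the stack above. The sketch below assumes IMAGE_SIZE = 128 purely for illustration (the actual constant is defined elsewhere in the project): each 'same'-padded conv keeps the spatial size, and each 2x2 stride-2 pool halves it.

```python
# Trace feature-map shapes through the four conv/pool blocks above.
# IMAGE_SIZE = 128 is an assumed example value, not the project's constant.
IMAGE_SIZE = 128
filters = [16, 32, 64, 128]  # channel counts of the four blocks

size = IMAGE_SIZE
for depth in filters:
    size //= 2  # effect of the 2x2 stride-2 max-pooling layer in this block
    print('after block with {:>3} filters: {}x{}x{}'.format(depth, size, size, depth))

flatten_units = size * size * filters[-1]
print('flatten feeds {} units into fc1'.format(flatten_units))
```

Under this assumption the spatial size shrinks 128 to 64, 32, 16, 8, so flatten produces 8 * 8 * 128 = 8192 units feeding the 512-unit fc1 layer.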
5 Building the TensorFlow Computation Graph
Next, build the TensorFlow computation graph: define the placeholders and compute the loss, predictions, accuracy, and so on.
self.x = tf.placeholder(tf.float32, [None, IMAGE_SIZE, IMAGE_SIZE, 3], 'input_data')
self.y = tf.placeholder(tf.int64, [None], 'output_data')
self.keep_prob = tf.placeholder(tf.float32)
# Feed the images into the network
fc = self.conv_net(self.x, self.keep_prob)
self.loss = tf.losses.sparse_softmax_cross_entropy(labels=self.y, logits=fc)
self.y_ = tf.nn.softmax(fc)  # probability of each class
self.predict = tf.argmax(fc, 1)
self.acc = tf.reduce_mean(tf.cast(tf.equal(self.predict, self.y), tf.float32))
self.train_op = tf.train.AdamOptimizer(LEARNING_RATE).minimize(self.loss)
self.saver = tf.train.Saver(max_to_keep=1)
The saver at the end is used to save the trained model to disk.
6 Training and Testing the Model
Next comes the training code; the model is trained for 10,000 steps.
acc_list = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAIN_STEP):
        train_data, train_label, _ = self.batch_train_data.next_batch(TRAIN_SIZE)
        eval_ops = [self.loss, self.acc, self.train_op]
        eval_ops_results = sess.run(eval_ops, feed_dict={
            self.x: train_data,
            self.y: train_label,
            self.keep_prob: 0.7})
        loss_val, train_acc = eval_ops_results[0:2]
        acc_list.append(train_acc)
        # Report training progress every 100 steps
        if (i + 1) % 100 == 0:
            acc_mean = np.mean(acc_list)
            print('step:{0},loss:{1:.5},acc:{2:.5},acc_mean:{3:.5}'.format(
                i + 1, loss_val, train_acc, acc_mean))
        # Evaluate on the test set every 1000 steps
        if (i + 1) % 1000 == 0:
            test_acc_list = []
            for j in range(TEST_STEP):
                test_data, test_label, _ = self.batch_test_data.next_batch(TRAIN_SIZE)
                acc_val = sess.run([self.acc], feed_dict={
                    self.x: test_data,
                    self.y: test_label,
                    self.keep_prob: 1.0})
                test_acc_list.append(acc_val)
            print('[Test ] step:{0}, mean_acc:{1:.5}'.format(i + 1, np.mean(test_acc_list)))
    # Save the trained model
    os.makedirs(SAVE_PATH, exist_ok=True)
    self.saver.save(sess, SAVE_PATH + 'my_model.ckpt')
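Note the two keep_prob values fed in the loop above: 0.7 during training and 1.0 at test time. A minimal NumPy sketch of the inverted dropout that tf.nn.dropout applies (toy activations, not the real layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, keep_prob):
    # Inverted dropout: zero out roughly (1 - keep_prob) of the activations
    # and scale the survivors by 1/keep_prob so the expected value is unchanged.
    if keep_prob >= 1.0:
        return x  # test time: pass activations through untouched
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

acts = np.ones((4, 8))               # toy activation matrix
train_out = dropout(acts, 0.7)       # training: keep_prob = 0.7
test_out = dropout(acts, 1.0)        # testing:  keep_prob = 1.0

print('fraction kept in training:', (train_out != 0).mean())
print('test output unchanged:', np.array_equal(test_out, acts))
```

This is why the test-time feed uses keep_prob = 1.0: dropout must be disabled when measuring accuracy, otherwise the evaluation would randomly discard activations.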
The training results are as follows:
After 10,000 training steps, the model's average test accuracy is about 0.82.
7 Prediction Results
Three images were selected for testing.
As can be seen, the model's accuracy is fairly high.
8 Final Notes
🧿 More materials and project sharing:
https://gitee.com/dancheng-senior/postgraduate