免費(fèi)的網(wǎng)站在線客服軟件營(yíng)銷案例100例小故事及感悟
目錄
測(cè)試TensorFlow是否支持GPU:
自動(dòng)求導(dǎo):
?數(shù)據(jù)預(yù)處理 之 統(tǒng)一數(shù)組維度
?定義變量和常量
?訓(xùn)練模型的時(shí)候設(shè)備變量的設(shè)置
生成隨機(jī)數(shù)據(jù)
交叉熵?fù)p失CE和均方誤差函數(shù)MSE?
全連接Dense層
維度變換reshape
增加或減小維度
數(shù)組合并
廣播機(jī)制:
簡(jiǎn)單范數(shù)運(yùn)算
?矩陣轉(zhuǎn)置
A framework is just a tool for writing code. PyTorch, TensorFlow, MXNet, Paddle, MindSpore and the rest differ only slightly as programming interfaces; where they really differ is in how they compile, how they execute, and how fast they compute. I'm taking a light pass through this framework so that reading code on GitHub gets easier.
我的環(huán)境:
google colab的T4 GPU
First up:

Testing whether TensorFlow supports the GPU:

Open the tf.config module; it provides list_physical_devices("GPU") (the code below calls the closely related list_logical_devices):
import os
import tensorflow as tf

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # silence TF's C++ log output
os.system("clear")

print("GPU list:", tf.config.list_logical_devices("GPU"))
運(yùn)行結(jié)果:
GPU列表: [LogicalDevice(name='/device:GPU:0', device_type='GPU')]
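Since the prose above mentions list_physical_devices, here is the same check with the physical-device variant (a minimal sketch, same environment assumed):

print("physical GPUs:", tf.config.list_physical_devices("GPU"))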
檢測(cè)運(yùn)行時(shí)間:
import timeit

def run():
    n = 1000

    # Build the matrices on the CPU
    with tf.device('/cpu:0'):
        cpu_a = tf.random.normal([n, n])
        cpu_b = tf.random.normal([n, n])
        print(cpu_a.device, cpu_b.device)

    # Build the matrices on the GPU
    with tf.device('/gpu:0'):
        gpu_a = tf.random.normal([n, n])
        gpu_b = tf.random.normal([n, n])
        print(gpu_a.device, gpu_b.device)

    def cpu_run():
        with tf.device('/cpu:0'):
            c = tf.matmul(cpu_a, cpu_b)
        return c

    def gpu_run():
        with tf.device('/gpu:0'):  # the original had '/cpu:0' here, which hid the speedup
            c = tf.matmul(gpu_a, gpu_b)
        return c

    number = 1000
    print("First run:")
    cpu_time = timeit.timeit(cpu_run, number=number)
    gpu_time = timeit.timeit(gpu_run, number=number)
    print("CPU time:", cpu_time)
    print("GPU time:", gpu_time)

    print("Second run:")
    cpu_time = timeit.timeit(cpu_run, number=number)
    gpu_time = timeit.timeit(gpu_run, number=number)
    print("CPU time:", cpu_time)
    print("GPU time:", gpu_time)

run()
?可能T4顯卡不太好吧...體現(xiàn)不出太大的效果,也可能是GPU在公用或者還沒熱身。
自動(dòng)求導(dǎo):
Formula:

f(x) = x^n

Derivative:

f'(x) = n * x^(n-1)

Example: y = x^2, so dy/dx = 2x^(2-1) = 2x.
x = tf.constant(10.)  # define a constant tensor
with tf.GradientTape() as tape:  # open TF's autodiff tape
    tape.watch([x])  # explicitly watch x: constants aren't tracked by default
    y = x ** 2
dy_dx = tape.gradient(y, x)
print(dy_dx)
?運(yùn)行結(jié)果:tf.Tensor(20.0, shape=(), dtype=float32)
?數(shù)據(jù)預(yù)處理 之 統(tǒng)一數(shù)組維度
????????對(duì)拿到的臟數(shù)據(jù)進(jìn)行預(yù)處理的時(shí)候需要進(jìn)行統(tǒng)一數(shù)組維度操作,使用tensorflow.keras.preprocessing.sequence 底下的pad_sequences函數(shù),比如下面有三個(gè)不等長(zhǎng)的數(shù)組,我們需要對(duì)數(shù)據(jù)處理成相同的長(zhǎng)度,可以進(jìn)行左邊或者補(bǔ)個(gè)數(shù)
import numpy as np
import pprint as pp  # prettier printing
from tensorflow.keras.preprocessing.sequence import pad_sequences

comment1 = [1,2,3,4]
comment2 = [1,2,3,4,5,6,7]
comment3 = [1,2,3,4,5,6,7,8,9,10]

x_train = np.array([comment1, comment2, comment3], dtype=object)
print(), pp.pprint(x_train)

# Pad with 0 on the left to unify the lengths
x_test = pad_sequences(x_train)
print(), pp.pprint(x_test)

# Pad with 255 on the left
x_test = pad_sequences(x_train, value=255)
print(), pp.pprint(x_test)

# Pad with 0 on the right
x_test = pad_sequences(x_train, padding="post")
print(), pp.pprint(x_test)

# Truncate to length 3, keeping the last 3 entries
x_test = pad_sequences(x_train, maxlen=3)
print(), pp.pprint(x_test)

# Truncate to length 3, keeping the first 3 entries
x_test = pad_sequences(x_train, maxlen=3, truncating="post")
print(), pp.pprint(x_test)
array([list([1, 2, 3, 4]), list([1, 2, 3, 4, 5, 6, 7]),
       list([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])], dtype=object)

array([[ 0,  0,  0,  0,  0,  0,  1,  2,  3,  4],
       [ 0,  0,  0,  1,  2,  3,  4,  5,  6,  7],
       [ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10]], dtype=int32)

array([[255, 255, 255, 255, 255, 255,   1,   2,   3,   4],
       [255, 255, 255,   1,   2,   3,   4,   5,   6,   7],
       [  1,   2,   3,   4,   5,   6,   7,   8,   9,  10]], dtype=int32)

array([[ 1,  2,  3,  4,  0,  0,  0,  0,  0,  0],
       [ 1,  2,  3,  4,  5,  6,  7,  0,  0,  0],
       [ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10]], dtype=int32)

array([[ 2,  3,  4],
       [ 5,  6,  7],
       [ 8,  9, 10]], dtype=int32)

array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]], dtype=int32)
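pad_sequences also accepts a dtype argument (the default is int32); a small sketch in case the padded result should be floats:

x_test = pad_sequences(x_train, dtype="float32")
print(), pp.pprint(x_test)  # same shapes as above, but float32 entries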
?定義變量和常量
tf中變量定義為Variable,常量Tensor(這里懂了吧,pytorch里面都是Tensor,但是tf里面的Tensor代表向量其實(shí)也是可變的),要注意的是Variable數(shù)組和變量數(shù)值之間的加減乘除可以進(jìn)行廣播機(jī)制的運(yùn)算,而且常量和變量之間也是可以相加的。
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.system("cls")

import tensorflow as tf

################################
# Define variables
a = tf.Variable(1)
b = tf.Variable(1.)
c = tf.Variable([1.])
d = tf.Variable(1., dtype=tf.float32)

print("-" * 40)
print(a)
print(b)
print(c)
print(d)

# print(a+b)  # error: dtype mismatch (int32 vs float32)
print(b + c)     # note: the result is a Tensor
print(b + c[0])  # note: the result is a Tensor

################################
# Define Tensors (constants)
x1 = tf.constant(1)
x2 = tf.constant(1.)
x3 = tf.constant([1.])
x4 = tf.constant(1, dtype=tf.float32)

print("-" * 40)
print(x1)
print(x2)
print(x3)
print(x4)
print(x2 + x3[0])
運(yùn)行結(jié)果:
----------------------------------------
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=1>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
<tf.Variable 'Variable:0' shape=(1,) dtype=float32, numpy=array([1.], dtype=float32)>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
tf.Tensor([2.], shape=(1,), dtype=float32)
tf.Tensor(2.0, shape=(), dtype=float32)
----------------------------------------
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor([1.], shape=(1,), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(2.0, shape=(), dtype=float32)
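To see the mutable/immutable split in action, a minimal sketch: a Variable can be updated in place with assign, while a Tensor cannot and every op simply produces a new Tensor:

v = tf.Variable(1.)
v.assign(5.)       # in-place update, allowed on a Variable
v.assign_add(1.)   # v is now 6.0
print(v)

t = tf.constant(1.)
# t.assign(5.)     # AttributeError: a Tensor has no assign
t = t + 5.         # ops return a new Tensor instead
print(t)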
?訓(xùn)練模型的時(shí)候設(shè)備變量的設(shè)置
Using Variable:

If you define an integer it lands on the CPU by default, while a float lands on the GPU, but in tf 2.0 you don't need to care about a variable's placement: operations run on the GPU as long as the machine has one.

identity can be used to get a copy of a value placed on the compute device. In 1.x you might have needed two versions of the code for different devices, but in 2.0 that concern goes away.
################################
# Check devices after defining Variables
a = tf.Variable(1)
b = tf.Variable(10.)

print("-" * 40)
print("a.device:", a.device, a)  # CPU
print("b.device:", b.device, b)  # GPU

################################
# Check devices after defining Tensors
x1 = tf.constant(100)
x2 = tf.constant(1000.)

print("-" * 40)
print("x1.device:", x1.device, x1)  # CPU
print("x2.device:", x2.device, x2)  # CPU

################################
print("-" * 40)

# CPU + CPU
ax1 = a + x1
print("ax1.device:", ax1.device, ax1)  # GPU

# CPU + GPU
bx2 = b + x2
print("bx2.device:", bx2.device, bx2)  # GPU

################################
# Copy values with identity; the copies land on the GPU
gpu_a = tf.identity(a)
gpu_x1 = tf.identity(x1)

print("-" * 40)
print("gpu_a.device:", gpu_a.device, gpu_a)
print("gpu_x1.device:", gpu_x1.device, gpu_x1)
生成隨機(jī)數(shù)據(jù)
其實(shí)tf和numpy在創(chuàng)建上是大同小異的,除了變量類型不一樣。
a = np.ones(12)
print(a)

a = tf.convert_to_tensor(a)  # no real need to convert; define directly as below

a = tf.zeros(12)
a = tf.zeros([4,3])
a = tf.zeros([4,6,3])
b = tf.zeros_like(a)
a = tf.ones(12)
a = tf.ones_like(b)
a = tf.fill([3,2], 10.)
a = tf.random.normal([12])
a = tf.random.normal([4,3])
a = tf.random.truncated_normal([3,2])
a = tf.random.uniform([4,3], minval=0, maxval=10)
a = tf.random.uniform([12], minval=0, maxval=10, dtype=tf.int32)
a = tf.range(12, dtype=tf.int32)  # note: tf.range takes a scalar limit, not a list
b = tf.random.shuffle(a)
print(b)
代碼我就不貼了。
交叉熵?fù)p失CE和均方誤差函數(shù)MSE?
假設(shè)batch=1
直接看怎么用,以圖像分類為例,輸出是類別個(gè)數(shù),選擇最大神經(jīng)原的下標(biāo),然后進(jìn)行獨(dú)熱編碼把它變成[1,0,0,0,...],然后就可以與softmax之后的輸出概率值之間做交叉熵?fù)p失。
rows = 1
out = tf.nn.softmax(tf.random.uniform([rows, 2]), axis=1)
print("out:", out)
print("prediction:", tf.math.argmax(out, axis=1), "\n")

y = tf.range(rows)
print("y:", y, "\n")

y = tf.one_hot(y, depth=2)  # depth must match the number of classes in out
print("y_one_hot:", y, "\n")

loss = tf.keras.losses.binary_crossentropy(y, out)
# loss = tf.keras.losses.mse(y, out)
print("row loss", loss, "\n")
假設(shè)batch=2
rows = 2
out = tf.random.uniform([rows, 1])
print("prediction:", out, "\n")

y = tf.constant([1])
print("y:", y, "\n")

# y = tf.one_hot(y, depth=1)
print("y_one_hot:", y, "\n")

loss = tf.keras.losses.mse(y, out)
print("row loss", loss, "\n")

loss = tf.reduce_mean(loss)
print("overall loss:", loss, "\n")
總損失就是一個(gè)batch的損失求均值。
全連接Dense層
###################################################
# Dense: y = wx + b
rows = 1
net = tf.keras.layers.Dense(1)  # one layer with a single neuron
net.build((rows, 1))            # build: each training sample has 1 feature
print("net.w:", net.kernel)     # the weights
print("net.b:", net.bias)       # one bias per neuron
假設(shè)有一個(gè)特征輸出,如果講bulid參數(shù)改成(rows,3),那么神經(jīng)元個(gè)數(shù)的w參數(shù)輸出就變成了(3,1)大小的數(shù)據(jù)。
維度變換reshape
跟numpy一毛一樣不用看了
# 10張彩色圖片
a = tf.random.normal([10,28,28,3])
print(a)
print(a.shape) # 形狀
print(a.ndim) # 維度b = tf.reshape(a, [10, 784, 3])
print(b)
print(b.shape) # 形狀
print(b.ndim) # 維度c = tf.reshape(a, [10, -1, 3])
print(c)
print(c.shape) # 形狀
print(c.ndim) # 維度d = tf.reshape(a, [10, 784*3])
print(d)
print(d.shape) # 形狀
print(d.ndim) # 維度e = tf.reshape(a, [10, -1])
print(e)
print(e.shape) # 形狀
print(e.ndim) # 維度
增加或減小維度
a = tf.range(24)  # tf.range takes a scalar limit
# a = tf.reshape(a, [4, 6])
print(a)
print(a.shape)
print(a.ndim)

# Add a dimension: [1,2,3] -> [[1,2,3]]
b = tf.expand_dims(a, axis=0)
print(b)
print(b.shape)
print(b.ndim)

# Remove a dimension: [[1,2,3]] -> [1,2,3]
c = tf.squeeze(b, axis=0)
print(c)
print(c.shape)
print(c.ndim)
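expand_dims also accepts negative axes, counted from the end; a small sketch:

d = tf.expand_dims(a, axis=-1)  # shape (24,) -> (24, 1)
print(d.shape)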
數(shù)組合并
真t和numpy一毛一樣
####################################################
# Array concatenation
# tf.concat
a = tf.zeros([2,4,3])
b = tf.ones([2,4,3])

print(a)
print(b)

# Concatenate along axis 0 -> (4, 4, 3)
c = tf.concat([a,b], axis=0)
print(c)

# Concatenate along axis 1 -> (2, 8, 3)
c = tf.concat([a,b], axis=1)
print(c)

# Concatenate along axis 2 -> (2, 4, 6)
c = tf.concat([a,b], axis=2)
print(c)

# Add a dimension, e.g. packing several images into one big array -> (2, 2, 4, 3)
c = tf.stack([a,b], axis=0)
print(c)

# Reduce rank, splitting the array
m, n = tf.unstack(c, axis=0)
print(m)
print(n)
廣播機(jī)制:
a = tf.constant([1, 2, 3])
print(a)

x = 1
print(a + x)

b = tf.broadcast_to(a, [3, 3])
print(b)

x = 10
print(b * x)
運(yùn)行結(jié)果:
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
tf.Tensor([2 3 4], shape=(3,), dtype=int32)
tf.Tensor(
[[1 2 3]
 [1 2 3]
 [1 2 3]], shape=(3, 3), dtype=int32)
tf.Tensor(
[[10 20 30]
 [10 20 30]
 [10 20 30]], shape=(3, 3), dtype=int32)
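The rule is the same as numpy's: shapes are compared right-to-left and size-1 dimensions stretch to match. A small sketch where a column meets a row:

col = tf.reshape(tf.constant([1, 2, 3]), [3, 1])  # shape (3, 1)
row = tf.constant([[10, 20, 30]])                 # shape (1, 3)
print(col + row)                                  # broadcasts to (3, 3)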
簡(jiǎn)單范數(shù)運(yùn)算
def log(prefix="", val=""):
    print(prefix, val, "\n")

# L2 norm: square, sum, then square root
a = tf.fill([1, 2], value=2.)
log("a:", a)

b = tf.norm(a)  # compute the norm of a
log("L2 norm of a:", b)

# Verify by hand
a = tf.square(a)
log("a squared:", a)

a = tf.reduce_sum(a)
log("sum of a squared:", a)

b = tf.sqrt(a)
log("sqrt of the sum:", b)

# a = tf.range(10, dtype=tf.float32)
?矩陣轉(zhuǎn)置
#####################################################
# Matrix transpose
a = tf.range(12)  # scalar limit, not a list
a = tf.reshape(a, [4, 3])
print(a)

b = tf.transpose(a)  # swap rows and columns
print(b)

# One 4x4-pixel color image
a = tf.random.uniform([4, 4, 3], minval=0, maxval=10, dtype=tf.int32)
print(a)

# Specify the permutation of the axes
b = tf.transpose(a, perm=[0, 2, 1])
print(b)

# Transpose b back again
c = tf.transpose(b, perm=[0, 2, 1])
print(c)
今天先敲到這里,這里推薦兩個(gè)TensorFlow學(xué)習(xí)教程:
? ? ? ? [1]TensorFlow2.0官方教程https://www.tensorflow.org/tutorials/quickstart/beginner?hl=zh-cn
? ? ? ? [2]小馬哥