Implemented functionality
- Complete a set of Python array operations in the ndarray.py file.
- The NDArray we implement is backed by a flat one-dimensional vector; a few extra attributes (such as shape and strides) describe how that flat array is laid out across dimensions. The low-level computations (e.g. addition, matrix multiplication) are written in C++ to reach the highest possible speed, while operations such as transpose, broadcasting, and extracting a sub-matrix can all be implemented on the Python side simply by adjusting the array's attributes (e.g. strides). A minimal sketch of this index arithmetic follows below.
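Here is that sketch in plain Python. This is illustrative only, not the actual ndarray.py code; compact_strides and flat_index simply mirror the names used in these notes:

```python
# Minimal sketch of how shape / strides / offset describe a flat 1-D buffer.
def compact_strides(shape):
    """Row-major ("compact") strides, counted in elements, for a given shape."""
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return tuple(reversed(strides))

def flat_index(idxs, strides, offset=0):
    """Map a multi-dimensional index to a position in the flat storage."""
    return offset + sum(i * s for i, s in zip(idxs, strides))

shape = (2, 3)
strides = compact_strides(shape)        # (3, 1)
print(flat_index((1, 2), strides))      # element [1, 2] lives at flat position 5
```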
- Python array operations implemented: none of the four operations below reallocates memory; each one reinterprets the original array by modifying shape/strides/etc. (except the fourth, __getitem__(), which does not change the array's shape). A small numpy illustration of these four operations appears a little further below.
- reshape(new_shape): calls as_strided(new_shape, strides), which in turn calls make(new_shape, strides, device=self.device, handle=self._handle). (as_strided additionally checks that the length of shape equals the length of strides; the stride values are computed by self.compact_strides(new_shape).)
- permute(new_axes): reorders the array's dimensions. Reorder self.strides and self.shape according to new_axes to obtain new_shape and new_strides, then call self.as_strided(new_shape, new_strides).
- broadcast_to(): for every dimension whose original size is 1, set its stride to 0, then call as_strided(new_shape, new_strides), which calls make(new_shape, new_strides, device=self.device, handle=self._handle). In other words, re-make the array with a new shape and strides while keeping the same memory (the handle is unchanged). As for how the zero strides are actually consumed, I suspect lower-level code handles it, but I have not found where yet.
- __getitem__(idxs): this is a very good example for understanding the NDArray we implement. As noted above, the underlying storage is a one-dimensional vector, and the shape, strides, and offset attributes describe its dimensional layout. Here we take a sub-matrix of the original array without changing its memory location: a series of calculations produces new_offset, new_shape, and new_strides, which reinterpret the original large matrix as a smaller one. Illustrated below.
This also naturally illustrates what a compact matrix is versus a non-compact one: the original matrix above is compact, while the blue-shaded matrix obtained by slicing is non-compact.
This raises the next question: how do we select a sub-matrix of the original matrix and modify its contents in place? Almost every setitem operation in this course first calls .compact(), which copies the sub-matrix into a fresh block of memory; that is clearly not what __setitem__() wants, so some extra machinery is needed.
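The four view operations above can be mimicked with numpy's stride tricks, which makes the "views, not copies" idea concrete. This is only an analogy for illustration (numpy counts strides in bytes, whereas the NDArray here counts elements), not the actual ndarray.py implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(6, dtype=np.float32).reshape(2, 3)   # compact, byte strides (12, 4)

# reshape: same buffer, new shape with freshly computed compact strides
r = as_strided(a, shape=(3, 2), strides=(8, 4))

# permute (transpose): reorder shape and strides together
p = as_strided(a, shape=(3, 2), strides=(4, 12))   # equals a.T

# broadcast_to: the broadcast (size-1) dimension gets stride 0
row = np.arange(3, dtype=np.float32).reshape(1, 3)
b = as_strided(row, shape=(4, 3), strides=(0, 4))  # the same row repeated 4 times

# __getitem__ (slicing): new offset plus new shape/strides, still no copy
s = a[0:2, 1:3]                  # offset of 1 element, shape (2, 2), byte strides (12, 4)
assert np.shares_memory(s, a)    # a non-compact view over the original buffer
```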
- 【CPU】Compact and setitem
The implementation here essentially follows one principle: out[cnt++] = in[strides[0]*i + strides[1]*j + strides[2]*k];
(That expression is written for a three-dimensional matrix; the code below uses a while loop to generalize it to any number of dimensions.)
- Compact: turns the original non-compact matrix a into a compact matrix out. Using the given shape, strides, and offset (which describe the sub-matrix inside a), it pulls that sub-matrix out of a and writes each element into out. The mode below is INDEX_IN, i.e. the call is
_strided_index_setter(&a, out, shape, strides, offset, INDEX_IN);
```cpp
// For every dimension, from last to first, use that dimension's stride, the offset, and the
// loop counters to decide which element of matrix a to touch this iteration.
// In effect: use strides and offset to compute the flat 1-D index.
void _strided_index_setter(const AlignedArray* a, AlignedArray* out, std::vector<uint32_t> shape,
                           std::vector<uint32_t> strides, size_t offset, strided_index_mode mode,
                           int val = -1) {
  int depth = shape.size();
  std::vector<uint32_t> loop(depth, 0);
  int cnt = 0;
  while (true) {
    // inner loop
    int index = offset;
    for (int i = 0; i < depth; i++) {
      index += strides[i] * loop[i];
    }
    switch (mode) {
      case INDEX_OUT: out->ptr[index] = a->ptr[cnt++]; break;
      case INDEX_IN: out->ptr[cnt++] = a->ptr[index]; break;
      case SET_VAL: out->ptr[index] = val; break;
    }
    // increment
    loop[depth - 1]++;
    // carry
    int idx = depth - 1;
    while (loop[idx] == shape[idx]) {
      if (idx == 0) {
        // overflow
        return;
      }
      loop[idx--] = 0;
      loop[idx]++;
    }
  }
}
```
- EwiseSetitem(a, out, shape, strides, offset): here a is a compact matrix and out is a non-compact one; the elements of a must be written to the positions of out specified by shape, strides, and offset. Simply reuse the code above with mode set to INDEX_OUT, i.e. the call is
_strided_index_setter(&a, out, shape, strides, offset, INDEX_OUT);
- ScalarSetitem(size, val, out, shape, strides, offset): out is a non-compact sub-matrix; write val into the positions of out given by shape, strides, and offset, i.e. the call is
_strided_index_setter(nullptr, out, shape, strides, offset, SET_VAL, val);
- 【CPU】Elementwise and scalar operations
These are very simple: just iterate over every element of the matrix according to its size attribute.
- 【CPU】Reductions
Also simple, given that the underlying storage is always one-dimensional and row-major. There are two functions in this group, ReduceSum and ReduceMax; both take the input matrix a, the output matrix out, and reduce_size, where a.size = out.size * reduce_size. The code speaks for itself (a short numpy reference of this contract follows the two kernels).
- ReduceMax:
```cpp
void ReduceMax(const AlignedArray& a, AlignedArray* out, size_t reduce_size) {
  for (int i = 0; i < out->size; i++) {
    scalar_t max = a.ptr[i * reduce_size];
    for (int j = 0; j < reduce_size; j++) {
      max = std::max(max, a.ptr[i * reduce_size + j]);
    }
    out->ptr[i] = max;
  }
}
```
- ReduceSum:
```cpp
void ReduceSum(const AlignedArray& a, AlignedArray* out, size_t reduce_size) {
  for (int i = 0; i < out->size; i++) {
    scalar_t sum = 0;
    for (int j = 0; j < reduce_size; j++) {
      sum += a.ptr[i * reduce_size + j];
    }
    out->ptr[i] = sum;
  }
}
```
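For reference, the contract of these two kernels can be expressed in a couple of lines of numpy (assuming, as stated above, that a.size == out.size * reduce_size and that the reduced axis is the last, contiguous one). This is a reference sketch, not part of the backend:

```python
import numpy as np

def reduce_max_ref(a_flat, reduce_size):
    return a_flat.reshape(-1, reduce_size).max(axis=1)

def reduce_sum_ref(a_flat, reduce_size):
    return a_flat.reshape(-1, reduce_size).sum(axis=1)

a = np.arange(12, dtype=np.float32)   # out.size = 3, reduce_size = 4
print(reduce_max_ref(a, 4))           # [ 3.  7. 11.]
print(reduce_sum_ref(a, 4))           # [ 6. 22. 38.]
```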
- 【CPU】Matrix multiplication
- Naive matrix multiplication, Matmul:
```cpp
void Matmul(const AlignedArray& a, const AlignedArray& b, AlignedArray* out, uint32_t m, uint32_t n,
            uint32_t p) {
  for (int i = 0; i < m; i++) {
    for (int j = 0; j < p; j++) {
      out->ptr[i * p + j] = 0;
      for (int k = 0; k < n; k++) {
        out->ptr[i * p + j] += a.ptr[i * n + k] * b.ptr[k * p + j];
      }
    }
  }
}
```
- MatmulTiled and AlignedDot: the former works out, from the tile size, which elements (which block) of a, b, and out participate in the current step; the latter takes the blocks passed in and does the arithmetic (a tile of a times a tile of b, accumulated into a tile of out).
```cpp
void MatmulTiled(const AlignedArray& a, const AlignedArray& b, AlignedArray* out, uint32_t m,
                 uint32_t n, uint32_t p) {
  for (int i = 0; i < m * p; i++) out->ptr[i] = 0;
  for (int i = 0; i < m / TILE; i++) {
    for (int j = 0; j < p / TILE; j++) {
      for (int k = 0; k < n / TILE; k++) {
        AlignedDot(&a.ptr[i * n * TILE + k * TILE * TILE],
                   &b.ptr[k * p * TILE + j * TILE * TILE],
                   &out->ptr[i * p * TILE + j * TILE * TILE]);
      }
    }
  }
}
```
Process diagram: (see figure)
```cpp
inline void AlignedDot(const float* __restrict__ a,
                       const float* __restrict__ b,
                       float* __restrict__ out) {
  a = (const float*)__builtin_assume_aligned(a, TILE * ELEM_SIZE);
  b = (const float*)__builtin_assume_aligned(b, TILE * ELEM_SIZE);
  out = (float*)__builtin_assume_aligned(out, TILE * ELEM_SIZE);
  for (int i = 0; i < TILE; i++) {
    for (int j = 0; j < TILE; j++) {
      for (int k = 0; k < TILE; k++) {
        out[i * TILE + j] += a[i * TILE + k] * b[k * TILE + j];
      }
    }
  }
}
```
One question remains here: MatmulTiled does compute the correct starting position of each tile of a and b, but what guarantees that the next TILE*TILE consecutive elements form exactly the block shown in the figure? In principle a and b are stored contiguously in row-major order, so (taking TILE = 2) a[2], a[3], b[2], b[3] should continue along the same row rather than drop to the next one. To be answered…
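One possible answer, stated as an assumption rather than something confirmed in these notes: before MatmulTiled is ever called, the Python side re-tiles a and b (via as_strided followed by compact) so that each TILE x TILE block is stored contiguously, block after block; under that assumption the indexing in MatmulTiled lines up with the figure. A numpy sketch of such a re-tiling (tile_2d is a hypothetical helper, purely for illustration):

```python
import numpy as np

def tile_2d(a, t):
    """Reorder an (m, n) row-major matrix into (m//t, n//t, t, t) blocks,
    with every t x t block stored contiguously, block after block."""
    m, n = a.shape
    return (a.reshape(m // t, t, n // t, t)   # split both axes into (block, within-block)
             .transpose(0, 2, 1, 3)           # bring the two block axes to the front
             .copy())                         # compact: each t x t block is now contiguous

a = np.arange(16).reshape(4, 4)
print(tile_2d(a, 2)[0, 1])   # block (0, 1): rows 0-1, cols 2-3 of a -> [[2 3] [6 7]]
```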
- 【CUDA】Compact and setitem
- compact:
Illustrated below:
```cpp
// Map the given linear index back to a multi-dimensional index, i.e. find the coordinate of
// `index` along each dimension. Walk shape from back to front; cur_size is the number of flat
// elements spanned by the dimensions processed so far. index % cur_size picks out the position
// within that span, and dividing by pre_size gives the coordinate along the current dimension.
__device__ size_t index_transform(size_t index, CudaVec shape, CudaVec strides, size_t offset) {
  size_t idxs[MAX_VEC_SIZE];
  size_t cur_size, pre_size = 1;
  for (int i = shape.size - 1; i >= 0; i--) {
    cur_size = pre_size * shape.data[i];
    idxs[i] = index % cur_size / pre_size;
    pre_size = cur_size;
  }
  // With the multi-dimensional index computed above, compute the linear index into the
  // original non-compact matrix.
  size_t comp_idx = offset;
  for (int i = 0; i < shape.size; i++) comp_idx += idxs[i] * strides.data[i];
  return comp_idx;
}

__global__ void CompactKernel(const scalar_t* a, scalar_t* out, size_t size, CudaVec shape,
                              CudaVec strides, size_t offset) {
  size_t gid = blockIdx.x * blockDim.x + threadIdx.x;
  /// BEGIN YOUR SOLUTION
  if (gid < size)
    out[gid] = a[index_transform(gid, shape, strides, offset)];
  /// END YOUR SOLUTION
}

void Compact(const CudaArray& a, CudaArray* out, std::vector<uint32_t> shape,
             std::vector<uint32_t> strides, size_t offset) {
  CudaDims dim = CudaOneDim(out->size);
  CompactKernel<<<dim.grid, dim.block>>>(a.ptr, out->ptr, out->size, VecToCuda(shape),
                                         VecToCuda(strides), offset);
}
```
- EWiseSetitem:
```cpp
__global__ void EwiseSetitemKernel(const scalar_t* a, scalar_t* out, size_t size, CudaVec shape,
                                   CudaVec strides, size_t offset) {
  size_t gid = blockIdx.x * blockDim.x + threadIdx.x;
  if (gid < size)
    out[index_transform(gid, shape, strides, offset)] = a[gid];
}

void EwiseSetitem(const CudaArray& a, CudaArray* out, std::vector<uint32_t> shape,
                  std::vector<uint32_t> strides, size_t offset) {
  /// BEGIN YOUR SOLUTION
  CudaDims dim = CudaOneDim(out->size);
  EwiseSetitemKernel<<<dim.grid, dim.block>>>(a.ptr, out->ptr, a.size, VecToCuda(shape),
                                              VecToCuda(strides), offset);
  /// END YOUR SOLUTION
}
```
- ScalarSetitem:
```cpp
__global__ void ScalarSetitemKernel(size_t size, scalar_t val, scalar_t* out, CudaVec shape,
                                    CudaVec strides, size_t offset) {
  size_t gid = blockIdx.x * blockDim.x + threadIdx.x;
  if (gid < size)
    out[index_transform(gid, shape, strides, offset)] = val;
}

void ScalarSetitem(size_t size, scalar_t val, CudaArray* out, std::vector<uint32_t> shape,
                   std::vector<uint32_t> strides, size_t offset) {
  /// BEGIN YOUR SOLUTION
  CudaDims dim = CudaOneDim(out->size);
  ScalarSetitemKernel<<<dim.grid, dim.block>>>(size, val, out->ptr, VecToCuda(shape),
                                               VecToCuda(strides), offset);
  /// END YOUR SOLUTION
}
```
- To summarize: the CPU version converts the index of the compact small matrix into the index of the non-compact large matrix inside a single loop, and copies each element as soon as the index is found. The GPU version distributes the copy work across threads, so for every thread to copy correctly, each thread takes the compact-matrix index it was assigned and computes the corresponding non-compact-matrix index itself. In other words, each thread performs what one iteration of the CPU loop does (without the subsequent carry bookkeeping).
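To make that summary concrete, the per-thread mapping can be written out in a few lines of Python (the same arithmetic as the index_transform device function above, just easier to experiment with):

```python
def strided_index(gid, shape, strides, offset):
    """Map thread gid's position in the compact array to the corresponding
    position in the non-compact array described by shape/strides/offset."""
    idx, rem = offset, gid
    for dim, stride in zip(reversed(shape), reversed(strides)):
        idx += (rem % dim) * stride   # coordinate along this dimension
        rem //= dim
    return idx

# A 2x3 compact view taken from a 4x5 buffer, starting at offset 6 (row 1, col 1):
shape, strides, offset = (2, 3), (5, 1), 6
print([strided_index(g, shape, strides, offset) for g in range(6)])  # [6, 7, 8, 11, 12, 13]
```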
- 【CUDA】Elementwise and scalar operations
Same as the CPU version, this is very simple, and this part shows off CUDA's parallel, high-throughput nature particularly well. An example:
```cpp
__global__ void EwiseMulKernel(const scalar_t* a, const scalar_t* b, scalar_t* out, size_t size) {
  size_t gid = blockIdx.x * blockDim.x + threadIdx.x;
  if (gid < size) {
    out[gid] = a[gid] * b[gid];
  }
}

void EwiseMul(const CudaArray& a, const CudaArray& b, CudaArray* out) {
  CudaDims dim = CudaOneDim(out->size);
  EwiseMulKernel<<<dim.grid, dim.block>>>(a.ptr, b.ptr, out->ptr, out->size);
}
```
- 【CUDA】Reductions
- ReduceMax
```cpp
__global__ void ReduceMaxKernel(const scalar_t* a, scalar_t* out, size_t reduce_size, size_t size) {
  size_t gid = blockIdx.x * blockDim.x + threadIdx.x;
  if (gid < size) {
    size_t offset = gid * reduce_size;
    scalar_t reduce_max = a[offset];
    for (int i = 1; i < reduce_size; i++) {
      reduce_max = max(reduce_max, a[offset + i]);
    }
    out[gid] = reduce_max;
  }
}

void ReduceMax(const CudaArray& a, CudaArray* out, size_t reduce_size) {
  CudaDims dim = CudaOneDim(out->size);
  ReduceMaxKernel<<<dim.grid, dim.block>>>(a.ptr, out->ptr, reduce_size, out->size);
}
```
- ReduceSum: same structure as ReduceMax, just accumulating a running sum instead of taking a max.
- 【CUDA】Matrix multiplication
The naive matrix multiplication is enough here: each thread is responsible for one element of the out matrix.
The thread layout is as follows:
```cpp
__global__ void MatmulKernel(const scalar_t* a, const scalar_t* b, scalar_t* out, uint32_t M,
                             uint32_t N, uint32_t P) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  size_t j = blockIdx.y * blockDim.y + threadIdx.y;
  if (i < M && j < P) {
    out[i * P + j] = 0;
    for (int k = 0; k < N; k++) {
      out[i * P + j] += a[i * N + k] * b[k * P + j];
    }
  }
}

void Matmul(const CudaArray& a, const CudaArray& b, CudaArray* out, uint32_t M, uint32_t N,
            uint32_t P) {
  // Note: the names are swapped relative to the usual convention (`grid` holds per-block
  // thread counts and `block` holds the number of blocks), but the launch still covers
  // every (i, j) with i < M and j < P.
  dim3 grid(BASE_THREAD_NUM, BASE_THREAD_NUM, 1);
  dim3 block((M + BASE_THREAD_NUM - 1) / BASE_THREAD_NUM,
             (P + BASE_THREAD_NUM - 1) / BASE_THREAD_NUM, 1);
  MatmulKernel<<<grid, block>>>(a.ptr, b.ptr, out->ptr, M, N, P);
}
```
Supplementary notes
1. CUDA-related code
Following the structure shown in the figure above, first define the CUDA dimensions:
```cpp
struct CudaDims {
  dim3 block, grid;
};
```
Then, based on the number of data elements, decide how many blocks and how many threads are needed (both along a single dimension):
```cpp
CudaDims CudaOneDim(size_t size) {
  /**
   * Utility function to get cuda dimensions for 1D call
   */
  CudaDims dim;
  size_t num_blocks = (size + BASE_THREAD_NUM - 1) / BASE_THREAD_NUM;
  dim.block = dim3(BASE_THREAD_NUM, 1, 1);  // threads within one block
  dim.grid = dim3(num_blocks, 1, 1);        // blocks within the grid
  return dim;
}
```