I have been using a Quadro M4000 for LLM-related tasks, but the card is simply too old: it can only handle small models such as Phi-3 mini, Qwen2 2B, GLM-4 8B and Gemma 2 2B, the speed leaves much to be desired, and the card frequently thermal-throttles, which drags down chat responsiveness.
For home use, the very latest Tesla compute cards are not realistic; older large-VRAM platforms are the better option, ideally with more than 10 GB of VRAM so that quantized models with higher parameter counts can run. For inference workloads, FP16 throughput matters most; if fine-tuning is also planned, FP32 throughput should be weighed as well.
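A rough rule of thumb for whether a quantized model fits in VRAM: weight memory ≈ parameter count × bits per weight ÷ 8, plus some headroom for the KV cache, activations and runtime buffers. A minimal sketch (the 20% overhead factor is my own assumption for illustration, not a measured figure):

```python
def est_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 0.2) -> float:
    """Estimate VRAM (GB) needed to load a quantized model.

    Weights take params * bits / 8 bytes; 1e9 parameters at 1 byte
    each is ~1 GB. 'overhead' is a rough allowance for KV cache,
    activations and runtime buffers (an assumption).
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * (1 + overhead)

# A 13B model at 4-bit quantization:
print(round(est_vram_gb(13, 4), 1))   # ~7.8 GB -> fits comfortably in a 10 GB+ card
# The same model unquantized at FP16:
print(round(est_vram_gb(13, 16), 1))  # ~31.2 GB -> needs a 32 GB-class card
```

This is why a >10 GB card is the practical floor: it covers 7B–13B models at 4-bit, while FP16 weights for the same models would already exceed most consumer cards.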
I have been meaning to pick up a compute card for LLM workloads in my homelab, but there are so many compute cards and GPUs on the market that it is hard to know which ones to even look at, so I compiled the table below from TechPowerUp for reference.
Model | Chip | Released | VRAM | Bandwidth | BF16 | FP16 | FP32 | FP64 | TDP (W) |
---|---|---|---|---|---|---|---|---|---|
Quadro M4000 (current card) | GM204 | Jun 29th, 2015 | 8 GB GDDR5 | 192.3 GB/s | N/A | N/A | 2.573 TFLOPS | 80.39 GFLOPS | 120 |
Tesla P4 | GP104 | Sep 13th, 2016 | 8 GB GDDR5 | 192.3 GB/s | N/A | 89.12 GFLOPS | 5.704 TFLOPS | 178.2 GFLOPS | 75 |
Tesla P40 | GP102 | Sep 13th, 2016 | 24 GB GDDR5 | 347.1 GB/s | N/A | 183.7 GFLOPS | 11.76 TFLOPS | 367.4 GFLOPS | 250 |
Tesla P100 PCIe | GP100 | Jun 20th, 2016 | 16 GB HBM2 | 732.2 GB/s | N/A | 19.05 TFLOPS | 9.526 TFLOPS | 4.763 TFLOPS | 250 |
Tesla P100 SXM2 | GP100 | Apr 5th, 2016 | 16 GB HBM2 | 732.2 GB/s | N/A | 21.22 TFLOPS | 10.61 TFLOPS | 5.304 TFLOPS | 300 |
GTX 1080 | GP104 | May 27th, 2016 | 8 GB GDDR5X | 320.3 GB/s | N/A | 138.6 GFLOPS | 8.873 TFLOPS | 277.3 GFLOPS | 180 |
RTX 2080 Ti | TU102 | Sep 20th, 2018 | 11 GB GDDR6 | 616.0 GB/s | N/A | 26.90 TFLOPS | 13.45 TFLOPS | 420.2 GFLOPS | 250 |
Tesla V100 PCIe | GV100 | Jun 21st, 2017 | 16 GB HBM2 | 897.0 GB/s | N/A | 28.26 TFLOPS | 14.13 TFLOPS | 7.066 TFLOPS | 250 |
Tesla V100 PCIe | GV100 | Mar 27th, 2018 | 32 GB HBM2 | 897.0 GB/s | N/A | 28.26 TFLOPS | 14.13 TFLOPS | 7.066 TFLOPS | 250 |
Tesla T4 | TU104 | Sep 13th, 2018 | 16 GB GDDR6 | 320.0 GB/s | N/A | 65.13 TFLOPS | 8.141 TFLOPS | 254.4 GFLOPS | 70 |
RTX 3060 | GA104 | Sep 1st, 2021 | 12 GB GDDR6 | 360.0 GB/s | Unknown | 12.74 TFLOPS | 12.74 TFLOPS | 199.0 GFLOPS | 170 |
RTX 3060 | GA106 | Jan 12th, 2021 | 12 GB GDDR6 | 360.0 GB/s | Unknown | 12.74 TFLOPS | 12.74 TFLOPS | 199.0 GFLOPS | 170 |
RTX 3060 Ti | GA104 | Dec 1st, 2020 | 8 GB GDDR6 | 448.0 GB/s | Unknown | 16.20 TFLOPS | 16.20 TFLOPS | 253.1 GFLOPS | 200 |
RTX 3080 Ti | GA102 | Jan 2022 | 20 GB GDDR6X | 760.3 GB/s | Unknown | 34.10 TFLOPS | 34.10 TFLOPS | 532.8 GFLOPS | 350 |
RTX 3090 | GA102 | Sep 1st, 2020 | 24 GB GDDR6X | 936.2 GB/s | Unknown | 35.58 TFLOPS | 35.58 TFLOPS | 556.0 GFLOPS | 350 |
RTX 3090 Ti | GA102 | Jan 27th, 2022 | 24 GB GDDR6X | 1.01 TB/s | Unknown | 40.00 TFLOPS | 40.00 TFLOPS | 625.0 GFLOPS | 450 |
A100 PCIe | GA100 | Jun 22nd, 2020 | 40 GB HBM2e | 1.56 TB/s | 311.84 TFLOPS | 77.97 TFLOPS | 19.49 TFLOPS | 9.746 TFLOPS | 250 |
RTX 4060 | AD107 | May 18th, 2023 | 8 GB GDDR6 | 272.0 GB/s | Unknown | 15.11 TFLOPS | 15.11 TFLOPS | 236.2 GFLOPS | 115 |
RTX 4060 Ti | AD106 | May 18th, 2023 | 16 GB GDDR6 | 288.0 GB/s | Unknown | 22.06 TFLOPS | 22.06 TFLOPS | 344.8 GFLOPS | 165 |
RTX 4070 SUPER | AD104 | Jan 8th, 2024 | 12 GB GDDR6X | 504.2 GB/s | Unknown | 35.48 TFLOPS | 35.48 TFLOPS | 554.4 GFLOPS | 220 |
RTX 4070 Ti SUPER | AD103 | Jan 8th, 2024 | 16 GB GDDR6X | 672.3 GB/s | Unknown | 44.10 TFLOPS | 44.10 TFLOPS | 689.0 GFLOPS | 285 |
RTX 4080 | AD103 | Sep 20th, 2022 | 16 GB GDDR6X | 716.8 GB/s | Unknown | 48.74 TFLOPS | 48.74 TFLOPS | 761.5 GFLOPS | 320 |
RTX 4080 SUPER | AD103 | Jan 8th, 2024 | 16 GB GDDR6X | 736.3 GB/s | Unknown | 52.22 TFLOPS | 52.22 TFLOPS | 816.0 GFLOPS | 320 |
RTX 4090 | AD102 | Sep 20th, 2022 | 24 GB GDDR6X | 1.01 TB/s | Unknown | 82.58 TFLOPS | 82.58 TFLOPS | 1,290 GFLOPS | 450 |
RTX 4090 D | AD102 | Dec 28th, 2023 | 24 GB GDDR6X | 1.01 TB/s | Unknown | 73.54 TFLOPS | 73.54 TFLOPS | 1,149 GFLOPS | 425 |
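Besides raw FLOPS, the Bandwidth column deserves attention: single-stream token generation is usually memory-bandwidth-bound, so a card's decode speed is roughly capped by bandwidth ÷ bytes streamed per token (≈ the model's weight size). A sketch comparing a few cards from the table (the 4 GB model size, e.g. a 7B model at ~4-bit quantization, and the purely bandwidth-bound assumption are illustrative; real throughput will be lower):

```python
def max_tokens_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
    """Rough upper bound on decode speed: generating each token must
    stream the full weight set from VRAM once, so
    tok/s <= bandwidth / model size (bandwidth-bound assumption)."""
    return bandwidth_gb_s / model_gb

MODEL_GB = 4.0  # assumed: ~7B model at 4-bit quantization
for name, bw in [("Quadro M4000", 192.3), ("Tesla P40", 347.1), ("RTX 3090", 936.2)]:
    print(f"{name}: <= {max_tokens_per_s(bw, MODEL_GB):.0f} tok/s")
```

By this yardstick a P40 offers not just 3× the VRAM of the M4000 but also nearly double the ceiling on generation speed, while HBM2/GDDR6X cards pull much further ahead.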