For image quality assessment, three metrics are widely used: PSNR, SSIM, and LPIPS. This post provides a simple script implementing each of them.
PSNR (peak signal-to-noise ratio) is a pixel-level quality measure based on MSE. Above 30 dB the quality is generally good, and above 40 dB the difference is hard to tell with the naked eye.
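For reference, with pixel values normalized so the maximum value is MAX (1.0 for images in [0, 1]), PSNR is defined directly from the MSE:

$$\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^2}{\mathrm{MSE}} = 20\log_{10}\frac{\mathrm{MAX}}{\sqrt{\mathrm{MSE}}}$$

The second form is the one used in the script below.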
SSIM (structural similarity) compares images by their local statistics rather than individual pixels, and is quantified to the range (0, 1); the closer to 1, the better the image quality. For the exact formula, see my earlier post "SSIM".
LPIPS uses a pretrained neural network to quantify the perceptual similarity between images. Its range is also [0, 1], but unlike SSIM, a smaller LPIPS indicates better image quality.
Common image quality metrics like these are all included in torchmetrics. Just install it:
pip install torchmetrics
Test script:
import torch
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image import PeakSignalNoiseRatio

_ = torch.manual_seed(123)


def psnr_torch(img1, img2):
    # Per-image MSE, then convert to dB (assumes pixel values in [0, 1], so MAX = 1.0)
    mse = ((img1 - img2) ** 2).view(img1.shape[0], -1).mean(1, keepdim=True)
    return 20 * torch.log10(1.0 / torch.sqrt(mse))


def psnr(img1, img2):
    metric = PeakSignalNoiseRatio()
    return metric(img1, img2)


def ssim(img1, img2):
    metric = StructuralSimilarityIndexMeasure(data_range=1.0)
    return metric(img1, img2)


def lpips(img1, img2):
    # normalize=True tells the metric the inputs are in [0, 1];
    # the default expects inputs scaled to [-1, 1]
    metric = LearnedPerceptualImagePatchSimilarity(net_type='vgg', normalize=True)
    return metric(img1, img2)


def _main():
    img1 = torch.rand(1, 3, 100, 100)
    img2 = torch.rand(1, 3, 100, 100)
    # PSNR computed two ways: by hand and via torchmetrics
    print("PSNR:  ", psnr_torch(img1, img2))
    print("PSNR1: ", psnr(img1, img2))
    print("SSIM:  ", ssim(img1, img2))
    print("LPIPS: ", lpips(img1, img2))


if __name__ == "__main__":
    _main()
The code shows two ways to compute PSNR; their results differ only slightly. Feel free to use it~
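As a quick sanity check of the "higher PSNR = better quality" intuition, the sketch below (using only torch, with pixel values assumed in [0, 1], and the same hand-rolled PSNR as above) compares a clean image against copies corrupted by increasing amounts of Gaussian noise; PSNR should fall as the noise level rises:

```python
import torch


def psnr(img1, img2, max_val=1.0):
    # MSE over all pixels, converted to dB
    mse = ((img1 - img2) ** 2).mean()
    return 20 * torch.log10(max_val / torch.sqrt(mse))


torch.manual_seed(0)
clean = torch.rand(1, 3, 64, 64)

# Add Gaussian noise of increasing strength and watch PSNR drop
for sigma in (0.01, 0.05, 0.2):
    noisy = (clean + sigma * torch.randn_like(clean)).clamp(0.0, 1.0)
    print(f"sigma={sigma}: PSNR = {psnr(clean, noisy).item():.2f} dB")
```

With sigma = 0.01 the PSNR lands around the 40 dB "visually indistinguishable" range mentioned above, while sigma = 0.2 drops well below 30 dB.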