I was running the Whisper model locally on Windows, on an NVIDIA RTX 4070, and found that GPU utilization was only about 2%. Running

import torch
print(torch.cuda.is_available())

returned True, meaning CUDA was available.
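As an extra sanity check, you can also print the device name (a minimal sketch; device index 0 assumes a single-GPU machine, and the check is guarded so it also runs where CUDA is absent):

```python
import torch

# Guarded so this also runs on CPU-only machines.
if torch.cuda.is_available():
    # Assumes a single GPU at index 0, e.g. the RTX 4070.
    print(torch.cuda.get_device_name(0))
else:
    print("CUDA not available; Whisper will fall back to CPU")
```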
I eventually found the problem discussed in this GitHub issue:
Very low GPU utilization #140
The key points:

1. Clear the GPU cache before running:
torch.cuda.empty_cache()

2. Use a small Whisper model; I use
model = load_model("base").to("cuda")

3. Most importantly, set beam_size=5 in the model.transcribe() arguments. GPU utilization immediately jumped to 20%, and with beam_size=8 it reaches around 30%:
model.transcribe(arr, language="en", prompt=prompt, fp16=False, beam_size=8, verbose=True, condition_on_previous_text=False)["text"]
Below is my complete test program:
import os
import os.path
import sys
import torch
import numpy as np
from whisper import load_model
from pydub import AudioSegment

print(torch.cuda.is_available())
torch.cuda.empty_cache()

model = load_model("base").to("cuda")

audio = AudioSegment.from_mp3("daily.mp3")  # or sys.argv[1]
segment_length = 25 * 60  # seconds per segment
duration = audio.duration_seconds
print('Segment length: %d seconds' % segment_length)
print('Duration: %d seconds' % duration)

segment_filename = os.path.basename("daily.mp3")  # or sys.argv[1]
segment_filename = os.path.splitext(segment_filename)[0]
number_of_segments = int(duration / segment_length)

segment_start = 0
segment_end = segment_length * 1000  # pydub slices in milliseconds
segment_index = 1  # renamed from "enumerate" to avoid shadowing the builtin
prompt = ""
for i in range(number_of_segments):
    audio_segment = audio[segment_start:segment_end]
    exported_file = './tmp/' + segment_filename + '-' + str(segment_index) + '.mp3'
    audio_segment.export(exported_file, format="mp3")
    print('Exported segment %d of %d' % (segment_index, number_of_segments))
    # Convert to the format Whisper expects: 16 kHz, int16, mono
    if audio_segment.frame_rate != 16000:
        audio_segment = audio_segment.set_frame_rate(16000)
    if audio_segment.sample_width != 2:
        audio_segment = audio_segment.set_sample_width(2)
    if audio_segment.channels != 1:
        audio_segment = audio_segment.set_channels(1)
    arr = np.array(audio_segment.get_array_of_samples())
    arr = arr.astype(np.float32) / 32768.0
    # beam_size=5 matters a lot; with beam_size=8 GPU utilization is around 30%
    data = model.transcribe(arr, language="en", prompt=prompt, fp16=False,
                            beam_size=8, verbose=True,
                            condition_on_previous_text=False)["text"]
    print('Transcribed segment %d of %d' % (segment_index, number_of_segments))
    with open(os.path.join('./transcripts/', segment_filename + '.txt'), "a") as f:
        f.write(data)
    prompt += data
    segment_start += segment_length * 1000
    segment_end += segment_length * 1000
    segment_index += 1
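One step in the loop worth spelling out: Whisper expects float32 audio in [-1.0, 1.0], so the raw int16 samples are divided by 32768. A tiny standalone illustration of that conversion (the sample values here are made up):

```python
import numpy as np

# Hypothetical int16 samples covering the extremes of the range.
samples = np.array([-32768, 0, 16384, 32767], dtype=np.int16)

# Same conversion as in the script: scale int16 into [-1.0, 1.0).
arr = samples.astype(np.float32) / 32768.0
print(arr.dtype, arr.min(), arr.max())
```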
At first I hadn't really figured out what beam_size means. Beam size (also called beam width) controls how many candidate paths are explored at each step while generating output. Concretely: at every decoding step, the model keeps only the beam_size highest-scoring partial transcriptions, extends each of them by one token, and again keeps the best beam_size. A larger beam explores more alternatives in parallel, which is presumably why GPU utilization rises with it, at the cost of more computation.
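To make that concrete, here is a minimal beam-search sketch in plain Python. The next_token_logprobs function and its probabilities are entirely made up; it just stands in for whatever model scores the next token, so the keep-only-the-best-beam_size step is visible:

```python
import math

# Toy next-token model: log-probabilities of each token given any prefix.
# (Hypothetical numbers, purely to make beam search concrete.)
def next_token_logprobs(prefix):
    vocab = {"a": 0.5, "b": 0.3, "c": 0.2}
    return {tok: math.log(p) for tok, p in vocab.items()}

def beam_search(steps, beam_size):
    # Each hypothesis is (token_list, total_logprob); start from the empty prefix.
    beams = [([], 0.0)]
    for _ in range(steps):
        candidates = []
        for prefix, score in beams:
            # Extend every surviving hypothesis by one token.
            for tok, lp in next_token_logprobs(prefix).items():
                candidates.append((prefix + [tok], score + lp))
        # Keep only the beam_size best-scoring hypotheses.
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams

for tokens, score in beam_search(steps=2, beam_size=3):
    print("".join(tokens), round(score, 3))
```

With beam_size=1 this degenerates to greedy decoding; larger values keep more alternatives alive at each step, which is the extra parallel work that shows up as higher GPU utilization.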