Unity Tools: A Brief Guide to Getting Audio Data from Azure (Microsoft) Speech Synthesis, Normal and Streaming
Table of Contents
I. Brief Introduction
II. How It Works
III. Caveats
IV. Implementation Steps
V. Key Scripts
Appendix:
Voice settings
I. Brief Introduction
Unity utility modules: pieces I have put together that may be useful in game development, each usable on its own.
This article is a brief write-up of two ways to do speech synthesis with Microsoft Azure. If you have a better approach, feel free to leave a comment.
Sign up on the official site:
Azure for Students - Free Account Credit | Microsoft Azure
Official technical documentation:
Technical documentation | Microsoft Learn
Official TTS quickstart:
Text to speech quickstart - Speech service - Azure Cognitive Services | Microsoft Learn
Azure Speech SDK package for Unity:
Install the Speech SDK - Azure Cognitive Services | Microsoft Learn
Direct SDK link:
https://aka.ms/csspeech/unitypackage
II. How It Works
1. Sign up on the Azure portal to obtain the SPEECH_KEY and SPEECH_REGION for speech synthesis
2. Configure the desired language and voice
3. Fetch the audio data either the normal way or via streaming, then play it through an AudioSource
III. Caveats
1. For longer text, the streaming approach starts playing noticeably sooner than the normal approach
2. With the streaming approach I currently have no good way to handle network errors or to detect when playback has finished
(if anyone knows a good way, please leave a comment)
IV. Implementation Steps
1. Download and import the SDK
2. Set up a simple scene
3. Write a test script plus the normal and streaming retrieval scripts
4. Add the test script to the scene and assign its fields
5. Run, type some text, and click the corresponding button
V. Key Scripts
1. Test
using UnityEngine;
using UnityEngine.UI;

public class Test : MonoBehaviour
{
    public InputField m_InputField;
    public Button m_StreamButton;
    public Button m_NormalButton;
    public AudioSource m_AudioSource;

    // Start is called before the first frame update
    void Start()
    {
        m_StreamButton.onClick.AddListener(() =>
        {
            AzureTTSStream.Instance.StartTTS(m_InputField.text, m_AudioSource);
        });

        m_NormalButton.onClick.AddListener(() =>
        {
            AzureTTSNormal.Instance.StartTTS(m_InputField.text, m_AudioSource);
        });
    }
}
2. AzureTTSNormal
using Microsoft.CognitiveServices.Speech;
using System;
using System.Collections;
using UnityEngine;

public class AzureTTSNormal : MonoSingleton<AzureTTSNormal>
{
    private AudioSource m_AudioSource;
    private string m_SubscriptionKey = "Your";
    private string m_Region = "Your";
    private string m_SpeechSynthesisLanguage = "zh-CN";
    private string m_SpeechSynthesisVoiceName = "zh-CN-XiaochenNeural";
    private Coroutine m_TTSCoroutine;

    /// <summary>
    /// Set your Azure credentials
    /// </summary>
    /// <param name="subscriptionKey">subscription key</param>
    /// <param name="region">service region</param>
    public void SetAzureAuthorization(string subscriptionKey, string region)
    {
        m_SubscriptionKey = subscriptionKey;
        m_Region = region;
    }

    /// <summary>
    /// Set the language and voice
    /// </summary>
    /// <param name="language">language</param>
    /// <param name="voiceName">voice</param>
    public void SetLanguageVoiceName(SpeechSynthesisLanguage language, SpeechSynthesisVoiceName voiceName)
    {
        m_SpeechSynthesisLanguage = language.ToString().Replace('_', '-');
        m_SpeechSynthesisVoiceName = voiceName.ToString().Replace('_', '-');
    }

    /// <summary>
    /// Set the audio source
    /// </summary>
    /// <param name="audioSource"></param>
    public void SetAudioSource(AudioSource audioSource)
    {
        m_AudioSource = audioSource;
    }

    /// <summary>
    /// Start TTS
    /// </summary>
    /// <param name="spkMsg"></param>
    /// <param name="errorAction"></param>
    public void StartTTS(string spkMsg, Action<string> errorAction = null)
    {
        StopTTS();
        m_TTSCoroutine = StartCoroutine(SynthesizeAudioCoroutine(spkMsg, errorAction));
    }

    /// <summary>
    /// Start TTS
    /// </summary>
    /// <param name="spkMsg"></param>
    /// <param name="audioSource"></param>
    /// <param name="errorAction"></param>
    public void StartTTS(string spkMsg, AudioSource audioSource, Action<string> errorAction = null)
    {
        SetAudioSource(audioSource);
        StartTTS(spkMsg, errorAction);
    }

    /// <summary>
    /// Stop TTS
    /// </summary>
    public void StopTTS()
    {
        if (m_TTSCoroutine != null)
        {
            StopCoroutine(m_TTSCoroutine);
            m_TTSCoroutine = null;
        }
        if (m_AudioSource != null)
        {
            m_AudioSource.Stop();
            m_AudioSource.clip = null;
        }
    }

    public IEnumerator SynthesizeAudioCoroutine(string spkMsg, Action<string> errorAction)
    {
        yield return null;
        var config = SpeechConfig.FromSubscription(m_SubscriptionKey, m_Region);
        config.SpeechSynthesisLanguage = m_SpeechSynthesisLanguage;
        config.SpeechSynthesisVoiceName = m_SpeechSynthesisVoiceName;

        // Creates a speech synthesizer.
        // Make sure to dispose the synthesizer after use!
        using (var synthesizer = new SpeechSynthesizer(config, null))
        {
            // Starts speech synthesis, and returns after a single utterance is synthesized.
            var result = synthesizer.SpeakTextAsync(spkMsg).Result;

            // Checks result.
            string newMessage = string.Empty;
            if (result.Reason == ResultReason.SynthesizingAudioCompleted)
            {
                // Since native playback is not yet supported on Unity
                // (currently only supported on Windows/Linux Desktop),
                // use the Unity API to play audio here as a short-term solution.
                // The default output audio format is 16 kHz, 16-bit, mono PCM.
                var sampleCount = result.AudioData.Length / 2;
                var audioData = new float[sampleCount];
                for (var i = 0; i < sampleCount; ++i)
                {
                    audioData[i] = (short)(result.AudioData[i * 2 + 1] << 8 | result.AudioData[i * 2]) / 32768.0F;
                }

                var audioClip = AudioClip.Create("SynthesizedAudio", sampleCount, 1, 16000, false);
                audioClip.SetData(audioData, 0);
                m_AudioSource.clip = audioClip;
                Debug.Log(" audioClip.length " + audioClip.length);
                m_AudioSource.Play();
            }
            else if (result.Reason == ResultReason.Canceled)
            {
                var cancellation = SpeechSynthesisCancellationDetails.FromResult(result);
                newMessage = $"CANCELED:\nReason=[{cancellation.Reason}]\nErrorDetails=[{cancellation.ErrorDetails}]\nDid you update the subscription info?";
                Debug.Log(" newMessage " + newMessage);
                if (errorAction != null) { errorAction.Invoke(newMessage); }
            }
        }
    }
}
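Both scripts convert the SDK's default output (16 kHz, 16-bit, mono, little-endian PCM) into the float samples Unity expects. Isolated from the Unity code, the conversion looks like this (`PcmUtil` is a name made up for this sketch, not part of the SDK):

```csharp
using System;

public static class PcmUtil
{
    // Convert 16-bit little-endian PCM bytes to floats in [-1, 1),
    // the sample format AudioClip.SetData expects.
    public static float[] ToFloats(byte[] pcm)
    {
        int sampleCount = pcm.Length / 2;
        var samples = new float[sampleCount];
        for (int i = 0; i < sampleCount; ++i)
        {
            // Low byte first, then high byte; the cast to short keeps the sign
            short s = (short)((pcm[i * 2 + 1] << 8) | pcm[i * 2]);
            samples[i] = s / 32768.0f;
        }
        return samples;
    }
}
```

For example, the byte pair 0x00 0x80 decodes to the short -32768 and hence to the float -1.0f.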
3. AzureTTSStream
using UnityEngine;
using Microsoft.CognitiveServices.Speech;
using System.IO;
using System;
using System.Collections;

public class AzureTTSStream : MonoSingleton<AzureTTSStream>
{
    private AudioSource m_AudioSource;
    private string m_SubscriptionKey = "Your";
    private string m_Region = "Your";
    private string m_SpeechSynthesisLanguage = "zh-CN";
    private string m_SpeechSynthesisVoiceName = "zh-CN-XiaochenNeural";

    public const int m_SampleRate = 16000;
    // Holds up to 60 s of audio; can be made larger, and with streaming the cap hardly matters
    public const int m_BufferSize = m_SampleRate * 60;
    // Samples written per update; larger chunks cause more stutter
    public const int m_UpdateSize = m_SampleRate / 10;

    private Coroutine m_TTSCoroutine;
    private int m_DataIndex = 0;
    private AudioDataStream m_AudioDataStream;

    private void OnEnable()
    {
        StopTTS();
    }

    private void OnDisable()
    {
        StopTTS();
    }

    /// <summary>
    /// Set your Azure credentials
    /// </summary>
    /// <param name="subscriptionKey">subscription key</param>
    /// <param name="region">service region</param>
    public void SetAzureAuthorization(string subscriptionKey, string region)
    {
        m_SubscriptionKey = subscriptionKey;
        m_Region = region;
    }

    /// <summary>
    /// Set the language and voice
    /// </summary>
    /// <param name="language">language</param>
    /// <param name="voiceName">voice</param>
    public void SetLanguageVoiceName(SpeechSynthesisLanguage language, SpeechSynthesisVoiceName voiceName)
    {
        m_SpeechSynthesisLanguage = language.ToString().Replace('_', '-');
        m_SpeechSynthesisVoiceName = voiceName.ToString().Replace('_', '-');
    }

    /// <summary>
    /// Set the audio source
    /// </summary>
    /// <param name="audioSource"></param>
    public void SetAudioSource(AudioSource audioSource)
    {
        m_AudioSource = audioSource;
    }

    /// <summary>
    /// Start TTS
    /// </summary>
    /// <param name="spkMsg"></param>
    /// <param name="errorAction"></param>
    public void StartTTS(string spkMsg, Action<string> errorAction = null)
    {
        StopTTS();
        m_TTSCoroutine = StartCoroutine(SynthesizeAudioCoroutine(spkMsg, errorAction));
    }

    /// <summary>
    /// Start TTS
    /// </summary>
    /// <param name="spkMsg"></param>
    /// <param name="audioSource"></param>
    /// <param name="errorAction"></param>
    public void StartTTS(string spkMsg, AudioSource audioSource, Action<string> errorAction = null)
    {
        SetAudioSource(audioSource);
        StartTTS(spkMsg, errorAction);
    }

    /// <summary>
    /// Stop TTS
    /// </summary>
    public void StopTTS()
    {
        // Release the stream
        if (m_AudioDataStream != null)
        {
            m_AudioDataStream.Dispose();
            m_AudioDataStream = null;
        }
        if (m_TTSCoroutine != null)
        {
            StopCoroutine(m_TTSCoroutine);
            m_TTSCoroutine = null;
        }
        if (m_AudioSource != null)
        {
            m_AudioSource.Stop();
            m_AudioSource.clip = null;
            m_DataIndex = 0;
        }
    }

    /// <summary>
    /// Run TTS
    /// </summary>
    /// <param name="speakMsg">text to synthesize</param>
    /// <param name="errorAction">error callback (no good detection method yet)</param>
    /// <returns></returns>
    private IEnumerator SynthesizeAudioCoroutine(string speakMsg, Action<string> errorAction)
    {
        var config = SpeechConfig.FromSubscription(m_SubscriptionKey, m_Region);
        config.SpeechSynthesisLanguage = m_SpeechSynthesisLanguage;
        config.SpeechSynthesisVoiceName = m_SpeechSynthesisVoiceName;

        var audioClip = AudioClip.Create("SynthesizedAudio", m_BufferSize, 1, m_SampleRate, false);
        m_AudioSource.clip = audioClip;

        using (var synthesizer = new SpeechSynthesizer(config, null))
        {
            var result = synthesizer.StartSpeakingTextAsync(speakMsg);
            yield return new WaitUntil(() => result.IsCompleted);
            m_AudioSource.Play();

            using (m_AudioDataStream = AudioDataStream.FromResult(result.Result))
            {
                MemoryStream memStream = new MemoryStream();
                byte[] buffer = new byte[m_UpdateSize * 2];
                uint bytesRead;
                do
                {
                    bytesRead = m_AudioDataStream.ReadData(buffer);
                    memStream.Write(buffer, 0, (int)bytesRead);
                    if (memStream.Length >= m_UpdateSize * 2)
                    {
                        var tempData = memStream.ToArray();
                        var audioData = new float[m_UpdateSize];
                        for (int i = 0; i < m_UpdateSize; ++i)
                        {
                            audioData[i] = (short)(tempData[i * 2 + 1] << 8 | tempData[i * 2]) / 32768.0F;
                        }
                        audioClip.SetData(audioData, m_DataIndex);
                        m_DataIndex = (m_DataIndex + m_UpdateSize) % m_BufferSize;
                        memStream = new MemoryStream();
                        yield return null;
                    }
                } while (bytesRead > 0);
            }
        }

        if (m_DataIndex == 0)
        {
            if (errorAction != null)
            {
                errorAction.Invoke(" AudioData error");
            }
        }
    }
}

/// <summary>
/// Add more languages here.
/// An entry like Zh_CN corresponds to "zh-CN".
/// </summary>
public enum SpeechSynthesisLanguage
{
    Zh_CN,
}

/// <summary>
/// Add more voices here.
/// An entry like Zh_CN_XiaochenNeural corresponds to "zh-CN-XiaochenNeural".
/// </summary>
public enum SpeechSynthesisVoiceName
{
    Zh_CN_XiaochenNeural,
}
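One detail worth noting: `SetLanguageVoiceName` builds the string with `ToString().Replace('_', '-')`, which actually yields "Zh-CN-XiaochenNeural" with a capital Z rather than the "zh-CN-XiaochenNeural" form the comments describe. The speech service appears to accept voice names case-insensitively, but if you want the exact documented casing, a small mapper can normalize the language and region parts. `VoiceNameMapper` is a hypothetical helper sketched for this post, not part of the SDK:

```csharp
using System;

public static class VoiceNameMapper
{
    // Turns an enum entry such as Zh_CN_XiaochenNeural into "zh-CN-XiaochenNeural":
    // underscores become hyphens, the language code is lowercased,
    // and the region code is uppercased.
    public static string ToVoiceName(string enumName)
    {
        var parts = enumName.Split('_');
        parts[0] = parts[0].ToLowerInvariant();         // "Zh" -> "zh"
        if (parts.Length > 1)
            parts[1] = parts[1].ToUpperInvariant();     // "CN" -> "CN"
        return string.Join("-", parts);
    }
}
```

Usage would look like `VoiceNameMapper.ToVoiceName(SpeechSynthesisVoiceName.Zh_CN_XiaochenNeural.ToString())`.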
4. MonoSingleton
using UnityEngine;

public class MonoSingleton<T> : MonoBehaviour where T : MonoBehaviour
{
    private static T instance;

    public static T Instance
    {
        get
        {
            if (instance == null)
            {
                // Look for an existing instance
                instance = (T)FindObjectOfType(typeof(T));

                // If none exists, create one:
                // make a new GameObject and attach the singleton component to it
                if (instance == null)
                {
                    var singletonObject = new GameObject();
                    instance = singletonObject.AddComponent<T>();
                    singletonObject.name = typeof(T).ToString() + " (Singleton)";

                    // Keep the instance alive across scene loads
                    DontDestroyOnLoad(singletonObject);
                }
            }
            return instance;
        }
    }
}
Appendix:
Voice settings
Language support - Speech service - Azure Cognitive Services | Azure Docs
A selection of Chinese voices:
Locale | Language | Voices
wuu-CN | Chinese (Wu, Simplified) | wuu-CN-XiaotongNeural 1,2 (Female); wuu-CN-YunzheNeural 1,2 (Male)
yue-CN | Chinese (Cantonese, Simplified) | yue-CN-XiaoMinNeural 1,2 (Female); yue-CN-YunSongNeural 1,2 (Male)
zh-CN | Chinese (Mandarin, Simplified) | zh-CN-XiaochenNeural (Female); zh-CN-XiaohanNeural (Female); zh-CN-XiaomengNeural (Female); zh-CN-XiaomoNeural (Female); zh-CN-XiaoqiuNeural (Female); zh-CN-XiaoruiNeural (Female); zh-CN-XiaoshuangNeural (Female, Child); zh-CN-XiaoxiaoNeural (Female); zh-CN-XiaoxuanNeural (Female); zh-CN-XiaoyanNeural (Female); zh-CN-XiaoyiNeural (Female); zh-CN-XiaoyouNeural (Female, Child); zh-CN-XiaozhenNeural (Female); zh-CN-YunfengNeural (Male); zh-CN-YunhaoNeural (Male); zh-CN-YunjianNeural (Male); zh-CN-YunxiaNeural (Male); zh-CN-YunxiNeural (Male); zh-CN-YunyangNeural (Male); zh-CN-YunyeNeural (Male); zh-CN-YunzeNeural (Male)
zh-CN-henan | Chinese (Zhongyuan Mandarin Henan, Simplified) | zh-CN-henan-YundengNeural 2 (Male)
zh-CN-liaoning | Chinese (Northeastern Mandarin, Simplified) | zh-CN-liaoning-XiaobeiNeural 1,2 (Female)
zh-CN-shaanxi | Chinese (Zhongyuan Mandarin Shaanxi, Simplified) | zh-CN-shaanxi-XiaoniNeural 1,2 (Female)
zh-CN-shandong | Chinese (Jilu Mandarin, Simplified) | zh-CN-shandong-YunxiangNeural 2 (Male)
zh-CN-sichuan | Chinese (Southwestern Mandarin, Simplified) | zh-CN-sichuan-YunxiNeural 1,2 (Male)
zh-HK | Chinese (Cantonese, Traditional) | zh-HK-HiuGaaiNeural (Female); zh-HK-HiuMaanNeural (Female); zh-HK-WanLungNeural 1 (Male)
zh-TW | Chinese (Taiwanese Mandarin, Traditional) | zh-TW-HsiaoChenNeural (Female); zh-TW-HsiaoYuNeural (Female); zh-TW-YunJheNeural (Male)
zu-ZA | Zulu (South Africa) | zu-ZA-ThandoNeural 2 (Female); zu-ZA-ThembaNeural 2 (Male)