
Training your own Chinese word2vec (word vectors) with the skip-gram method

What are word vectors?

Words are mapped (embedded) into a new vector space, forming word vectors that carry the words' semantic information; in this space, words with similar meanings end up close to each other.
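
As a rough sketch of the idea (not the model trained later in this post), an embedding layer is just a trainable lookup table from a word index to a dense vector, and similarity between words can then be measured with, for example, cosine similarity:

import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical vocabulary of 1000 words, each mapped to a 32-dimensional vector
emb = nn.Embedding(num_embeddings=1000, embedding_dim=32)
v5, v7 = emb(torch.tensor([5])), emb(torch.tensor([7]))  # look up the vectors of words 5 and 7
print(v5.shape)                                          # torch.Size([1, 32])
print(F.cosine_similarity(v5, v7).item())                # random before training, meaningful after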

The Skip-Gram method (the one used here)

Take a word as the center and compute the probabilities of the various words that may appear before and after it; that is, given the input word, predict its context.


CBOW (Continuous Bag of Words)

CBOW computes the probability of a word from the n words before it (or the n words on either side); that is, given the context, predict the input word. Compared with Skip-Gram, CBOW trains somewhat faster.
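
A toy sketch of the difference between the two (the token list and window size are made up for illustration): skip-gram turns each position into (center, context-word) pairs, while CBOW uses the whole context window as input and the center word as the target:

tokens = ["宴", "桃園", "豪杰", "三", "結義"]
C = 2  # context window size (illustrative)

for i, center in enumerate(tokens):
    context = tokens[max(0, i - C):i] + tokens[i + 1:i + 1 + C]
    skip_gram_pairs = [(center, w) for w in context]   # skip-gram: center word predicts each context word
    cbow_pair = (context, center)                      # CBOW: context words predict the center word
    print(skip_gram_pairs, cbow_pair)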

Here we use the Skip-Gram method, with the first chapter of Romance of the Three Kingdoms as the data, to train 32-dimensional Chinese word vectors.

The download link for the data and code is at the end of the article.

Import libraries

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset,DataLoader
import re
import collections
import numpy as np
import jieba
# select the device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

Load the data

Because of limited compute, I only take the first chapter of Romance of the Three Kingdoms as the example data here.

training_file = '/home/mw/input/sanguo5529/三國演義.txt'

# read the text file and keep only Chapter 1 as the input text
def get_ch_lable(txt_file):
    labels = ""
    with open(txt_file, 'rb') as f:
        for label in f:
            labels = labels + label.decode('utf-8')
    text = re.findall('第1章.*?第2章', labels, re.S)
    return text[0]

training_data = get_ch_lable(training_file)
# print(training_data)
print("Total characters:", len(training_data))

Total characters: 4945

Word segmentation

# word segmentation with jieba
def fenci(training_data):
    seg_list = jieba.cut(training_data)  # accurate mode by default
    training_ci = " ".join(seg_list)
    training_ci = training_ci.split()    # split the string on spaces
    training_ci = np.array(training_ci)
    training_ci = np.reshape(training_ci, [-1, ])
    return training_ci

training_ci = fenci(training_data)
print("Total tokens:", len(training_ci))

Total tokens: 3053
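
For reference, a quick sketch of what jieba's accurate mode returns on a short phrase (the exact segmentation may differ slightly depending on the jieba version and dictionary):

import jieba

print(list(jieba.cut("宴桃園豪杰三結義")))
# roughly ['宴', '桃園', '豪杰', '三', '結義'], matching the sample tokens printed in the next section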

Build the vocabulary

def build_dataset(words, n_words):
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(n_words - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reversed_dictionary

training_label, count, dictionary, words = build_dataset(training_ci, 3053)

# word frequencies
word_count = np.array([freq for _, freq in count], dtype=np.float32)
word_freq = word_count / np.sum(word_count)  # frequency of each word
word_freq = word_freq ** (3. / 4.)           # frequency transform for negative sampling
words_size = len(dictionary)
print("Vocabulary size:", words_size)
print('Sample data', training_label[:10], [words[i] for i in training_label[:10]])

Vocabulary size: 1456
Sample data [100, 305, 140, 306, 67, 101, 307, 308, 46, 27]
['第', '1', '章', '宴', '桃園', '豪杰', '三', '結義', '斬', '黃巾']
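
The 3/4 power applied to word_freq above is the standard word2vec negative-sampling trick: negatives are drawn from a distribution proportional to f(w)^(3/4) rather than the raw frequency f(w), which damps very frequent words and boosts rare ones. A small sketch with made-up frequencies:

import numpy as np

freqs = np.array([0.70, 0.25, 0.05])   # made-up raw word frequencies
adjusted = freqs ** (3. / 4.)
print(adjusted / adjusted.sum())       # roughly [0.62, 0.29, 0.09]: frequent words down-weighted, rare ones up-weighted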

Build the dataset

C = 3                 # context window size
num_sampled = 64      # number of negative samples
BATCH_SIZE = 32
EMBEDDING_SIZE = 32   # desired word-vector length

class SkipGramDataset(Dataset):
    def __init__(self, training_label, word_to_idx, idx_to_word, word_freqs):
        super(SkipGramDataset, self).__init__()
        self.text_encoded = torch.Tensor(training_label).long()
        self.word_to_idx = word_to_idx
        self.idx_to_word = idx_to_word
        self.word_freqs = torch.Tensor(word_freqs)

    def __len__(self):
        return len(self.text_encoded)

    def __getitem__(self, idx):
        idx = min(max(idx, C), len(self.text_encoded) - 2 - C)  # keep the window inside the text
        center_word = self.text_encoded[idx]
        pos_indices = list(range(idx - C, idx)) + list(range(idx + 1, idx + 1 + C))
        pos_words = self.text_encoded[pos_indices]
        # multinomial sampling: draw the requested number of words, biased towards high-frequency ones
        neg_words = torch.multinomial(self.word_freqs, num_sampled + 2 * C, False)
        # remove any positive (context) words from the negative samples
        neg_words = torch.Tensor(np.setdiff1d(neg_words.numpy(), pos_words.numpy())[:num_sampled]).long()
        return center_word, pos_words, neg_words

print('Building the dataset...')
train_dataset = SkipGramDataset(training_label, dictionary, words, word_freq)
dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE,
                                         drop_last=True, shuffle=True)

Building the dataset...

# turn the dataset into an iterator
sample = iter(dataloader)
# take one batch of samples from the iterator
center_word, pos_words, neg_words = next(sample)
print(center_word[0], words[int(center_word[0])], [words[i] for i in pos_words[0].numpy()])
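
As a quick sanity check (my own addition, not part of the original code), the np.setdiff1d call in __getitem__ should guarantee that none of an example's context words ever appears among its negative samples:

center_word, pos_words, neg_words = train_dataset[100]        # index 100 is an arbitrary choice
print(len(neg_words))                                         # 64 negative samples
print(np.intersect1d(pos_words.numpy(), neg_words.numpy()))   # empty array: no overlap with the context words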

Model construction

class Model(nn.Module):
    def __init__(self, vocab_size, embed_size):
        super(Model, self).__init__()
        self.vocab_size = vocab_size
        self.embed_size = embed_size
        initrange = 0.5 / self.embed_size
        self.in_embed = nn.Embedding(self.vocab_size, self.embed_size, sparse=False)
        self.in_embed.weight.data.uniform_(-initrange, initrange)

    def forward(self, input_labels, pos_labels, neg_labels):
        input_embedding = self.in_embed(input_labels)
        pos_embedding = self.in_embed(pos_labels)
        neg_embedding = self.in_embed(neg_labels)
        log_pos = torch.bmm(pos_embedding, input_embedding.unsqueeze(2)).squeeze()
        log_neg = torch.bmm(neg_embedding, -input_embedding.unsqueeze(2)).squeeze()
        log_pos = F.logsigmoid(log_pos).sum(1)
        log_neg = F.logsigmoid(log_neg).sum(1)
        loss = log_pos + log_neg
        return -loss

model = Model(words_size, EMBEDDING_SIZE).to(device)
model.train()

valid_size = 32
valid_window = words_size / 2  # range from which the validation words are sampled
valid_examples = np.random.choice(int(valid_window), valid_size, replace=False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
NUM_EPOCHS = 10
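
The forward pass above is the skip-gram negative-sampling (SGNS) loss: for each center word it sums log sigmoid(v_pos · v_center) over the true context words and log sigmoid(-v_neg · v_center) over the sampled negatives, and returns the negated sum. A small sketch with made-up tensors showing that the batched bmm form matches a plain dot-product version for one example:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
v_c = torch.randn(32)        # center-word embedding (made-up)
v_pos = torch.randn(6, 32)   # 2*C = 6 context-word embeddings
v_neg = torch.randn(64, 32)  # 64 negative-sample embeddings

loss_plain = -(F.logsigmoid(v_pos @ v_c).sum() + F.logsigmoid(-(v_neg @ v_c)).sum())

# same computation with the bmm form used in Model.forward (batch size 1)
log_pos = torch.bmm(v_pos.unsqueeze(0), v_c.view(1, 32, 1)).squeeze()
log_neg = torch.bmm(v_neg.unsqueeze(0), -v_c.view(1, 32, 1)).squeeze()
loss_bmm = -(F.logsigmoid(log_pos).sum() + F.logsigmoid(log_neg).sum())

print(torch.allclose(loss_plain, loss_bmm))  # True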

Start training

for e in range(NUM_EPOCHS):
    for ei, (input_labels, pos_labels, neg_labels) in enumerate(dataloader):
        input_labels = input_labels.to(device)
        pos_labels = pos_labels.to(device)
        neg_labels = neg_labels.to(device)

        optimizer.zero_grad()
        loss = model(input_labels, pos_labels, neg_labels).mean()
        loss.backward()
        optimizer.step()

        if ei % 20 == 0:
            print("epoch: {}, iter: {}, loss: {}".format(e, ei, loss.item()))

    if e % 40 == 0:
        norm = torch.sum(model.in_embed.weight.data.pow(2), -1).sqrt().unsqueeze(1)
        normalized_embeddings = model.in_embed.weight.data / norm
        valid_embeddings = normalized_embeddings[valid_examples]
        similarity = torch.mm(valid_embeddings, normalized_embeddings.T)
        for i in range(valid_size):
            valid_word = words[valid_examples[i]]
            top_k = 8  # show the 8 nearest words
            nearest = (-similarity[i, :]).argsort()[1:top_k + 1]  # argsort returns indices sorted by ascending value
            log_str = 'Nearest to %s:' % valid_word
            for k in range(top_k):
                close_word = words[nearest[k].cpu().item()]
                log_str = '%s,%s' % (log_str, close_word)
            print(log_str)
epoch: 0, iter: 0, loss: 48.52019500732422
epoch: 0, iter: 20, loss: 48.51792526245117
epoch: 0, iter: 40, loss: 48.50772476196289
epoch: 0, iter: 60, loss: 48.50897979736328
epoch: 0, iter: 80, loss: 48.45783996582031
Nearest to 伏山:,,,,遙望,操忽心生,江渚上,,提刀
Nearest to 與:,,將次,蓋地,,下文,。,妖術(shù),玄德
Nearest to 必獲:,聽調(diào),直取,各處,一端,奏帝,必出,遙望,入帳
Nearest to 郭勝十人:,南華,因為,把,賊戰,告變,,,玄德遂
Nearest to 官軍:,秀才,統(tǒng)兵,說起,軍齊出,四散,放蕩,一彪,云長(zhǎng)
Nearest to 碧眼:,名備,劉焉然,大漢,卷入,大勝,老人,重棗,左有
Nearest to 天書:,泛溢,約期,而進(jìn),龔景,因起,車蓋,遂解,震
Nearest to 轉(zhuǎn)頭:,近因,直取,,七尺,,備下,大漢,齊聲
Nearest to 張寶:,侯覽,驚告嵩,誓畢,帝驚,呼風(fēng)喚雨,狂風(fēng),大將軍,曾
Nearest to 玄德幼:,直取,近聞,劉焉令,幾度,,臨江仙,右有翼德,左右兩
Nearest to 不祥:,無數(shù),調(diào)兵,項(xiàng),劉備,玄德謝,原來,八尺,共
Nearest to 靖:,操忽心生,此病,趙忠,劉焉然,,莊田,,傳至
Nearest to 五千:,豈可,丹鳳眼,北行,聽罷,,性命,,之囚
Nearest to 日非:,趙忠,,聞得,,破賊,早喪,書報(bào),忽起
Nearest to 徐:,震怒,我氣,盧植,,結(jié)為,,燕頷虎須,。
Nearest to 五百余:,帝驚,,本部,,神功,桓帝,滾滾,左右兩
Nearest to 而:,轉(zhuǎn)頭,卷入,,近因,大商,,人公,天子
Nearest to 去:,封谞,夏惲,周末,,嵩信,廣宗,人氏,民心
Nearest to 上:,,,陷邕,四年,關(guān)羽,直趕,九尺,伏山
Nearest to ::,,,,,兄弟,,來代,我答
Nearest to 后:,必獲,閣下,,手起,祭禮,侍奉,各處,奏帝
Nearest to 因起:,帝覽奏,,,汝可引,奪路,一把,是非成敗,卷入
Nearest to 驟起:,挾恨,張寶稱,明公宜,,一統(tǒng)天下,,,玄德請(qǐng)
Nearest to 漢室:,六月,臨江仙,今漢運(yùn),手起,威力,抹額,訛言,提刀
Nearest to 云游四方:,背義忘恩,復(fù),漁樵,地公,揚(yáng)鞭,,故冒姓,截住
Nearest to 桓:,,趙忠,劉焉然,左有,劉備,名備,二帝,游蕩
Nearest to 二字于:,操故,,白土,左右兩,張角本,賞勞,當(dāng)時(shí),梁上
Nearest to 人出:,,五十匹,,奏帝,梁上,九尺,六月,大漢
Nearest to 大浪:,卷入,臨江仙,聽調(diào),漢武時(shí),左有,束草,圍城,及
Nearest to 青:,奪路,,販馬,師事,圍城,卷入,大勝,客人
Nearest to 郎蔡邕:,濁酒,近聞,六月,角戰(zhàn)于,中郎將,,轉(zhuǎn)頭,眾大潰
Nearest to 二月:,馬舞刀,國(guó)譙郡,只見,內(nèi)外,郎蔡邕,,落到,汝得
epoch: 1, iter: 0, loss: 48.46757888793945
epoch: 1, iter: 20, loss: 48.42853546142578
epoch: 1, iter: 40, loss: 48.35804748535156
epoch: 1, iter: 60, loss: 48.083805084228516
epoch: 1, iter: 80, loss: 48.1635856628418
epoch: 2, iter: 0, loss: 47.89817428588867
epoch: 2, iter: 20, loss: 48.067501068115234
epoch: 2, iter: 40, loss: 48.6464729309082
epoch: 2, iter: 60, loss: 47.825260162353516
epoch: 2, iter: 80, loss: 48.07224655151367
epoch: 3, iter: 0, loss: 48.15058898925781
epoch: 3, iter: 20, loss: 47.26418685913086
epoch: 3, iter: 40, loss: 47.87504577636719
epoch: 3, iter: 60, loss: 48.74541473388672
epoch: 3, iter: 80, loss: 48.01288986206055
epoch: 4, iter: 0, loss: 47.257896423339844
epoch: 4, iter: 20, loss: 48.337745666503906
epoch: 4, iter: 40, loss: 47.70765686035156
epoch: 4, iter: 60, loss: 48.57493591308594
epoch: 4, iter: 80, loss: 48.206268310546875
epoch: 5, iter: 0, loss: 47.139137268066406
epoch: 5, iter: 20, loss: 48.70667266845703
epoch: 5, iter: 40, loss: 47.97750473022461
epoch: 5, iter: 60, loss: 48.098899841308594
epoch: 5, iter: 80, loss: 47.778892517089844
epoch: 6, iter: 0, loss: 47.86349105834961
epoch: 6, iter: 20, loss: 47.77979278564453
epoch: 6, iter: 40, loss: 48.67324447631836
epoch: 6, iter: 60, loss: 48.117042541503906
epoch: 6, iter: 80, loss: 48.69907760620117
epoch: 7, iter: 0, loss: 47.63265609741211
epoch: 7, iter: 20, loss: 47.82151794433594
epoch: 7, iter: 40, loss: 48.54405212402344
epoch: 7, iter: 60, loss: 48.06487274169922
epoch: 7, iter: 80, loss: 48.67494583129883
epoch: 8, iter: 0, loss: 48.053466796875
epoch: 8, iter: 20, loss: 47.872459411621094
epoch: 8, iter: 40, loss: 47.462432861328125
epoch: 8, iter: 60, loss: 48.10865783691406
epoch: 8, iter: 80, loss: 46.380184173583984
epoch: 9, iter: 0, loss: 47.2872314453125
epoch: 9, iter: 20, loss: 48.553428649902344
epoch: 9, iter: 40, loss: 47.00652313232422
epoch: 9, iter: 60, loss: 47.970741271972656
epoch: 9, iter: 80, loss: 48.159828186035156

Inspect the trained word vectors

final_embeddings = normalized_embeddings.cpu()  # move to the CPU so we can convert to numpy later
labels = words[10]
print(labels)
print(final_embeddings[10])

玄德
tensor([-0.2620, 0.0660, 0.0464, 0.2948, -0.1974, 0.2471, -0.0893, 0.1720,
-0.1488, 0.0283, -0.1165, 0.2156, -0.1642, -0.2376, -0.0356, -0.0607,
0.1985, -0.2166, 0.2222, 0.2453, -0.1414, -0.0526, 0.1153, -0.1325,
-0.2964, 0.2775, -0.0637, -0.0716, 0.2672, 0.0539, 0.1697, 0.0489])
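
With the normalized embeddings you can also query the neighbors of any particular word. A small sketch (the helper name is my own; the neighbors you get depend on this short, noisy training run):

def nearest_words(word, k=8):
    idx = dictionary[word]                                    # word -> index
    sims = torch.mv(final_embeddings, final_embeddings[idx])  # rows are unit length, so this is cosine similarity
    nearest = (-sims).argsort()[1:k + 1]                      # skip the word itself
    return [words[i.item()] for i in nearest]

print(nearest_words('玄德'))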

with open('skip-gram-sanguo.txt', 'a') as f:
    for i in range(len(words)):
        f.write(words[i] + str(list(final_embeddings.numpy()[i])) + '\n')
print('word vectors have been written.')

word vectors have been written.

You can find the saved file at /home/mw/project/skip-gram-sanguo.txt. It does not have to be saved as txt; the word vectors we usually load are more often in the vec format.
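
If you want the more common vec format instead (a sketch that assumes gensim is available; the file name is my own choice), word2vec's plain-text layout is a header line with the vocabulary size and dimension, followed by one word and its values per line, which gensim's KeyedVectors can load directly:

vectors = final_embeddings.numpy()
with open('skip-gram-sanguo.vec', 'w', encoding='utf-8') as f:
    f.write('%d %d\n' % (len(words), EMBEDDING_SIZE))  # header: vocabulary size and dimension
    for i in range(len(words)):
        f.write(words[i] + ' ' + ' '.join('%.6f' % x for x in vectors[i]) + '\n')

from gensim.models import KeyedVectors                  # assumption: gensim is installed
kv = KeyedVectors.load_word2vec_format('skip-gram-sanguo.vec', binary=False)
print(kv.most_similar('玄德', topn=5))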


Data and code download link

The data and code can be obtained for free by clicking fork in the upper-right corner.

