Related links
(1) 2023 MCM Problem C, Wordle prediction: Problem 1 modeling and Python code explained in detail
(2) 2023 MCM Problem C, Wordle prediction: Problem 2 modeling and Python code explained in detail
(3) 2023 MCM Problem C, Wordle prediction: Problems 3 and 4 modeling and Python code explained in detail
(4) 2023 MCM Problem C, Wordle prediction: 25-page paper
Problem C: Wordle prediction
Code environment
Editor: VS Code
Programming language: Python
If an error occurs when you run the code, don't panic; search for the error message. It is usually a missing package, which can be installed with conda or pip.
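For example, reading the .xlsx data file with pandas typically requires the openpyxl engine; a common fix (the exact package depends on the error message) is:

pip install openpyxl      # engine used by pandas.read_excel for .xlsx files
# or, inside a conda environment:
conda install openpyxl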
1. Problem 1
1.1 Part 1
For the first part, build a time-series forecasting model. First sort the data in chronological order and look at how it is distributed.
import pandas as pd
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import skew, kurtosis

pd.options.display.notebook_repr_html = False  # plain-text table display
plt.rcParams['figure.dpi'] = 75  # figure resolution
sns.set_theme(style='darkgrid')  # plot theme

df = pd.read_excel('data/Problem_C_Data_Wordle.xlsx', header=1)
data = df.drop(columns='Unnamed: 0')
data['Date'] = pd.to_datetime(data['Date'])
data.set_index("Date", inplace=True)
data.sort_index(ascending=True,inplace=True)
data
(1) Look at the data distribution
sns.lineplot(x="Date", y="Number of reported results",data=data)
plt.savefig('img/1.png',dpi=300)
plt.show()
(2) Use a boxplot to check for outliers. Values above 300,000 (shown in black) are outliers and need to be handled; this code uses forward filling, i.e. each outlier is replaced with the value from the previous day.
sns.boxplot(data['Number of reported results'],color='red')
plt.savefig('img/2.png',dpi=300)
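The filling step itself is not shown above; a minimal sketch of the forward fill described here, using the 300,000 threshold, could look like this:

# replace values above 300,000 with NaN, then fill each gap with the previous day's value
# (illustrative sketch; the filling code is not part of the original snippet)
col = 'Number of reported results'
data.loc[data[col] > 300000, col] = np.nan
data[col] = data[col].ffill()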
(3)因?yàn)镹umber of reported results是數(shù)值特征,在線性回歸模型中,為了取得更好的建模效果,在建立回歸評(píng)估模型之前,應(yīng)該檢查確認(rèn)樣本的分布,如果符合正態(tài)分布,則這種訓(xùn)練集是及其理想的,否則應(yīng)該補(bǔ)充完善訓(xùn)練集或者通過技術(shù)手段對(duì)訓(xùn)練集進(jìn)行優(yōu)化。由KDE圖和Q-Q圖可知,價(jià)格屬性呈右偏分布且不服從正態(tài)部分,在回歸之前需要對(duì)數(shù)據(jù)進(jìn)一步數(shù)據(jù)轉(zhuǎn)換。
import scipy.stats as st
plt.figure(figsize=(20, 6))
y = data['Number of reported results']
plt.subplot(121)
plt.title('johnsonsu Distribution fitting', fontsize=20)
sns.distplot(y, kde=False, fit=st.johnsonsu, color='Red')

y2 = data['Number of reported results']
plt.subplot(122)
st.probplot(y2, dist="norm", plot=plt)
plt.title('Q-Q Figure',fontsize=20)
plt.xlabel('X quantile',fontsize=15)
plt.ylabel('Y quantile',fontsize=15)
plt.savefig('img/5.png',dpi=300)
plt.show()
Before the transformation:
After the transformation. Note that the predictions later have to be converted back with the exponential: if y = log(x), then x = e^y (a short round-trip sketch follows the code below).
import scipy.stats as st
plt.figure(figsize=(20, 6))
y = np.log(data['Number of reported results'])
plt.subplot(121)
plt.title('johnsonsu Distribution fitting', fontsize=20)
sns.distplot(y, kde=False, fit=st.johnsonsu, color='Red')

y2 = np.log(data['Number of reported results'])
plt.subplot(122)
st.probplot(y2, dist="norm", plot=plt)
plt.title('Q-Q Figure',fontsize=20)
plt.xlabel('X quantile',fontsize=15)
plt.ylabel('Y quantile',fontsize=15)
plt.savefig('img/6.png',dpi=300)
plt.show()
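As a quick illustration of the back-transform mentioned above (purely a sanity check, not part of the original code):

# round trip of the log transform: y = log(x)  <=>  x = e^y
x = 20000.0
y_log = np.log(x)       # scale used for modelling
x_back = np.exp(y_log)  # inverse transform applied to predictions; recovers 20000.0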
(4) Visualize the correlation of every feature with the label using the Pearson correlation coefficient and keep the highly correlated ones as the dataset's features; this yields 41 features.
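The construction of features and target_t1 used below is not shown in the original snippets; presumably they are lagged values of the log-scale report counts plus a one-step-ahead target. A minimal sketch under that assumption (the names n_lags and lag_i are illustrative):

# Hypothetical feature engineering: lagged log-scale counts as features and the
# next day's value as the target. This is an assumption; the original code for
# building 'features' and 'target_t1' is not shown.
data['Numbers'] = np.log(data['Number of reported results'])  # log scale

n_lags = 41  # the text mentions 41 features
features = [f'lag_{i}' for i in range(1, n_lags + 1)]
for i in range(1, n_lags + 1):
    data[f'lag_{i}'] = data['Numbers'].shift(i)  # value i days earlier

targets = ['target_t1']
data['target_t1'] = data['Numbers'].shift(-1)  # value one day ahead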
# visualize the features most highly correlated with the label
df = data.copy()
corr = df[["target_t1"]+features].corr().abs()
k = 15
col = corr.nlargest(k,'target_t1')['target_t1'].index
plt.subplots(figsize = (10,10))
plt.title("Pearson correlation with label")
sns.heatmap(df[col].corr(),annot=True,square=True,annot_kws={"size":14},cmap="YlGnBu")
plt.savefig('img/10.png',dpi=300)
plt.show()
(5) Before splitting the dataset, the feature data need to be standardized. After standardization, the data from January to November are used as the training set and the December data as the test set. As the plot shows, a simple linear regression can already fit the curve.
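The standardization itself is not shown in the snippets; since the later back-transform is np.exp(p_test*std + mean), the log-scale values were apparently standardized, so a minimal sketch under that assumption is:

# Hypothetical standardization step (assumption: not shown in the original code).
# Keep the mean and std of the log-scale counts so that predictions can be mapped
# back later with np.exp(pred * std + mean).
mean = data['Numbers'].mean()
std = data['Numbers'].std()
for c in features + targets:
    df[c] = (df[c] - mean) / std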
data_feateng = df[features + targets].dropna()
nobs = len(data_feateng)
print("Number of samples: ", nobs)

X_train = data_feateng.loc["2022-1":"2022-11"][features]
y_train = data_feateng.loc["2022-1":"2022-11"][targets]
X_test = data_feateng.loc["2022-12"][features]
y_test = data_feateng.loc["2022-12"][targets]
n, k = X_train.shape
print("Train: {}{}, \nTest: {}{}".format(X_train.shape, y_train.shape, X_test.shape, y_test.shape))

plt.plot(y_train.index, y_train.target_t1.values, label="train")
plt.plot(y_test.index, y_test.target_t1.values, label="test")
plt.title("Train/Test split")
plt.legend()
plt.xticks(rotation=45)
plt.savefig('img/11.png',dpi=300)
plt.show()
(6) Fit a linear regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

X_train = data_feateng.loc["2022-1":"2022-11"][features]
y_train = data_feateng.loc["2022-1":"2022-11"][targets]
X_test = data_feateng.loc["2022-12"][features]
y_test = data_feateng.loc["2022-12"][targets]

reg = LinearRegression().fit(X_train, y_train["target_t1"])
p_train = reg.predict(X_train)
p_test = reg.predict(X_test)

# map the standardized log-scale predictions back to the original scale
y_pred = np.exp(p_test * std + mean)
y_true = np.exp(y_test["target_t1"] * std + mean)
RMSE_test = np.sqrt(mean_squared_error(y_true, y_pred))
print("Test RMSE: {}".format(RMSE_test))
The model's test error is RMSE: 1992.293296317915.
Model training and prediction
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(X_train, y_train["target_t1"])
p_train = reg.predict(X_train)
# X_test here is assumed to be the single feature row for the date being forecast
arr = np.array(X_test).reshape((1, -1))
p_test = reg.predict(arr)
y_pred = np.exp(p_test * std + mean)[0]  # back to the original scale
print(f"The prediction interval is [{int(y_pred - RMSE_test)}, {int(y_pred + RMSE_test)}]")
Subtracting the test error from the prediction gives the left endpoint of the prediction interval, and adding it gives the right endpoint. The resulting prediction interval is [18578, 22562].
1.2 Part 2
For each word I extracted positional letter features (a is encoded as 1, b as 2, c as 3, and so on up to z as 26; each of the five letter positions is filled with the corresponding value, essentially a label encoding of each position), the vowel frequency (how many of the five letters are vowels), the consonant frequency (how many of the five letters are consonants), and the word's part of speech (adjective, adverb, noun, and so on; this last feature was not implemented).
In the code these features are: 'w1', 'w2', 'w3', 'w4', 'w5', 'Vowel_fre', 'Consonant_fre'.
Then the Pearson correlation between each of these features and the percentage of players solving in 1 to 7 tries is computed. The values in the resulting figures can be interpreted further and presented in the paper, for example as a table.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_excel('data/Problem_C_Data_Wordle.xlsx', header=1)
data = df.drop(columns='Unnamed: 0')
data['Date'] = pd.to_datetime(data['Date'])
data.set_index('Date', inplace=True)
data.sort_index(ascending=True, inplace=True)

df = data.copy()
# split each word into its five letters, e.g. "eerie" -> "e,e,r,i,e" -> columns w1..w5
df['Words'] = df['Word'].apply(lambda x: ','.join(list(x)))
df[['w1', 'w2', 'w3', 'w4', 'w5']] = df['Words'].str.split(',', n=4, expand=True)
df
small = [chr(i) for i in range(ord('a'), ord('z') + 1)]
letter_map = dict(zip(small,range(1,27)))
letter_map
{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8, 'i': 9, 'j': 10, 'k': 11, 'l': 12, 'm': 13, 'n': 14, 'o': 15, 'p': 16, 'q': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'x': 24, 'y': 25, 'z': 26}
df['w1'] = df['w1'].map(letter_map)
df['w2'] = df['w2'].map(letter_map)
df['w3'] = df['w3'].map(letter_map)
df['w4'] = df['w4'].map(letter_map)
df['w5'] = df['w5'].map(letter_map)
df
(1) Count the vowel and consonant frequencies
Vowel = ['a','e','i','o','u']
Consonant = list(set(small).difference(set(Vowel)))
def count_Vowel(s):
    c = 0
    for i in range(len(s)):
        if s[i] in Vowel:
            c += 1
    return c

def count_Consonant(s):
    c = 0
    for i in range(len(s)):
        if s[i] in Consonant:
            c += 1
    return c

df['Vowel_fre'] = df['Word'].apply(lambda x: count_Vowel(x))
df['Consonant_fre'] = df['Word'].apply(lambda x:count_Consonant(x))
df
(2) Analyze the correlations
# visualize the correlation of the features with each try-count percentage
features = ['w1', 'w2', 'w3', 'w4', 'w5', 'Vowel_fre', 'Consonant_fre']
label = ['1 try', '2 tries', '3 tries', '4 tries', '5 tries', '6 tries', '7 or more tries (X)']
n = 11
for i in label:
    corr = df[[i] + features].corr().abs()
    k = len(features)
    col = corr.nlargest(k, i)[i].index
    plt.subplots(figsize=(10, 10))
    plt.title(f"Pearson correlation with {i}")
    sns.heatmap(df[col].corr(), annot=True, square=True, annot_kws={"size": 14}, cmap="YlGnBu")
    plt.savefig(f'img/1/{n}.png', dpi=300)
    n += 1
    plt.show()
3 Code
To get the code, open betterbench.top/#/40/detail in your browser, or send me a private message.
The code for Problems 2, 3 and 4 will be published on my homepage over time.