免費(fèi)做公司電子畫(huà)冊(cè)的網(wǎng)站怎么優(yōu)化網(wǎng)站
Table of Contents
- Abstract
- M
- N
- O
- P
Abstract
This installment of the AI terminology translation series covers terms beginning with M, N, O, and P.
M
英文術(shù)語(yǔ) | 中文翻譯 | 常用縮寫(xiě) | 備注 |
---|---|---|---|
Machine Learning Model | 機(jī)器學(xué)習(xí)模型 | ||
Machine Learning | 機器學習 | ML | |
Machine Translation | 機(jī)器翻譯 | MT | |
Macro Average | 宏平均 | ||
Macro-F1 | 宏F1 | ||
Macro-P | 宏查準(zhǔn)率 | ||
Macro-R | 宏查全率 | ||
Mahalanobis Distance | 馬哈拉諾比斯距離 | ||
Main Diagonal | 主對(duì)角線 | ||
Majority Voting | 絕對(duì)多數(shù)投票 | ||
Majority Voting Rule | 多數(shù)表決規(guī)則 | ||
Manhattan Distance | 曼哈頓距離 | ||
Manifold | 流形 | ||
Manifold Assumption | 流形假設(shè) | ||
Manifold Learning | 流形學(xué)習(xí) | ||
Manifold Tangent Classifier | 流形正切分類器 | ||
Margin | 間隔 | | 統計 |
Margin Theory | 間隔理論 | ||
Marginal Distribution | 邊緣分布 | ||
Marginal Independence | 邊緣獨(dú)立性 | ||
Marginal Likelihood | 邊緣似然函數(shù) | ||
Marginal Probability Distribution | 邊緣概率分布 | ||
Marginalization | 邊緣化 | ||
Markov Blanket | 馬爾可夫毯 | ||
Markov Chain | 馬爾可夫鏈 | ||
Markov Chain Monte Carlo | 馬爾可夫鏈蒙特卡羅 | MCMC | |
Markov Decision Process | 馬爾可夫決策過(guò)程 | MDP | |
Markov Network | 馬爾可夫網(wǎng)絡(luò) | ||
Markov Process | 馬爾可夫過(guò)程 | ||
Markov Property | 馬爾可夫性質(zhì) | ||
Markov Random Field | 馬爾可夫隨機(jī)場(chǎng) | MRF | |
Mask | 掩碼 | ||
Masked Language Modeling | 掩碼語言建模 | MLM | |
Masked Self-Attention | 掩蔽自注意力 | ||
Mathematical Optimization | 數(shù)學(xué)優(yōu)化 | ||
Matrix | 矩陣 | ||
Matrix Calculus | 矩陣微積分 | ||
Matrix Completion | 矩陣補(bǔ)全 | ||
Matrix Decomposition | 矩陣分解 | ||
Matrix Inversion | 矩陣求逆 | ||
Matrix Product | 矩陣乘積 | ||
Max Norm | 最大范數(shù) | ||
Max Pooling | 最大匯聚 | ||
Maxima | 極大值 | ||
Maximal Clique | 最大團(tuán) | ||
Maximization | 極大 | ||
Maximization Step | M步 | ||
Maximization-Maximization Algorithm | 極大-極大算法 | ||
Maximum A Posteriori | 最大后驗(yàn) | ||
Maximum A Posteriori Estimation | 最大后驗(yàn)估計(jì) | MAP | |
Maximum Entropy Model | 最大熵模型 | ||
Maximum Likelihood | 極大似然 | ||
Maximum Likelihood Estimation | 極大似然估計(jì) | MLE | |
Maximum Likelihood Method | 極大似然法 | ||
Maximum Margin | 最大間隔 | ||
Maximum Mean Discrepancy | 最大平均偏差 | MMD | |
Maximum Posterior Probability Estimation | 最大后驗(yàn)概率估計(jì) | MAP | |
Maximum Weighted Spanning Tree | 最大帶權(quán)生成樹(shù) | ||
Maxout | Maxout | ||
Maxout Unit | Maxout單元 | ||
Mean | 均值 | ||
Mean Absolute Error | 平均絕對(duì)誤差 | ||
Mean And Covariance RBM | 均值和協(xié)方差RBM | ||
Mean Field | 平均場 | ||
Mean Filter | 均值濾波 | ||
Mean Pooling | 平均匯聚 | ||
Mean Product of Student t-Distribution | 學(xué)生 t 分布均值乘積 | ||
Mean Squared Error | 均方誤差 | ||
Mean-Covariance Restricted Boltzmann Machine | 均值-協(xié)方差受限玻爾茲曼機(jī) | ||
Mean-Field | 平均場(chǎng) | ||
Meanfield | 平均場 | ||
Measure Theory | 測(cè)度論 | ||
Measure Zero | 零測(cè)度 | ||
Median | 中位數(shù) | ||
Memory | 記憶 | ||
Memory Augmented Neural Network | 記憶增強(qiáng)神經(jīng)網(wǎng)絡(luò) | MANN | |
Memory Capacity | 記憶容量 | ||
Memory Cell | 記憶元 | ||
Memory Network | 記憶網(wǎng)絡(luò) | MN | |
Memory Segment | 記憶片段 | ||
Mercer Kernel | Mercer 核 | ||
Message | 消息 | ||
Message Passing | 消息傳遞 | ||
Message Passing Neural Network | 消息傳遞神經(jīng)網(wǎng)絡(luò) | MPNN | |
Meta-Learner | 元學(xué)習(xí)器 | ||
Meta-Learning | 元學(xué)習(xí) | ||
Meta-Optimization | 元優(yōu)化 | ||
Meta-Rule | 元規(guī)則 | ||
Metric | 指標(biāo) | ||
Metric Learning | 度量學(xué)習(xí) | ||
Micro Average | 微平均 | ||
Micro-F1 | 微F1 | ||
Micro-P | 微查準率 | ||
Micro-R | 微查全率 | ||
Min-Max Normalization | 最小最大值規(guī)范化 | ||
Mini-Batch Gradient | 小批量梯度 | ||
Mini-Batch Gradient Descent | 小批量梯度下降法 | ||
Mini-Batch SGD | 小批次隨機(jī)梯度下降 | ||
Minibatch | 小批量 | ||
Minibatch Stochastic | 小批量隨機(jī) | ||
Minima | 極小值 | ||
Minimum Description Length | 最小描述長度 | MDL | |
Minimax Game | 極小極大博弈 | ||
Minimum | 極小點(diǎn) | ||
Minkowski Distance | 閔可夫斯基距離 | ||
Misclassification Cost | 誤分類代價(jià) | ||
Mixing | 混合 | ||
Mixing Time | 混合時(shí)間 | ||
Mixture Density Network | 混合密度網(wǎng)絡(luò) | ||
Mixture Distribution | 混合分布 | ||
Mixture of Experts | 混合專家模型 | ||
Mixture-of-Gaussian | 高斯混合 | ||
Modality | 模態(tài) | ||
Mode | 峰值 | ||
Model | 模型 | ||
Model Averaging | 模型平均 | ||
Model Collapse | 模型坍塌 | ||
Model Complexity | 模型復(fù)雜度 | ||
Model Compression | 模型壓縮 | ||
Model Identifiability | 模型可辨識(shí)性 | ||
Model Parallelism | 模型并行 | ||
Model Parameter | 模型參數(shù) | ||
Model Predictive Control | 模型預(yù)測(cè)控制 | MPC | |
Model Selection | 模型選擇 | ||
Model-Agnostic Meta-Learning | 模型無(wú)關(guān)的元學(xué)習(xí) | MAML | |
Model-Based Learning | 有模型學(xué)習(xí) | ||
Model-Based Reinforcement Learning | 基于模型的強(qiáng)化學(xué)習(xí) | ||
Model-Free Learning | 免模型學(xué)習(xí) | ||
Model-Free Reinforcement Learning | 模型無(wú)關(guān)的強(qiáng)化學(xué)習(xí) | ||
Moment | 矩 | ||
Moment Matching | 矩匹配 | ||
Momentum | 動(dòng)量 | ||
Momentum Method | 動(dòng)量法 | ||
Monte Carlo | 蒙特卡羅 | ||
Monte Carlo Estimate | 蒙特卡羅估計(jì) | ||
Monte Carlo Integration | 蒙特卡羅積分 | ||
Monte Carlo Method | 蒙特卡羅方法 | ||
Moore’s Law | 摩爾定律 | ||
Moore-Penrose Pseudoinverse | Moore-Penrose 偽逆 | ||
Moral Graph | 端正圖/道德圖 | ||
Moralization | 道德化 | ||
Most General Unifier | 最一般合一置換 | ||
Moving Average | 移動(dòng)平均 | MA | |
Multi-Armed Bandit Problem | 多臂賭博機(jī)問(wèn)題 | ||
Multi-Class Classification | 多分類 | ||
Multi-Classifier System | 多分類器系統(tǒng) | ||
Multi-Document Summarization | 多文檔摘要 | ||
Multi-Head Attention | 多頭注意力 | ||
Multi-Head Self-Attention | 多頭自注意力 | ||
Multi-Hop | 多跳 | ||
Multi-Kernel Learning | 多核學(xué)習(xí) | ||
Multi-Label Classification | 多標(biāo)簽分類 | ||
Multi-Label Learning | 多標(biāo)記學(xué)習(xí) | ||
Multi-Layer Feedforward Neural Networks | 多層前饋神經(jīng)網(wǎng)絡(luò) | ||
Multi-Layer Perceptron | 多層感知機(jī) | MLP | |
Multinomial Logistic Regression Model | 多項對數幾率回歸模型 | ||
Multi-Prediction Deep Boltzmann Machine | 多預(yù)測(cè)深度玻爾茲曼機(jī) | ||
Multi-Response Linear Regression | 多響應(yīng)線性回歸 | MLR | |
Multi-View Learning | 多視圖學(xué)習(xí) | ||
Multicollinearity | 多重共線性 | ||
Multimodal | 多峰值 | ||
Multimodal Learning | 多模態(tài)學(xué)習(xí) | ||
Multinomial Distribution | 多項(xiàng)分布 | ||
Multinoulli Distribution | Multinoulli分布 | ||
Multinoulli Output Distribution | Multinoulli輸出分布 | ||
Multiple Dimensional Scaling | 多維縮放 | ||
Multiple Linear Regression | 多元線性回歸 | MLR | 統(tǒng)計(jì) |
Multitask Learning | 多任務(wù)學(xué)習(xí) | ||
Multivariate Decision Tree | 多變量決策樹(shù) | ||
Multivariate Gaussian Distribution | 多元高斯分布 | ||
Multivariate Normal Distribution | 多元正態(tài)分布 | ||
Mutual Information | 互信息 | ||
Machine-Readable Data | 機(jī)器可讀的數(shù)據(jù) | ||
MAE | 平均絕對誤差 | MAE | |
Mahalanobis Distances | 馬氏距離 | | 統計 |
Matrices | 矩陣 | | 數學 |
Matthews Correlation Coefficient | 馬修斯相關系數 | MCC | |
Maximum Likelihood Methods | 最大似然法 | | 統計 |
Maximum Likelihood Procedures | 最大似然估計法 | | 統計 |
MCTS Method | 蒙特卡洛樹搜索方法 | ||
Mean-Squared Error | 均方誤差 | | 統計、機器學習 |
Mechanical Sympathy | 機(jī)械同感,軟硬件協(xié)同編程 | ||
Merging | 合并 | ||
Message Passing Neural Networks | 消息傳遞神經網絡 | MPNNs | |
Microarray Data | 微陣列數(shù)據(jù) | ||
Mini Batch | 小批次 | ||
Mining | 挖掘 | ||
Mining Out | 挖掘 | ||
Missing Values | 缺失值 | | 統計 |
ML Algorithm | 機(jī)器學(xué)習(xí)算法 | ||
ML Modelling | 機(jī)器學(xué)習(xí)建模 | ||
ML Potentials | 機(jī)器學(xué)習(xí)勢(shì)能 | ||
ML-Driven | 機(jī)器學(xué)習(xí)驅(qū)動(dòng)的 | ||
ML-Driven Optimization | 機(jī)器學(xué)習(xí)驅(qū)動(dòng)的最優(yōu)化 | ||
MLP Neural Model | 多層感知機(jī)神經(jīng)模型 | ||
Model Construction | 模型構(gòu)建 | ||
Model Evaluation | 模型評(píng)估 | ||
Model Performance | 模型性能 | ||
Model Statistics | 模型統(tǒng)計(jì) | ||
Model Training | 模型訓練 | | 機器學習 |
Model Validation | 模型驗(yàn)證 | ||
Model-Based Iterative Reconstruction | 基于模型的迭代重建 | MBIR | |
Model-Construction | 模型構(gòu)建 | ||
Modelling Scenario | 建模場(chǎng)景 | ||
Molecular Graph Theory | 分子圖論 | ||
Molecular Modelling | 分子建模 | ||
Monte Carlo Tree Search | 蒙特卡洛樹(shù)搜索 | MCTS | 數(shù)學(xué) |
Moore’s Law | 摩爾定律 | | 計算機 |
Multi-Agent Control System | 多智能體控制系統 | ||
Multi-Core Desktop Computer | 多核臺式計算機 | | 計算機 |
Multi-Dimensional Big Data Analysis | 多維度大數(shù)據(jù)分析 | ||
Multi-Layer Feed-Forward | 多層前饋 | MLFF | |
Multi-Objective Genetic Algorithm | 多目標(biāo)遺傳算法 | MOGA | |
Multi-Objective Optimization | 多目標優化 | | 機器學習 |
Multi-Reaction Synthesis | 多反應(yīng)合成 | ||
Multilayer Perceptron | 多層感知機(jī) | ||
Multivariate Regression | 多變量回歸 | ||
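Several entries above (Macro Average, Macro-F1, Micro Average, Micro-F1) name related evaluation metrics. As a minimal illustrative sketch (not part of the original glossary; the per-class counts are hypothetical), macro averaging computes the metric per class and then averages, while micro averaging pools all counts before computing a single score:

```python
# Illustrative sketch: Macro-F1 (宏F1) vs. Micro-F1 (微F1) over three classes.
# Hypothetical per-class counts: (true positives, false positives, false negatives)
counts = {"A": (8, 2, 1), "B": (3, 1, 4), "C": (5, 5, 5)}

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0  # 查準率
    recall = tp / (tp + fn) if tp + fn else 0.0     # 查全率
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Macro-F1: compute F1 per class, then average (each class weighted equally).
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro-F1: pool all counts first, then compute one F1 over the totals.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = f1(tp, fp, fn)

print(round(macro_f1, 4), round(micro_f1, 4))
```

Macro averaging is sensitive to rare classes (each class counts equally), while micro averaging is dominated by frequent classes.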
N
英文術(shù)語(yǔ) | 中文翻譯 | 常用縮寫(xiě) | 備注 |
---|---|---|---|
N-Gram | N元 | ||
N-Gram Feature | N元特征 | ||
N-Gram Model | N元模型 | ||
Naive Bayes Algorithm | 樸素貝葉斯算法 | ||
Naive Bayes Classifier | 樸素貝葉斯分類器 | ||
Naive Bayes | 樸素貝葉斯 | NB | |
Named Entity Recognition | 命名實(shí)體識(shí)別 | ||
Narrow Convolution | 窄卷積 | ||
Nash Equilibrium | 納什均衡 | ||
Nash Reversion | 納什回歸 | ||
Nats | 奈特 | ||
Natural Exponential Decay | 自然指數(shù)衰減 | ||
Natural Language Generation | 自然語(yǔ)言生成 | NLG | |
Natural Language Processing | 自然語(yǔ)言處理 | NLP | 機(jī)器學(xué)習(xí) |
Nearest Neighbor | 最近鄰 | ||
Nearest Neighbor Classifier | 最近鄰分類器 | ||
Nearest Neighbor Graph | 最近鄰圖 | ||
Nearest Neighbor Regression | 最近鄰回歸 | ||
Nearest-Neighbor Search | 最近鄰搜索 | ||
Negative Class | 負(fù)類 | ||
Negative Correlation | 負(fù)相關(guān)法 | ||
Negative Definite | 負(fù)定 | ||
Negative Log Likelihood | 負(fù)對(duì)數(shù)似然函數(shù) | ||
Negative Part Function | 負(fù)部函數(shù) | ||
Negative Phase | 負(fù)相 | ||
Negative Sample | 負(fù)例 | ||
Negative Sampling | 負(fù)采樣 | ||
Negative Semidefinite | 半負(fù)定 | ||
Neighbourhood Component Analysis | 近鄰成分分析 | NCA | |
Nesterov Accelerated Gradient | Nesterov加速梯度 | NAG | |
Nesterov Momentum | Nesterov動(dòng)量法 | ||
Net Activation | 凈活性值 | ||
Net Input | 凈輸入 | ||
Network | 網(wǎng)絡(luò) | ||
Network Capacity | 網(wǎng)絡(luò)容量 | ||
Neural Architecture Search | 神經(jīng)架構(gòu)搜索 | NAS | |
Neural Auto-Regressive Density Estimator | 神經(jīng)自回歸密度估計(jì)器 | ||
Neural Auto-Regressive Network | 神經(jīng)自回歸網(wǎng)絡(luò) | ||
Neural Language Model | 神經(jīng)語(yǔ)言模型 | ||
Neural Machine Translation | 神經(jīng)機(jī)器翻譯 | ||
Neural Model | 神經(jīng)模型 | ||
Neural Network | 神經(jīng)網(wǎng)絡(luò) | NN | |
Neural Turing Machine | 神經(jīng)圖靈機(jī) | NTM | |
Neurodynamics | 神經(jīng)動(dòng)力學(xué) | ||
Neuromorphic Computing | 神經(jīng)形態(tài)計(jì)算 | ||
Neuron | 神經(jīng)元 | ||
Newton Method | 牛頓法 | ||
No Free Lunch Theorem | 沒(méi)有免費(fèi)午餐定理 | NFL | |
Node | 結(jié)點(diǎn) | ||
Noise | 噪聲 | ||
Noise Distribution | 噪聲分布 | ||
Noise-Contrastive Estimation | 噪聲對(duì)比估計(jì) | NCE | |
Nominal Attribute | 列名屬性 | ||
Non-Autoregressive Process | 非自回歸過(guò)程 | ||
Non-Convex Optimization | 非凸優(yōu)化 | ||
Non-Informative Prior | 無(wú)信息先驗(yàn) | ||
Non-Linear Model | 非線性模型 | ||
Non-Linear Oscillation | 非線性振蕩 | ||
Non-Linear Support Vector Machine | 非線性支持向量機(jī) | ||
Non-Metric Distance | 非度量距離 | ||
Non-Negative Matrix Factorization | 非負(fù)矩陣分解 | NMF | |
Non-Ordinal Attribute | 無(wú)序?qū)傩?/td> | ||
Non-Parametric | 非參數(shù) | ||
Non-Parametric Model | 非參數(shù)化模型 | ||
Non-Probabilistic Model | 非概率模型 | ||
Non-Saturating Game | 非飽和博弈 | ||
Non-Separable | 不可分 | ||
Nonconvex | 非凸 | ||
Nondistributed | 非分布式 | ||
Nondistributed Representation | 非分布式表示 | ||
Nonlinear Autoregressive With Exogenous Inputs Model | 有外部輸入的非線性自回歸模型 | NARX | |
Nonlinear Conjugate Gradients | 非線性共軛梯度 | ||
Nonlinear Independent Components Estimation | 非線性獨(dú)立成分估計(jì) | ||
Nonlinear Programming | 非線性規(guī)劃 | ||
Nonparametric Density Estimation | 非參數(shù)密度估計(jì) | ||
Norm | 范數(shù) | ||
Norm-Preserving | 范數(shù)保持性 | ||
Normal Distribution | 正態(tài)分布 | ||
Normal Equation | 正規(guī)方程 | ||
Normalization | 規範化 | | 統計、機器學習 |
Normalization Factor | 規(guī)范化因子 | ||
Normalized | 規(guī)范化的 | ||
Normalized Initialization | 標(biāo)準(zhǔn)初始化 | ||
Nuclear Norm | 核范數(shù) | ||
Null Space | 零空間 | ||
Number of Epochs | 輪數(shù) | ||
Numerator Layout | 分子布局 | ||
Numeric Value | 數(shù)值 | ||
Numerical Attribute | 數(shù)值屬性 | ||
Numerical Differentiation | 數(shù)值微分 | ||
Numerical Method | 數(shù)值方法 | ||
Numerical Optimization | 數(shù)值優(yōu)化 | ||
N-Dimensional Space | N維空間 | ||
Naive Bayesian | 樸素貝葉斯 | | 統計 |
Naive Bayesian Methods | 樸素貝葉斯方法 | | 統計 |
Named Entity Recognition | 命名實體識別 | NER | |
Nearest Neighbors | 近鄰 | ||
Nearest Neighbour Model | 近鄰模型 | ||
Negative Predictive Value | 陰性預(yù)測(cè)值 | NPV | |
Network Architecture | 網絡結構 | | 機器學習 |
Network Geometry | 網(wǎng)絡(luò)幾何 | ||
Neural Turing Machines | 神經(jīng)圖靈機(jī) | NTM | |
Neural-Network-Based Function | 基于神經(jīng)網(wǎng)絡(luò)的函數(shù) | ||
Neurons | 神經元 | | 機器學習 |
Nuclear Magnetic Resonance | 核磁共振 | NMR | |
Noise Filters | 噪聲過(guò)濾器 | ||
Noise-Free | 無(wú)噪的 | ||
Non-Linear | 非線性 | | 數學、統計 |
Non-Linear Correlation | 非線性相關 | | 統計 |
Non-Linearity | 非線性 | ||
Non-Parametric Algorithm | 非參數(shù)化學(xué)習(xí)算法 | ||
Non-Safety-Critical Applications | 非安全關(guān)鍵型應(yīng)用 | ||
Non-Steady-State | 非穩(wěn)態(tài) | ||
Non-Stochastic | 非隨機(jī)的 | ||
Non-Template | 非模板 | ||
Non-Template Methods | 非模板方法 | ||
Non-Zero Weight | 非零權重 | ||
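The N-Gram entries above (N元, N元特征, N元模型) refer to contiguous subsequences of N tokens. A minimal sketch, not from the original glossary, of extracting them from a token sequence:

```python
# Illustrative sketch: extracting N-grams (N元) from a token sequence.
def ngrams(tokens, n):
    """Return all contiguous length-n subsequences as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["the", "cat", "sat"]
print(ngrams(tokens, 2))  # bigrams: [('the', 'cat'), ('cat', 'sat')]
```

Unigrams, bigrams, and trigrams (n = 1, 2, 3) are the cases most often used as N-gram features in language models.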
O
英文術(shù)語(yǔ) | 中文翻譯 | 常用縮寫(xiě) | 備注 |
---|---|---|---|
Object Detection | 目標(biāo)檢測(cè) | ||
Object Recognition | 對(duì)象識(shí)別 | ||
Objective | 目標(biāo) | ||
Objective Function | 目標(biāo)函數(shù) | ||
Oblique Decision Tree | 斜決策樹(shù) | ||
Observable Variable | 觀測(cè)變量 | ||
Observation Sequence | 觀測(cè)序列 | ||
Occam’s Razor | 奧卡姆剃刀 | | 機器學習 |
Odds | 幾率 | ||
Off-Policy | 異策略 | ||
Offline Inference | 離線推斷 | ||
Offset | 偏移量 | ||
Offset Vector | 偏移向量 | ||
On-Policy | 同策略 | ||
One-Shot Learning | 單試學(xué)習(xí) | ||
One-Dependent Estimator | 獨(dú)依賴估計(jì) | ODE | |
One-Hot | 獨(dú)熱 | ||
Online | 在線 | ||
Online Inference | 在線推斷 | ||
Online Learning | 在線學(xué)習(xí) | ||
Operation | 操作 | ||
Operator | 運(yùn)算符 | ||
Optimal Capacity | 最佳容量 | ||
Optimization | 最優(yōu)化 | ||
Optimization Landscape | 優(yōu)化地形 | ||
Optimizer | 優(yōu)化器 | ||
Ordered Rule | 帶序規(guī)則 | ||
Ordinal Attribute | 有序?qū)傩?/td> | ||
Origin | 原點(diǎn) | ||
Orthogonal | 正交 | | 數學 |
Orthogonal Initialization | 正交初始化 | ||
Orthogonal Matrix | 正交矩陣 | ||
Orthonormal | 標(biāo)準(zhǔn)正交 | ||
Out-Of-Bag Estimate | 包外估計(jì) | ||
Outer Product | 外積 | ||
Outlier | 異常點(diǎn) | ||
Output | 輸出 | ||
Output Gate | 輸出門(mén) | ||
Output Layer | 輸出層 | | 機器學習 |
Output Smearing | 輸出調(diào)制法 | ||
Output Space | 輸出空間 | ||
Over-Parameterized | 過(guò)度參數(shù)化 | ||
Overcomplete | 過(guò)完備 | ||
Overestimation | 過(guò)估計(jì) | ||
Overfitting | 過擬合 | | 機器學習 |
Overfitting Regime | 過(guò)擬合機(jī)制 | ||
Overflow | 上溢 | ||
Oversampling | 過(guò)采樣 | ||
On-The-Fly Optimization | 運行中優化 | | 計算機 |
One-Hot Vector | 獨熱向量 | | 整個向量中只有一個元素為1,其余為0 |
Open-Source | 開源 | | 軟件工程 |
Open-Source Dataset | 開源數據集 | | 機器學習 |
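The One-Hot Vector entry's note describes a vector in which exactly one element is 1 and the rest are 0. A minimal sketch, not from the original glossary:

```python
# Illustrative sketch: one-hot (獨熱) encoding of a class index.
def one_hot(index, num_classes):
    """Return a vector with a single 1 at position `index`, zeros elsewhere."""
    vec = [0] * num_classes
    vec[index] = 1
    return vec

print(one_hot(2, 5))  # [0, 0, 1, 0, 0]
```

One-hot vectors are commonly used to represent categorical labels or vocabulary indices before embedding.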
P
英文術(shù)語(yǔ) | 中文翻譯 | 常用縮寫(xiě) | 備注 |
---|---|---|---|
PAC Learning | PAC學(xué)習(xí) | ||
PAC-Learnable | PAC可學習 | ||
Padding | 填充 | ||
Paired t-Test | 成對t檢驗 | ||
Pairwise | 成對(duì)型 | ||
Pairwise Markov Property | 成對(duì)馬爾可夫性 | ||
Parallel Distributed Processing | 并行分布式處理 | PDP | |
Parallel Tempering | 并行回火 | ||
Parameter | 參數(shù) | ||
Parameter Estimation | 參數(shù)估計(jì) | ||
Parameter Server | 參數(shù)服務(wù)器 | ||
Parameter Sharing | 參數(shù)共享 | ||
Parameter Space | 參數(shù)空間 | ||
Parameter Tuning | 調參 | | 機器學習 |
Parametric Case | 有參情況 | ||
Parametric Density Estimation | 參數(shù)密度估計(jì) | ||
Parametric Model | 參數(shù)化模型 | ||
Parametric ReLU | 參數(shù)化修正線性單元/參數(shù)化整流線性單元 | PReLU | |
Parse Tree | 解析樹(shù) | ||
Part-Of-Speech Tagging | 詞性標(biāo)注 | ||
Partial Derivative | 偏導(dǎo)數(shù) | ||
Partially Observable Markov Decision Process | 部分可觀測馬爾可夫決策過程 | POMDP | |
Particle Swarm Optimization | 粒子群優(yōu)化算法 | PSO | |
Partition | 劃分 | ||
Partition Function | 配分函數(shù) | ||
Path | 路徑 | ||
Pattern | 模式 | ||
Pattern Recognition | 模式識(shí)別 | PR | |
Penalty Term | 罰項(xiàng) | ||
Perceptron | 感知機 | | 機器學習 |
Performance Measure | 性能度量 | ||
Periodic | 周期的 | ||
Permutation Invariant | 置換不變性 | ||
Perplexity | 困惑度 | ||
Persistent Contrastive Divergence | 持續(xù)性對(duì)比散度 | ||
Phoneme | 音素 | ||
Phonetic | 語(yǔ)音 | ||
Pictorial Structure | 圖形結(jié)構(gòu) | ||
Piecewise | 分段 | ||
Piecewise Constant Decay | 分段常數(shù)衰減 | ||
Pipeline | 流水線 | ||
Plate Notation | 板塊表示 | ||
Plug And Play Generative Network | 即插即用生成網(wǎng)絡(luò) | ||
Plurality Voting | 相對(duì)多數(shù)投票 | ||
Point Estimator | 點(diǎn)估計(jì) | ||
Pointer Network | 指針網(wǎng)絡(luò) | ||
Polarity Detection | 極性檢測(cè) | ||
Policy | 策略 | ||
Policy Evaluation | 策略評(píng)估 | ||
Policy Gradient | 策略梯度 | ||
Policy Improvement | 策略改進(jìn) | ||
Policy Iteration | 策略迭代 | ||
Policy Search | 策略搜索 | ||
Polynomial Basis Function | 多項(xiàng)式基函數(shù) | ||
Polynomial Kernel Function | 多項(xiàng)式核函數(shù) | ||
Polysemy | 一詞多義性 | ||
Pool | 匯聚 | ||
Pooling | 匯聚 | ||
Pooling Function | 匯聚函數(shù) | ||
Pooling Layer | 匯聚層 | ||
Poor Conditioning | 病態(tài)條件 | ||
Position Embedding | 位置嵌入 | ||
Positional Encoding | 位置編碼 | ||
Positive Class | 正類 | ||
Positive Definite | 正定 | ||
Positive Definite Kernel Function | 正定核函數(shù) | ||
Positive Definite Matrix | 正定矩陣 | ||
Positive Part Function | 正部函數(shù) | ||
Positive Phase | 正相 | ||
Positive Recurrent | 正常返的 | ||
Positive Sample | 正例 | ||
Positive Semidefinite | 半正定 | ||
Positive-Semidefinite Matrix | 半正定矩陣 | ||
Post-Hoc Test | 后續(xù)檢驗(yàn) | ||
Post-Pruning | 后剪枝 | ||
Posterior Distribution | 后驗(yàn)分布 | ||
Posterior Inference | 后驗(yàn)推斷 | ||
Posterior Probability | 后驗(yàn)概率 | ||
Potential Function | 勢(shì)函數(shù) | ||
Power Method | 冪法 | ||
PR Curve | P-R曲線 | ||
Pre-Trained Initialization | 預(yù)訓(xùn)練初始化 | ||
Pre-Training | 預(yù)訓(xùn)練 | ||
Precision | 查準率/準確率 | | 數學、HPC |
Precision Matrix | 精度矩陣 | ||
Predictive Sparse Decomposition | 預(yù)測(cè)稀疏分解 | ||
Pre-Pruning | 預剪枝 | ||
Pretrained Language Model | 預(yù)訓(xùn)練語(yǔ)言模型 | ||
Primal Problem | 主問(wèn)題 | ||
Primary Visual Cortex | 初級(jí)視覺(jué)皮層 | ||
Principal Component Analysis | 主成分分析 | PCA | |
Principle Of Multiple Explanations | 多釋原則 | ||
Prior | 先驗(yàn) | ||
Prior Knowledge | 先驗知識 | | 統計 |
Prior Probability | 先驗(yàn)概率 | ||
Prior Probability Distribution | 先驗(yàn)概率分布 | ||
Prior Pseudo-Counts | 先驗偽計數 | ||
Prior Shift | 先驗(yàn)偏移 | ||
Priority Rule | 優(yōu)先級(jí)規(guī)則 | ||
Probabilistic Context-Free Grammar | 概率上下文無(wú)關(guān)文法 | ||
Probabilistic Density Estimation | 概率密度估計(jì) | ||
Probabilistic Generative Model | 概率生成模型 | ||
Probabilistic Graphical Model | 概率圖模型 | PGM | |
Probabilistic Latent Semantic Analysis | 概率潛在語(yǔ)義分析 | PLSA | |
Probabilistic Latent Semantic Indexing | 概率潛在語(yǔ)義索引 | PLSI | |
Probabilistic Model | 概率模型 | ||
Probabilistic PCA | 概率PCA | ||
Probabilistic Undirected Graphical Model | 概率無(wú)向圖模型 | ||
Probability | 概率 | ||
Probability Density Function | 概率密度函數(shù) | ||
Probability Distribution | 概率分布 | | 統計 |
Probability Mass Function | 概率質(zhì)量函數(shù) | ||
Probability Model Estimation | 概率模型估計(jì) | ||
Probably Approximately Correct | 概率近似正確 | PAC | |
Product of Expert | 專家之積 | ||
Product Rule | 乘法法則 | ||
Properly PAC Learnable | 恰PAC可學(xué)習(xí) | ||
Proportional | 成比例 | ||
Proposal Distribution | 提議分布 | ||
Propositional Atom | 原子命題 | ||
Propositional Rule | 命題規(guī)則 | ||
Prototype-Based Clustering | 原型聚類 | ||
Proximal Gradient Descent | 近端梯度下降 | PGD | |
Pruning | 剪枝 | ||
Pseudo-Label | 偽標(biāo)記 | ||
Pseudolikelihood | 偽似然 | ||
Predicted Label | 預測值 | | 機器學習 |
Prediction | 預測 | | 機器學習 |
Prediction Accuracy | 預測準確率 | | 機器學習 |
Predictor | 預測器/決策函數 | | 機器學習 |
Protein Folding | 蛋白折疊 | | 生物 |
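The Perplexity entry (困惑度) is a standard measure of how well a language model predicts a sequence: the exponentiated average negative log-probability of the tokens. A minimal sketch, not from the original glossary; the per-token probabilities are hypothetical:

```python
import math

# Illustrative sketch: perplexity (困惑度) of a language model on a test sequence.
# Hypothetical probabilities the model assigns to each observed token.
probs = [0.25, 0.5, 0.125, 0.5]

# Average negative log2-probability per token (cross-entropy in bits).
avg_neg_log = -sum(math.log2(p) for p in probs) / len(probs)

# Perplexity = 2 ** cross-entropy; lower means the model is less "surprised".
perplexity = 2 ** avg_neg_log
print(round(perplexity, 4))
```

A perplexity of k can be read as the model being, on average, as uncertain as a uniform choice among k options per token.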