AI Glossary File Index
See the [[Main AI Page]] or the [[Master of Philosophy - Main Page]].
Relevant to:
- [[Week 1 - Introduction]]
- [[Week 2 - Introduction]]
- [[Week 3 - Introduction]]
- [[Feminist Chatbot Main Page]]
- [[GOLEM Project Page]]
- [[Economic Indicators List]]
A
- [[A-B Testing]]
- [[Accuracy]]
- [[Action]]
- [[Activation Function]]
- [[Active Learning]]
- [[Adagrad]]
- [[Agent]]
- [[Agglomerative Clustering]]
- [[Ar]]
- [[Area Under The Pr Curve]]
- [[Area Under The Roc Curve]]
- [[Artificial General Intelligence]]
- [[Artificial Intelligence]]
- [[Attribute]]
- [[Auc (Area Under The Roc Curve)]]
- [[Augmented Reality]]
- [[Automation Bias]]
- [[Average Precision]]
B
- [[Backpropagation]]
- [[Bag Of Words]]
- [[Baseline]]
- [[Batch]]
- [[Batch Normalization]]
- [[Batch Size]]
- [[Bayesian Neural Network]]
- [[Bellman Equation]]
- [[Bias (Ethics-Fairness)]]
- [[Bias (Math)]]
- [[Binary Classification]]
- [[Boosting]]
- [[Bounding Box]]
- [[Broadcasting]]
- [[Bucketing]]
C
- [[Calibration Layer]]
- [[Candidate Generation]]
- [[Candidate Sampling]]
- [[Categorical Data]]
- [[Centroid-Based Clustering]]
- [[Centroid]]
- [[Checkpoint]]
- [[Class-Imbalanced Dataset]]
- [[Class]]
- [[Classification Model]]
- [[Classification Threshold]]
- [[Clipping]]
- [[Cloud Tpu]]
- [[Clustering]]
- [[Co-Adaptation]]
- [[Collaborative Filtering]]
- [[Confirmation Bias]]
- [[Confusion Matrix]]
- [[Continuous Feature]]
- [[Convenience Sampling]]
- [[Convergence]]
- [[Convex Function]]
- [[Convex Optimization]]
- [[Convex Set]]
- [[Convolution]]
- [[Convolutional Filter]]
- [[Convolutional Layer]]
- [[Convolutional Neural Network]]
- [[Convolutional Operation]]
- [[Cost]]
- [[Counterfactual Fairness]]
- [[Coverage Bias]]
- [[Crash Blossom]]
- [[Critic]]
- [[Cross-Entropy]]
- [[Cross-Validation]]
- [[Custom Estimator]]
D
- [[Data Analysis]]
- [[Data Augmentation]]
- [[Data Set Or Dataset]]
- [[Dataframe]]
- [[Dataset Api (Tf.Data)]]
- [[Decision Boundary]]
- [[Decision Threshold]]
- [[Decision Tree]]
- [[Deep Model]]
- [[Deep Neural Network]]
- [[Deep Q-Network (Dqn)]]
- [[Demographic Parity]]
- [[Dense Feature]]
- [[Dense Layer]]
- [[Depth]]
- [[Depthwise Separable Convolutional Neural Network (Sepcnn)]]
- [[Device]]
- [[Dimension Reduction]]
- [[Dimensions]]
- [[Discrete Feature]]
- [[Discriminative Model]]
- [[Discriminator]]
- [[Disparate Impact]]
- [[Disparate Treatment]]
- [[Divisive Clustering]]
- [[Downsampling]]
- [[Dqn]]
- [[Dynamic Model]]
E
- [[Eager Execution]]
- [[Early Stopping]]
- [[Embedding Space]]
- [[Embeddings]]
- [[Empirical Risk Minimization (Erm)]]
- [[Ensemble]]
- [[Environment]]
- [[Episode]]
- [[Epoch]]
- [[Epsilon Greedy Policy]]
- [[Equality Of Opportunity]]
- [[Equalized Odds]]
- [[Estimator]]
- [[Example]]
- [[Experience Replay]]
- [[Experimenter's Bias]]
- [[Exploding Gradient Problem]]
F
- [[Fairness Constraint]]
- [[Fairness Metric]]
- [[False Negative (Fn)]]
- [[False Positive (Fp)]]
- [[False Positive Rate (Fpr)]]
- [[Feature]]
- [[Feature Column (Tf.Feature Column)]]
- [[Feature Cross]]
- [[Feature Engineering]]
- [[Feature Extraction]]
- [[Feature Set]]
- [[Feature Spec]]
- [[Feature Vector]]
- [[Federated Learning]]
- [[Feedback Loop]]
- [[Feedforward Neural Network (Ffn)]]
- [[Few-Shot Learning]]
- [[Fine Tuning]]
- [[Forget Gate]]
- [[Full Softmax]]
- [[Fully Connected Layer]]
G
- [[Gan]]
- [[Generalization]]
- [[Generalization Curve]]
- [[Generalized Linear Model]]
- [[Generative Adversarial Network (Gan)]]
- [[Generative Model]]
- [[Generator]]
- [[Glossary]]
- [[Gradient]]
- [[Gradient Clipping]]
- [[Gradient Descent]]
- [[Graph]]
- [[Graph Execution]]
- [[Greedy Policy]]
- [[Ground Truth]]
- [[Group Attribution Bias]]
H
- [[Hashing]]
- [[Heuristic]]
- [[Hidden Layer]]
- [[Hierarchical Clustering]]
- [[Hinge Loss]]
- [[Holdout Data]]
- [[Hyperparameter]]
- [[Hyperplane]]
I
- [[I.I.D.]]
- [[Image Recognition]]
- [[Imbalanced Dataset]]
- [[Implicit Bias]]
- [[In-Group Bias]]
- [[Incompatibility Of Fairness Metrics]]
- [[Independently And Identically Distributed (I.I.D)]]
- [[Individual Fairness]]
- [[Inference]]
- [[Input Function]]
- [[Input Layer]]
- [[Instance]]
- [[Inter-Rater Agreement]]
- [[Interpretability]]
- [[Intersection Over Union (Iou)]]
- [[Iou]]
- [[Item Matrix]]
- [[Items]]
- [[Iteration]]
K
- [[K-Means]]
- [[K-Median]]
- [[Keras]]
- [[Kernel Support Vector Machines (Ksvms)]]
- [[Keypoints]]
L
- [[L1 Loss]]
- [[L1 Regularization]]
- [[L2 Loss]]
- [[L2 Regularization]]
- [[Label]]
- [[Labeled Example]]
- [[Lambda]]
- [[Landmarks]]
- [[Layer]]
- [[Layers Api (Tf.Layers)]]
- [[Learning Rate]]
- [[Least Squares Regression]]
- [[Linear Model]]
- [[Linear Regression]]
- [[Log-Odds]]
- [[Log Loss]]
- [[Logistic Regression]]
- [[Logits]]
- [[Long Short-Term Memory (Lstm)]]
- [[Loss]]
- [[Loss Curve]]
- [[Loss Surface]]
- [[Lstm]]
M
- [[Machine Learning]]
- [[Majority Class]]
- [[Markov Decision Process (Mdp)]]
- [[Markov Property]]
- [[Matplotlib]]
- [[Matrix Factorization]]
- [[Mean Absolute Error (Mae)]]
- [[Mean Squared Error (Mse)]]
- [[Metric]]
- [[Metrics Api (Tf.Metrics)]]
- [[Mini-Batch]]
- [[Mini-Batch Stochastic Gradient Descent (Sgd)]]
- [[Minimax Loss]]
- [[Minority Class]]
- [[Ml]]
- [[Mnist]]
- [[Model]]
- [[Model Capacity]]
- [[Model Function]]
- [[Model Training]]
- [[Momentum]]
- [[Multi-Class Classification]]
- [[Multi-Class Logistic Regression]]
- [[Multinomial Classification]]
N
- [[N-Gram]]
- [[Nan Trap]]
- [[Natural Language Understanding]]
- [[Negative Class]]
- [[Neural Network]]
- [[Neuron]]
- [[Nlu]]
- [[Node (Neural Network)]]
- [[Node (Tensorflow Graph)]]
- [[Noise]]
- [[Non-Response Bias]]
- [[Normalization]]
- [[Numerical Data]]
- [[Numpy]]
O
- [[Objective]]
- [[Objective Function]]
- [[Offline Inference]]
- [[One-Hot Encoding]]
- [[One-Shot Learning]]
- [[One-Vs.-All]]
- [[Online Inference]]
- [[Operation (Op)]]
- [[Optimizer]]
- [[Out-Group Homogeneity Bias]]
- [[Outliers]]
- [[Output Layer]]
- [[Overfitting]]
P
- [[Pandas]]
- [[Parameter]]
- [[Parameter Server (Ps)]]
- [[Parameter Update]]
- [[Partial Derivative]]
- [[Participation Bias]]
- [[Partitioning Strategy]]
- [[Perceptron]]
- [[Performance]]
- [[Perplexity]]
- [[Pipeline]]
- [[Policy]]
- [[Polysemous]]
- [[Pooling]]
- [[Positive Class]]
- [[Post-Processing]]
- [[Pr Auc (Area Under The Pr Curve)]]
- [[Pre-Trained Model]]
- [[Precision-Recall Curve]]
- [[Precision]]
- [[Prediction]]
- [[Prediction Bias]]
- [[Predictive Parity]]
- [[Predictive Rate Parity]]
- [[Premade Estimator]]
- [[Preprocessing]]
- [[Prior Belief]]
- [[Proxy (Sensitive Attributes)]]
- [[Proxy Labels]]
Q
- [[Q-Function]]
- [[Q-Learning]]
- [[Quantile]]
- [[Quantile Bucketing]]
- [[Quantization]]
- [[Queue]]
R
- [[Random Forest]]
- [[Random Policy]]
- [[Rank (Ordinality)]]
- [[Rank (Tensor)]]
- [[Rater]]
- [[Re-Ranking]]
- [[Recall]]
- [[Recommendation System]]
- [[Rectified Linear Unit (Relu)]]
- [[Recurrent Neural Network]]
- [[Regression Model]]
- [[Regularization]]
- [[Regularization Rate]]
- [[Reinforcement Learning (Rl)]]
- [[Replay Buffer]]
- [[Reporting Bias]]
- [[Representation]]
- [[Return]]
- [[Reward]]
- [[Ridge Regularization]]
- [[Rnn]]
- [[Roc (Receiver Operating Characteristic) Curve]]
- [[Root Directory]]
- [[Root Mean Squared Error (Rmse)]]
- [[Rotational Invariance]]
S
- [[Sampling Bias]]
- [[Savedmodel]]
- [[Saver]]
- [[Scalar]]
- [[Scaling]]
- [[Scikit-Learn]]
- [[Scoring]]
- [[Selection Bias]]
- [[Semi-Supervised Learning]]
- [[Sensitive Attribute]]
- [[Sentiment Analysis]]
- [[Sequence Model]]
- [[Serving]]
- [[Session (Tf.Session)]]
- [[Shape (Tensor)]]
- [[Sigmoid Function]]
- [[Similarity Measure]]
- [[Size Invariance]]
- [[Sketching]]
- [[Softmax]]
- [[Sparse Feature]]
- [[Sparse Representation]]
- [[Sparse Vector]]
- [[Sparsity]]
- [[Spatial Pooling]]
- [[Squared Hinge Loss]]
- [[Squared Loss]]
- [[State-Action Value Function]]
- [[State]]
- [[Static Model]]
- [[Stationarity]]
- [[Step]]
- [[Step Size]]
- [[Stochastic Gradient Descent (Sgd)]]
- [[Stride]]
- [[Structural Risk Minimization (Srm)]]
- [[Subsampling]]
- [[Summary]]
- [[Synthetic Feature]]
T
- [[Tabular Q-Learning]]
- [[Target]]
- [[Target Network]]
- [[Temporal Data]]
- [[Tensor]]
- [[Tensor Processing Unit (Tpu)]]
- [[Tensor Rank]]
- [[Tensor Shape]]
- [[Tensor Size]]
- [[Tensorboard]]
- [[Tensorflow]]
- [[Tensorflow Playground]]
- [[Tensorflow Serving]]
- [[Termination Condition]]
- [[Test Set]]
- [[Tf.Example]]
- [[Tf.Keras]]
- [[Time Series Analysis]]
- [[Timestep]]
- [[Tower]]
- [[Tpu]]
- [[Tpu Chip]]
- [[Tpu Device]]
- [[Tpu Master]]
- [[Tpu Node]]
- [[Tpu Pod]]
- [[Tpu Resource]]
- [[Tpu Slice]]
- [[Tpu Type]]
- [[Tpu Worker]]
- [[Training]]
- [[Training Set]]
- [[Trajectory]]
- [[Transfer Learning]]
- [[Translational Invariance]]
- [[Trigram]]
- [[True Negative (Tn)]]
- [[True Positive (Tp)]]
- [[True Positive Rate (Tpr)]]
U
- [[Unawareness (To A Sensitive Attribute)]]
- [[Underfitting]]
- [[Unlabeled Example]]
- [[Unsupervised Machine Learning]]
- [[Upweighting]]
- [[User Matrix]]
V
- [[Validation]]
- [[Validation Set]]
- [[Vanishing Gradient Problem]]
W
- [[Wasserstein Loss]]
- [[Weight]]
- [[Weighted Alternating Least Squares (Wals)]]
- [[Wide Model]]
- [[Width]]
Go back to the [[Master Contents Page]]