nearest neighbor
compare a query to stored examples using a distance function, assign the class label of the closest stored example
k-nearest neighbor
like nearest neighbor, but let the k nearest neighbors vote (majority vote) -> more robust to noise and outliers!
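The voting idea above can be sketched in a few lines of NumPy (a minimal sketch; the function name and toy data are illustrative, not from the source):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query x to every stored example
    dists = np.linalg.norm(X_train - x, axis=1)
    # indices of the k closest neighbors
    nearest = np.argsort(dists)[:k]
    # majority vote among their labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# toy data: two clusters with labels 0 and 1
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([5.0, 6.0]), k=3))  # -> 1
```

With k=1 this reduces to plain nearest neighbor; larger k smooths the decision boundary.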
hyperparameters of k-NN
distance function
number of neighbors k
data split for ML
split data into training set (fit the model), validation set (tune hyperparameters), test set (final evaluation)
unsupervised learning
no labels/target classes; instead discover structure or properties of the data
supervised learning
labels or target classes are provided
reinforcement learning
agents interact with environment and receive feedback/reward
regression
predict continuous output value
classification
predict discrete value (binary or multi-class)
linear decision boundaries
fit a hyperplane into data space that separates the classes (yes, this is very simplistic)
linear regression: idea
find linear model that explains target y given inputs x
linear regression: formula
\hat{y} = \theta^T x = \theta_0 + \theta_1 x_1 + \dots + \theta_d x_d
d: number of features
\theta: weights
linear regression: prediction in matrix form
\hat{y} = X\theta
where the rows of X are the feature vectors
loss function
measures quality of estimation and tells optimization how to improve estimate
optimization
adjusts the model parameters to minimize the loss function
Linear Least Squares: regular notation
L(\theta) = \frac{1}{2} \sum_{i=1}^{n} (y_i - \theta^T x_i)^2
Linear Least Squares: Matrix notation
L(\theta) = \frac{1}{2} \lVert y - X\theta \rVert_2^2
Linear Least Squares: closed form solution
\theta^* = (X^T X)^{-1} X^T y
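The closed-form solution can be checked numerically (a minimal sketch; the toy data from y = 2x + 1 is an assumed example). Solving the normal equations X^T X \theta = X^T y is numerically preferable to forming the inverse explicitly:

```python
import numpy as np

# toy data generated from y = 2x + 1 (hypothetical example)
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # first column = bias
y = np.array([1.0, 3.0, 5.0, 7.0])

# normal equations: (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # -> [1. 2.]  (intercept, slope)
```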
Linear Least Squares: when to use?
when the relationship between inputs and target is (approximately) linear
Maximum Likelihood: idea
find the parameters under which the observed data is most likely
MLE: formula
\theta^* = \arg\max_{\theta} \prod_{i=1}^{n} p(y_i \mid x_i, \theta) = \arg\max_{\theta} \sum_{i=1}^{n} \log p(y_i \mid x_i, \theta)
MLE: prerequisites/assumptions
i.i.d. (independent and identically distributed)
MLE vs Least Squares: relation
assuming Gaussian noise, MLE yields the same optimal parameters as least squares
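A short sketch of why this holds, assuming the noise model y_i = \theta^T x_i + \epsilon_i with \epsilon_i \sim \mathcal{N}(0, \sigma^2):

```latex
\log p(y \mid X, \theta)
  = \sum_{i=1}^{n} \log \mathcal{N}(y_i \mid \theta^T x_i, \sigma^2)
  = -\frac{n}{2}\log(2\pi\sigma^2)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \theta^T x_i)^2
```

The first term is constant in \theta, so maximizing the log-likelihood is equivalent to minimizing \sum_i (y_i - \theta^T x_i)^2, the least-squares loss.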
Logistic Regression: sigmoid function
\sigma(z) = \frac{1}{1 + e^{-z}}
maps \mathbb{R} onto (0, 1)
Logistic Regression: prediction for binary
p(y = 1 \mid x) = \sigma(\theta^T x)
Bernoulli probability
Logistic Regression: Binary Cross Entropy
L(\theta) = -\sum_{i=1}^{n} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]
General Cross Entropy
H(p, q) = -\sum_{k} p_k \log q_k
no closed-form solution -> optimize iteratively, e.g. with gradient descent
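A minimal sketch of that iterative optimization, fitting logistic regression with plain batch gradient descent (function names, learning rate, and toy data are assumptions, not from the source):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, steps=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ theta)           # predicted P(y = 1 | x)
        grad = X.T @ (p - y) / len(y)    # gradient of the mean binary cross entropy
        theta -= lr * grad               # gradient descent step
    return theta

# separable toy data: bias column plus one feature
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = fit_logreg(X, y)
print((sigmoid(X @ theta) > 0.5).astype(int))  # -> [0 0 1 1]
```

The gradient X^T(\hat{y} - y) follows from differentiating the binary cross entropy with the sigmoid prediction plugged in.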