Problem definition
Training-free models - Deep Image Prior
DIP fitting with L2 loss; z is a fixed random input image of the same size as x
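Written out, this is the standard DIP objective (notation assumed): fit the network f_theta to the corrupted image x,

    \min_\theta \; \| f_\theta(z) - x \|_2^2

and take f_theta(z) at an early-stopped theta as the restoration.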
Before fitting x perfectly, the model reconstructs the clean image
Noise and corruption are harder to fit, so stopping the optimization early yields the restored image
Uses a U-Net architecture (often without skip connections); see the sketch below
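A minimal sketch of the fitting loop (PyTorch-style; the optimizer, learning rate, and step count are assumptions, and net stands for any U-Net-like model):

    import torch

    def dip_fit(net, x, num_steps=2000, lr=0.01):
        # x: corrupted image tensor of shape (1, C, H, W)
        z = torch.randn_like(x)                # fixed random input, same size as x
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(num_steps):             # keep num_steps small: clean content is fitted first
            opt.zero_grad()
            loss = ((net(z) - x) ** 2).mean()  # L2 loss against the corrupted image
            loss.backward()
            opt.step()
        return net(z).detach()                 # restored image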
Application: inverting the Radon transform (e.g. CT reconstruction)
+ works well, no training data, no domain adaptation needed
- worse results than supervised methods; bad for low-frequency corruption
Learning from unlabeled examples - Noise2Noise
Given: a distribution p(y) of noisy images for the sample
With infinite data: the average of the noisy samples is the clean sample
Alternative formulation:
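Presumably the Noise2Noise objective: instead of clean targets, train one noisy realization against another (valid for zero-mean noise, where E[y] = x; notation assumed):

    \min_\theta \; \mathbb{E}_{y, y'} \, \| f_\theta(y) - y' \|_2^2

For the L2 loss this has the same minimizer as training against the clean image, since the optimum is the conditional mean.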
Learning from unlabeled examples - Self-training
1. Train with labeled data with ground truths
2. Generate pseudo-labels for the unlabeled data with the trained model
3. Retrain with labeled and pseudo-labeled data (down-weighting the pseudo-labeled term)
4. Iterate over steps 2 and 3 (see the sketch below)
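A minimal sketch of the loop (the helpers train_weighted and predict, the round count, and the weight lambda_u are hypothetical placeholders):

    def self_training(model, labeled, unlabeled, rounds=3, lambda_u=0.5):
        # Step 1: train on labeled data with ground truths
        model = train_weighted(model, [(x, y, 1.0) for x, y in labeled])
        for _ in range(rounds):                       # Step 4: iterate steps 2 and 3
            # Step 2: pseudo-label the unlabeled data with the current model
            pseudo = [(x, predict(model, x)) for x in unlabeled]
            # Step 3: retrain on both sets, weighting pseudo-labels by lambda_u
            data = ([(x, y, 1.0) for x, y in labeled]
                    + [(x, y, lambda_u) for x, y in pseudo])
            model = train_weighted(model, data)       # assumed weighted-training helper
        return model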
Learning from unlabeled examples - Teacher-student models
Train two interacting networks for generating pseudo-labels
One (the teacher) evolves more slowly than the other (the student)
Total loss including consistency loss:
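In a mean-teacher-style setup this is typically (the weight lambda and the exact consistency term are assumptions):

    \mathcal{L}(\theta_s) = \mathcal{L}_{\text{sup}}(f_{\theta_s}(x_l), y_l) + \lambda \, \mathbb{E}_x \, \| f_{\theta_s}(x) - f_{\theta_t}(x) \|_2^2

where theta_s and theta_t are the student and teacher parameters, and the consistency term runs over labeled and unlabeled samples.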
1. For the given teacher net, train the student net
2. Update the teacher net's parameters (see the sketch below)
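A minimal sketch of step 2 as an exponential moving average (mean-teacher-style; the smoothing factor alpha is an assumption):

    import torch

    @torch.no_grad()
    def update_teacher(teacher, student, alpha=0.99):
        # teacher evolves slowly: EMA of the student's parameters
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(alpha).add_(p_s, alpha=1 - alpha)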
Learning from unlabeled examples - Auxiliary losses (Semi-supervised learning)
Semi-supervised loss:
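Generically, a supervised term plus a weighted unsupervised term (the weighting lambda is an assumption):

    \mathcal{L} = \mathcal{L}_{\text{sup}}(\text{labeled}) + \lambda \, \mathcal{L}_{\text{unsup}}(\text{all samples})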
Similar representations for similar classes
Enforce representations of close samples to be close
Closeness is encoded by an affinity matrix:
Labeled samples: pixels with the same label are close
Unlabeled samples: pixels within a surrounding image patch are similar
Compact representations can be enforced by minimizing the distance between close samples and maximizing it between non-close ones (see below)
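One contrastive-style way to write this (the margin m and the notation are assumptions): with affinity A_ij = 1 for close pairs, 0 otherwise, and representations z_i,

    \mathcal{L}_{\text{emb}} = \sum_{i,j} A_{ij} \, \| z_i - z_j \|_2^2 + (1 - A_{ij}) \max(0, m - \| z_i - z_j \|_2)^2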
Closeness in the original space acts as a surrogate label
Final cost function: the supervised loss plus the weighted embedding loss
Transformation consistency: predictions should stay consistent under transformations of the input
Final semi-supervised loss function:
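Presumably all terms combined (the weights lambda_1, lambda_2 and the consistency form are assumptions; T is a random transformation such as a flip or rotation):

    \mathcal{L} = \mathcal{L}_{\text{sup}} + \lambda_1 \, \mathcal{L}_{\text{emb}} + \lambda_2 \, \mathbb{E}_{x, T} \, \| f_\theta(T(x)) - T(f_\theta(x)) \|_2^2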
Learning from unlabeled examples - Self-supervised loss
Idea: train the network with a pretext task that only requires images
Fine-tune the network with few labeled samples
Challenge: define a pretext task that leads to useful features
Examples: predicting the rotation angle, intensities, etc. (see the sketch below)
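A minimal sketch of the rotation pretext task (the 4-way setup is an assumed instance; the label is the rotation index):

    import torch

    def rotation_pretext_batch(images):
        # images: (N, C, H, W); build a 4-way classification task from unlabeled data
        rotated, labels = [], []
        for k in range(4):                            # 0, 90, 180, 270 degrees
            rotated.append(torch.rot90(images, k, dims=(2, 3)))
            labels.append(torch.full((images.shape[0],), k, dtype=torch.long))
        return torch.cat(rotated), torch.cat(labels)  # train a classifier to predict k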
Contrastive Learning
Surrogate labeling: random transformations of the same image are close
Force close images to be close in representation space (and non-close ones apart; see the loss below)
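A common instance is the InfoNCE / NT-Xent loss (SimCLR-style; the temperature tau is an assumption): for a positive pair (i, j) of two transformations of the same image,

    \ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k \neq i} \exp(\mathrm{sim}(z_i, z_k)/\tau)}

with sim the cosine similarity and k running over all samples in the batch.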
Generating large-scale labeled data is often not feasible
Manual annotations can be costly to obtain: costly ground-truth images, poor acquisition techniques, labeling problems
Goal: being able to work with few labeled images
Many unlabeled examples are available: images without labels