Domain adaptation
Problem: a model trained on one dataset does not perform well on other data with different intensity characteristics
CNNs are not robust against domain shift
Domain: subset of samples showing similar variations
Source domain: subset on which the model is trained
Target domain: subset to which the model is applied
Domain shift: the target domain is not a subset of the source domain —> algorithms do not generalize
Naive Solution
Separate training for each domain, each with its own training set and final model
Inference in each domain with its domain-specific model
+ If enough labels are present —> best-performing model
- Requires a lot of labeled data in every domain
Transfer learning
Initial training on the source domain with many samples
Fine-tune the model on the target domain
Model training starts from the optimal source-domain parameters
+ Only a few labeled samples needed in the target domain
- Target domain still requires labels for fine-tuning
Reduce the number of needed samples by freezing layers
Change only the batch-norm layers in the network (see the sketch below)
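A minimal PyTorch-style sketch (the notes do not fix a framework; `model` and all names here are assumptions): freeze every layer and keep only the batch-norm parameters trainable for fine-tuning.

```python
import torch
import torch.nn as nn

def freeze_all_but_batchnorm(model: nn.Module):
    # Freeze all parameters first
    for p in model.parameters():
        p.requires_grad = False
    # Unfreeze only the batch-norm scale/shift parameters
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            for p in m.parameters():
                p.requires_grad = True

# The optimizer then only sees the trainable (batch-norm) parameters:
# optimizer = torch.optim.Adam(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```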
Unsupervised domain adaption
Training with the labeled source domain and an unlabeled target domain
Supervised loss: task-specific, uses the labels of the source domain
Adversarial loss: measures the distance between the feature distributions of source- and target-domain samples (sketched after this list)
+ Does not require any labeled images at target domain
- Needs (unlabeled) data from each target domain, so the model must be retrained for every new domain
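A hedged sketch of the two losses. The notes do not name a specific method; a DANN-style setup with a gradient-reversal layer is one common instantiation, and `feat`, `clf`, `disc` are assumed modules (the discriminator is assumed to end in a sigmoid, so `bce = nn.BCELoss()` fits).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        # Reverse gradients so the feature extractor fools the discriminator
        return -grad

def train_step(feat, clf, disc, x_src, y_src, x_tgt, task_loss, bce):
    f_src, f_tgt = feat(x_src), feat(x_tgt)
    # Supervised loss: labels exist only in the source domain
    loss_sup = task_loss(clf(f_src), y_src)
    # Adversarial loss: discriminator predicts the domain of each feature
    d_src = disc(GradReverse.apply(f_src))
    d_tgt = disc(GradReverse.apply(f_tgt))
    loss_adv = bce(d_src, torch.ones_like(d_src)) + \
               bce(d_tgt, torch.zeros_like(d_tgt))
    return loss_sup + loss_adv
```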
Extreme data augmentation
Transformations of the data sample = new sample
Geometric transformations: affine/elastic transformations
Intensity transformations: blur, noise, gamma, etc.
Tackle lack of data: mimic new domains
Stacked transformations:
Image quality transformations: sharpness, blur, noise
Image appearance transformations: brightness, contrast
Through the augmentation the source domain is expanded, in the hope that the expanded set overlaps with the target domain (a sketch follows below)
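A minimal sketch of stacked transformations using torchvision (the library choice and all parameter values are assumptions): geometric, appearance, and quality transforms composed into one pipeline applied to image tensors in [0, 1].

```python
import torch
import torchvision.transforms as T

def add_gaussian_noise(img, std=0.05):
    # Quality transform: additive Gaussian noise, clamped to valid range
    return (img + torch.randn_like(img) * std).clamp(0.0, 1.0)

stacked_augment = T.Compose([
    # Geometric: affine transformation
    T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    # Appearance: brightness and contrast
    T.ColorJitter(brightness=0.3, contrast=0.3),
    # Quality: blur/sharpness
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    # Quality: noise
    T.Lambda(add_gaussian_noise),
])
```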
Meta learning
Design flexible models that generalize well on new domains
Idea: split the dataset into many small ones and learn to learn
Introduce a meta-train step
Parameter update step: inner update on each small set, then a meta update (a MAML-style sketch follows below)
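A hedged sketch of the parameter update, assuming a MAML-style meta-learner (the notes do not name a specific algorithm; the learning rates α, β and the meta-train/meta-test split are assumptions):

```latex
% Inner step: adapt \theta on the i-th small (meta-train) set
\theta_i' = \theta - \alpha \, \nabla_\theta \, \mathcal{L}^{\text{train}}_i(\theta)
% Outer (meta) step: update \theta using the adapted parameters on the meta-test sets
\theta \leftarrow \theta - \beta \, \nabla_\theta \sum_i \mathcal{L}^{\text{test}}_i(\theta_i')
```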
Unsupervised source-free domain adaptation - Entropy
Entropy: uncertainty of a prediction, H(p) = -Σ_c p_c log p_c
Assumption: pushing the confidence of predictions higher —> leads to good domain generalization (sketched below)
+ Easy to implement, no prior information, applicable to different tasks
- Strong assumption, many test samples needed
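A minimal sketch of test-time entropy minimization (a TENT-style setup is an assumption; the notes only state the idea). `model` and the optimizer are placeholders; in practice often only the batch-norm parameters are updated, as in the freezing sketch above.

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    # H(p) = -sum_c p_c log p_c, averaged over the batch
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def adapt_step(model, x_test, optimizer):
    # No labels needed: minimize prediction uncertainty on test samples
    loss = entropy_loss(model(x_test))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```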
Unsupervised source-free domain adaptation - Autoencoder
Autoencoder loss on the input, output, and intermediate features
AE: maps an image to the image itself, x = D(E(x)), where E encodes and D decodes
Idea: reconstruct input, intermediate, and output features with an AE trained on the source domain —> new images are converted to source-domain-like images —> correct predictions (see the sketch below)
+ Applicable to different tasks, strong prior model
- Input and intermediate features can be prone to domain shifts
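A minimal sketch, assuming PyTorch and placeholder `encoder`/`decoder` modules, of the reconstruction mapping x = D(E(x)) trained on the source domain:

```python
import torch.nn as nn
import torch.nn.functional as F

class AE(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.E, self.D = encoder, decoder

    def forward(self, x):
        # x = D(E(x)): encode, then decode back to the input space
        return self.D(self.E(x))

# Training on the source domain: minimize the reconstruction loss
# loss = F.mse_loss(ae(x_source), x_source)
```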
Unsupervised source-free domain adaptation - DAE
Denoising autoencoder: maps a noisy version back to the image itself, x = D(E(x_noise))
Same idea as with the AE: obtaining source-like images (sketched below)
+ Accurate segmentation results, model not affected by domain shift
- Specific to segmentation task
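A minimal sketch of the denoising variant (Gaussian noise and its strength are assumptions); the only change from the plain AE above is that a corrupted input must be mapped back to the clean one:

```python
import torch
import torch.nn.functional as F

def dae_train_step(dae, x_clean, noise_std=0.1):
    # Corrupt the source-domain input, then reconstruct the clean version
    x_noisy = x_clean + torch.randn_like(x_clean) * noise_std
    recon = dae(x_noisy)               # x = D(E(x_noise))
    return F.mse_loss(recon, x_clean)  # reconstruct the clean input
```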