What is a big problem we have in correspondence estimation (w.r.t. the quality)?
in practice -> correspondences are usually contaminated by outliers (i.e. mismatches)
What are some reasons for outliers?
repetitive patterns (i.e. features with same appearance)
geometric and photometric changes (consistency assumption does not hold in real world…)
large image noise
occlusions
moving objects
image or motion blur
What are some effects of outliers on visual odometry?
inaccurate trajectory caused by noise
wrong trajectory caused by outliers
What is the goal of robust estimation?
remove outliers and estimate the model parameters
What is robust estimation essential for?
for reliable and accurate localization
What are some dominant methods for outlier removal?
expectation maximization
ransac
robust kernel (M-estimator)
To what problems can the outlier removal methods be applied?
various geometric problems
whose goals are to estimate the parameters of a model from data
e.g.:
essential matrix estimation
fundamental matrix estimation
PnP
homography estimation
What is the EM method?
expectation maximization
-> general method for model fitting in the presence of outliers
How does EM work?
has iterations
-> at each iteration, performs E-step and M-step
E-step:
assign each data point a weight (related to its inlier/outlier probability)
M-step:
compute model parameters via weighted least-squares fitting (weights obtained from the E-step)
What is the first step in the EM method?
initialization based on least-squares fitting
-> simply fit the model (e.g. a line) to the data points using least squares of distances
no information about outliers and inliers -> treat all points equally
=> result obviously affected by outliers
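A minimal sketch of this initialization step (toy data and values are hypothetical): an ordinary least-squares line fit that treats all points equally, so a single outlier visibly drags the estimate.

```python
import numpy as np

# toy data on the line y = 2x + 1, with one gross outlier (hypothetical values)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 100.0])   # last point is an outlier

# step 1: ordinary least squares, all points weighted equally
a, b = np.polyfit(x, y, deg=1)
print(a, b)  # far from the true (2, 1): the outlier drags the fit
```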
What is the second step in the EM method?
E-step -> weight assignment
based on the current estimate -> calculate the residual error for each data point
use the residual error to weight each data point (weight inversely related to error)
-> high residual error -> low weight
-> low residual error -> high weight
w_i = e^(-r_i) (exponential of the negative residual error)
weight from 0-1 is probability that data is an inlier
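The E-step weighting can be sketched in a few lines (the residual values are hypothetical):

```python
import numpy as np

# residual error r_i of each data point w.r.t. the current model (hypothetical)
residuals = np.array([0.05, 0.2, 3.0])

# E-step: w_i = e^(-r_i) -> low error gives a weight near 1, high error near 0
weights = np.exp(-residuals)
print(weights)
```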
What is the third step in the EM method?
M-step -> re-estimation of model parameters
formulate a weighted least-squares loss
the weighted least-squares solution gives the new model parameters
-> should be better than last iteration…
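The M-step can be sketched as a weighted least-squares fit via the weighted normal equations (toy data and weights are hypothetical):

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Weighted least squares for a line y = a*x + b:
    solves (A^T W A) p = (A^T W y) with W = diag(weights)."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    AtW = A.T * w                      # multiply each column of A^T by w_i
    return np.linalg.solve(AtW @ A, AtW @ y)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 100.0])          # last point is an outlier
w = np.exp(-np.array([0.0, 0.0, 0.0, 93.0]))  # E-step weight ~0 for the outlier
a, b = weighted_line_fit(x, y, w)
print(a, b)  # close to the true (2, 1): the outlier barely contributes
```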
What is the step 4 in the EM method?
alternately perform steps 2 and 3 until convergence
-> accordingly, the optimal parameters are obtained
What is step 5 in the EM method?
treat data whose weights are higher than a threshold as inliers
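Steps 1-5 above can be combined into one sketch (line fitting on hypothetical toy data; the weight function, threshold, and iteration count are illustrative choices):

```python
import numpy as np

def em_line_fit(x, y, iters=20, inlier_thresh=0.5):
    """EM-style robust line fit:
    step 1: start from equal weights (plain least squares),
    steps 2-4: alternate E-step (w_i = e^(-r_i)) and weighted M-step,
    step 5: points whose final weight exceeds a threshold are inliers."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(x)                        # step 1: treat all points equally
    for _ in range(iters):
        AtW = A.T * w
        p = np.linalg.solve(AtW @ A, AtW @ y)  # M-step: weighted least squares
        r = np.abs(A @ p - y)                  # residual error per point
        w = np.exp(-r)                         # E-step: re-weight
    return p, w > inlier_thresh                # step 5: threshold the weights

x = np.linspace(0.0, 5.0, 20)
y = 2.0 * x + 1.0
y[3] += 30.0                                   # inject one gross outlier
p, inliers = em_line_fit(x, y)
print(p, inliers.sum())  # parameters near (2, 1); 19 of 20 points kept
```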
What are limitations of EM?
highly sensitive to initialization
-> strongly affected by a few large error values (i.e. outliers)
-> EM might converge to a wrong solution if the initial solution is far from the ground truth
What is RANSAC?
random sample consensus (RANSAC)
->one of the most popular methods for model fitting in the presence of outliers
What is an advantage of RANSAC over EM?
not sensitive to initial condition
Is RANSAC deterministic?
no
-> returns a different result every time you run it
What is the goal of RANSAC?
estimate model parameters
remove outliers
What is the first step of RANSAC?
sample a minimal case at random
-> e.g. line fitting: 2 points; plane fitting: 3 points; Tsai's calibration: 6 points; 8-point algorithm: 8 points
What is step 2 in RANSAC?
calculate model parameters using the sampled data (hypothesizing a model)
What is step 3 in ransac?
after obtaining model parameters
-> calculate the residual error w.r.t. all data points
What is step 4 in ransac?
select data that supports current hypothesis
for each model (as we sample and fit more than one), associate one set of inliers
What is the assumption behind RANSAC's 4th step?
if the model was computed from true inliers
-> the associated inlier set should have high cardinality
What is step 5 in ransac?
repeat steps 1-4 k times
-> hypothesize k models after k samplings
-> k models associated with k inlier sets
What is step 6 in RANSAC?
inlier set maximization
-> select inlier set with highest cardinality
What is step 7 in RANSAC?
calculate final model parameters using optimal inlier set
-> use all true inliers of optimal inlier set
-> to compute the least-squares solution
=> i.e. use only the optimal inliers to compute the model parameters one last time -> can further improve accuracy by compensating for noise
-> although step 6 is often good enough
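Steps 1-7 can be sketched for line fitting (toy data; k, the inlier threshold, and the seed are hypothetical choices):

```python
import numpy as np

def ransac_line_fit(x, y, k=100, thresh=0.5, seed=0):
    """RANSAC line fit:
    step 1: sample the minimal case (2 points for a line),
    step 2: hypothesize model parameters from the sample,
    steps 3-4: residuals w.r.t. all points -> support (inlier) set,
    steps 5-6: repeat k times, keep the largest inlier set,
    step 7: least-squares refit on the optimal inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x), dtype=bool)
    for _ in range(k):
        i, j = rng.choice(len(x), size=2, replace=False)   # step 1
        if x[i] == x[j]:
            continue                                       # degenerate sample
        a = (y[j] - y[i]) / (x[j] - x[i])                  # step 2
        b = y[i] - a * x[i]
        inliers = np.abs(a * x + b - y) < thresh           # steps 3-4
        if inliers.sum() > best.sum():                     # step 6
            best = inliers
    A = np.stack([x, np.ones_like(x)], axis=1)
    p, *_ = np.linalg.lstsq(A[best], y[best], rcond=None)  # step 7
    return p, best

x = np.linspace(0.0, 5.0, 30)
y = 2.0 * x + 1.0
y[::6] += 20.0                   # contaminate every 6th point with an outlier
p, inliers = ransac_line_fit(x, y)
print(p, inliers.sum())  # parameters near (2, 1); 25 inliers found
```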
With RANSAC, do we usually prefer the minimal solution or an overdetermined one?
i.e. 8 point method vs 5 point method
-> can be used for both
but: sampling 5 correct points is more likely than sampling 8 correct points
=> RANSAC prefers the minimal solution
Is there a case when we should choose the overdetermined solution with RANSAC?
yes
-> if we can guarantee that all chosen points are inliers…
Is EM deterministic?
yes -> given the same initialization, EM always returns the same result (unlike RANSAC)
What is the idea of the robust kernel (M-estimator)?
L2 loss is usually sensitive to outliers (as their errors are amplified by squaring)
L2 has a closed-form solution, which is why it is commonly used
=> introduce the Huber loss as a robust kernel (i.e. loss function)
What is the advantage of Huber over L2?
more robust against outliers
Why use the Huber loss?
uses L2 loss in low-error regions
uses L1 loss in high-error regions
avoids outlier amplification
L1 is difficult for derivative computation (non-differentiable at zero) -> also compensated by Huber
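A sketch of the Huber kernel (delta, the L2-to-L1 switching point, is a tunable parameter):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber robust kernel: quadratic (L2) for |r| <= delta,
    linear (L1-like) beyond it, so outliers are not amplified;
    it is also differentiable everywhere, unlike plain L1 at zero."""
    r = np.abs(r)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

print(huber(np.array([0.5, 1.0, 10.0])))
# small residuals behave like L2; the outlier at 10 costs 9.5 instead of L2's 50
```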
What is the connection between the Huber loss (i.e. robust kernel) and EM/RANSAC?
EM/RANSAC -> iterative methods that remove outliers
if the outlier count is relatively low: directly use the Huber loss and accept that outliers are present (Huber is relatively resistant against them…)
-> Huber as a choice of robust kernel for the photometric error…