What metrics do we have to measure the quality of an estimated camera pose?
algebraic error
epipolar line distance (only for 2D-2D)
reprojection error
How does the algebraic error work?
consider 8-point for illustration
=> nonzero when p1, p2 and T are not coplanar, i.e. when the epipolar constraint p2^T E p1 = 0 is violated…
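A minimal sketch of this quantity (the setup below — a pure x-translation essential matrix and two normalized correspondences — is made up for illustration):

```python
import numpy as np

def algebraic_error(E, p1, p2):
    """Algebraic error p2^T E p1 for one correspondence (homogeneous 3-vectors)."""
    return float(p2 @ E @ p1)

# Assumed toy setup: pure translation along x with R = I, so E = [t]_x.
t = np.array([1.0, 0.0, 0.0])
E = np.array([[0.0, -t[2], t[1]],
              [t[2], 0.0, -t[0]],
              [-t[1], t[0], 0.0]])   # skew-symmetric matrix [t]_x

p1 = np.array([0.2, 0.1, 1.0])
p2 = np.array([0.5, 0.1, 1.0])       # same y-coordinate: consistent with x-translation
```

For this consistent pair the error is (numerically) zero; shifting p2 off its epipolar line makes it nonzero.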
How does the epipolar line distance work?
only for 2D-2D
sum of squared point-to-epipolar-line distances
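A hedged sketch for one correspondence, using the symmetric variant (distance in both images); the example matrix is a made-up pure-translation essential matrix:

```python
import numpy as np

def epipolar_line_distance(F, p1, p2):
    """Symmetric squared point-to-epipolar-line distance for one 2D-2D pair."""
    l2 = F @ p1                      # epipolar line of p1 in image 2
    l1 = F.T @ p2                    # epipolar line of p2 in image 1
    d2 = (p2 @ l2) ** 2 / (l2[0] ** 2 + l2[1] ** 2)
    d1 = (p1 @ l1) ** 2 / (l1[0] ** 2 + l1[1] ** 2)
    return d1 + d2

# Assumed example: essential matrix of a pure x-translation.
E = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
p1 = np.array([0.2, 0.1, 1.0])
p2 = np.array([0.5, 0.1, 1.0])
```

Note that no 3D point is ever constructed — this is why the metric is cheaper than the reprojection error.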
How does epipolar line distance compare to reprojection error?
cheaper, because it does not require triangulating 3D points
How does the reprojection error work?
sum of squared reprojection errors
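A minimal sketch (identity intrinsics and pose for the toy data; all names are illustrative):

```python
import numpy as np

def reproject(K, R, t, X):
    """Project a 3D point X into the image with pose (R, t) and intrinsics K."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def reprojection_error(K, R, t, points3d, points2d):
    """Sum of squared distances between observed and reprojected image points."""
    return sum(float(np.sum((reproject(K, R, t, X) - x) ** 2))
               for X, x in zip(points3d, points2d))

# Assumed toy setup: identity intrinsics and pose, two perfectly observed points.
K, R, t = np.eye(3), np.eye(3), np.zeros(3)
points3d = [np.array([0.2, 0.1, 1.0]), np.array([-0.3, 0.4, 2.0])]
points2d = [np.array([0.2, 0.1]), np.array([-0.15, 0.2])]
```

With exact observations the error is zero; any perturbation of an observed point raises it.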
How does reprojection error compare to other metrics introduced w.r.t. cost?
more costly
-> because it requires triangulating the 3D points first…
What error metric is commonly preferred?
reprojection error -> the "gold standard"
-> more accurate, as it is computed directly w.r.t. the original input data (image points)
How do reprojection error and epipolar line distance compare w.r.t. what type of error they are?
epipolar: point to line
reprojection : point to point
For which tasks do we often use the reprojection error?
pose and 3D point optimization
accuracy evaluation
What is the definition of bundle adjustment?
extend the two-view minimization to the
multi-view case -> called bundle adjustment
How can we treat bundle adjustment?
graph optimization problem
-> nodes are parameters to optimize
-> edges are constraints
How do we jointly optimize camera poses in bundle adjustment?
minimize the total reprojection error jointly over all views:
C*, P* = argmin_{C, P} sum_i sum_k rho( || x_ik - pi(C_i, P_k) ||^2 )
where C are the camera pose estimations, P the 3D points, pi the projection function
and rho is the Huber norm for robust estimation
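A hedged sketch of this objective (identity intrinsics for brevity; `ba_cost` and the `(cam_idx, pt_idx, point)` observation layout are made up for illustration):

```python
import numpy as np

def huber(r2, delta=1.0):
    """Huber norm rho applied to a squared residual r2 (robust to outliers)."""
    r = np.sqrt(r2)
    return r2 if r <= delta else 2 * delta * r - delta ** 2

def ba_cost(cameras, points3d, observations, delta=1.0):
    """Bundle adjustment objective: sum of robustified reprojection errors.
    cameras: list of (R, t); observations: (cam_idx, pt_idx, observed 2D point).
    Identity intrinsics assumed for brevity."""
    cost = 0.0
    for i, k, x_obs in observations:
        R, t = cameras[i]
        p = R @ points3d[k] + t       # transform point into camera frame
        x_proj = p[:2] / p[2]         # perspective projection
        cost += huber(np.sum((x_proj - x_obs) ** 2), delta)
    return cost

# Toy example: one identity camera observing one point exactly -> zero cost.
cameras = [(np.eye(3), np.zeros(3))]
points3d = [np.array([0.0, 0.0, 1.0])]
observations = [(0, 0, np.array([0.0, 0.0]))]
```

In a real solver this cost would be minimized over all `cameras` and `points3d` jointly, not merely evaluated.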
What are strategies for acceleration in bundle adjustment?
small window size (w.r.t. the number of concurrent views we take into consideration)
-> limits number of parameters to optimize
-> allows real time
use motion only BA
-> optimize only over the camera parameters and keep the 3D points fixed…
What types of nonlinear optimization did we differentiate?
steepest descent method
Newton's method
Gauss-Newton method
What is the idea of steepest descent?
use a first-order Taylor expansion (for simplicity in our case) to approximate the loss locally via its gradient
step in the negative gradient direction (i.e. the one that reduces the loss)
and choose a learning rate -> how far we step in this direction…
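A minimal sketch of fixed-step steepest descent on a toy quadratic (all names here are illustrative, not from the lecture):

```python
import numpy as np

def steepest_descent(grad, x0, lr=0.1, iters=200):
    """Repeatedly step against the gradient with a fixed learning rate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * grad(x)          # negative gradient direction
    return x

# Quadratic loss f(x) = 0.5 x^T A x; an ill-conditioned A provokes zigzagging.
A = np.diag([1.0, 10.0])
x = steepest_descent(lambda x: A @ x, [1.0, 1.0])
```

With this learning rate the iterates converge to the minimum at the origin, but only slowly along the flat direction.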
What is the idea of Newton's method?
use a second-order Taylor expansion
-> leverage the first-order optimality condition (set the derivative of the quadratic model to zero)
-> takes a more direct step, as we have second-derivative information
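A minimal sketch (names illustrative) showing the Newton step; on an exactly quadratic loss a single step reaches the minimum:

```python
import numpy as np

def newton_step(grad, hess, x):
    """Solve H dx = -g, the first-order optimality condition of the quadratic model."""
    return x - np.linalg.solve(hess(x), grad(x))

# On a quadratic f(x) = 0.5 x^T A x, one Newton step hits the minimum at 0.
A = np.diag([1.0, 10.0])
x = newton_step(lambda x: A @ x, lambda x: A, np.array([1.0, 1.0]))
```

Contrast with the learning-rate tuning and many iterations that steepest descent needs on the same quadratic.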
How do steepest descent and Newton compare?
steepest descent: zigzags
Newton: more direct, but costly due to the computation of the Hessian
What is the idea of Gauss-Newton?
first-order Taylor expansion as well (of the residuals)
-> find the optimal delta x that minimizes the loss
-> use an approximation of the Hessian (J^T J) instead of actually computing it
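A hedged sketch on a toy one-parameter least-squares problem (the exponential model and all names are made up for illustration):

```python
import numpy as np

def gauss_newton(residual, jac, x0, iters=10):
    """Minimize 0.5 ||r(x)||^2, approximating the Hessian by J^T J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)   # normal equations
        x = x + dx
    return x

# Toy 1-parameter least squares: fit y = exp(a * t) to data with true a = 0.5.
t = np.linspace(0.0, 1.0, 5)
y = np.exp(0.5 * t)
a = gauss_newton(lambda a: np.exp(a[0] * t) - y,
                 lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1),
                 [0.0])
```

Only first derivatives (the Jacobian of the residuals) are evaluated — the `J.T @ J` product stands in for the true Hessian.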