What is and what causes tangential distortion?
lens is misaligned (not perfectly parallel to the image sensor)
What is radial distortion and what causes it?
rays bend more near the edges of a lens than they do at its optical center
-> no lens is perfect…
-> projection near the edges is smaller than ideal (barrel distortion)
-> or larger than ideal (pincushion distortion)
=> as this happens all around the lens, it creates a characteristic radial distortion pattern
What is the standard way to deal with radial distortion?
transform from ideal (non-distorted) coordinates
to real (distorted) coordinates
(u,v) -> (ud, vd)
What is the amount of distortion for a given non-distorted image point (u,v) a function of?
of its distance r
from the principal point
=> for most lenses, simple quadratic model of radial distortion is sufficient
What is the simple quadratic radial distortion formula?
-> scaling of the offset from the principal point (u0, v0) with a quadratic function of r:
   ud = u0 + (u - u0)(1 + k1·r²)
   vd = v0 + (v - v0)(1 + k1·r²)
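A minimal sketch of the simple quadratic radial distortion model in Python (the function name and the single coefficient k1 are illustrative assumptions, not from the notes):

```python
def apply_radial_distortion(u, v, u0, v0, k1):
    """Map an ideal (undistorted) pixel (u, v) to its distorted position
    by scaling the offset from the principal point (u0, v0) with the
    quadratic factor (1 + k1 * r^2), where r is the distance from the
    principal point."""
    du, dv = u - u0, v - v0
    r2 = du * du + dv * dv          # squared distance from principal point
    scale = 1.0 + k1 * r2
    return u0 + du * scale, v0 + dv * scale
```

At the principal point the offset is zero, so that pixel never moves; with k1 = 0 the mapping is the identity.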
What usually happens when we have wide-angle lenses w.r.t distortion?
distortion becomes more complicated…
not quadratic anymore…
Do we in practice actually deal with tangential distortion?
mostly neglect it…
-> especially when dealing with calibration
What are some limitations of digital images?
noise -> prefer high illumination
worst in low light
compression -> trade off with memory usage
creates artifacts except in uncompressed formats (TIFF, RAW)
-> hard to extract accurate key points / lines
stabilization -> prefer to use
compensate for camera shake
mechanical vs electronic
influences key point extraction (blur)
What is the depth of field?
distance between nearest and farthest object
that appear acceptably sharp in an image
What is orthographic projection?
special type of parallel projection
where projection rays are perpendicular to projection plane
What is the application of orthographic projection?
can project 3D points to top-down view of scene
-> useful for mobile robots
as distances between obstacles are preserved
easy to interpret and utilize to perform
path planning and navigation tasks
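A minimal sketch of such a top-down orthographic projection, assuming the common camera convention that y is the height axis (the function name is illustrative):

```python
def orthographic_top_down(points):
    """Orthographically project 3D points (x, y, z) onto the ground
    plane by dropping the height coordinate (y).  In-plane distances
    between points are preserved exactly, which is what makes the
    result useful for path planning."""
    return [(x, z) for (x, y, z) in points]
```

Two obstacles 2 m apart in the ground plane remain exactly 2 m apart in the projected map, regardless of their height.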
What is a depth camera?
special camera capable of determining depth information of objects
can be used for 3D reconstruction
uses e.g. time-of-flight measurement
How can we backproject a depth image to a 3D point cloud?
have image plane
project back to 3D point using distance information
z -> simply depth information we have
x, y -> regular pinhole backprojection from image coordinate (j), camera center (cx), focal length fx and z: x = (j - cx)·z / fx (and analogously y from i, cy, fy)
What is a problem with feature-based (indirect) odometry? (using 3D mapping …)
very inefficient (not real-time…)
How to solve the problem of inefficiency of indirect visual odometry?
use event cameras (better suited than an IMU)
How do event cameras work?
each pixel inside event camera operates independently and asynchronously
reporting changes in brightness as they occur
stay silent otherwise
=> only changes in the scene are detected and reported
-> less wasted data, since the static background is removed
What are exemplary resulting pictures of event cameras?
brighter pixels -> change of illumination from dark to light
darker pixels -> change of illumination from light to dark
grey -> static
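A toy sketch of how such an event frame could be rendered, assuming events arrive as (x, y, polarity) tuples; the mid-grey base value of 128 and the step of 50 are arbitrary choices for illustration:

```python
import numpy as np

def events_to_frame(events, h, w):
    """Accumulate events (x, y, polarity) into a grey visualization
    frame: positive events (dark -> light) brighten the pixel, negative
    events (light -> dark) darken it, and pixels with no events stay
    mid-grey (static parts of the scene)."""
    frame = np.full((h, w), 128, dtype=np.int32)
    for x, y, pol in events:
        frame[y, x] += 50 if pol > 0 else -50
    return np.clip(frame, 0, 255).astype(np.uint8)
```

The result matches the description above: bright pixels mark dark-to-light changes, dark pixels mark light-to-dark changes, and grey pixels saw no events.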