Differentiate indirect methods from direct ones again
direct: use the full patch to compute e.g. the brightness difference between pixels
indirect: determine features (e.g. points or lines) and find matching pairs to estimate patch correspondences
What is a blob?
group of connected pixels
that share some common properties (e.g. grayscale value)
What is the aim of blob detection?
identify and mark blobs (regions)
How is a corner defined?
intersection of two or more edges
How do blobs and corners compare?
localization accuracy:
blob: lower
corner: higher
distinctiveness:
blob: more
corner: less
What are the important methods to detect corners (we talked about)?
Moravec
Harris
Shi-Tomasi
SUSAN
FAST
What are the important methods to detect blobs (we talked about)?
MSER
LoG
DoG (SIFT)
SURF
CenSurE
What is the general idea behind corner detection?
in the region around a corner
-> the gradient has multiple dominant directions
-> e.g. vertical, horizontal, diagonal
-> shifting a window in any direction should cause large intensity changes
edge: changes only along one axis (i.e. horizontal or vertical)
flat region: no changes
corner: changes in all directions
How do we measure the “change” when shifting a window in corner detection?
SSD
flat: SSD ~ 0
edge: SSD ~ 0 on one axis; SSD >> 0 on other axis
corner: SSD >> 0 on all axes
What is the SSD of a shifted window for corner detection?
sum over all pixels in the window
of the squared difference between the intensity at a pixel and the intensity at the same pixel shifted by (delta x, delta y)
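Written compactly (a sketch of the formula described above; W is the window, (Δx, Δy) the shift):
$$SSD(\Delta x, \Delta y) = \sum_{(x,y) \in W} \big(I(x + \Delta x, y + \Delta y) - I(x, y)\big)^2$$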
How does Moravec's method for corner detection work?
for two patches (initial pixel patch and shifted one), compute SSD between pixel pairs
lower SSD indicates higher similarity between patches
-> consider SSD among multiple directions
interest measure: the smallest SSD (i.e. if even one direction gives a small SSD -> it should not be a corner…)
if interest measurement > threshold -> is corner
directions:
horizontal,
vertical
both diagonal directions
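A minimal numpy sketch of Moravec's interest measure (function name, window size and shift set are my own choices for illustration):

```python
import numpy as np

def moravec_interest(I, x, y, half=2,
                     shifts=((1, 0), (0, 1), (1, 1), (1, -1))):
    """Interest measure at (x, y): the SMALLEST SSD over the tested
    shift directions (horizontal, vertical, both diagonals)."""
    win = I[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    ssds = []
    for dx, dy in shifts:
        shifted = I[y + dy - half:y + dy + half + 1,
                    x + dx - half:x + dx + half + 1]
        ssds.append(np.sum((win - shifted) ** 2))
    return min(ssds)

# pixel (x, y) is flagged as a corner if moravec_interest(...) > threshold
```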
What is a drawback of Moravec we want to improve?
we have to physically shift the window
How can we improve Moravec's corner detection?
instead of shifting
-> approximate the surrounding pixels with a 1st-order Taylor expansion
=> intensity of shifted pixel is approximately the initial value plus the shift delta times the directional gradient
What does the improvement with directional gradients in Moravec's method yield?
in the difference I(x,y) - I(x+dx, y+dy) ≈ I(x,y) - I(x,y) - I_x·dx - I_y·dy the regular intensity I(x,y) cancels out altogether
-> we simply calculate the SSD based on the directional x, y gradients at position (x, y) times their respective deltas
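As formulas (first-order Taylor expansion plugged into the SSD; the sign inside the square does not matter):
$$I(x + \Delta x, y + \Delta y) \approx I(x,y) + I_x(x,y)\,\Delta x + I_y(x,y)\,\Delta y$$
$$SSD(\Delta x, \Delta y) \approx \sum_{(x,y) \in W} \big(I_x(x,y)\,\Delta x + I_y(x,y)\,\Delta y\big)^2$$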
How can we decide based on the directional gradients whether we have a corner or not?
rewrite each element of the sum as a quadratic form so that the shift vector (delta x, delta y) is factored out
the matrix in the middle is our M matrix (built from the summed gradient products)
this matrix can be decomposed into its eigenvalues
if both eigenvalues are much larger than 0 -> we have a corner…
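In other words, SSD(Δx, Δy) ≈ (Δx Δy) M (Δx Δy)^T with M = Σ [[I_x², I_x·I_y], [I_x·I_y, I_y²]]. A minimal numpy sketch of building M over a window and checking its eigenvalues (function name and window size are assumptions, not from the lecture):

```python
import numpy as np

def structure_tensor_eigenvalues(I, x, y, half=2):
    """Build the 2x2 matrix M from the directional gradients inside a
    window; both eigenvalues being large suggests a corner."""
    Iy, Ix = np.gradient(I.astype(float))             # directional gradients
    w = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    M = np.array([[np.sum(Ix[w] * Ix[w]), np.sum(Ix[w] * Iy[w])],
                  [np.sum(Ix[w] * Iy[w]), np.sum(Iy[w] * Iy[w])]])
    return np.linalg.eigvalsh(M)                       # eigenvalues, ascending
```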
What is the improved method of Moravec called (i.e. using eigenvalues and directional gradients)?
Harris detector
What should be considered when using the Harris method (w.r.t. patch size)?
the Harris detector is not scale invariant
-> the same patch size is not applicable to different scales of images
What is a descriptor?
description of the pixel information around a feature
-> e.g. around a blob
What is a general approach to match points?
i.e. given we have a detected point (e.g. a corner)
-> define point descriptors (e.g. HOG)
-> define a distance function that compares two descriptors (e.g. SSD, SAD, NCC or Hamming distance)
What is a naive approach to match points using descriptors?
brute force matching
-> we have set of N descriptors in both images
-> calculate distance between all pairs and choose best fit (O(N^2))
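A minimal numpy sketch of brute-force matching with an SSD distance (function name and the choice of SSD are my own for illustration):

```python
import numpy as np

def brute_force_match(desc1, desc2):
    """Match each descriptor in desc1 to its closest descriptor in desc2
    using SSD distance; O(N^2) comparisons. desc1, desc2: (N, D) arrays."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.sum((desc2 - d) ** 2, axis=1)   # SSD to every candidate
        matches.append((i, int(np.argmin(dists))))
    return matches
```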
What is an issue when choosing the best match using the “closest” descriptor matching? (i.e. lowest distance)
the algorithm can occasionally return good scores for false matches
How can we improve / avoid the problem that the matching algorithm (distance between descriptors) sometimes yields good results for false matches?
compute the ratio of the distances to the 1st and 2nd closest descriptor
-> d1 / d2 < threshold (usually 0.8)
-> only if the distance to the closest descriptor is smaller than 0.8 (threshold) times the distance to the 2nd closest descriptor -> consider it a match…
=> distinctive enough
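A minimal numpy sketch of this ratio test (function name and the Euclidean distance are assumptions for illustration):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Keep a match only if the closest descriptor is clearly better than
    the second closest one: d1 < ratio * d2."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.sqrt(np.sum((desc2 - d) ** 2, axis=1))
        j1, j2 = np.argsort(dists)[:2]             # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```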
What distance functions did we discuss?
SSD (sum of squared differences)
SAD (sum of absolute differences)
NCC (normalized cross correlation)
How is the SSD calculated?
sum of squared differences between corresponding pixels of the two patches (i.e. H and F) over all pixels in the patch…
always >= 0 (0 only if H and F are identical)
How is the SAD calculated?
always >= 0
-> 0 only if H and F are identical…
What is NCC?
ranges between -1 and +1
is exactly 1 if H and F (patches/descriptors to compare) are identical
calculation:
sum over the pixel-wise multiplication
divided by
for each patch individually:
sqrt of sum over squared pixel values
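A minimal numpy sketch of the three distance/similarity functions on two equally sized patches H and F (function names are my own):

```python
import numpy as np

def ssd(H, F):
    """Sum of squared differences; >= 0, exactly 0 for identical patches."""
    return np.sum((H - F) ** 2)

def sad(H, F):
    """Sum of absolute differences; >= 0, exactly 0 for identical patches."""
    return np.sum(np.abs(H - F))

def ncc(H, F):
    """Normalized cross correlation; ranges from -1 to +1,
    exactly +1 for identical patches."""
    return np.sum(H * F) / (np.sqrt(np.sum(H ** 2)) * np.sqrt(np.sum(F ** 2)))
```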
What do we usually do before calculating the similarity measurement?
account for the difference in average intensity of two images
=> subtract mean value of each image
Why do we usually have to consider the average intensity in images for similarity measurements?
changes in average illumination
due to additive illumination changes
How do we consider the average intensity of each image?
for each patch -> calculate the average intensity (sum over all pixel intensities divided by the number of pixels)
-> introduce in our distance measurement by subtracting this average from the individual values (i.e. direct in the sum)
What is ZSSD?
zero mean sum of squared differences
-> i.e. mean corrected (zero mean) pixel values…
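A minimal numpy sketch of ZSSD (zero-mean pixel values plugged into the SSD; function name is my own):

```python
import numpy as np

def zssd(H, F):
    """Zero-mean SSD: subtract each patch's mean intensity before the SSD,
    which compensates additive illumination differences."""
    return np.sum(((H - H.mean()) - (F - F.mean())) ** 2)
```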
What properties should a descriptor have?
distinctiveness
robustness to geometric changes
robustness to illumination changes
What is meant by distinctiveness of a descriptor?
descriptor is “description” of pixel information around feature
distinctiveness
-> the descriptor can uniquely distinguish a feature from the other features
without ambiguity
What is meant by robustness to geometric changes?
scale invariant (for zooming)
rotation invariant
view point-invariant (for perspective changes)
What is meant by robustness to illumination changes?
small illumination changes can be modeled with affine transformation
-> so called affine illumination changes
What is the general pipeline for a traditional patch feature descriptor?
detect keypoints (i.e. patch center)
use descriptor to warp patches into canonical space
determine scale, rotation and viewpoint change of each patch to do this
establish patch correspondences based on the similarity of the warped patches (i.e. in canonical space)
What is the pipeline for scale, rotation and affine-invariant patch matching? (i.e. transform to canonical space and match)
scale assignment
rotation assignment
view point assignment
warp patch to canonical space
How does the scale assignment work?
compute scale using LoG operator
How does the rotation assignment work?
use Harris or a gradient histogram
to find dominant orientation
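A heavily simplified sketch of the gradient-histogram variant (bin count and function name are my own assumptions):

```python
import numpy as np

def dominant_orientation(patch, n_bins=36):
    """Histogram of gradient orientations, weighted by gradient magnitude;
    the peak bin gives the dominant orientation of the patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    angle = np.arctan2(gy, gx)                       # in [-pi, pi]
    hist, edges = np.histogram(angle, bins=n_bins,
                               range=(-np.pi, np.pi), weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])     # bin centre, radians
```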
How does view-point transformation work?
use the Harris eigenvectors to extract the affine transformation parameters