Visualization Pipeline stages + examples
data acquisition
sources: databases, simulations, sensors
filtering and enhancement: process to obtain useful data
data format conversion
resampling to grid
interpolation/approximation of missing values
clipping/cleaning/denoising
visualization mapping: map derived data to graphical primitives (how to represent data?)
scalar field —> isosurface
vector field —> vectors
2D field —> height field
tensor field —> glyphs
3D field —> volume
rendering: generate 2D image/video
viewpoint specification
visibility calculation
lighting/shading
compute values at sampling points
Visualization scenarios
Passive Vis
interactive vis
visual steering (change data)
5Vs in big data
velocity
volume
variety
value
veracity (truthfulness)
independent and dependent variables
R^n (ind.) —> R^m (dep.)
independent:
dimension of the domain of the problem
3D/2D space, time
dependent:
type and dimension of the data
depends on domain (independent variable)
temperature, velocity, density (values, vectors)
Data types
categorical data
scalar value (given by function R^n —> R)
vector data (e.g. 2D vector field)
tensor data (multi-dimensional matrix)
Grids
Cartesian grid
dx = dy
stored information (grid points obtained from): cell indices + origin (+ spacing)
uniform/regular grid
dx != dy
rectilinear grid
varying sample distances
curvilinear grid
non-orthogonal grid
grid points must be specified!
but neighbors are still implicitly given
unstructured grid
grid points + neighborhood have to be specified explicitly
scattered data
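The implicit addressing of structured grids can be sketched as follows (a minimal Python sketch; the function name and argument layout are illustrative, not from the lecture):

```python
def uniform_grid_point(origin, spacing, index):
    """Uniform/Cartesian grid: point positions are implicit --
    origin + index * spacing per axis. Only origin and spacing
    need to be stored, never the points themselves."""
    return tuple(o + i * d for o, i, d in zip(origin, index, spacing))

# cell index (3, 4) on a grid with origin (0, 0) and spacing (1, 2)
print(uniform_grid_point((0.0, 0.0), (1.0, 2.0), (3, 4)))  # (3.0, 8.0)
```

Rectilinear grids would store one spacing array per axis instead; unstructured grids must store all point coordinates plus connectivity.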
drawbacks of radial basis function and inverse distance weighting
radial basis function:
points have global influence
for each new sample point the equation system has to be re-solved
computationally expensive (linear eq. system)
inverse distance weighting
no system of linear equations to find weights
but still global influence of points
still computationally expensive
—> triangulation instead of continuous functions for interpolation
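Inverse distance weighting (Shepard interpolation) can be sketched in a few lines; the global influence of every sample is visible in the loop over all samples (names and the sample layout here are illustrative):

```python
import math

def idw(samples, x, y, p=2):
    """Inverse distance weighting: every sample influences every
    query point (global influence), weighted by 1/distance^p."""
    num = 0.0
    den = 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:                 # query point hits a sample exactly
            return value
        w = 1.0 / d ** p
        num += w * value
        den += w
    return num / den

samples = [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0), (0.0, 1.0, 5.0)]
print(idw(samples, 0.5, 0.5))   # equidistant from all three samples -> mean 3.0
```

No linear system is solved (unlike RBF), but every query still touches every sample, which is why triangulation-based local interpolation is preferred for large scattered datasets.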
triangulation
connect sample points to triangles —> triangulation
piecewise interpolation inside the triangles
good triangulation:
maximize the minimum angle; quality measure: radius of in-circle / radius of out-circle
Delaunay triangulation
out-circle (circumcircle) of each triangle does not contain any other point of the set
build from a non-optimal triangulation by local operations: edge flipping
Voronoi
each Voronoi region contains exactly one initial sample point (site)
points in a Voronoi region are closer to its site than to any other site
Voronoi sites are the vertices of the Delaunay triangulation
Voronoi vertices are the circumcenters of the Delaunay triangles
High-quality reconstruction
sinc function
optimal reconstruction but very expensive
bicubic interpolation (Catmull-Rom spline)
mimics sinc
cubic B-spline
does not go through the sample points —> approximation
trilinear interpolation (tent filter)
fast but low quality
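The difference between an interpolating and a cheap filter can be shown in 1D (a sketch with the standard Catmull-Rom segment formula and a tent/linear filter; function names are illustrative):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom segment between p1 and p2, t in [0, 1].
    Interpolating: passes exactly through p1 (t=0) and p2 (t=1)."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def linear(p1, p2, t):
    """Tent filter: fast, but only C0-continuous -> low quality."""
    return (1 - t) * p1 + t * p2

print(catmull_rom(0.0, 1.0, 2.0, 3.0, 0.0))  # 1.0 -> hits the sample
print(linear(1.0, 2.0, 0.5))                 # 1.5
```

A cubic B-spline filter would use different weights that smooth across p1 and p2 without passing through them, which is why it counts as approximation.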
isoline
= isocontour
all points at which the data has a specific value
always closed curves (or they end at the domain boundary)
are nested, not (self-)intersecting
orthogonal to the scalar field's gradient
computed via
marching squares / marching cubes
Marching squares (MS) algorithm
16 different sign (+/-) configurations - 4 base cases
only consider cells whose corners have different signs
ambiguous cases:
midpoint decider: f_center >= c —> connect the "+" corners
does not work if f_center and the asymptote value delta lie in different sign zones
exact asymptotic decider: evaluate the bilinear interpolant at the asymptote intersection; delta >= c —> connect the "+" corners
Marching cubes (MC) algorithm
256 cases (2^8), 15 base cases
ambiguity in cases 3, 6, 7, 10, 12, 13 —> use a decider as in 2D
can cause holes in the surface if arbitrary choices are made
up to 5 triangles per cube
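The core bookkeeping of marching squares can be sketched as follows (case-index computation plus the linear edge crossing; names and bit order are illustrative, not a fixed convention):

```python
def ms_case(f00, f10, f11, f01, c):
    """Marching-squares case index: one bit per corner (inside = f >= c).
    2^4 = 16 configurations, reducible to 4 base cases by symmetry."""
    idx = 0
    if f00 >= c: idx |= 1
    if f10 >= c: idx |= 2
    if f11 >= c: idx |= 4
    if f01 >= c: idx |= 8
    return idx

def edge_crossing(fa, fb, c):
    """Linear interpolation of the isovalue crossing on an edge.
    Only called for edges with a sign change, so fa != fb."""
    return (c - fa) / (fb - fa)

print(ms_case(0.0, 1.0, 1.0, 0.0, 0.5))   # 6: right two corners inside
print(edge_crossing(0.0, 1.0, 0.5))       # 0.5: crossing at edge midpoint
```

In a full implementation the index selects line segments (triangles in 3D) from a lookup table; the ambiguous cases above need the decider before the table entry is chosen.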
Phong's illumination model
ambient light
background light, constant
diffuse reflector
scatters light equally in all directions
specular reflector
highlight = reflection of the light source, reflects mostly in the mirror direction
if the exponent n increases —> intensity falls off faster away from the mirror direction, shininess increases
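The three terms can be summed directly (a sketch of the classic Phong model; vector names and coefficients are illustrative):

```python
def phong(n, l, v, ka, kd, ks, shininess):
    """Phong illumination: ambient + diffuse + specular.
    n, l, v are unit vectors: surface normal, to-light, to-viewer."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diffuse = max(dot(n, l), 0.0)                      # Lambertian term
    r = tuple(2 * dot(n, l) * ni - li                  # mirror reflection of l
              for ni, li in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess        # narrows as n grows
    return ka + kd * diffuse + ks * specular

# light, viewer and normal aligned -> full ambient + diffuse + specular
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1), 0.1, 0.5, 0.4, 10))  # 1.0
```

Increasing `shininess` leaves the mirror-direction value unchanged but shrinks the highlight, matching the note above.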
gradient approximation (1D)
forward differences
need one extra point
grid structure remains visible (artifacts)
central differences
need two extra points —> but smoother —> preferred
Sobel operator —> even better approximation scheme
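The two 1D schemes side by side (a sketch; continuous functions stand in for grid samples, the step h would be the grid spacing):

```python
def forward_diff(f, x, h=1e-5):
    """Forward differences: one extra sample, first-order accurate."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-5):
    """Central differences: two extra samples, second-order accurate,
    smoother result -> usually preferred for gradient estimation."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x * x            # derivative at x=1 is exactly 2
print(forward_diff(f, 1.0))    # ~2.00001 (first-order error)
print(central_diff(f, 1.0))    # ~2.0     (second-order error cancels)
```

On a grid the same stencils are applied per axis to approximate the full gradient; the Sobel operator additionally smooths across the perpendicular axes.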
Volume rendering techniques
indirect
convert volume data to an intermediate representation which can then be rendered with traditional techniques (MC algorithm)
conveys surface impression
direct
directly generate a 3D representation from the data
conveys volume impression
transfer function
map data values to visual properties
scalar value —> color + alpha value
color C
opacity alpha
—> color quadruple (RGBA)
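A transfer function is just a lookup from scalar to RGBA; here is a minimal sketch (the blue-to-red ramp and linear opacity are arbitrary example choices, not a standard mapping):

```python
def transfer_function(s, s_min=0.0, s_max=1.0):
    """Map a scalar value to an RGBA quadruple: blue->red color ramp,
    linearly increasing opacity (one possible example mapping)."""
    t = min(max((s - s_min) / (s_max - s_min), 0.0), 1.0)  # clamp to [0,1]
    r, g, b = t, 0.0, 1.0 - t
    alpha = t                                              # high values opaque
    return (r, g, b, alpha)

print(transfer_function(0.0))  # (0.0, 0.0, 1.0, 0.0): blue, transparent
print(transfer_function(1.0))  # (1.0, 0.0, 0.0, 1.0): red, opaque
```

In practice transfer functions are edited interactively (often as piecewise-linear curves) to make the structures of interest opaque and the rest transparent.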
DVR
direct volume rendering
optical properties are mapped to each voxel (emission = color, absorption = opacity) —> emission/absorption model
the light reaching the viewer is simulated by ray casting
compositing schemes
front to back (alpha compositing)
average compositing
produces an x-ray-like image, does not account for opacity
maximum intensity projection (MIP)
does not account for opacity
good for magnetic resonance angiography
vessel structure extraction
first hit / surface rendering
same result as marching cubes but higher quality
sampling artifacts
due to too few samples along the ray
—> increase sampling rate to the Nyquist frequency
>= 2 samples per voxel
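Front-to-back compositing along one ray can be sketched as follows (grayscale colors for brevity; the early-termination threshold is an illustrative choice):

```python
def composite_front_to_back(samples):
    """Front-to-back alpha compositing along one ray.
    samples: (color, alpha) pairs ordered front to back."""
    C, A = 0.0, 0.0
    for color, alpha in samples:
        C += (1.0 - A) * alpha * color   # light attenuated by what is in front
        A += (1.0 - A) * alpha
        if A >= 0.99:                    # early ray termination
            break
    return C, A

def mip(samples):
    """Maximum intensity projection: ignores opacity entirely."""
    return max(color for color, _ in samples)

print(composite_front_to_back([(1.0, 0.5), (1.0, 0.5)]))  # (0.75, 0.75)
print(mip([(0.2, 0.9), (0.8, 0.1)]))                      # 0.8
```

Front-to-back order enables early ray termination once the accumulated opacity saturates; back-to-front compositing gives the same image but cannot stop early.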
classification - flow visualization
dimension(2D, 3D)
time dependency (steady vs time-varying)
direct vs indirect
flow visualization - approaches
Direct flow vis
arrows (glyphs), color coding
geometrical (indirect flow vis)
stream lines, path lines, streak lines, surfaces
show particle movement along trajectories
sparse
feature based (critical points: sink, source, saddle, ...)
dense (texture-based)
convolution along characteristic lines
flexibility: from fuzzy to crisp
LIC (global; L increases —> smoother)
OLIC (Oriented LIC, not global)
3D LIC
Disadvantage of arrow glyphs in flow vis
inherent occlusion effects
ambiguity
difficult spatial perception (1D objects in 3D)
characteristic lines (indirect geometrical flow vis)
are tangential to the flow
the curve's 1st derivative (tangent) points in the direction of the vector field
do not intersect
are solutions to an initial value problem of an ODE
Numerical Integration of ODEs
Euler method (first order method)
Runge-Kutta 2nd order / 4th order
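One integration step of each scheme in 2D (a sketch; the vector field `v` is any callable mapping a point to a velocity, names are illustrative):

```python
def euler_step(v, p, h):
    """Explicit Euler: one field evaluation per step, first-order accurate."""
    vx, vy = v(p)
    return (p[0] + h * vx, p[1] + h * vy)

def rk4_step(v, p, h):
    """Classical Runge-Kutta: four field evaluations, fourth-order accurate."""
    def offset(p, k, s):
        return (p[0] + s * k[0], p[1] + s * k[1])
    k1 = v(p)
    k2 = v(offset(p, k1, h / 2))
    k3 = v(offset(p, k2, h / 2))
    k4 = v(offset(p, k3, h))
    return (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

v = lambda p: (1.0, 0.0)                 # constant flow to the right
print(euler_step(v, (0.0, 0.0), 0.1))    # (0.1, 0.0)
print(rk4_step(v, (0.0, 0.0), 0.1))      # (0.1, 0.0)
```

A streamline is traced by repeating such steps from a seed point; RK4's extra evaluations pay off by allowing much larger step sizes for the same accuracy.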
Evenly spaced streamlines
streamlines should not get too close to each other
choose seed points at distance d_sep from an existing streamline
forward and backward integration until distance d_test to another streamline is reached
d_sep increasing —> fewer, more widely spaced lines
d_test increasing —> shorter streamlines
Visual Mappings
graphical primitives
points, lines, areas, surfaces
visual channels (appearance of graphical primitives)
color
position
shape
slope/tilt
size
Effectiveness principle (map the most important attributes to the most effective channels)
Properties of visual channels
pop out (preattentive processing)
automatic, parallel detection of basic features (~250 ms)
discriminability
how many usable steps (like bins)
separability
separable vs integral channels
separable: able to judge visual channels independently
integral: channels are viewed holistically
relative vs absolute judgement
perception is highly context dependent
Weber's law
e.g. color, length, size
Diagram techniques
Categorical + quantitative data
bar chart, pie chart
stacked bar (quantitative wrt 2 categorical vars)
parallel sets (quantitative wrt multiple categorical vars)
Time-dependent data
line graph (most accurate angle judgement: 45°)
theme river
categories over time,
occurrence/frequency of a topic/categorical value mapped to the width of a river band
horizon graphs
reduce vertical space
Single + multiple variables
histogram (binning: group values into equally spaced intervals)
boxplot —> summary stats (median, max, min, interquartile range)
variations: Tukey's, Tufte's
Scatter plot
scatterplot matrix
all attribute combinations —> overview of correlations
parallel coordinates
radar chart (radial axis arrangement, items are polylines)
function plots
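The histogram binning mentioned above can be sketched as equal-width binning (a minimal version; names and the half-open interval convention are illustrative):

```python
def histogram(values, n_bins, lo, hi):
    """Equal-width binning: count values into n_bins intervals over [lo, hi).
    Values outside the range are simply dropped in this sketch."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

print(histogram([0.1, 0.2, 0.9], n_bins=2, lo=0.0, hi=1.0))  # [2, 1]
```

The bin count is the critical parameter: too few bins hide structure, too many fragment it into noise.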
Glyphs, color mapping
glyphs
small, independent visual objects that depict attributes of a data record
2D, 3D, surface
star glyphs
equally spaced spikes
length of a spike represents an attribute value
ends of spikes connected
stick figures
data encoded by length, angle, thickness
recognizable texture patterns
Chernoff faces
visual channels of chromatic light
hue: dominant wavelength
saturation: pureness, amount of white light
luminance / brightness, intensity of light
Perceptual linearity
equal steps in the colormap should be perceived as equal steps
(rainbow colormap is NOT perceptually linear)
Perceptual ordering
ordering of the data should be represented by an ordering of the colors
differently colored maps
visual analysis goals
visual exploration
confirmative analysis/ visual analysis
presentation
multi-faceted scientific data
spatiotemporal data
time dependent
multi-variate data
attribute views
volume rendering
clustering, dimensionality reduction
multi-modal data
various types of grids with different resolutions
visual data fusion
comparative vis
multi-run data
multi-model data
coupled climate models
Fusion within a single visualization
use a common frame (axes)
Layering techniques (glyphs, color, transparency)
multi volume rendering (segmentation)
How are visualization, interaction and computational analysis combined?
relation and comparison, comparative visualization taxonomy
side by side composition (juxtaposition)
overlay in the same coordinate system (superposition)
explicit encoding of differences
navigation
change item visibility
zoom, pan, rotate
focus + context (f+c)
overview + detail (o+d)
DBSCAN (density-based clustering: core points, border points, noise; parameters eps and minPts)
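A compact sketch of the DBSCAN idea (points with enough neighbors within eps are core points, clusters grow from them, the rest is noise; this is a simplified illustrative implementation, not an optimized one):

```python
import math

def dbscan(points, eps, min_pts):
    """DBSCAN sketch: label each point with a cluster id, or -1 for noise.
    Core point = at least min_pts neighbors (itself included) within eps."""
    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                  # noise (may become border later)
            continue
        cluster += 1                        # new cluster seeded at core point i
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                        # grow cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster         # former noise becomes border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:          # only cores propagate the cluster
                seeds.extend(jn)
    return labels
```

Unlike k-means, the number of clusters is not fixed in advance, clusters can have arbitrary shape, and outliers are explicitly labeled as noise.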
Dimensionality reduction
true dimensionality of the dataset is assumed to be smaller than the dimension of the measurements
PCA
find the direction of largest variance
coordinate system transformation
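For 2D data the PCA steps (center, covariance, leading eigenvector) fit in a short sketch; the closed-form 2x2 eigen-solution and all names are illustrative:

```python
import math

def pca_2d(points):
    """PCA on 2D points: center the data, build the covariance matrix,
    return the unit eigenvector of the largest eigenvalue
    (the direction of largest variance)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    # larger eigenvalue of [[cxx, cxy], [cxy, cyy]] via trace/determinant
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    if abs(cxy) > 1e-12:
        vx, vy = lam - cyy, cxy            # eigenvector for lam
    else:
        vx, vy = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# points on the line y = x -> principal direction (1, 1) / sqrt(2)
print(pca_2d([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]))
```

Projecting the data onto this direction keeps the maximum-variance 1D representation; discarding the orthogonal component is the dimensionality reduction step.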
Properties of divergence (div > 0: source, div < 0: sink, div = 0: divergence-free/incompressible)
Classification of critical points (by the eigenvalues of the Jacobian: source, sink, saddle, center, focus)
How can multivariate data be encoded in spatial context?
glyphs, layering, feature-based
Visual data fusion intermixes data in a single visualization using a common frame of reference. Give at least two general approaches.
layering (glyphs, color, transparency)
multi-volume rendering
common frame by shared axes
What are three general approaches for comparative visualization
(according to the taxonomy of Gleicher et al. 2011)?
juxtaposition: side by side
superposition: overlay
explicit encoding: visualize the differences directly
What is focus+context visualization? Explain the general approach.
How is it different from an overview+detail visualization?
show focus/important information and context information together in one view
overview + detail: separates focus and context information into different views
Give at least three examples of visual channels (graphical resources) that can be used for
focus-context discrimination.
color, zooming, transparency
Give two examples for focus+context visualization techniques which use spatial distortion.
fisheye lens
Perspective wall
What is the main idea in dimensionality reduction? Name one example method? How does it work?
reduce the number of features or variables in a dataset while retaining as much useful information as possible
find interesting features in lower dimensionality easier
PCA:
PCA works by identifying the linear combinations of the original variables that explain the most variance in the data. These combinations are called principal components, and they can be used as new variables that capture most of the information in the original dataset. PCA then discards the least important principal components, effectively reducing the dimensionality of the dataset.
Principal component analysis transforms data from a Cartesian coordinate system into
another coordinate system. Why is it then still considered a dimensionality reduction method?
PCA reduces the number of variables needed to represent the data by discarding the low-variance components, while retaining most of the information in the original dataset.