Explain what billboarding is
We use a textured polygon (typically a quad) as a proxy that always faces the camera, i.e. it is kept perpendicular to the view direction. Billboarding is usually combined with alpha texturing and animation.
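A minimal sketch of the construction, assuming the camera's right and up vectors are taken from the view matrix (the Vec3 type and helpers are illustrative, not from the source):

```cpp
// Build the four corners of a screen-facing billboard quad.
// Because the quad spans the camera's right/up axes, it stays
// perpendicular to the view direction however the camera moves.
struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

void billboardCorners(Vec3 center, Vec3 camRight, Vec3 camUp,
                      float halfSize, Vec3 out[4]) {
    Vec3 r = scale(camRight, halfSize);
    Vec3 u = scale(camUp,    halfSize);
    out[0] = add(add(center, scale(r, -1.0f)), scale(u, -1.0f)); // bottom-left
    out[1] = add(add(center, r),               scale(u, -1.0f)); // bottom-right
    out[2] = add(add(center, r),               u);               // top-right
    out[3] = add(add(center, scale(r, -1.0f)), u);               // top-left
}
```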
What are imposters in real‐time rendering, and how do they work?
Definition: Imposters are 2D billboards (textured quads) that stand in for complex 3D models, updated only when needed.
Creation: Render the full 3D object into a texture once, using a virtual camera placed at the viewer's position and aimed at the center of the object's bounding box.
Usage:
Map that texture onto a screen-facing quad placed where the object belongs.
Re-use the imposter as long as camera-object movement causes acceptable distortion.
When distortion exceeds a threshold (or the object grows on-screen), re-render the 3D model to refresh the texture.
Benefit: Dramatically reduces geometry and shading cost for distant or less-important objects while maintaining visual fidelity.
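A sketch of the refresh decision; the threshold values and the angleBetween helper are hypothetical, not taken from the source:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float angleBetween(Vec3 a, Vec3 b);   // assumed helper: angle in radians

struct Imposter {
    Vec3  capturedViewDir;            // view direction when the texture was made
    float capturedScreenSize;         // projected size at capture time
};

// Reuse the cached texture until the viewing angle or the on-screen
// size has drifted past a tolerated error.
bool needsRefresh(const Imposter& imp, Vec3 viewDir, float screenSize) {
    const float maxAngle  = 0.1f;     // tolerated angular error (radians)
    const float maxGrowth = 1.5f;     // tolerated screen-size growth factor
    return angleBetween(imp.capturedViewDir, viewDir) > maxAngle
        || screenSize > imp.capturedScreenSize * maxGrowth;
}
```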
What is a particle system, and how are particles typically represented in real‐time graphics?
A particle system is a collection of small, independent “particles” animated (often via physics) to simulate effects like smoke, fire, fountains, or explosions.
Particles can be rendered as:
Billboards: small textured quads that always face the camera (ideal for round sprites where orientation doesn’t matter).
Points/Lines: point‐based rendering or splats for very large counts.
Level of Detail: Particle textures can use MIP‐mapping or lower resolution when their screen footprint is small to save fill rate.
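A minimal sketch of the per-frame update, assuming gravity-only physics and a simple Euler integration step:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3  position;
    Vec3  velocity;
    float life;                              // seconds until the particle expires
};

void update(std::vector<Particle>& particles, float dt) {
    for (auto& p : particles) {
        p.velocity.y -= 9.81f * dt;          // apply gravity
        p.position.x += p.velocity.x * dt;   // integrate motion
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
        p.life       -= dt;
    }
    // retire expired particles; each survivor is then drawn as a billboard
    particles.erase(
        std::remove_if(particles.begin(), particles.end(),
                       [](const Particle& p) { return p.life <= 0.0f; }),
        particles.end());
}
```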
What are hard versus soft particles, and what advanced particle techniques exist?
Hard Particles: standard billboards; they can produce visible seams or slice through geometry because they are simply depth-tested and do not blend based on the distance to the scene behind them.
Soft Particles: In the fragment shader, sample the scene’s depth buffer and fade particles as they near other geometry, eliminating hard intersections.
Advanced Types:
Volumetric Shadowed Particles: particles receive and cast shadows, producing self-shadowing within the volume.
Interactive Particles: integrate with fluid or physics simulations for realistic smoke, fire, and water that respond to forces and collisions.
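A sketch of the soft-particle fade factor, written as plain C++ for illustration (in practice this runs in the fragment shader; the fade distance is a tunable assumption):

```cpp
// 'sceneDepth' is the linearized depth read from the depth buffer at this
// pixel, 'particleDepth' the particle fragment's own depth, both in view units.
float softParticleFade(float sceneDepth, float particleDepth,
                       float fadeDistance /* e.g. 0.5 view-space units */) {
    float gap = sceneDepth - particleDepth;  // distance to the geometry behind
    if (gap <= 0.0f)         return 0.0f;    // behind geometry: invisible
    if (gap >= fadeDistance) return 1.0f;    // far enough away: fully opaque
    return gap / fadeDistance;               // linear fade near contact
}
// The result multiplies the particle's alpha, removing hard intersections.
```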
How are glare effects created?
What causes lens flare?
What causes bloom?
They are rendered with imposters or billboards (multiple textures on different billboard planes)
Lens flare: caused by scattering in the lens of the eye or in a camera's lens train when it is directed at a bright light source
Bloom: caused by scattering of light in the lens or other parts of the eye, or on the camera sensor
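One common way to realize bloom as a post-process, sketched with hypothetical helper functions (bright-pass, blur, composite):

```cpp
struct Image { /* framebuffer pixels; placeholder type */ };

// Assumed helpers, named for illustration only:
Image thresholdBrightPass(const Image& frame, float threshold); // keep bright pixels
Image gaussianBlur(const Image& src, float radius);             // spread them out
Image addImages(const Image& a, const Image& b);                // composite

Image bloom(const Image& frame) {
    Image bright  = thresholdBrightPass(frame, 1.0f); // isolate bright sources
    Image blurred = gaussianBlur(bright, 8.0f);       // simulate in-lens scattering
    return addImages(frame, blurred);                 // add glow back onto the frame
}
```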
What is a full-screen billboard?
What is a skybox?
Full-screen billboard: a screen-aligned billboard that covers the entire view; it can change the appearance of the whole scene, e.g. day/night
Skybox: an environment map that is actually rendered as geometry centered at the viewpoint (a cube when cube maps are used)
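A sketch of the "centered at the viewpoint" detail: before drawing the environment cube, the view matrix's translation is dropped so the camera can never move closer to the sky. This assumes a column-vector convention with the translation in the last column:

```cpp
struct Mat4 { float m[4][4]; };

// Keep only the camera's rotation; the skybox then follows the viewer.
Mat4 stripTranslation(Mat4 view) {
    view.m[0][3] = view.m[1][3] = view.m[2][3] = 0.0f;
    return view;
}
// Draw order: render the cube with this matrix (depth writes disabled or
// depth forced to the far plane), then render the scene normally.
```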
How can you simulate surface roughness using an environment map in real‐time rendering?
Pre-filter the EM: Blurring (pre-filtering) the environment map yields rougher, more diffuse reflections (“reflection mapping”).
Weighted sampling: Instead of a simple Gaussian blur, weight and sample EM texels according to your reflection distribution (e.g. a Phong lobe with exponent p) for each pixel’s reflection vector.
Result: sharp, mirror-like reflections as roughness → 0; broad, fuzzy highlights as roughness increases.
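As a sketch of the weighted sampling, assuming a Phong lobe with exponent $p$ and clamped cosine weights: each texel of the filtered map, indexed by the reflection vector $\mathbf{r}$, stores

$$L_{\text{filtered}}(\mathbf{r}) = \frac{\sum_i \max(0,\ \mathbf{r}\cdot\boldsymbol{\omega}_i)^p\, L(\boldsymbol{\omega}_i)}{\sum_i \max(0,\ \mathbf{r}\cdot\boldsymbol{\omega}_i)^p}$$

so a large $p$ keeps the reflection mirror-like, while a small $p$ spreads it over many directions.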
What is irradiance mapping, and how does it simplify diffuse lighting?
Irradiance mapping precomputes and stores the diffuse lighting (irradiance) arriving from every direction in an environment map.
At runtime, you sample this map using the surface normal to get the total diffuse contribution from the complex environment—no per‐vertex dot‐products or summations needed.
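In formula form: each texel, indexed by the normal direction $\mathbf{n}$, stores the cosine-weighted integral of the incoming radiance $L$ over the hemisphere $\Omega(\mathbf{n})$,

$$E(\mathbf{n}) = \int_{\Omega(\mathbf{n})} L(\boldsymbol{\omega})\,(\mathbf{n}\cdot\boldsymbol{\omega})\, d\boldsymbol{\omega},$$

and the runtime diffuse term is simply the surface albedo times $E(\mathbf{n})$ (up to normalization): a single texture lookup.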
What are light maps, and how do they enable precomputed lighting in real‐time rendering?
Definition: Textures that store precomputed lighting (global illumination) for static scenes and objects, replacing expensive runtime radiosity.
Assumptions: Lighting and geometry are fixed—light transport is constant.
Storage:
View‐independent (diffuse) effects: store exitance/irradiance per‐texel in a light map.
View‐dependent effects (specular, normal‐map reflections): store irradiance + a dominant direction (or use an irradiance map), then reconstruct at runtime.
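A minimal sketch of applying a light map at shading time (illustrative types; in a real renderer this is one texture multiply in the fragment shader):

```cpp
struct Vec3 { float x, y, z; };

// Component-wise product of reflectance and stored lighting.
Vec3 mul(Vec3 a, Vec3 b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }

// 'albedo' is sampled from the color texture, 'lightMapTexel' from the
// light map (precomputed irradiance/exitance) at the same surface point.
Vec3 shade(Vec3 albedo, Vec3 lightMapTexel) {
    return mul(albedo, lightMapTexel);
}
```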
What are the elements of the transformation pipeline?
Scene (model) transformation
View transformation
Projection
Perspective division
Viewport transformation
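In symbols, assuming column vectors: a vertex $\mathbf{v}$ in object coordinates passes through

$$\mathbf{v}_{\text{clip}} = P\,V\,M\,\mathbf{v}, \qquad (x, y, z)_{\text{ndc}} = \left(\tfrac{x_{\text{clip}}}{w_{\text{clip}}},\ \tfrac{y_{\text{clip}}}{w_{\text{clip}}},\ \tfrac{z_{\text{clip}}}{w_{\text{clip}}}\right),$$

where $M$ is the model (scene) transform, $V$ the view transform and $P$ the projection; the viewport transform finally maps the $[-1,1]$ NDC range to pixel coordinates.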
Describe the graphics pipeline
Application stage: CPU-side work (scene management, animation, issuing draw calls)
Geometry stage: per-vertex work (transformations, lighting, clipping, projection)
Rasterization stage: per-pixel work (scan conversion, texturing and shading, visibility via the Z-buffer)
What’s the difference between clipping and culling, and what should you watch out for with backface culling?
Culling: Entire triangles that are back-facing or wholly outside the view frustum are discarded before rasterization. Often accelerated by testing simple bounding volumes (boxes or spheres) instead of every triangle.
Clipping: Triangles that straddle the frustum planes (i.e. partly inside, partly outside) are cut along the boundary, producing new, smaller triangles that fit entirely inside before projection.
Backface Culling Notes:
Works great for closed, manifold meshes—removes hidden backsides with zero cost.
Fails on open or two-sided geometry (you’ll see holes or missing faces).
Bad workarounds include duplicating triangles with flipped normals (causes Z-fighting) or disabling culling globally (hurts performance).
Better: Render closed surfaces with backface culling enabled first, then disable culling for any open or translucent geometry in a second pass.
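A sketch of the underlying backface test, assuming counter-clockwise front faces:

```cpp
struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// A triangle faces away from the viewer when its normal and the vector
// from the eye to the triangle point into the same half-space.
bool isBackFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0)); // from CCW winding
    Vec3 toFace = sub(v0, eye);                    // eye -> triangle
    return dot(normal, toFace) >= 0.0f;            // facing away: cull it
}
```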
How does clipping work?
At which state of the pipeline would you carry it out?
Clipping cuts away the parts of triangles that straddle the view-frustum boundary and lie (partially) outside
It is carried out after the transformation pipeline has been applied to the triangles, i.e. in the coordinate system produced by the projection transform, before the perspective division
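A sketch of one clipping step against a single frustum plane (one stage of the classic Sutherland-Hodgman approach; the inside and intersect helpers for the chosen plane are assumed):

```cpp
#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };

bool inside(const Vec4& v);                   // e.g. v.x <= v.w for the right plane
Vec4 intersect(const Vec4& a, const Vec4& b); // edge/plane intersection point

std::vector<Vec4> clipAgainstPlane(const std::vector<Vec4>& poly) {
    std::vector<Vec4> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec4& cur  = poly[i];
        const Vec4& next = poly[(i + 1) % poly.size()];
        if (inside(cur)) {
            out.push_back(cur);                     // keep inside vertex
            if (!inside(next))
                out.push_back(intersect(cur, next)); // leaving: add crossing point
        } else if (inside(next)) {
            out.push_back(intersect(cur, next));     // re-entering: add crossing point
        }
    }
    return out;  // clipping a triangle can yield a polygon with more vertices
}
```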
Where can the lighting be applied?
In the geometry stage when computing the attributes (in traditional pipelines with simple per-vertex models like Gouraud shading)
Or in the rasterization stage when shading pixels (Phong shading -> trend in modern pipelines)
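In symbols (lerp denotes barycentric interpolation across the triangle, $I(\cdot)$ the lighting model):

$$\text{Gouraud:}\ \ I_{\text{pixel}} = \operatorname{lerp}\big(I(\mathbf{n}_{v_0}), I(\mathbf{n}_{v_1}), I(\mathbf{n}_{v_2})\big) \qquad \text{Phong:}\ \ I_{\text{pixel}} = I\big(\operatorname{normalize}(\operatorname{lerp}(\mathbf{n}_{v_0}, \mathbf{n}_{v_1}, \mathbf{n}_{v_2}))\big)$$

so Phong shading evaluates the full lighting model per pixel and captures highlights that fall inside a triangle, which Gouraud shading smears or misses.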
What are triangle strips and triangle fans, and why do they improve rendering performance?
Concept: Instead of submitting each triangle with three separate vertices, strips and fans share vertices between adjacent triangles.
Triangle Strip: Each new triangle reuses the previous two vertices plus one new vertex—so after the first triangle, you add just one vertex per triangle.
Triangle Fan: All triangles share a single “center” vertex; each new triangle is formed by pairing the center with the previous fan vertex and a new one.
Benefit: Dramatically reduces the number of vertices processed (up to 3× speed‐up in practice) by avoiding repeated transformation of the same coordinates.
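The arithmetic: a strip encoding $n$ triangles needs only $n + 2$ vertices instead of $3n$, which approaches the 3× saving for long strips. A sketch of how strip indices expand into triangles (the winding is flipped on every other triangle so all keep the same orientation):

```cpp
#include <cstdio>

int main() {
    const int numStripVertices = 6;  // v0..v5 encode 4 triangles
    for (int i = 0; i + 2 < numStripVertices; ++i) {
        if (i % 2 == 0)
            std::printf("triangle: %d %d %d\n", i, i + 1, i + 2);
        else
            std::printf("triangle: %d %d %d\n", i + 1, i, i + 2); // flip winding
    }
}
```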
What are preserved states (display lists) in graphics APIs, and how do they improve performance?
State Machine: Graphics APIs like OpenGL let you set material/shader/vertex attributes once and preserve them across multiple draw calls, avoiding redundant state changes per triangle.
Display Lists: A way to record static geometry and its state on the GPU (e.g., a set of triangles sharing attributes) so it need not be resent each frame.
Benefits:
Eliminates repeated attribute setup and bus transfers for static objects.
Enables driver/hardware to optimize vertex sharing and draw-call batching.
Can yield significant speedups when rendering unchanging geometry.
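A sketch using the legacy OpenGL display-list calls (glNewList/glEndList/glCallList); the triangle itself is made up for illustration:

```cpp
#include <GL/gl.h>

// Record static geometry once; the driver can keep it GPU-side.
GLuint makeTriangleList() {
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);          // record commands, don't draw yet
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
    return list;
}

// Per frame: glCallList(list);  // replays the recorded commands cheaply
```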
What’s the difference between fixed‐function and programmable graphics pipelines?
Fixed‐Function Pipelines (e.g. legacy OpenGL):
Built‐in stages for transforms, lighting, texturing, etc.
States (like matrices, material parameters) are global for each draw call and cannot vary per‐vertex or per‐fragment.
Limited flexibility but simple to use.
Programmable Pipelines:
Replace fixed stages with user‐written shaders (vertex, geometry, fragment, etc.).
Full control over per‐vertex and per‐fragment operations, including arbitrary state and texture lookups.
Much greater flexibility for advanced effects at the cost of more complexity.
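As a small illustration of the contrast, a user-written vertex shader replacing the fixed transform stage (GLSL source held in a C++ string; the uniform name is an assumption):

```cpp
const char* vertexShaderSrc = R"(
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 modelViewProjection;   // replaces the fixed-function matrix stack
void main() {
    gl_Position = modelViewProjection * vec4(position, 1.0);
}
)";
```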
How do encapsulated (hierarchical) transformations work for connected components (e.g. a pendulum)?
Each component's transform is given relative to its parent, and its world transform is the product of all ancestor transforms (for a two-segment pendulum the lower arm uses M_{\text{upper}} \cdot M_{\text{lower}}), so rotating the upper segment automatically carries the lower one along. See the scene-graph answer below.
What is a scene graph, and how does it simplify hierarchical transforms?
Structure: A scene graph is a tree of nodes, each with a local transform M_{\text{local}} and zero or more children.
Matrix Stack Traversal:
Push the parent’s composite matrix.
Multiply by a child’s M_{\text{local}}.
Draw the child using the top of the stack as its world matrix.
Traverse grandchildren recursively.
Pop when done.
Benefit: Automatically maintains the correct parent–child transform relationships, letting you manage complex assemblies (cars, characters, etc.) with simple push/pop and draw calls.
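A sketch of that traversal with an explicit matrix stack (the Mat4 type, its product operator, and the draw call are assumed; only the push/draw/recurse/pop structure matters):

```cpp
#include <vector>

struct Mat4 { /* 4x4 matrix type, assumed */ };
Mat4 operator*(const Mat4& a, const Mat4& b);      // matrix composition

struct Node {
    Mat4 localTransform;                           // M_local
    std::vector<Node*> children;
    void draw(const Mat4& world) const;            // issue the draw call
};

void traverse(const Node& node, std::vector<Mat4>& stack) {
    // push: compose the parent's composite matrix with this node's local one
    stack.push_back(stack.back() * node.localTransform);
    node.draw(stack.back());                       // top of stack = world matrix
    for (const Node* child : node.children)
        traverse(*child, stack);                   // recurse into grandchildren
    stack.pop_back();                              // pop when done
}

// Usage: std::vector<Mat4> stack{ Mat4{} };  traverse(root, stack);
```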