What is ray tracing primarily used for in computer graphics?
Why is ray tracing commonly used in the movie industry but not in video games?
How does ray tracing differ from rasterization in terms of rendering order?
Creating (photo-)realistic images.
Because movie rendering doesn’t require real-time performance, unlike games.
Ray tracing works pixel-by-pixel, while rasterization works primitive-by-primitive.
What is the complexity of primitive-by-primitive projection?
What is the complexity of ray tracing?
O(M), where M is the number of triangles, i.e., the cost grows linearly with the number of triangles.
O(N), where N is the number of pixels; naively each ray also has to be tested against all M triangles, giving O(N·M), but with more efficient data structures the per-ray cost becomes sublinear in M.
What happens for each pixel during ray tracing?
What advantage does ray tracing offer regarding parallel computation?
What does “modulation with different objects is accumulated” mean in the context of ray tracing?
A ray is traced through the scene, and interactions with objects modulate the final pixel value.
It can be easily parallelized across many machines, such as in render farms, because every ray computation is independent and can be done in isolation.
It refers to collecting the effects of light interactions (reflection, refraction, shadows) along a ray’s path.
How can we define local shading?
What is global shading?
If the shading of each surface point is computed only with respect to the light sources directly (not considering indirect illumination), this is called local shading.
Effects that consider indirect lighting, like inter-reflection or scattering. Done via e.g. ray tracing or radiosity.
What distinguishes ray casting from ray tracing in terms of ray types and shading effects?
Ray casting only uses primary rays and applies a local shading model, while ray tracing includes secondary rays, enabling effects like reflections, refractions, and shadows.
Why is ray casting simpler and less computationally intensive than ray tracing?
It avoids complex light interactions, does not require depth buffers or rasterization, and stops after the first intersection.
How are rays generated and used in ray tracing to compute the final image?
Rays are cast from the camera through pixel centers on the viewing plane
Each ray intersects scene geometry (typically triangles)
Surface properties (color, normal, transparency, etc.) are gathered
Secondary rays may be spawned (e.g., reflection, refraction)
Modulation info is accumulated along the ray path
Final ray properties determine the pixel’s color
Repeats until exit condition (e.g., max depth or no intersection)
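A minimal runnable sketch of this loop (illustrative only: a single-sphere scene, flat normal-based shading, and made-up constants; not the lecture's implementation):

```python
import numpy as np

WIDTH, HEIGHT = 64, 48
EYE = np.array([0.0, 0.0, 0.0])
SPHERE_C, SPHERE_R = np.array([0.0, 0.0, -5.0]), 1.0

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest t > 0 where the ray hits the sphere, or None."""
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

image = np.zeros((HEIGHT, WIDTH, 3))
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Primary ray through the pixel center on a viewing plane at z = -1.
        s = np.array([(x + 0.5) / WIDTH - 0.5, 0.5 - (y + 0.5) / HEIGHT, -1.0])
        d = s - EYE
        t = intersect_sphere(EYE, d, SPHERE_C, SPHERE_R)
        if t is not None:
            # Gather surface properties; here just a simple normal-based color.
            n = (EYE + t * d) - SPHERE_C
            n /= np.linalg.norm(n)
            image[y, x] = 0.5 * (n + 1.0)   # accumulated "modulation" for this ray
        # else: background stays black (exit condition: no intersection)
```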
What distinguishes primary and secondary rays in ray tracing, and why is this distinction important?
Primary rays: cast from the camera through pixel centers
Secondary rays: created after primary rays hit surfaces
Secondary rays enable:
Reflections
Refractions
Shadows
Essential for realistic lighting and global illumination
Why is ray tracing highly parallelizable, and how does this benefit rendering?
Each pixel’s ray is independent of others
No data dependencies between rays
Enables:
Parallel processing on GPUs or CPUs
Efficient use of render farms
Scalable rendering performance for high-quality scenes
How is a viewing ray mathematically expressed in ray tracing?
Using the parametric line equation: p(t) = e + t(s - e)
e: eye (camera) position (origin of the ray)
s: point on the screen (viewing plane)
t ∈ [0, ∞): position along the ray; t < 0 implies ray is behind the eye
How can the screen point s be computed for viewing rays?
Use transformation matrices (view and projection)
Two options:
Define s in camera coordinates, then apply inverse view transform to world
Apply view transform to the scene first, then ray trace in camera coordinates
In both cases, camera origin e = (0, 0, 0) in camera space
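A possible sketch of the first option, building s in camera coordinates and mapping it to world space with the inverse view matrix (the look_at construction is standard, but all names here are illustrative, not from the slides):

```python
import numpy as np

def look_at(eye, target, up):
    """Build a 4x4 world-to-camera (view) matrix."""
    f = target - eye; f /= np.linalg.norm(f)          # forward
    r = np.cross(f, up); r /= np.linalg.norm(r)       # right
    u = np.cross(r, f)                                # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def primary_ray_world(px, py, width, height, view):
    """Ray through a pixel center, built in camera space and moved to world space."""
    inv_view = np.linalg.inv(view)
    # Camera space: eye at the origin, viewing plane at z = -1.
    s_cam = np.array([(px + 0.5) / width - 0.5, 0.5 - (py + 0.5) / height, -1.0, 1.0])
    e_world = (inv_view @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
    s_world = (inv_view @ s_cam)[:3]
    return e_world, s_world - e_world                 # ray origin and direction

eye = np.array([3.0, 2.0, 5.0])
view = look_at(eye, target=np.array([0.0, 0.0, 0.0]), up=np.array([0.0, 1.0, 0.0]))
origin, direction = primary_ray_world(32, 24, 64, 48, view)
```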
How are ray-sphere intersections computed in ray tracing?
How can barycentric coordinates be used for ray-triangle intersection?
What validation steps are required after solving a ray-triangle intersection?
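In brief: for a sphere, substitute the ray p(t) = e + t·d into the implicit sphere equation ||p − c||² = r² and solve the resulting quadratic for t (the discriminant decides hit or miss, as in the sphere test sketched above). For a triangle with vertices a, b, c, write the hit point in barycentric coordinates, e + t·d = a + β(b − a) + γ(c − a), and solve the 3×3 linear system for (t, β, γ). Validation afterwards: t must be positive (hit in front of the eye) and β ≥ 0, γ ≥ 0, β + γ ≤ 1 (hit inside the triangle). A sketch of the triangle case (numpy, illustrative names):

```python
import numpy as np

def ray_triangle(e, d, a, b, c, eps=1e-9):
    """Solve e + t*d = a + beta*(b-a) + gamma*(c-a) for (t, beta, gamma)."""
    M = np.column_stack((b - a, c - a, -d))    # 3x3 system matrix
    if abs(np.linalg.det(M)) < eps:
        return None                             # ray parallel to the triangle plane
    beta, gamma, t = np.linalg.solve(M, e - a)
    # Validation: hit must lie in front of the eye and inside the triangle.
    if t <= 0.0:
        return None                             # intersection behind the ray origin
    if beta < 0.0 or gamma < 0.0 or beta + gamma > 1.0:
        return None                             # outside the triangle
    return t, beta, gamma                       # alpha = 1 - beta - gamma

# Example: ray straight down the -z axis hitting a triangle in the z = -2 plane.
hit = ray_triangle(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]),
                   np.array([-1.0, -1.0, -2.0]), np.array([1.0, -1.0, -2.0]),
                   np.array([0.0, 1.0, -2.0]))
```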
Why are spheres commonly used in ray-object intersection tests, even if not all objects are spherical?
Used as bounding volumes to quickly eliminate rays that won’t hit the object
Enables fast pre-intersection tests
Reduces the number of expensive triangle intersection checks
Helps accelerate rendering in complex scenes
How are hard shadows generated in ray tracing?
Cast one secondary shadow ray from surface point to light source
If ray intersects another object → point is in shadow
If not intersected → point is illuminated
Produces hard-edged shadows
Same concept is used in shadow mapping
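A sketch of such a shadow-ray test (intersect_fn is a placeholder for whatever hit test the scene's objects use, e.g. the sphere or triangle tests above):

```python
import numpy as np

def in_shadow(point, normal, light_pos, occluders, intersect_fn, eps=1e-4):
    """Cast one shadow ray from a surface point toward a point light.

    intersect_fn(origin, direction, obj) is assumed to return the hit
    parameter t or None."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    direction = to_light / dist
    origin = point + eps * normal          # offset to avoid self-intersection ("shadow acne")
    for obj in occluders:
        t = intersect_fn(origin, direction, obj)
        if t is not None and t < dist:     # blocker between point and light -> in shadow
            return True
    return False
```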
What are soft shadows and how are they produced?
Caused by area light sources, not point lights
Region splits into:
Umbra: fully shadowed
Penumbra: partially shadowed
Achieved by casting multiple shadow rays toward randomly sampled points on the area light
Varying occlusion of rays determines shadow intensity
How is shadow intensity determined when using multiple shadow rays?
Rays from core shadow regions always intersect → fully dark
Rays from illuminated regions never intersect → fully lit
Rays from penumbra partially intersect → partial shading
Intensity = proportion of rays blocked vs unblocked
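A sketch of that sampling, assuming a rectangular area light spanned by light_corner, light_u and light_v (all names illustrative; intersect_fn is the same hypothetical hit test as before):

```python
import numpy as np

def soft_shadow_factor(point, normal, light_corner, light_u, light_v,
                       occluders, intersect_fn, samples=16, eps=1e-4):
    """Fraction of the area light visible from a surface point (0 = umbra, 1 = lit)."""
    rng = np.random.default_rng(0)
    unblocked = 0
    origin = point + eps * normal
    for _ in range(samples):
        # Pick a random point on the area light and cast one shadow ray toward it.
        target = light_corner + rng.random() * light_u + rng.random() * light_v
        to_light = target - origin
        dist = np.linalg.norm(to_light)
        direction = to_light / dist
        blocked = any(
            (t := intersect_fn(origin, direction, obj)) is not None and t < dist
            for obj in occluders
        )
        unblocked += not blocked
    return unblocked / samples   # partial values correspond to the penumbra
```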
How to create soft shadows in the case of shadow mapping?
Soft shadows in shadow mapping are approximated using filtering techniques like PCF or PCSS. True area light behavior like in ray tracing is harder to achieve but can be approximated using multi-sampling or variance-based methods.
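A minimal numpy sketch of the PCF idea (illustrative only; real implementations do this lookup in a fragment shader):

```python
import numpy as np

def pcf_shadow(shadow_map, u, v, fragment_depth, radius=1, bias=1e-3):
    """Percentage-closer filtering: average the binary shadow test over a
    (2*radius+1)^2 neighbourhood of the shadow-map texel instead of doing a
    single lookup, which softens the shadow edge.

    shadow_map: 2D array of depths as seen from the light;
    (u, v): the fragment's texel coordinates in the shadow map."""
    h, w = shadow_map.shape
    lit, count = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y = np.clip(v + dy, 0, h - 1)
            x = np.clip(u + dx, 0, w - 1)
            lit += float(fragment_depth <= shadow_map[y, x] + bias)
            count += 1
    return lit / count   # 1.0 fully lit, 0.0 fully shadowed, in between = soft edge
```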
How is reflection computed in ray tracing?
When a ray hits a reflective surface, compute a reflection ray
Use the surface normal at the intersection point
Apply law of reflection: incoming angle = outgoing angle
Trace the reflection ray recursively for multi-reflections
Each reflected ray contributes to the final pixel color
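The reflection direction itself follows from r = d − 2(d·n)n; a small sketch:

```python
import numpy as np

def reflect(d, n):
    """Mirror the incoming direction d about the unit surface normal n
    (law of reflection: r = d - 2 (d . n) n). In a recursive ray tracer this
    direction is traced again from the hit point."""
    return d - 2.0 * np.dot(d, n) * n

# Example: a ray coming in at 45 degrees onto a floor with normal +y.
incoming = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(reflect(incoming, np.array([0.0, 1.0, 0.0])))   # -> [0.707..., 0.707..., 0.]
```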
How does ray-traced reflection differ from environment/cube mapping?
Cube mapping:
Fast approximation used in real-time rendering
Accurate only at the cube map’s center
Ray tracing:
Computes true view-dependent reflections
Correct for any viewpoint and surface
Captures complex interreflections
How is refraction handled in ray tracing?
If a ray hits a transparent surface, a refraction ray is computed
Requires:
Surface normal at the intersection point
Material’s refraction index (via Snell’s Law)
The refracted ray bends based on material properties
This process can be repeated for multi-refractions (e.g., glass within glass)
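A sketch of the refraction direction from Snell's law, including the total-internal-reflection case (assumes unit vectors and a normal that faces the incoming ray):

```python
import numpy as np

def refract(d, n, eta_i, eta_t):
    """Refract unit direction d at a surface with unit normal n,
    going from refractive index eta_i into eta_t.

    Returns the refracted direction, or None on total internal reflection."""
    eta = eta_i / eta_t
    cos_i = -np.dot(n, d)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)   # Snell's law: sin_t = eta * sin_i
    if sin2_t > 1.0:
        return None                               # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Air (1.0) into glass (~1.5); for chromatic aberration one would call this
# once per color channel with a slightly different eta_t.
d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(refract(d, np.array([0.0, 1.0, 0.0]), 1.0, 1.5))
```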
What is chromatic aberration and how is it handled in ray tracing?
Refers to different wavelengths (colors) bending differently
Caused by wavelength-dependent refractive indices
Leads to color fringing at edges
To simulate it, trace separate refraction rays per color channel (e.g., R, G, B)
What is distribution ray tracing and what rendering effects does it enable?
Shoots multiple randomly sampled rays per pixel
Increases realism and reduces artifacts
Supports:
Soft shadows (via many shadow rays to area light)
Glossy reflections (by sampling reflection directions)
Depth of field (by varying viewing ray origins across eye aperture)
Antialiasing (by sampling multiple rays per pixel)
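A sketch of the antialiasing case, averaging several jittered primary rays per pixel (trace_fn stands in for whatever function returns a color for one viewing ray):

```python
import numpy as np

def pixel_color_jittered(px, py, width, height, trace_fn, samples=8, rng=None):
    """Distribution ray tracing for antialiasing: average several primary rays
    whose targets are jittered randomly inside the pixel, instead of shooting
    one ray through the pixel center."""
    rng = rng or np.random.default_rng(0)
    color = np.zeros(3)
    for _ in range(samples):
        jx, jy = rng.random(), rng.random()    # random offset inside the pixel
        s = np.array([(px + jx) / width - 0.5, 0.5 - (py + jy) / height, -1.0])
        color += trace_fn(s)
    return color / samples
```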
When thinking of raytracing as a huge graph, how can I think of depth and width?
Depth: the number of recursive bounces a ray takes on its way through the scene -> more bounces (reflections/refractions) lead to a deeper tree.
Width: the number of rays spawned at an intersection in the form of child nodes; distribution ray tracing makes the tree wider (e.g. one intersection can spawn one reflection ray and 4 shadow rays).
How is depth of field simulated using distribution ray tracing?
Define a focal plane in the scene
Shoot multiple viewing rays per pixel through this plane
Randomize ray origins across a small eye aperture area
Results in:
Sharp focus at the focal plane
Blurring for objects in front/behind
Realistic optical depth-of-field effect
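A sketch of how one such jittered viewing ray could be built (camera space, eye at the origin, viewing plane at z = −1; the square aperture is an illustrative simplification):

```python
import numpy as np

def dof_ray(pixel_target, focal_distance, aperture, rng=None):
    """Depth-of-field sampling: jitter the ray origin over a small aperture
    area while keeping the ray aimed at the point where the original viewing
    ray pierces the focal plane. Points on the focal plane stay sharp;
    everything else is averaged over slightly different viewpoints and blurs.

    pixel_target: screen point s of the un-jittered viewing ray."""
    rng = rng or np.random.default_rng(0)
    # Where the original ray t * s hits the focal plane z = -focal_distance.
    t = focal_distance / -pixel_target[2]      # assumes the viewing plane is at negative z
    focus_point = t * pixel_target
    # Jitter the ray origin inside a square aperture around the eye.
    origin = np.array([(rng.random() - 0.5) * aperture,
                       (rng.random() - 0.5) * aperture, 0.0])
    return origin, focus_point - origin        # new origin and direction
```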
What is (bidirectional) path tracing and what global illumination effects does it support?
Path tracing:
Shoots secondary rays at diffuse surfaces
Simulates global illumination like:
Color bleeding (light bouncing with color tint)
Caustics (focused light patterns)
Bidirectional path tracing:
Shoots rays from both eye and light source
More accurate and efficient than basic path tracing
Related method: Photon Mapping
Limitations:
Diffuse effects need many rays to converge
Alternatives like Radiosity may work better for purely diffuse scenes
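A sketch of the diffuse-bounce step only (one building block of path tracing, not the full algorithm), using cosine-weighted hemisphere sampling around the surface normal, which is one common choice:

```python
import numpy as np

def sample_diffuse_bounce(normal, rng=None):
    """Pick a random continuation direction at a diffuse surface, cosine-weighted
    around the unit surface normal. Tracing this secondary ray is what produces
    color bleeding and other indirect-lighting effects."""
    rng = rng or np.random.default_rng(0)
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis around the normal and rotate the sample into it.
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    tangent = np.cross(normal, helper); tangent /= np.linalg.norm(tangent)
    bitangent = np.cross(normal, tangent)
    return local[0] * tangent + local[1] * bitangent + local[2] * normal
```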
What are two main strategies to speed up ray tracing?
Parallel processing (distributes workload):
GPUs, multi-core CPUs, render farms, RPUs
Optimized data structures (reduce intersection tests):
Object partitioning (e.g. BVH)
Space partitioning (e.g. BSP trees, Kd-trees)
Mixed approaches (e.g. BIH)
What is the advantage of using bounding volumes in ray tracing?
Avoids testing rays against every object
Reduces per-ray complexity from linear in the number of objects to sub-linear
Bounding boxes tested first; only test contained triangles if ray intersects
Can be:
Axis-aligned (faster, simpler)
Oriented (more accurate but slightly costlier)
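A sketch of the classic slab test for an axis-aligned bounding box:

```python
import numpy as np

def hit_aabb(origin, direction, box_min, box_max):
    """Slab test for an axis-aligned bounding box: a cheap pre-test before the
    (more expensive) triangle intersections of the contained object."""
    inv = 1.0 / direction                      # assumes no exactly-zero components
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t0, t1))        # latest entry across the three slabs
    t_far = np.min(np.maximum(t0, t1))         # earliest exit
    return t_far >= max(t_near, 0.0)           # overlap in front of the origin -> hit

print(hit_aabb(np.array([0.0, 0.0, 5.0]), np.array([0.05, 0.05, -1.0]),
               np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])))   # True
```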
How do hierarchical bounding boxes (HBBs) improve efficiency?
Use tree structures of nested bounding boxes
Top: large boxes group subregions
Leaf nodes: individual object bounds
Ray traverses tree, testing fewer objects
Fast to build, often unbalanced, but efficient in practice
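A sketch of such a traversal (hit_box_fn and hit_triangle_fn stand in for the box and triangle tests sketched earlier, with hit_triangle_fn assumed to return just the hit parameter t or None, and triangles stored as (a, b, c) vertex tuples):

```python
class BVHNode:
    """Inner node: bounding box + children; leaf: bounding box + triangle list."""
    def __init__(self, box_min, box_max, children=None, triangles=None):
        self.box_min, self.box_max = box_min, box_max
        self.children = children or []
        self.triangles = triangles or []

def traverse(node, origin, direction, hit_box_fn, hit_triangle_fn):
    """Walk the bounding-box hierarchy: skip whole subtrees whose box is missed,
    and only test triangles stored in leaves the ray actually reaches."""
    if not hit_box_fn(origin, direction, node.box_min, node.box_max):
        return None                                   # prune the entire subtree
    if node.triangles:                                # leaf: test the real geometry
        hits = [t for tri in node.triangles
                if (t := hit_triangle_fn(origin, direction, *tri)) is not None]
        return min(hits) if hits else None
    child_hits = [t for child in node.children
                  if (t := traverse(child, origin, direction,
                                    hit_box_fn, hit_triangle_fn)) is not None]
    return min(child_hits) if child_hits else None    # closest hit among children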
What is space subdivision in ray tracing, and how does it accelerate ray-object intersection?
A space partitioning technique to avoid testing all triangles
Uniform subdivision:
Divides space into a regular 3D grid of cells
Rays are tested cell-by-cell, reducing tests to local geometry
Non-uniform subdivision:
Uses BSP trees (Binary Space Partitioning)
Scene is split recursively along axis-aligned planes
Especially efficient in the form of Kd-trees (K=3)
Inside each cell, ray traversal is limited to entry/exit points, improving performance
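A sketch of stepping a ray through a uniform grid (3D-DDA-style traversal; assumes the ray origin lies inside the grid, and all parameter names are illustrative):

```python
import numpy as np

def grid_cells_along_ray(origin, direction, grid_min, cell_size, grid_res, max_steps=64):
    """List the grid cells a ray visits, in order. Only the triangles stored in
    these cells need to be tested, instead of the whole scene."""
    pos = (origin - grid_min) / cell_size
    cell = np.floor(pos).astype(int)
    step = np.where(direction >= 0, 1, -1)
    next_boundary = np.where(direction >= 0, cell + 1, cell)
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = (next_boundary - pos) * cell_size / direction   # t at next boundary per axis
        t_delta = np.abs(cell_size / direction)                 # t between successive boundaries
    visited = []
    for _ in range(max_steps):
        if np.any(cell < 0) or np.any(cell >= grid_res):
            break                                  # ray has left the grid
        visited.append(tuple(cell))
        axis = int(np.argmin(t_max))               # cross the nearest cell boundary next
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited

cells = grid_cells_along_ray(np.array([0.5, 0.5, 0.5]), np.array([1.0, 0.3, 0.2]),
                             np.array([0.0, 0.0, 0.0]), 1.0, np.array([8, 8, 8]))
```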