How is texture mapping performed on a tessellated (triangulated) surface, and how are out-of-range UVs handled?
Per-Vertex UVs: Each mesh vertex carries a texture coordinate (u,v) (assigned in modeling or by a function).
Barycentric Interpolation: During rasterization, compute each pixel’s (u,v) by interpolating the three vertices’ UVs with barycentric weights \(\alpha,\beta,\gamma\).
Sampling: Use that per-pixel (u,v) to look up (and bilinearly filter) the texture.
Addressing Modes (when \(u,v\not\in[0,1]\)):
Repeat: wrap around (tile).
Mirror: reflect every other tile.
Clamp: clamp to edge texel.
Border: return a constant border color.
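A minimal Python sketch of the interpolation and addressing steps above; the function names and example values are illustrative:

```python
# Minimal sketch of per-pixel UV interpolation and the addressing modes above.

def interp_uv(uv0, uv1, uv2, alpha, beta, gamma):
    """Interpolate vertex UVs with barycentric weights (alpha+beta+gamma == 1)."""
    u = alpha * uv0[0] + beta * uv1[0] + gamma * uv2[0]
    v = alpha * uv0[1] + beta * uv1[1] + gamma * uv2[1]
    return u, v

def address(c, mode):
    """Map one coordinate into [0, 1]; 'border' instead returns a constant
    border color at the sampling stage, so it is not handled here."""
    if mode == "repeat":               # wrap around (tile)
        return c % 1.0
    if mode == "mirror":               # reflect every other tile
        t = c % 2.0
        return t if t <= 1.0 else 2.0 - t
    return min(max(c, 0.0), 1.0)       # clamp to edge texel

# Pixel with weights (0.2, 0.3, 0.5) on a triangle whose UVs tile twice:
u, v = interp_uv((0.0, 0.0), (2.0, 0.0), (0.0, 2.0), 0.2, 0.3, 0.5)
print(address(u, "repeat"), address(v, "repeat"))   # 0.6 0.0
```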
What problem does perspective-correct texture mapping solve, and how does it do so?
Interpolating texture coordinates linearly in screen space ignores perspective foreshortening, so textures appear distorted on surfaces viewed at an angle.
Correction: interpolate u/w, v/w, and 1/w linearly in screen space (w is the homogeneous clip-space coordinate), then divide per pixel by the interpolated 1/w to recover u and v; see the sketch below.
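A minimal sketch of the correction, assuming the three vertices’ clip-space w values are available; names and example values are illustrative:

```python
# Perspective-correct UV interpolation: interpolate u/w, v/w, and 1/w
# linearly in screen space, then divide per pixel.

def perspective_correct_uv(uvs, ws, alpha, beta, gamma):
    weights = (alpha, beta, gamma)
    inv_w = sum(wt / w for wt, w in zip(weights, ws))                # lerp 1/w
    u = sum(wt * uv[0] / w for wt, uv, w in zip(weights, uvs, ws))   # lerp u/w
    v = sum(wt * uv[1] / w for wt, uv, w in zip(weights, uvs, ws))   # lerp v/w
    return u / inv_w, v / inv_w                                      # undo /w

# One near vertex (w=1) and two far ones (w=4): the midpoint UV is pulled
# toward the near vertex, unlike a naive screen-space lerp giving (1/3, 1/3).
print(perspective_correct_uv([(0, 0), (1, 0), (0, 1)], [1.0, 4.0, 4.0], 1/3, 1/3, 1/3))
```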
Describe the pipeline from homogeneous coordinates to fragments.
Clipping in homogeneous clip space → perspective division by w (giving normalized device coordinates) → viewport transform to pixel coordinates → rasterization, which generates fragments with (perspective-correctly) interpolated attributes.
How does perspective projection use homogeneous coordinates, and what is perspective division?
The projection matrix outputs homogeneous clip coordinates (x, y, z, w) with w proportional to the eye-space depth; perspective division divides x, y, z by w, which scales distant points down more and thus produces foreshortening.
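A minimal sketch of perspective division followed by the viewport transform for one clip-space vertex; the flipped y and the example values are one common convention, not the only one:

```python
# Clip space -> NDC (perspective division) -> pixel coordinates (viewport).

def clip_to_pixel(clip, width, height):
    x, y, z, w = clip
    ndc = (x / w, y / w, z / w)                 # perspective division -> NDC
    px = (ndc[0] * 0.5 + 0.5) * width           # viewport transform
    py = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # flip y: NDC up vs. screen down
    return px, py, ndc[2]                       # screen position + depth

print(clip_to_pixel((1.0, 0.5, 2.0, 4.0), 800, 600))
```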
What is a bump map (normal map) and why is it used?
What’s the main benefit of bump mapping?
A texture that encodes per-pixel normals instead of just color.
Used to fake small-scale surface detail by perturbing the shading normal, so lighting shows bumps and dents without extra geometry.
Adds rich surface detail (tiny bumps, grooves) in the lighting without increasing the mesh’s triangle count—high visual fidelity at low geometric cost.
How do you apply a bump map during rendering?
Sample the normal map at the fragment’s UV coordinate to get a “bump” normal.
Transform it from tangent space into world (or eye) space.
Use this perturbed normal in your lighting calculations (diffuse, specular) instead of the interpolated mesh normal.
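A minimal sketch of these three steps with Lambert diffuse lighting; the texel, TBN basis, and light direction are illustrative stand-ins:

```python
# Applying a tangent-space normal map sample in lighting.

def normalize(v):
    l = sum(c * c for c in v) ** 0.5
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# 1. Sample the normal map and decode from [0,1] RGB to [-1,1]:
texel = (0.5, 0.6, 0.9)
n_ts = normalize(tuple(2.0 * c - 1.0 for c in texel))

# 2. Transform from tangent space to world space with the TBN basis
#    (tangent, bitangent, interpolated mesh normal); identity basis here:
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
n_ws = normalize(tuple(n_ts[0] * T[i] + n_ts[1] * B[i] + n_ts[2] * N[i]
                       for i in range(3)))

# 3. Use the perturbed normal in lighting (Lambert diffuse):
light_dir = normalize((0.3, 0.5, 1.0))
print(max(dot(n_ws, light_dir), 0.0))
```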
What is a displacement map, and how does it differ from bump mapping?
A displacement map is a 2D texture that stores per-texel offsets along the surface normal.
Unlike bump/normal maps, which only perturb shading normals (no real geometry change), displacement maps actually move each surface point by the stored offset.
Effects:
Produces correct self-shadowing and silhouettes.
Requires modifying vertex positions (e.g. in a tessellation or height-map pass) or adjusting the Z-value in a software rasterizer.
Benefit: Real geometric detail at low memory cost; true depth and occlusion without extra modeling.
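A minimal CPU-side sketch, with a placeholder height function standing in for the texture fetch and an assumed displacement scale:

```python
# Displacement mapping: move each vertex along its normal by the sampled height.

def height_at(u, v):
    # Placeholder height in [0, 1] instead of a real texture lookup.
    return 0.5 + 0.5 * ((u * 8) % 1.0 - 0.5)

def displace(position, normal, uv, scale=0.1):
    h = height_at(*uv)
    return tuple(p + n * h * scale for p, n in zip(position, normal))

print(displace((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.25, 0.5)))
```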
What is an environment map and how is it used to simulate reflections?
An environment map is a texture (e.g., a spherical or cubemap) that represents the surrounding scene.
For each shaded point, compute the reflection vector r of the view direction about the surface normal.
Convert r into UV coordinates to index into the environment map and fetch the reflected color.
This fakes specular reflections of the entire environment (not just light sources) with minimal cost.
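A minimal sketch assuming a spherical (lat-long) environment texture; the reflection formula is standard, the mapping conventions are one common choice:

```python
# Reflect the view direction about the normal, then map the reflection
# vector to (u, v) for a lat-long environment texture.
import math

def reflect(view, normal):
    # view points toward the surface; standard reflection v - 2(v.n)n.
    d = sum(v * n for v, n in zip(view, normal))
    return tuple(v - 2.0 * d * n for v, n in zip(view, normal))

def latlong_uv(r):
    x, y, z = r
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)           # azimuth -> u
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi   # elevation -> v
    return u, v

r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))  # head-on view, flat mirror
print(r, latlong_uv(r))
```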
What is a cube map, how do you generate one, and how is it applied to render reflective objects?
For one cube map, how many render passes do you need?
Cube Map: A form of environment map stored as six square textures—one for each face of a virtual cube surrounding the scene.
Generation: Place a camera at the environment’s center (with only background objects), render six 90°-FOV images looking along +X, –X, +Y, –Y, +Z, –Z, and store each in its corresponding cube face.
Application: For each reflective surface point, compute the world-space reflection vector r, determine which cube face r points into, and use its 2D (s,t) coordinates on that face to sample the cube map—automatically providing correct reflections (for the original capture point).
You need 6 render passes to capture the cube-map faces (with the reflective object excluded), plus one final pass to render the scene including the reflective object: 7 in total.
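A minimal sketch of the face selection and (s,t) projection; sign and orientation conventions vary per API, so this is one possible choice:

```python
# Pick the cube face from the reflection vector's largest-magnitude
# component, then project the other two components to (s, t) in [0, 1].

def cubemap_face_st(r):
    x, y, z = r
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                   # +X or -X face
        face, ma, sc, tc = ("+X" if x > 0 else "-X"), ax, (-z if x > 0 else z), -y
    elif ay >= az:                              # +Y or -Y face
        face, ma, sc, tc = ("+Y" if y > 0 else "-Y"), ay, x, (z if y > 0 else -z)
    else:                                       # +Z or -Z face
        face, ma, sc, tc = ("+Z" if z > 0 else "-Z"), az, (x if z > 0 else -x), -y
    s = 0.5 * (sc / ma + 1.0)                   # project onto the face,
    t = 0.5 * (tc / ma + 1.0)                   # remap [-1, 1] -> [0, 1]
    return face, s, t

print(cubemap_face_st((0.2, 0.9, -0.3)))        # mostly upward: +Y face
```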
How does the shadow mapping technique use projective textures to cast hard shadows?
Pass 1: render the scene from the light’s point of view and store each pixel’s depth d1 in a texture (the shadow map).
Pass 2: render from the camera; project each fragment into the light’s view as a projective texture lookup, read the stored depth d1, and compare it with the fragment’s own distance to the light, d2.
If d2 > d1, an occluder lies between the fragment and the light, so the fragment is in shadow; the binary test gives hard shadow edges (see the sketch below).
Each additional shadow-casting light source needs its own shadow map and extra pass.
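A minimal sketch of the per-fragment depth comparison, assuming the fragment’s light-space NDC position is already available; the lookup function and bias value are illustrative:

```python
# Shadow test: compare this fragment's depth from the light (d2) with the
# occluder depth stored in the shadow map (d1).

def in_shadow(light_ndc, shadow_map_lookup, bias=1e-3):
    u = 0.5 * (light_ndc[0] + 1.0)        # light-space NDC -> shadow-map UV
    v = 0.5 * (light_ndc[1] + 1.0)
    d1 = shadow_map_lookup(u, v)          # depth stored by the light pass
    d2 = 0.5 * (light_ndc[2] + 1.0)       # this fragment's depth from the light
    return d2 > d1 + bias                 # farther than the stored occluder

# Flat stand-in shadow map: everything at depth 0.4 as seen from the light.
print(in_shadow((0.0, 0.0, -0.4), lambda u, v: 0.4))  # False: fragment is lit
print(in_shadow((0.0, 0.0, 0.2), lambda u, v: 0.4))   # True: in shadow
```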
What is multi-pass rendering and how is it used to achieve complex effects?
Definition: Rendering the same scene multiple times off-screen (into intermediate textures), each pass computing a different effect (e.g. shadows, reflections, highlights, lighting passes).
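An abstract sketch of how passes chain through intermediate textures; render() is a stand-in, not a real API:

```python
# Each pass renders off-screen into a texture that later passes can sample;
# the final pass composites everything for the screen.

def render(pass_name, inputs):
    # Stand-in for a real off-screen draw call; returns a fake texture handle.
    return f"tex({pass_name}, reads {len(inputs)} input(s))"

shadow = render("shadow_depth", inputs=[])            # depth from the light
reflection = render("environment_faces", inputs=[])   # e.g. six cube-map faces
frame = render("main_scene", inputs=[shadow, reflection])
print(frame)
```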
How does mip mapping work?
For each texture, multiple resolution versions (mip levels) are pre-computed. Depending on the camera distance, the appropriate level of detail is chosen for texturing:
• if the camera is close, high-resolution levels provide details
• if the camera is far, low-resolution levels are good enough (fine details would not be visible anyway)
• this avoids aliasing and improves rendering performance at a modest memory cost (about 1/3 extra); see the sketch below
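A minimal sketch of choosing the mip level from a pixel’s texel footprint (screen-space UV derivatives); the derivative values are illustrative:

```python
# level = log2(texels covered per pixel), clamped to the available levels.
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_size, num_levels):
    # Screen-space UV derivatives scaled to texel units.
    fx = math.hypot(du_dx * tex_size, dv_dx * tex_size)
    fy = math.hypot(du_dy * tex_size, dv_dy * tex_size)
    rho = max(fx, fy)                             # texels stepped per pixel
    level = max(0.0, math.log2(max(rho, 1e-8)))
    return min(level, num_levels - 1)

# Close-up: ~1 texel per pixel -> level 0 (full resolution).
print(mip_level(1/256, 0, 0, 1/256, 256, 9))
# Far away: ~8 texels per pixel -> level 3 (256 -> 32 texels wide).
print(mip_level(8/256, 0, 0, 8/256, 256, 9))
```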