3D Rendering (Computer Graphics)

Rendering in 3D computer graphics is the process of computing a 2D image from a 3D model. The computation is physically based to varying degrees, but it commonly models a camera and some form of lighting effects to be somewhat realistic and to create relatable, familiar images, even in cases of non-photorealistic rendering. To compute an image, virtual descriptions of objects, or light-interacting material (geometry), are lit, and a virtual camera defines a projection for the final image to be rendered. The light contribution from the scene is then computed for each pixel.
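
To make this per-pixel structure concrete, the following is a minimal sketch, not tied to any graphics API: a pinhole camera maps each pixel to a ray, the ray is intersected with a single sphere standing in for the scene geometry, and a simple Lambertian term from one directional light is written out as a grayscale image (plain PGM format). All scene values and names here are made up for illustration.

    // Minimal sketch of the per-pixel structure described above: a pinhole
    // camera maps each pixel to a ray, the ray is intersected with a single
    // sphere standing in for the scene geometry, and a Lambertian term from
    // one directional light is written out as a plain grayscale PGM image.
    // All scene values are illustrative, not tied to any API.
    #include <cmath>
    #include <cstdio>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
        Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

    int main() {
        const int width = 256, height = 256;
        const Vec3 sphereCenter{0, 0, -3};
        const float sphereRadius = 1.0f;
        const Vec3 lightDir = normalize({1, 1, 1});     // direction towards the light

        std::printf("P2\n%d %d\n255\n", width, height); // plain PGM header
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Pinhole camera at the origin: map the pixel to a ray direction.
                Vec3 dir = normalize({(x + 0.5f) / width * 2 - 1,
                                      1 - (y + 0.5f) / height * 2,
                                      -1.0f});
                // Ray-sphere intersection: solve the quadratic in the ray parameter t.
                Vec3 oc = Vec3{0, 0, 0} - sphereCenter;
                float b = dot(oc, dir);
                float c = dot(oc, oc) - sphereRadius * sphereRadius;
                float disc = b * b - c;
                float shade = 0.1f;                     // background
                if (disc >= 0) {
                    float t = -b - std::sqrt(disc);     // nearest hit along the ray
                    if (t > 0) {
                        Vec3 n = normalize(dir * t - sphereCenter);
                        // Lambertian lighting: proportional to cos(angle to the light).
                        shade = std::fmax(0.0f, dot(n, lightDir));
                    }
                }
                std::printf("%d ", (int)(shade * 255));
            }
            std::printf("\n");
        }
    }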

There are many ways to render, but two dominant approaches are ray tracing and rasterization, sometimes in combination. When ray tracing, possible light paths are projected from pixels through the camera and into the scene, hitting geometry and ultimately searching for paths to light sources. Rasterization is an efficient way to compute the first pixel–scene intersections by projecting geometry into the image: for example, triangles are placed over the image pixels and lighting equations are computed for the pixels they cover. Both have their own challenges and both are hardware accelerated on GPUs. Ray tracing approaches need an acceleration structure to be fast, which must be rebuilt for animation, and typically need complex sampling and denoising techniques. Rasterization approaches need occlusion culling to be fast, and complex techniques for lighting effects such as shadow maps and grid- or probe-based indirect lighting. There are also photon mapping and radiosity techniques that trace light paths from the light into the scene for more efficient secondary lighting effects such as indirect illumination, volumetric lighting and caustics.
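
For contrast with the ray-based sketch above, here is a minimal sketch of the rasterization side: a single triangle, already given in pixel coordinates, is tested for pixel coverage with edge functions, and covered pixels are written out white. The triangle coordinates are arbitrary illustrative values; a real rasterizer would also project the vertices, interpolate attributes, perform depth testing and restrict work to pixels near the triangle.

    // Sketch of rasterization with edge functions: one triangle, already given
    // in pixel coordinates, is tested for coverage at every pixel centre and
    // covered pixels are written out white as a plain grayscale PGM image.
    #include <cstdio>

    // Twice the signed area of triangle (a, b, p); its sign tells which side of
    // the edge a->b the point p lies on.
    float edge(float ax, float ay, float bx, float by, float px, float py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    int main() {
        const int width = 64, height = 64;
        // Triangle vertices in pixel coordinates, wound so that the edge
        // functions are positive inside.
        const float x0 = 8, y0 = 8, x1 = 56, y1 = 20, x2 = 24, y2 = 56;

        std::printf("P2\n%d %d\n255\n", width, height);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float px = x + 0.5f, py = y + 0.5f;   // sample at the pixel centre
                float w0 = edge(x1, y1, x2, y2, px, py);
                float w1 = edge(x2, y2, x0, y0, px, py);
                float w2 = edge(x0, y0, x1, y1, px, py);
                // Inside if the sample is on the inner side of all three edges;
                // this is where a lighting equation would be evaluated.
                bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0);
                std::printf("%d ", inside ? 255 : 0);
            }
            std::printf("\n");
        }
    }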

If you’re looking to get started coding, OpenGL is an easier introduction to rasterization. Ray tracing can be hardware accelerated with the Vulkan and DirectX APIs, as well as NVIDIA OptiX. Higher-level rendering engines may be an option too.

A number of methods exist to describe the geometry of 3D objects, but the most widely used in real-time applications is polygons. In particular triangles, forming triangle meshes, are simple and have some convenient properties. Other object descriptions include point clouds, 3D Gaussians, lines, voxels, surfels, higher-order geometry such as freeform surfaces (NURBS, Bézier and subdivision patches), isosurfaces, and neural representations such as NeRFs and triplanes.
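
As a sketch of why indexed triangle meshes are convenient, the layout below stores each vertex once and lets triangles refer to vertices by index, so vertices shared along edges are not duplicated. The attribute set (position, normal, uv) and structure names are illustrative conventions, not requirements of any API.

    // Minimal indexed triangle mesh: each vertex is stored once and triangles
    // refer to vertices by index, so vertices shared between triangles are not
    // duplicated. The attribute set and layout are a common convention only.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Vertex {
        std::array<float, 3> position;
        std::array<float, 3> normal;   // per-vertex attribute used for shading
        std::array<float, 2> uv;       // texture coordinates
    };

    struct TriangleMesh {
        std::vector<Vertex> vertices;        // unique vertices
        std::vector<std::uint32_t> indices;  // three indices per triangle
        std::size_t triangleCount() const { return indices.size() / 3; }
    };

    // Example: a unit quad built from two triangles that share two vertices.
    TriangleMesh makeQuad() {
        TriangleMesh m;
        m.vertices = {
            {{0, 0, 0}, {0, 0, 1}, {0, 0}},
            {{1, 0, 0}, {0, 0, 1}, {1, 0}},
            {{1, 1, 0}, {0, 0, 1}, {1, 1}},
            {{0, 1, 0}, {0, 0, 1}, {0, 1}},
        };
        m.indices = {0, 1, 2,  0, 2, 3};
        return m;
    }

    int main() {
        TriangleMesh quad = makeQuad();
        std::printf("%zu triangles, %zu vertices\n",
                    quad.triangleCount(), quad.vertices.size());
    }

GPU APIs consume essentially this vertex-plus-index layout when drawing indexed triangle lists.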

Light sources emit light and may be just as complex as any other 3D geometry in the scene. For example, geometry surfaces may be defined to glow with an emissive attribute, or the gaseous volume of a flame may emit light. Indirect light bouncing off geometry in the scene may also be considered a light source. Simplified point light sources are faster to compute. To be lit, geometry also needs materials that define how light interacts with its surfaces or volume before reaching the virtual camera. Finally, there is the rendering equation, which acts as a stage in the computation that modularizes the light–surface interaction. A material is often a statistical approximation of some underlying surface microgeometry, as in microfacet models.
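
In its common form, the rendering equation expresses the outgoing radiance L_o at a surface point x in direction ω_o as the emitted radiance plus the incoming radiance from all directions ω_i over the hemisphere Ω, weighted by the material's BRDF f_r and the cosine of the angle to the surface normal n:

    L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f_r(x, ω_i, ω_o) L_i(x, ω_i) (n · ω_i) dω_i

Ray tracing estimates the integral by sampling incoming directions, while rasterization pipelines typically approximate it with a small set of analytic lights plus precomputed or screen-space terms.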

The virtual camera must define a projection, which creates a mapping between the virtual 3D world and 2D image coordinates. It typically also has a position and orientation relative to the objects in the scene. These transformations are often collapsed into a single combined matrix, and it can be surprising at first to find there is no built-in concept of a camera in graphics APIs like OpenGL.
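
As a sketch of what this means in practice, the "camera" below is nothing but matrices: a model matrix placing the object, a view matrix (here a simple translation that moves the world away from the origin), and a perspective projection, composed into one model-view-projection matrix that a graphics API would receive. The row-major layout and the specific values are illustrative choices, not OpenGL's conventions.

    // Sketch of the model-view-projection idea: the "camera" is only matrices.
    // A model matrix places the object, a view matrix is the inverse of the
    // camera's placement (here a plain translation that moves the world away
    // from the origin), and a projection matrix maps the view volume to clip
    // space. Matrices are stored row-major here purely for readability.
    #include <array>
    #include <cmath>
    #include <cstdio>

    using Mat4 = std::array<float, 16>;   // row-major 4x4 matrix

    Mat4 identity() {
        return {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    }

    Mat4 multiply(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
        return r;
    }

    // Moving the camera back by `distance` is the same as translating the whole
    // world forward by `distance`; that is the entire "view" transform here.
    Mat4 viewTranslateZ(float distance) {
        Mat4 m = identity();
        m[2 * 4 + 3] = -distance;
        return m;
    }

    // Perspective projection in the style of gluPerspective (right-handed eye
    // space, depth mapped to [-1, 1]).
    Mat4 perspective(float fovyRadians, float aspect, float zNear, float zFar) {
        float f = 1.0f / std::tan(fovyRadians / 2.0f);
        Mat4 m{};
        m[0]  = f / aspect;
        m[5]  = f;
        m[10] = (zFar + zNear) / (zNear - zFar);
        m[11] = (2.0f * zFar * zNear) / (zNear - zFar);
        m[14] = -1.0f;
        return m;
    }

    int main() {
        Mat4 model = identity();             // object left at the origin
        Mat4 view  = viewTranslateZ(5.0f);   // "camera" five units back
        Mat4 proj  = perspective(1.0f, 16.0f / 9.0f, 0.1f, 100.0f);
        Mat4 mvp   = multiply(proj, multiply(view, model));  // one combined matrix
        std::printf("mvp[0][0] = %f\n", mvp[0]);
    }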

The following pages continue these concepts in more detail.

  • Geometry (Triangle Meshes, etc.)
  • Cameras (Model, View, Projection Matrices)
  • Pixel–Geometry Intersection Testing
    • Rasterization
    • Ray Tracing
  • Shading (Lights and Materials)