3D Rendering (Computer Graphics)

Rendering in 3D computer graphics is the computation of a 2D image from a 3D model. This computation is physically based to varying degrees, but it commonly models a camera and some form of lighting so that the result is somewhat realistic, creating relatable and familiar images even in cases of non-photorealistic rendering.

To compute an image, virtual descriptions of objects, made of light-interacting materials, are lit, and a virtual camera defines a projection for the final image to be rendered. The light contribution from the scene is then computed for each pixel. For efficiency, this can be done partly forwards, from the lights, or backwards, from the camera. When ray tracing, possible light paths are projected backwards from the pixels, through the camera, and into the scene, often computing secondary light bounces too. Rasterization is an efficient way to compute the first pixel–scene intersections by projecting geometry into the image. This first intersection is fundamental to all rendering methods. Photon mapping and radiosity trace light paths from the lights into the scene for more efficient secondary lighting effects such as indirect illumination, volumetric lighting and caustics. Shadow mapping and reflective shadow maps are similarly computed from the light’s point of view.
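
As a concrete illustration of the backwards direction, the sketch below generates a primary ray for one pixel of a pinhole camera. It is a minimal example, not taken from this article: the `Vec3` type and the `primaryRayDirection` function are hypothetical names, and the camera is assumed to sit at the origin looking down the negative z axis.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical minimal 3D vector type for this example.
struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Backwards tracing: build the ray direction from a pinhole camera at the
// origin (looking down -z) through the center of pixel (px, py) of a
// width x height image with the given vertical field of view.
Vec3 primaryRayDirection(int px, int py, int width, int height, float vfovRadians) {
    float aspect = float(width) / float(height);
    float halfH  = std::tan(vfovRadians * 0.5f);      // half-height of image plane at z = -1
    float halfW  = halfH * aspect;
    float ndcX = (px + 0.5f) / width * 2.0f - 1.0f;   // map pixel center to [-1, 1]
    float ndcY = 1.0f - (py + 0.5f) / height * 2.0f;  // flip: image y grows downwards
    return normalize({ndcX * halfW, ndcY * halfH, -1.0f});
}

int main() {
    Vec3 d = primaryRayDirection(0, 0, 640, 480, 1.0f);  // top-left pixel
    std::printf("ray direction: %f %f %f\n", d.x, d.y, d.z);
}
```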

A number of methods exist to describe the geometry of 3D objects, but the most widely used in real-time applications are polygons. In particular, triangles, which form triangle meshes, are simple and have some convenient properties. Other object descriptions include point clouds, lines, voxels, and higher-order geometry such as freeform surfaces and isosurfaces.
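
Below is a minimal sketch of the indexed triangle mesh layout most real-time APIs consume: a shared vertex array plus an index array in which each consecutive triple of indices names one triangle. The `Vertex` and `TriangleMesh` names are hypothetical, chosen just for this example.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical names for this example. Each consecutive triple in
// `indices` selects three vertices that form one triangle.
struct Vertex {
    float position[3];
    float normal[3];
};

struct TriangleMesh {
    std::vector<Vertex>        vertices;  // shared by all triangles
    std::vector<std::uint32_t> indices;   // indices.size() % 3 == 0
};

int main() {
    // A unit quad built from two triangles that share two vertices,
    // so four vertices stand in for six triangle corners.
    TriangleMesh quad;
    quad.vertices = {
        {{0, 0, 0}, {0, 0, 1}}, {{1, 0, 0}, {0, 0, 1}},
        {{1, 1, 0}, {0, 0, 1}}, {{0, 1, 0}, {0, 0, 1}},
    };
    quad.indices = {0, 1, 2, 0, 2, 3};
}
```

Sharing vertices this way is one of the convenient properties mentioned above: it saves memory and lets the GPU reuse per-vertex work across adjacent triangles.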

Light sources emit light and may be just as complex as any other 3D geometry in the scene. For example, geometry surfaces may be defined to glow, or the gaseous volume of a flame may emit light. Indirect light bouncing off geometry in the scene may also be considered a light source. However, light sources are typically simplified, for example to points, for faster computation. To be lit, geometry also needs materials that define how light interacts with its surfaces or volume before reaching the virtual camera.
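
As a small example of a light and a material interacting, the sketch below evaluates Lambertian (diffuse) shading for a single point light: the reflected intensity scales with the cosine of the angle between the surface normal and the direction toward the light. This is an assumed, minimal material model, not one prescribed by the article, and the helper names are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical minimal vector helpers for this example.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Lambertian (diffuse) reflection from one point light: reflected
// intensity falls off with the cosine of the angle between the surface
// normal and the direction toward the light, clamped at grazing angles.
float shadeDiffuse(Vec3 surfacePoint, Vec3 surfaceNormal,
                   Vec3 lightPosition, float lightIntensity, float albedo) {
    Vec3 toLight   = normalize(sub(lightPosition, surfacePoint));
    float cosTheta = std::max(0.0f, dot(surfaceNormal, toLight));
    return albedo * lightIntensity * cosTheta;
}

int main() {
    // Light directly above an upward-facing surface: cosTheta = 1.
    float i = shadeDiffuse({0, 0, 0}, {0, 1, 0}, {0, 2, 0}, 1.0f, 0.8f);
    std::printf("intensity: %f\n", i);  // prints 0.8
}
```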

Primarily, the virtual camera must define a projection, which creates a mapping between the virtual 3D world and 2D image coordinates. A position and orientation relative to the objects in the scene are also commonly defined.
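
A minimal sketch of such a projection is shown below: a pinhole perspective mapping that takes a 3D point in camera space (camera at the origin, looking down the negative z axis, an assumed convention) to 2D normalized device coordinates by dividing by depth. The `project` function name is hypothetical; real pipelines usually express this as a 4×4 projection matrix, as covered on the Cameras page.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };  // point in camera space (hypothetical type)
struct Vec2 { float x, y; };     // normalized device coordinates

// Pinhole perspective projection: dividing by depth and scaling by the
// focal length implied by the vertical field of view maps a 3D point to
// 2D coordinates in [-1, 1].
Vec2 project(Vec3 p, float vfovRadians, float aspect) {
    float f = 1.0f / std::tan(vfovRadians * 0.5f);  // focal scale
    return {p.x * f / (aspect * -p.z), p.y * f / -p.z};
}

int main() {
    Vec2 ndc = project({0.5f, 0.25f, -2.0f}, 1.0f, 16.0f / 9.0f);
    std::printf("ndc: %f %f\n", ndc.x, ndc.y);
}
```

This mapping is the inverse of the ray-generation sketch above: rasterization applies it to geometry to find the pixels it covers, while ray tracing inverts it per pixel to find the geometry behind each one.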

The following pages cover these concepts in more detail.

  • Geometry (Triangle Meshes, etc.)
  • Cameras (Model, View, Projection Matrices)
  • Pixel–Geometry Intersection Testing
    • Rasterization
    • Ray Tracing
  • Shading (Lights and Materials)