Feb 26 '16

Deep Image

An image is the result of rendering. The rendering process is often seen as the main part of the work. Historically that may have been true: getting the image is certainly the first stage, but once that job is done the result can be improved. Post-processing is further work done after rendering. Images can be combined with other images or used as input to further rendering passes. Examples include shadow mapping, deferred shading and a whole range of image-space techniques such as depth of field, ambient occlusion and even further rendering known as image-based rendering.

Colour is still the desired end result, but for post processing and intermediate results more information can be very useful. For example, an image may store material information or surface normals to be used in further lighting calculations.

Regular, flat images have one value per pixel. A deep image, on the other hand, is still a 2D grid of pixels, but each pixel, or deep pixel, can hold a varying number of values, possibly zero. This provides more complete information for further processing. A deep image should not be confused with a deep framebuffer, where every pixel has the same fixed set of attributes forming a single value.
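The distinction can be sketched in a few lines. This is only an illustrative data structure, not any particular library's API; the class and field names are made up for the example:

```python
# A minimal sketch of a deep image: a 2D grid of pixels where each pixel
# holds a variable-length list of samples (possibly empty).
class DeepImage:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        # One Python list per pixel; a flat image would store exactly
        # one value here instead of a list.
        self.pixels = [[] for _ in range(width * height)]

    def add_sample(self, x, y, sample):
        self.pixels[y * self.width + x].append(sample)

    def samples(self, x, y):
        return self.pixels[y * self.width + x]

img = DeepImage(4, 4)
img.add_sample(1, 2, {"rgba": (1.0, 0.0, 0.0, 0.5), "depth": 0.3})
img.add_sample(1, 2, {"rgba": (0.0, 0.0, 1.0, 1.0), "depth": 0.7})
print(len(img.samples(1, 2)))  # 2 samples at this deep pixel
print(len(img.samples(0, 0)))  # 0 samples at an untouched pixel
```

A deep framebuffer, by contrast, would replace each list with the same fixed tuple of attributes for every pixel.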

Colour and depth are typical values in a deep image, providing a discretized representation of the scene geometry. Together with the original projection used to create the image, the exact positions of the surface samples are known. These surface samples correspond directly to the concept of fragments from rasterization.
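Recovering those positions amounts to inverting the projection. As a minimal sketch, assuming a symmetric perspective projection looking down -z in a right-handed eye space (the field-of-view and aspect parameters are hypothetical camera properties, not part of the deep image itself):

```python
import math

# Sketch: recover an eye-space position from a deep-image sample's
# normalized device coordinates and stored eye-space depth, assuming a
# symmetric perspective frustum.
def unproject(ndc_x, ndc_y, eye_depth, fov_y, aspect):
    # Half-height of the view frustum at distance 1 from the eye.
    tan_half = math.tan(fov_y / 2.0)
    # Scale the NDC coordinates out to the frustum slice at eye_depth.
    x = ndc_x * tan_half * aspect * eye_depth
    y = ndc_y * tan_half * eye_depth
    # The camera looks down -z in this convention.
    return (x, y, -eye_depth)

# A sample at the image centre lands on the view axis.
print(unproject(0.0, 0.0, 5.0, math.radians(60), 16 / 9))  # (0.0, 0.0, -5.0)
```

With depths stored per sample rather than a single depth per pixel, this reconstruction yields several surface points along each viewing ray.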

Deep images are increasingly being used in real-time graphics. The A-buffer has informally become a common term for GPU-constructed deep images, although, while there are similarities, its original purpose was storing micro-polygons for antialiasing in the REYES system.

A deep image is significant in a rendering pipeline in that it provides a complete state where all fragments are available, instead of each being processed and discarded on the fly. Without this state, interactions between fragments are limited by one-at-a-time processing and, for rasterization, an unknown rendering order. In particular for rasterization, complex “multi-fragment effects” such as transparency normally require ordering the geometry, and guaranteeing correct results in the general case is often impractically slow.
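Transparency illustrates why the complete state matters. Once a deep pixel stores all of its fragments, the resolve step can sort them by depth and composite with the "over" operator, independent of rasterization order. A minimal sketch (field names are illustrative, and colours are assumed straight rather than premultiplied):

```python
# Resolve transparency for one deep pixel: sort fragments back to front
# on stored depth, then blend each over the running result.
def resolve_transparency(fragments):
    # Largest depth first, so the farthest fragment is composited first.
    ordered = sorted(fragments, key=lambda f: f["depth"], reverse=True)
    r = g = b = 0.0
    for f in ordered:
        fr, fg, fb, fa = f["rgba"]
        # "Over" operator with straight alpha.
        r = fr * fa + r * (1.0 - fa)
        g = fg * fa + g * (1.0 - fa)
        b = fb * fa + b * (1.0 - fa)
    return (r, g, b)

frags = [
    {"rgba": (0.0, 0.0, 1.0, 0.5), "depth": 0.2},  # near, half-transparent blue
    {"rgba": (1.0, 0.0, 0.0, 1.0), "depth": 0.8},  # far, opaque red
]
print(resolve_transparency(frags))  # (0.5, 0.0, 0.5): blue over red
```

Because the sort happens at resolve time, the fragments can arrive in any order during rasterization, which is exactly the guarantee that per-fragment processing alone cannot give.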
