Sep 1 '21

My first graphics demos

This is an unsorted collection of graphics demos I wrote years ago, back when I first stumbled upon shaders. University courses definitely took a back seat that year. The website where I hosted them at the university has since gone down, so I'm dumping some of the contents here.

SPH fluid

Computers were slower back then. These demos all used a uniform grid data structure for the neighbour search. The rendering was what I was really excited to get to, but of course I spent too much time tweaking simulation constants and trying to make the simulation feel bigger.
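To illustrate the uniform grid idea, here is a minimal 2D sketch (my own code, not from the original demos): particles only interact within the smoothing radius h, so bucketing them into cells of size h means each query only has to check the 3×3 block of cells around it.

```python
import math
from collections import defaultdict

def build_grid(positions, h):
    # bucket each particle index by its cell coordinate
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbours(grid, positions, i, h):
    # check only the 3x3 block of cells around particle i
    x, y = positions[i]
    cx, cy = int(x // h), int(y // h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i and math.dist(positions[i], positions[j]) < h:
                    result.append(j)
    return result

positions = [(0.0, 0.0), (0.05, 0.0), (0.5, 0.5)]
grid = build_grid(positions, h=0.1)
print(neighbours(grid, positions, 0, h=0.1))  # → [1]: only particle 1 is in range
```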


Some videos: https://www.youtube.com/playlist?list=PLjhj5hnWysbTUPbnlwbP1abLN0TQBws6w

Metaballs

My first experience with geometry shaders and SSBOs (the marching cubes tables could not fit in constant memory at the time). It just renders a 3D grid of points, and the geometry shader expands each point into triangles. I later used this technique for quick terrain chunk generation via the transform feedback extension. The texture is planar-mapped and blended based on the normal.
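For reference, the scalar field being polygonized is the classic metaball sum; a sketch of the evaluation (constants and names are illustrative, not from the demo, which did this per grid cell on the GPU):

```python
def field(p, balls):
    # classic metaball falloff: sum of r_i^2 / |p - c_i|^2 over all balls
    return sum(r * r / max(1e-9, sum((a - b) ** 2 for a, b in zip(p, c)))
               for c, r in balls)

balls = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]
iso = 1.0  # points with field > iso are inside the surface
# a point halfway between the two balls is inside the merged blob
print(field((0.75, 0.0, 0.0), balls) > iso)  # → True
```

The geometry shader's job is then to classify each cell's eight corners against `iso` and look up the triangle configuration in the marching cubes tables.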


Parallax mapping

This is where it all started for me. Someone showed me the Oliveira and Policarpo papers and I decided I wanted to implement them. It took a few of these other demos before I started to understand the maths, but I got there in the end. It was my first taste of outside-the-box thinking: realtime rendering can be done a different way (OK, my knowledge of graphics history was limited back then too, what with only knowing about the mainstream GPU + raster triangles approach).
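The core of the relief-mapping search from those papers, sketched in Python (a 1D heightfield and made-up step counts, purely for illustration): march the view ray through the heightfield in uniform steps until it dips below the surface, then binary-search the crossing.

```python
def heightfield(u):
    # toy heightfield: a step edge at u = 0.5
    return 0.5 if u > 0.5 else 0.0

def relief_march(origin_u, dir_u, steps=32, refine=8):
    # the ray goes from depth 0 down to depth 1 while moving in u;
    # the surface sits at depth 1 - height(u)
    for i in range(1, steps + 1):
        t = i / steps
        u = origin_u + dir_u * t
        if t >= 1.0 - heightfield(u):      # ray dipped below the surface
            lo, hi = (i - 1) / steps, t
            for _ in range(refine):        # binary-search the exact hit
                mid = 0.5 * (lo + hi)
                if mid >= 1.0 - heightfield(origin_u + dir_u * mid):
                    hi = mid
                else:
                    lo = mid
            return origin_u + dir_u * hi   # texture coordinate of the hit
    return None

hit = relief_march(0.0, 1.0)
print(round(hit, 2))  # → 0.5, the step edge
```

The real thing does this per fragment in the pixel shader, with the uniform march catching the first intersection interval and the binary search refining it.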


Particle systems


Depth peeling

There’s a good chance this screenshot is actually full order-independent transparency (OIT); see the research section here. I did implement depth peeling as a comparison at one stage, though. OIT is just so much faster when you have per-pixel atomics.
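A sketch of the resolve step that either technique ultimately needs (my own toy data and names): the GPU version accumulates a per-pixel fragment list with atomics, then a fullscreen pass sorts each list by depth and blends back to front.

```python
def resolve_oit(fragments, background):
    # fragments: list of (depth, (r, g, b), alpha) collected for one pixel
    color = background
    for _, frag_rgb, a in sorted(fragments, key=lambda f: -f[0]):
        # standard "over" blend, farthest fragment first
        color = tuple(a * f + (1 - a) * c for f, c in zip(frag_rgb, color))
    return color

pixel = [(0.3, (1.0, 0.0, 0.0), 0.5),   # near red fragment
         (0.7, (0.0, 0.0, 1.0), 0.5)]   # far blue fragment
print(resolve_oit(pixel, (0.0, 0.0, 0.0)))  # → (0.5, 0.0, 0.25)
```

Depth peeling arrives at the same blend, but extracts one depth layer per rendering pass instead of gathering all fragments at once, which is why the atomic-list approach wins.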


Interior mapping


Irregular shadow maps

Perfect per-pixel hard shadows. Normal shadow mapping pre-computes surface depth from the perspective of the light source. This is often inefficient because the frequency of the samples never perfectly matches what the camera is looking at. The irregular z-buffer stores arbitrary points in each cell of a uniform grid, rather than a single depth per cell like a depth buffer. Irregular shadow maps therefore compute depth exactly for each camera pixel rather than at the arbitrary positions of a shadow map's regular grid.
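A much-simplified sketch of the irregular z-buffer idea (my own structure; scene data, cell size, and the use of point occluders are all illustrative): camera pixels are projected into light space and stored per grid cell, then occluder geometry is tested against those exact sample points.

```python
from collections import defaultdict

def build_izb(samples, cell):
    # samples: list of (x, y, depth) in light space, one per camera pixel
    izb = defaultdict(list)
    for i, (x, y, d) in enumerate(samples):
        izb[(int(x // cell), int(y // cell))].append((i, d))
    return izb

def shadow_test(izb, occluders, cell):
    # occluders simplified to (x, y, depth) points; a real implementation
    # rasterizes occluder triangles over the grid cells they cover
    shadowed = set()
    for x, y, d in occluders:
        for i, sample_d in izb.get((int(x // cell), int(y // cell)), []):
            if d < sample_d:          # occluder is nearer the light
                shadowed.add(i)
    return shadowed

samples = [(0.2, 0.2, 5.0), (0.8, 0.8, 5.0)]
izb = build_izb(samples, cell=0.5)
print(shadow_test(izb, [(0.21, 0.19, 2.0)], cell=0.5))  # → {0}
```

The key inversion versus a shadow map: the depth comparisons happen at the camera's own sample positions, so there is no resampling and no aliasing.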


Irregular soft shadows

Taking this a step further, I realised you could do it the other way around and store lists of triangles per pixel. Then you can compute the overlap of each triangle and a spherical light source for soft shadows! It works well until you have multiple occluders (that’s why the screenshot is so simple).
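To show what "overlap of the triangle and the light" means, here is a rough sketch that estimates the covered fraction of a disc light by Monte Carlo over the projected 2D shapes. The demo computed this overlap analytically per triangle; the Monte Carlo version and all the geometry below are my own stand-ins.

```python
import random

def inside_triangle(p, a, b, c):
    # point-in-triangle via consistent edge orientation signs
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0) == (s2 >= 0) == (s3 >= 0)

def occluded_fraction(tri, radius, n=20000, seed=1):
    # fraction of the light disc (centred at the origin) covered by tri
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        while True:  # rejection-sample a point on the disc
            x = rng.uniform(-radius, radius)
            y = rng.uniform(-radius, radius)
            if x*x + y*y <= radius*radius:
                break
        hits += inside_triangle((x, y), *tri)
    return hits / n

# a huge triangle covering the half-plane x <= 0 occludes ~half the light
tri = ((0.0, -100.0), (0.0, 100.0), (-100.0, 0.0))
print(occluded_fraction(tri, 1.0))  # ≈ 0.5
```

The multiple-occluder problem falls out of this immediately: two triangles each covering 50% of the disc might overlap completely or not at all, and per-triangle fractions alone can't tell you which.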


Depth of field

Depth of field from a single image is hard to do well. Maybe these days it could work OK with some deep learning. To be physically accurate you need information from hidden surfaces, potentially full-screen circular blurs, and HDR input. This demo is a full 2D kernel blur weighted by difference in depth. It was slow, and of course something always looked wrong.
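The depth-weighted blur idea, sketched in 1D for brevity (my own weights, radii, and tolerance, all illustrative): each pixel's blur amount grows with its distance from the focal plane, and neighbours are rejected when the depth difference is large, so a sharp foreground doesn't bleed into the background.

```python
def dof_blur(colors, depths, focal_depth, radius=2, depth_tol=0.5):
    out = []
    for i in range(len(colors)):
        # crude circle-of-confusion: blur grows away from the focal plane
        blur = min(1.0, abs(depths[i] - focal_depth))
        total, count = 0.0, 0
        for j in range(max(0, i - radius), min(len(colors), i + radius + 1)):
            if abs(depths[j] - depths[i]) <= depth_tol:  # don't blur across edges
                total += colors[j]
                count += 1
        out.append((1 - blur) * colors[i] + blur * total / count)
    return out

colors = [1.0, 1.0, 0.0, 1.0]   # in-focus area, then a varying background
depths = [1.0, 1.0, 3.0, 3.0]   # focal plane at depth 1.0
print(dof_blur(colors, depths, focal_depth=1.0))  # → [1.0, 1.0, 0.5, 0.5]
```

Even in this toy form you can see the failure mode: the rejection test hides information from behind the foreground that a physically correct result would need, which is exactly the missing-hidden-surface problem mentioned above.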


HDR + Bloom
