Depth precision is a pain in the ass that every graphics programmer has to struggle with sooner or later. Many articles and papers have been written on the topic, and a variety of different depth buffer formats and setups are found across different games, engines, and devices.

Because of the way it interacts with perspective projection, GPU hardware depth mapping is a little recondite, and studying the equations may not make things immediately obvious. To get an intuition for how it works, it's helpful to draw some pictures.

This article is divided into three parts. In the first part, I try to provide some motivation for nonlinear depth mapping. Second, I present some diagrams to help understand how nonlinear depth mapping works in different situations, intuitively and visually. The third part is a discussion and reproduction of the main results of Tightening the Precision of Perspective Rendering by Paul Upchurch and Mathieu Desbrun (2012), concerning the effects of floating-point roundoff error on depth precision.

GPU hardware depth buffers don't typically store a linear representation of the distance an object lies in front of the camera, contrary to what one might naïvely expect when encountering this for the first time. Instead, the depth buffer stores a value proportional to the reciprocal of world-space depth. I want to briefly motivate this convention.

In this article, I'll use d to represent the value stored in the depth buffer (in [0, 1]), and z to represent world-space depth, i.e. distance along the view axis, in world units such as meters. In general, the relationship between them is of the form

$$d = \frac{a}{z} + b$$

where a, b are constants related to the near and far plane settings. In other words, d is always some linear remapping of 1/z.
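To make that concrete, here's a minimal Python sketch (my own illustration; the helper names and the convention that d = 0 at the near plane and d = 1 at the far plane are assumptions, not anything prescribed above) that solves for a and b from the near and far plane distances:

```python
def depth_map_coeffs(n, f):
    """Solve for a, b in d = a/z + b such that d(n) = 0 and d(f) = 1.

    This convention (0 at the near plane, 1 at the far plane) is just one
    common choice; reversed-Z and other setups pick different a and b.
    """
    a = (n * f) / (n - f)
    b = f / (f - n)
    return a, b

def depth(z, n, f):
    """Map world-space depth z to a depth buffer value d."""
    a, b = depth_map_coeffs(n, f)
    return a / z + b

# d is wildly nonlinear in z: halfway between the near and far planes
# in world space is nowhere near d = 0.5.
n, f = 0.1, 1000.0
for z in (n, 1.0, 10.0, 100.0, f):
    print(f"z = {z:8.1f}  ->  d = {depth(z, n, f):.6f}")
```

Running this shows most of the [0, 1] range being consumed within the first few units in front of the near plane, which is the nonuniform distribution of precision the rest of the discussion is concerned with.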
On the face of it, you can imagine taking d to be any function of z you like. So why this particular choice? There are two main reasons.

First, 1/z fits naturally into the framework of perspective projections. This is the most general class of transformation that is guaranteed to preserve straight lines, which makes it convenient for hardware rasterization, since straight edges of triangles stay straight in screen space. We can generate linear remappings of 1/z by taking advantage of the perspective divide that the hardware already performs:

$$
\begin{bmatrix} \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & b & a \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} =
\begin{bmatrix} \cdot \\ \cdot \\ bz + a \\ z \end{bmatrix}
\xrightarrow{\text{divide by } w}
\begin{bmatrix} \cdot \\ \cdot \\ a/z + b \\ 1 \end{bmatrix}
$$

The dot entries are the parts of the matrix that generate screen-space x and y; they don't affect depth. After the divide by w, the third component comes out as a/z + b, which is exactly the d we wanted.

The real power in this approach, of course, is that the projection matrix can be multiplied with other matrices, allowing you to combine many transformation stages together in one.
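As a quick numeric sanity check (my own sketch, using NumPy and arbitrary values for a and b), we can apply those two matrix rows and the divide by w, and confirm we get back a/z + b:

```python
import numpy as np

def projection_rows(a, b):
    """Build just the z and w rows of a perspective projection matrix,
    matching the form shown above: row 3 outputs b*z + a, row 4 outputs z.
    (The x and y rows are irrelevant to the depth mapping.)"""
    M = np.zeros((4, 4))
    M[2] = [0.0, 0.0, b, a]      # clip-space z = b*z_view + a
    M[3] = [0.0, 0.0, 1.0, 0.0]  # clip-space w = z_view
    return M

a, b = -2.0, 1.5  # arbitrary constants, just for the check
M = projection_rows(a, b)

for z in (1.0, 2.5, 10.0):
    p = M @ np.array([0.3, -0.7, z, 1.0])  # any x, y will do
    d = p[2] / p[3]                        # the hardware's divide by w
    assert np.isclose(d, a / z + b)        # exactly a linear remap of 1/z
print("divide by w yields d = a/z + b, as claimed")
```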
The second reason is that 1/z is linear in screen space, as noted by Emil Persson. So it's easy to interpolate d across a triangle while rasterizing, and things like hierarchical Z-buffers, early Z-culling, and depth buffer compression are all a lot easier to do.

Equations are hard: let's look at some pictures!
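If you'd like to draw the first such picture yourself, a few lines of matplotlib (again my own sketch, not the diagrams that follow) will plot the d = a/z + b mapping from earlier:

```python
import numpy as np
import matplotlib.pyplot as plt

n, f = 0.1, 100.0
a, b = (n * f) / (n - f), f / (f - n)  # d(n) = 0, d(f) = 1, as before

z = np.linspace(n, f, 1000)
plt.plot(z, a / z + b)
plt.xlabel("world-space depth z")
plt.ylabel("depth buffer value d")
plt.title("d = a/z + b: most of [0, 1] sits very close to the near plane")
plt.show()
```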