While I was in San Francisco this summer I went to the SFMOMA to see the Richard Avedon retrospective and caught this delightful little crayon piece in another gallery.

Peter Young - Untitled (1968)

If you work in computer graphics this drawing just screams blue noise sampling pattern. So I thought I’d take the opportunity to talk a little bit about Don Mitchell’s original blue noise sampling paper and the original rhesus monkey work where some of the seeds of this idea came from. First I’ll go over some sampling basics.

When rendering a picture of a 3D scene, we take many point samples of the scene. For instance: what color should the image have at point (x, y) on the screen? By collecting many of these point samples (through, say, raytracing), we’re able to “reconstruct” the image. The simplest scheme we could follow would be to take a single sample per pixel, at the center of the pixel, and use the result for the color of that pixel in our final image.
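To make that concrete, here’s a minimal Python sketch of single-sample-per-pixel rendering. The trace(x, y) function is a hypothetical stand-in for whatever returns the scene color at a screen-space point (say, by casting a ray through it):

```python
def render_regular(width, height, trace):
    """Render with one sample at the center of each pixel.

    `trace(x, y)` is assumed to return the scene color at screen point (x, y).
    """
    image = [[None] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            image[j][i] = trace(i + 0.5, j + 0.5)  # sample the pixel center
    return image
```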

regular sampling ray tracing

We call this process regular sampling since the samples are taken at regular intervals on a grid. Regular sampling can lead to strange aliasing artifacts, whose existence can be predicted via Fourier analysis—a topic I’d rather not get into right now. However, the two most common visual aliasing artifacts are easily recognized: jaggies and moire patterns.

(left) aliasing jaggies; (right) a moiré pattern

In order to avoid aliasing artifacts, we supersample the image. In the case of regular sampling, this just means increasing the resolution of the grid. Unfortunately, some artifacts may require very high resolutions to remove, requiring an unreasonable number of samples. As a result, some authors (Cook, and Dippé and Wold, whom I’ll come back to at the end) suggested jittering the sample points to disrupt the regularity of the sampling pattern, or just choosing sample points totally at random (within a pixel, of course).
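Here’s a rough Python sketch of those three placement strategies for a single pixel with lower-left corner (i, j) and a budget of n × n samples; the function names are mine, not from any particular renderer:

```python
import random

def regular_samples(i, j, n):
    """n*n samples at the centers of a regular sub-grid inside the pixel."""
    return [(i + (a + 0.5) / n, j + (b + 0.5) / n)
            for a in range(n) for b in range(n)]

def jittered_samples(i, j, n):
    """n*n samples: one uniform random point inside each sub-grid cell."""
    return [(i + (a + random.random()) / n, j + (b + random.random()) / n)
            for a in range(n) for b in range(n)]

def random_samples(i, j, n):
    """n*n samples placed uniformly at random anywhere in the pixel."""
    return [(i + random.random(), j + random.random())
            for _ in range(n * n)]
```

Averaging the colors traced at these positions gives the supersampled pixel; jittering keeps roughly one sample per cell while breaking up the regular grid.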

(left to right) single regular sample; 16x regular supersampling; 16x jittered supersampling; 16 random samples

Although these techniques often avoid aliasing artifacts, they result in noisy images. Since randomness goes hand in hand with statistics, this noise is usually described in terms of variance. Just like aliasing artifacts, noise can be reduced by taking more samples, thereby shrinking the variance of each pixel’s estimate.
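A toy Monte Carlo experiment (my own illustration, not taken from any of the papers) shows the effect: estimate how much of a pixel is covered by the half-plane x + y < 1 using N uniform random samples, and the spread of the estimates shrinks roughly like 1/sqrt(N):

```python
import random
import statistics

def estimate_coverage(num_samples):
    """Estimate the fraction of the unit pixel below the edge x + y = 1."""
    inside = sum(1 for _ in range(num_samples)
                 if random.random() + random.random() < 1.0)
    return inside / num_samples

for n in (4, 16, 64, 256):
    estimates = [estimate_coverage(n) for _ in range(1000)]
    # the standard deviation roughly halves each time n quadruples
    print(n, statistics.stdev(estimates))
```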

Blue noise sampling patterns are a way to split the difference between aliasing error and noise error. Plus, they’re grounded in biological observations about our vision system. Wow! Qualitatively, blue noise distributions can be characterized as even density, isotropic point distributions.

Retina pattern

By even density, I mean that the number of points falling inside a sufficiently large box should be approximately proportional to the area of the box, regardless of where I place it over the point distribution. And by isotropic, I mean that the direction from one randomly chosen point to another should be uniformly distributed, with no preferred orientation. For instance, a regular sampling pattern (box grid or hexagonal grid) has even density but is strongly anisotropic.
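If you wanted to sanity-check those two properties on a concrete point set, something like the following sketch would do; it’s my own illustration and assumes the points live in the unit square:

```python
import math
import random

def density_counts(points, box_size, trials=1000):
    """Count points inside randomly placed box_size x box_size boxes.

    Even density: the counts cluster tightly around their mean.
    """
    counts = []
    for _ in range(trials):
        x0 = random.uniform(0.0, 1.0 - box_size)
        y0 = random.uniform(0.0, 1.0 - box_size)
        counts.append(sum(1 for (x, y) in points
                          if x0 <= x < x0 + box_size and y0 <= y < y0 + box_size))
    return counts

def direction_histogram(points, bins=16, trials=10000):
    """Histogram the direction between random pairs of points (mod pi).

    Isotropy: the histogram comes out roughly flat.
    """
    hist = [0] * bins
    for _ in range(trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi
        hist[int(angle / math.pi * bins) % bins] += 1
    return hist
```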

(left) even density; (right) isotropy

Lo and behold, in 1983, John Yellott looked at the distribution of photoreceptors on a rhesus monkey retina and saw this sort of pattern. He speculated that

This sampling scheme allows the visual system to escape aliasing distortion despite a large mismatch between retinal image bandwidth and the Nyquist limits implied by extrafoveal cone densities.

Which is to say that blue noise sampling avoids aliasing artifacts even when the signal’s bandwidth far exceeds the Nyquist limit implied by our sampling density, that is, even though we are taking far too few samples. In 1987, Don Mitchell followed up on Yellott’s suggestion that “sampling arrays constructed on this basis might be useful in artificial image recording systems,” coining the term blue-noise pattern to describe sampling patterns with these “nice” spectral characteristics.

rhesus monkey retina in the central fovea

And now some notes for literature lovers, proceedings pigs, and citation cops: Cook cites Yellott in his 1986 ToG paper Stochastic Sampling in Computer Graphics, as do Dippé and Wold in their 1985 SIGGRAPH paper Antialiasing through Stochastic Sampling. Dippé and Wold were the first to introduce Poisson-disk sampling and Yellott’s work to the graphics community, but Don Mitchell’s 1987 paper coined the name “blue noise” in order to generalize the qualitative properties of Poisson-disk distributions. Mitchell was actually proposing a non-Poisson-disk pattern (Poisson-disk patterns were prohibitively expensive to generate at the time), so he needed a name that emphasized the qualitative and empirical similarity of his approach to true Poisson-disk patterns. Many papers now cite Mitchell but not Dippé and Wold, which just goes to show you the power of names!
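For the curious, the classic brute-force way to generate a Poisson-disk pattern is “dart throwing”: propose uniform random points and reject any that land within a minimum radius of an already-accepted point. The sketch below is illustrative only (it is not Mitchell’s algorithm), but it makes clear why these patterns were considered so expensive to generate, since nearly every late throw gets rejected:

```python
import math
import random

def dart_throwing(radius, attempts=100000):
    """Brute-force Poisson-disk sampling in the unit square."""
    points = []
    for _ in range(attempts):
        candidate = (random.random(), random.random())
        # reject any dart that lands too close to an already-accepted point
        if all(math.dist(candidate, p) >= radius for p in points):
            points.append(candidate)
    return points

samples = dart_throwing(radius=0.05)  # a blue-noise-ish point set
```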
