Babbage | Computational photography

Candid camera

A new approach to digital photography is overturning centuries of optical received wisdom

By G.F. | SEATTLE

PHOTOGRAPHY can trace its roots to the camera obscura, the optical principles of which were understood as early as the 5th century BC. Latin for a darkened chamber, it was just that: a shrouded box or room with a pinhole at one end through which light from the outside was projected onto a screen inside, displaying an inverted image. This, you might think, is a world away from modern digital cameras, brimming with fancy electronics which capture the wavelengths and intensity of light and translate them into digital bits. But the principle of focusing rays through an aperture onto a two-dimensional surface remains the same.

Now a novel approach to photographic imaging is making its way into cameras and smartphones. Computational photography, a subdiscipline of computer graphics, conjures up images rather than simply capturing them. More computer animation than pinhole camera, in other words, though using real light refracted through a lens rather than the virtual sort. The basic premise is to use multiple exposures, and even multiple lenses, to capture information from which photographs may be derived. These data contain a raft of potential pictures which software then converts into what, at first blush, looks like a conventional photo.

The best known example of computational photography is high-dynamic-range (HDR) imaging, which combines multiple photos shot in rapid succession, and at different exposures, into one picture of superior quality. So, where a single snap may miss out on detail in the lightest and darkest areas, an HDR image of the same scene looks preternaturally well lit. HDR had long been considered a specialised technique, employed mostly by professionals. That changed when Apple added it as an option to the iPhone 4. (Earlier iPhone models lacked the oomph to crunch the relevant data quickly enough to be practical.)
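
For the technically curious, a minimal sketch of the weighting trick at the heart of such merging, written in Python, appears below. It is emphatically not Apple's pipeline, which must also align the frames and compress the result's tones; the mid-grey target and the width of the weighting curve are arbitrary choices for illustration.

    # Exposure fusion in miniature: favour, at each pixel, whichever
    # exposure sits closest to mid-grey (i.e. is best exposed), then
    # blend. Real HDR pipelines also align frames and tone-map.
    import numpy as np

    def fuse_exposures(frames):
        """frames: a list of same-shape greyscale arrays in [0, 1]."""
        stack = np.stack(frames)                  # shape (n, h, w)
        # Gaussian "well-exposedness" weight, peaked at mid-grey.
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * stack).sum(axis=0)      # per-pixel blend

    # Toy example: dark, normal and bright exposures of one scene.
    scene = np.linspace(0.0, 1.0, 256).reshape(16, 16)
    shots = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
    fused = fuse_exposures(shots)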

But HDR is just one way to splice together different images of the same subject, says Marc Levoy of Stanford University, who kickstarted the field in a seminal paper he and his colleague Pat Hanrahan published in 1996. Since then, aspects of computational photography have moved from academia into commercial products. This, Dr Levoy explains, is mainly because the processing power of devices such as camera-equipped smartphones has grown faster than the number of sensor pixels recording light data. "You are getting more computing power per pixel."

To show off the potential of some new techniques, Dr Levoy programmed SynthCam, an app for the iPhone and other iOS devices, which takes a number of successive video frames and processes them into a single, static image that improves on the original in a variety of ways. He and his colleagues have also built several models of Frankencamera: prototypes assembled from bits of kit found in commercially available devices, which use a host of tricks to capture data and clever algorithms to turn them into better pictures. SynthCam and the Frankencameras can improve photos taken in low-light conditions, which are usually quite grainy, and create an artificial focus that is absent from the original set of images.
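
One of those tricks can be sketched in a few lines. Averaging N aligned frames of the same scene cuts random sensor noise by a factor of roughly the square root of N, which is why stacked low-light shots look less grainy. The alignment step, which SynthCam performs and which matters greatly in practice, is omitted from this sketch.

    # Frame stacking in miniature: the average of N aligned noisy
    # frames has roughly 1/sqrt(N) of the per-frame noise.
    import numpy as np

    def stack_frames(frames):
        """frames: a list of aligned, same-shape arrays in [0, 1]."""
        return np.mean(np.stack(frames), axis=0)

    # Toy check: ten noisy captures of a flat grey scene.
    rng = np.random.default_rng(0)
    frames = [0.5 + 0.1 * rng.standard_normal((16, 16)) for _ in range(10)]
    print(stack_frames(frames).std())  # well below the 0.1 per-frame noise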

Still, for all the superior results that computational photography can deliver, Dr Levoy laments, camera-makers have been loth to embrace the new approach. This is poised to change. On June 22nd Ren Ng, a former student of his at Stanford, unveiled a new company called Lytro, which promises to ship an affordable snapshot camera this autumn.

Rather than use conventional technology, as the Frankencamera does, to take multiple successive exposures and then meld them, Dr Ng has figured out a way to capture lots of images simultaneously. This approach is known as light-field photography, and Lytro's camera will be its first commercial application. In physics, a light field describes the direction of all the idealised light rays passing through an area. Dr Levoy and Dr Hanrahan's 1996 paper described a way of simplifying this field mathematically which makes it feasible, albeit nearly 15 years later, to calculate using off-the-shelf chips. Dr Ng's camera recreates the light field thanks to an array of microlenses inserted between an ordinary camera lens and the image sensor. (Dr Ng declined to reveal the precise specifications of the commercial device, but prototypes from his academic days sported 90,000 minuscule lenses arranged in a 300-by-300 grid.)
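
The simplification in question is the two-plane parameterisation. Rather than track every ray through space, the 1996 paper records each ray by the points at which it crosses two parallel planes, reducing the light field to a function of four coordinates:

    % A ray is indexed by its crossing points on two parallel planes:
    % (u, v) on the first (in a camera, the main lens) and (s, t) on
    % the second (the image plane), giving a four-dimensional function.
    L(u, v, s, t)

In Dr Ng's design, the position of a microlens supplies the spatial pair of coordinates, while the sensor pixels behind it, each seeing the main lens from a slightly different spot, supply the directional pair.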

Each microlens functions as a kind of superpixel. A typical camera works by recording where light strikes the focal plane, the surface onto which rays passing through a lens converge. In traditional cameras the focal plane was a piece of film; modern ones use arrays of digital sensors. In Lytro's case, however, light first passes through a microlens and only then hits the sensors behind it. Since the microlens fixes a ray's position and the sensor pixel behind it fixes its direction, the precise path of each ray can be reconstructed. This in turn means that it is possible to determine where the ray would strike if the focal plane were moved. And moving the focal plane is tantamount to focusing the lens. In other words, one can find the focal plane on which even the blurriest point in the original image is in focus.
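
How that works in software can be sketched with the standard "shift-and-add" method: to build the image for a new focal plane, slide each sub-aperture view (the picture formed by rays arriving from one direction) in proportion to its offset from the centre of the lens, then average. The array layout and the scale of the shift below are illustrative; Lytro has not published its data format.

    # Shift-and-add synthetic refocusing on a 4D light field.
    # lf[u, v] is the sub-aperture view for direction (u, v); alpha
    # controls how far the synthetic focal plane moves (0 = as shot).
    import numpy as np

    def refocus(lf, alpha):
        """lf: array of shape (U, V, H, W); returns an (H, W) image."""
        U, V, H, W = lf.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its offset from the
                # aperture centre; np.roll stands in for a proper
                # sub-pixel shift and wraps at the borders.
                du = int(round((u - U // 2) * alpha))
                dv = int(round((v - V // 2) * alpha))
                out += np.roll(np.roll(lf[u, v], du, axis=0), dv, axis=1)
        return out / (U * V)

Points whose shifted copies land on top of one another come out sharp; everything else is averaged into a blur, exactly as an ordinary lens would render it.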

This ray tracing, as Dr Ng calls it, derives directly from computer graphics. In that field, the technique is used to paint realistic reflections of one artificial object on another, among other things. With Lytro's device the objects are real, but the principle remains the same. A viewer can adjust the focus of an image at will simply by clicking on a point, bringing it into sharp focus and blurring the rest of the photo. The same data can be used to alter the depth of field, as photographers call the space between the closest and most distant points in an image that are in focus, or even to create a so-called infinite depth of field, in which every point of the image is in focus. The data can also be manipulated to slide the camera's point of view around, producing a compelling simulation of a stereoscopic image, shallower than but similar to a 3D film's. (The company's website lets visitors fiddle with existing images to see how some of these features work.)
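
For the infinite depth of field, one generic recipe (Lytro has not said whether its software works this way) is to render a stack of images refocused at many depths, with a routine like the one sketched above, and keep for each pixel the depth at which it looks sharpest:

    # All-in-focus composite from a focal stack: for each pixel, keep
    # the depth whose local contrast (a crude sharpness proxy, here
    # the Laplacian) is largest. A generic method, not Lytro's own.
    import numpy as np
    from scipy.ndimage import laplace

    def all_in_focus(stack):
        """stack: array (D, H, W) of one scene refocused at D depths."""
        sharpness = np.abs(np.stack([laplace(s) for s in stack]))
        best = sharpness.argmax(axis=0)        # sharpest depth per pixel
        return np.take_along_axis(stack, best[None, :, :], axis=0)[0]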

The main lens is fixed in place; there is no auto-focus, auto-aperture, or other gubbins. This limits the number of moving parts which need to be adjusted every time a photo is taken, and which cause a lag between pressing the shutter-release button and capturing the image. Lytro's snaps, by contrast, will be truly instantaneous, just like old film-based snapshot cameras. The light-field approach means they will always be in focus (since the plane of focus can be moved at will after the photograph has been taken). And the main lens is preset so that it always captures the greatest amount of light possible. This means that exposure time can be extremely short even in poorly lit conditions.

That said, the Lytro may be clever, but it is also gimmicky. The resolution is limited by the number of microlenses, each of which is treated as a single pixel by the processing software when an image is extracted. The images on Lytro's website are 525 by 525 pixels, which is fine online but will not pass muster in print. Still, the new device might just reignite the once-furious race for ever more megapixels: because a light-field camera spends many sensor pixels on each pixel of the final image, it could put vastly bigger sensors to good use. Camera-makers have stopped going on about how many millions of pixels their latest products capture, because it is already more than enough for most amateurs, even on the cheapest models. Nowadays fewer photographic prints are made (and those that are made are typically pretty small), but billions of photographs are shared online each year. Professional photographers may still seek higher pixel counts, but there has been little need or desire for such optical oneupmanship in the snapshot market.

For now, therefore, the company is targeting internet photo-sharing. It will let owners upload the image data, along with the processing tools, to Facebook and other social networks. The firm has reportedly already raised $50m, so someone clearly thinks there is a market for its innovation. Investors must be hoping that consumers find the irritants Lytro's camera eliminates, like blurred or dim pictures, niggling enough to want rid of them once and for all. If they do, their holiday snaps will never have looked better.
