Lytro’s new camera will let you adjust the focus point of an image long after taking the shot

This week’s announcement by Mountain View startup Lytro has set the photography world abuzz. The company was founded by Ren Ng, whose Stanford PhD dissertation describes the technology needed to create images from light fields: images where the focus point can be adjusted long after the image has been taken. That sounds a lot like holography, but it’s not. It’s called a “plenoptic” camera, and it records the ambient light field in a scene: essentially all of the light rays entering the camera’s aperture. A conventional camera uses a lens with a focusing mechanism to sharply focus rays of light coming from one image plane onto a focal plane. In digital cameras, that focal plane is occupied by a CCD or CMOS sensor; prior to the era of digital photography, film occupied it.

A plenoptic camera also uses a main lens, but that lens feeds an array of microlenses located just in front of the image sensor. Each microlens covers an array of sensor sensels (the imaging equivalent of pixels), and each sensel under a microlens receives light rays from a different part of the ambient light field. So, in effect, the plenoptic camera simultaneously captures many different images across a continuum of focal planes.
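To make that layout concrete, here’s a minimal sketch of how a raw plenoptic capture can be reinterpreted as a four-dimensional light field. The 296×296 microlens grid matches the numbers from Ng’s prototype; the 14×14 sensel patch per microlens and the random data are illustrative assumptions, not Lytro’s actual format.

```python
import numpy as np

# Hypothetical geometry: a 296x296 grid of microlenses, each covering
# a 14x14 patch of sensels. Each sensel under a microlens sees the
# scene through a different part of the main-lens aperture.
N_LENS, N_SENSEL = 296, 14

rng = np.random.default_rng(0)
raw = rng.random((N_LENS * N_SENSEL, N_LENS * N_SENSEL))  # stand-in raw frame

# Reshape into a 4D light field L[s, t, u, v]:
#   (s, t) = microlens position -> spatial coordinate in the image
#   (u, v) = sensel within lens -> angular coordinate (aperture direction)
lf = raw.reshape(N_LENS, N_SENSEL, N_LENS, N_SENSEL).transpose(0, 2, 1, 3)

# One "sub-aperture image": fix (u, v), vary (s, t). This is the scene
# as seen through one small region of the main lens.
sub_aperture = lf[:, :, 7, 7]
print(sub_aperture.shape)  # (296, 296) -- the 87-kilopixel native resolution
```

Each of the 14×14 = 196 sub-aperture images in this sketch is a slightly different view of the scene, which is the raw material for the refocusing math discussed below.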

For his PhD thesis, Ng built a prototype camera based on a Contax 645 medium-format camera body and a 16Mpixel Kodak KAF-16802CE sensor. He overlaid the sensor with a 296×296 microlens array, transforming the conventional Contax camera into a plenoptic camera with an 87-kilopixel image resolution: a 190x decrease in image resolution for the prototype. So there’s one engineering tradeoff: you eliminate focusing issues and get instant focusing by trading away a massive amount of native image resolution, yet you still need to store the entire 16Mpixel image file for the post-processing focus work. Now if you’re a working professional photographer, you’re not going to be too excited about an 87-kilopixel camera. But if you’re used to shooting snaps with your iPhone and posting them to Flickr or elsewhere on the Web, that resolution won’t give you too much heartburn.
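The arithmetic behind those figures checks out. In this sketch, the 4080×4080 active sensor area is my assumption, chosen to be consistent with a “16Mpixel” part and the stated 190x ratio; it is not a number quoted above.

```python
# One output pixel per microlens in the prototype
microlenses = 296 * 296
print(microlenses)  # 87616 -> the "87-kilopixel" figure

# Assumed 4080x4080 active area for the 16Mpixel sensor (illustrative)
sensor_sensels = 4080 * 4080
print(round(sensor_sensels / microlenses))  # 190 -> the stated resolution decrease
```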

There’s a way to get a plenoptic-like image from conventional cameras called “focus stacking,” a technique that’s become popular with photographers who take a lot of macro and close-up images, where severe depth-of-field problems throw much of each image out of focus. Focus stacking takes several images of the same subject, each made with a slightly different focus setting (so each image in the stack has a slightly different focus plane), and then blends the sharp parts of the images together. You can do this with an image editor such as Photoshop, and there are more automated programs available as well.
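As a rough illustration of the blending step (a naive sketch, not what Photoshop or dedicated stacking tools actually do), you can pick, for each pixel, the frame in the stack with the highest local contrast:

```python
import numpy as np

def focus_stack(frames):
    """Naive focus stack: for each pixel, keep the locally sharpest frame."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])  # (n, H, W)
    # Cheap per-pixel sharpness measure: squared gradient magnitude
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = gx**2 + gy**2
    best = np.argmax(sharpness, axis=0)  # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real implementation would smooth the sharpness map and feather across frame boundaries to avoid visible seams.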

However, this isn’t exactly what a plenoptic camera does; that would be a gross oversimplification, and there’s a lot of complex math involved. Once you have this composite image from the plenoptic sensor assembly, you need to process it to bring out the sharpness wherever you want the focus to be by selecting the right sets of sensel data with the right weightings. Because it’s a post-processing operation, you can make multiple images from one image file, each with a different focus point.
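In very rough terms, one simple refocusing scheme is “shift-and-add”: shift each sub-aperture view by an amount proportional to its position in the aperture, then average. The toy sketch below (whole-pixel shifts via `np.roll`, a made-up refocus parameter `alpha`; the actual math in Ng’s thesis is far more sophisticated) shows the weighted-selection idea:

```python
import numpy as np

def refocus(lf, alpha):
    """Toy shift-and-add refocus over a 4D light field L[s, t, u, v].

    (s, t) index the microlens (spatial position); (u, v) index the sensel
    under each microlens (direction through the main-lens aperture).
    alpha picks the synthetic focal plane; 0 keeps the captured focus.
    """
    S, T, U, V = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its
            # offset from the aperture center, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[:, :, u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `alpha` over a range of values and rerunning this sum is exactly the sense in which one capture yields many differently focused images.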

Here’s a video to help you visualize what’s going on:

The additional math needed to transform a raw plenoptic image into a focused one is fascinating, and most of it is well beyond me. Suffice it to say that “this sounds like a job for one or more processors,” which is why EDA360 Insider is covering this announcement. Here is yet another example of where we can go once we decide to be a lot more generous with processors and processing power on board SoCs.

Famous people are often misquoted when predicting future computing needs. Bill Gates is often misquoted as saying “640K ought to be enough for anybody.” IBM’s Thomas J. Watson is often misquoted as saying “I think there is a world market for maybe five computers.” Although these quotes are misattributed, they do reflect the “common sense” thinking of a specific epoch, thinking that rapidly proved obsolete. So you should look with a very, very jaundiced eye upon anyone questioning what we might do with additional on-chip processing power. Plenoptic image processing is yet one more thing we might do with it.

To see a CNET video interview with Lytro CEO Ren Ng, click here.

If you’d like to play with some plenoptic images, you can refocus to your heart’s content here.


About sleibson2

EDA360 Evangelist and Marketing Director at Cadence Design Systems
This entry was posted in EDA360, System Realization. Bookmark the permalink.
