Yet another answer to “What to do with all of this processing horsepower?”: HD HDR (high dynamic range) video

The photography world has been abuzz over HDR (high dynamic range) imagery for a couple of years. There are several leading lights in the field, including Sean McHugh, who’s made a real specialty of shooting beautiful HDR images in and around Cambridge University in the UK. (See www.cambridgeincolor.com.) Still-photo HDR combines three differently exposed images (high, medium, and low) in post-processing to achieve a dynamic range much wider than what’s possible with conventional CMOS or CCD sensors (or film, for that matter). Still HDR photography is very much not a real-time sort of thing, so HDR video has seemed way out there, until now.

A very recent post on Engadget.com led me to Contrast Optical Design & Engineering, a New Mexico operation that has prototyped (and patented) a high-definition HDR video camera using one conventional camera lens, some simple beam splitters (purchased inexpensively at Edmund Optics), and three HD video sensors (Silicon Imaging SI-1920HD high-end cinema CMOS sensors). Here’s a compelling video of the system in operation.

The prototype camera simultaneously exposes all three HD video sensors with light passing through the solitary Hasselblad medium-format lens. One sensor receives 92% of the incoming light; that’s the high-exposure (HE) sensor. The medium-exposure (ME) sensor receives 7.5% of the light and the low-exposure (LE) sensor receives 0.44%. There are about 3.5 photographic stops of exposure difference from the HE to the ME sensor and again from the ME to the LE sensor. That roughly 7-stop spread between the HE and LE sensors, added to the native dynamic range of the individual sensors, is what gives the camera its 17 stops of dynamic range. Two pellicle beam splitters divide up the light in an ingenious, compact optical path that delivers 99.96% of the incoming light to the three sensors, which are mounted at right angles to each other. Very little light is wasted, and none is thrown away in neutral-density filters, because no ND filtration is needed to cut the light reaching the ME and LE sensors. The simple prototype optical path aligns the three video sensors to within 5 microns, with a rotational error of less than 0.1 degree.
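As a sanity check on those percentages, here’s a quick back-of-the-envelope sketch in Python. The 92/8 and 94/6 split ratios are my assumption, chosen because they reproduce the figures above; the paper has the actual optical design:

```python
# Back-of-the-envelope light budget for the two-pellicle optical path.
# Assumed split ratios (not from the paper): the first pellicle transmits 92%,
# the second reflects 94% of the remainder, and the LE path re-crosses the
# first splitter on its way to the sensor.
first_transmit = 0.92    # first pellicle: 92% straight through to the HE sensor
second_reflect = 0.94    # second pellicle: 94% of the remainder to the ME sensor

he = first_transmit                                  # 92%
me = (1 - first_transmit) * second_reflect           # 8% x 94% = 7.52%
le = (1 - first_transmit) * (1 - second_reflect) * first_transmit
                                                     # 8% x 6% x 92% = 0.44%
print(f"HE {he:.2%}  ME {me:.2%}  LE {le:.2%}  total {he + me + le:.2%}")
# -> HE 92.00%  ME 7.52%  LE 0.44%  total 99.96%
```

Under those assumed ratios, the numbers land exactly on the 92%, 7.5%, 0.44%, and 99.96% figures quoted above, which is why almost no light goes to waste.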

For the EDA360 Insider’s purposes, it’s the algorithm that merges the three simultaneous video streams that’s of real interest. The streams are not combined in real time in the prototype, but they probably would need to be in a consumer-grade commercial camera. The novel merging algorithm proposed by the camera’s developers could be called “pick the best of three pixels”: each pixel position in the HD frame is evaluated to select the best pixel from the HE, ME, and LE images, where “best” means well exposed and not saturated. The algorithm also examines adjacent pixels to see whether they’re saturated.
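To make the idea concrete, here’s a minimal sketch of a pick-the-best-of-three merge in Python/NumPy. This is not the developers’ published algorithm (which, as noted above, also evaluates adjacent pixels); the saturation threshold and the gains that map each stream onto a common radiance scale are illustrative assumptions:

```python
import numpy as np

SATURATION = 0.98                              # assumed threshold on normalized values
GAIN = {"HE": 1.0, "ME": 12.2, "LE": 209.0}    # illustrative 92 / 7.5 / 0.44 ratios

def merge_hdr(he, me, le):
    """Pick the best of three pixels: prefer the most-exposed stream that
    isn't saturated, rescaled onto a common radiance scale."""
    out = le * GAIN["LE"]            # LE is the fallback; it saturates last
    ok = me < SATURATION             # ME pixels that still hold detail
    out[ok] = me[ok] * GAIN["ME"]
    ok = he < SATURATION             # HE pixels are best when unsaturated
    out[ok] = he[ok] * GAIN["HE"]
    return out

# Synthetic check: simulate three exposures of one scene, then merge them.
rng = np.random.default_rng(0)
scene = rng.random((1080, 1920)) * 50.0        # radiance far beyond one sensor's range
he = np.clip(scene / GAIN["HE"], 0.0, 1.0)
me = np.clip(scene / GAIN["ME"], 0.0, 1.0)
le = np.clip(scene / GAIN["LE"], 0.0, 1.0)
print(np.allclose(merge_hdr(he, me, le), scene))   # True: full range recovered
```

Even this naive version reads every pixel of all three streams once per frame, and the adjacent-pixel saturation checks add still more memory traffic on top of that.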

That’s a lot of pixel evaluation and merging, all of which would need to happen in real time in a production camera. The algorithm requires a lot of data movement and many, many pixel evaluations. Now that’s another example of something that could soak up a lot of processing horsepower, and some ingenious interconnect, in future SoC designs.
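To put rough numbers on it, here’s a quick estimate; the 30 frames/s rate and 10-bit samples are my assumptions for a consumer camera, not figures from the paper:

```python
# Rough data-rate estimate for merging three 1080p streams in real time.
# 30 frames/s and 10-bit samples are assumptions, not figures from the paper.
width, height = 1920, 1080
streams, fps, bits = 3, 30, 10

pixels_per_second = width * height * streams * fps
print(f"{pixels_per_second / 1e6:.0f} Mpixel/s to evaluate")            # ~187
print(f"{pixels_per_second * bits / 8 / 1e6:.0f} MB/s of sensor data")  # ~233
```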

You can read the technical paper on this camera here. It will be presented at SIGGRAPH in August.

For a previous answer to the question of what to do with processing horsepower, see “Lytro’s new camera will let you adjust focus point of an image long after taking the shot.”
