
Behind the Aperture: Computational Photography – The Future of Imaging?

Note: This article is hosted here for archival purposes only. It does not necessarily represent the values of the Iron Warrior or Waterloo Engineering Society in the present day.

Last issue, we talked about the Image Signal Processor and how its function integrates into the whole imaging system. This week, we step away from the imaging system itself and discuss the technological advances that will shape the future of imaging.

Since the commercialization of digital cameras over a decade ago, technological advances have focused on improving image quality. These advances first came in the form of higher-resolution sensors, before shifting more recently toward improving the noise performance of image sensors. Higher-resolution sensors were the main focus of the megapixel race that dominated consumer imaging products over the last ten years. The question of how many megapixels is enough has been asked continuously, and it seems manufacturers are starting to shift away from any substantial increases in sensor resolution.

So where does imaging go from here? Well, even though pixel counts are starting to reach a steady state, advances in processing power and other technology have not. Although imaging in the future may appear the same on the surface (shoot and display), how one arrives at the final image may be vastly different. Thus, the future of imaging lies not only in how you capture an image, but also in what can be done with the data that the image sensor, and perhaps even other sensors, capture.

Those within the imaging world call it computational photography, and it’s a very recent field of research within academia and industry. It refers to the integration of digital computers with digital cameras. Computational photography may be as simple as augmenting an imaging system with other kinds of sensors (discussed below), or as involved as processing the data from one or more captured images to form a new kind of image that could not be produced by a traditional image sensor system on its own.

Augmenting imaging systems with other types of sensors already exists today. For example, a MEMS accelerometer can feed motion data into an image signal processor to implement a form of video or image shake reduction. This is just one example of sensor/imaging system integration, and we’ll surely see more pop up in the near future.
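To give a rough sense of how that kind of sensor-assisted stabilization might work, the sketch below shifts a crop window inside a slightly oversized frame to counteract measured motion. The frame size, the margin, and the `shift_x`/`shift_y` values (imagined as coming from integrated accelerometer readings) are all hypothetical; real systems are far more sophisticated.

```python
import numpy as np

def stabilize_frame(frame, shift_x, shift_y, margin=16):
    """Crop a window out of an oversized frame, offset by the measured
    camera motion so the subject stays put in the output.

    frame   : 2-D array captured with `margin` spare pixels on every side.
    shift_x : horizontal displacement in pixels (e.g. integrated from
              accelerometer or gyro readings).
    shift_y : vertical displacement in pixels.
    """
    # Clamp the correction so the crop window never leaves the frame.
    dx = int(np.clip(-shift_x, -margin, margin))
    dy = int(np.clip(-shift_y, -margin, margin))

    h, w = frame.shape[:2]
    top, left = margin + dy, margin + dx
    return frame[top:h - 2 * margin + top, left:w - 2 * margin + left]

# Hypothetical usage: a 1096x1936 capture stabilized down to 1064x1904.
frame = np.zeros((1096, 1936), dtype=np.uint8)
steady = stabilize_frame(frame, shift_x=5.2, shift_y=-3.1)
print(steady.shape)  # (1064, 1904)
```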

On the processing side of computational photography, one popular way of creating an augmented image from several others is known today as High Dynamic Range (HDR) imaging. With HDR, usually three or more images are taken at varying exposure levels and then blended into a single image. Traditional image sensors have a limited dynamic range and cannot capture the full range of bright and dark information within a scene. Taking multiple images at different exposure levels allows that full range to be captured and blended into a single image. Previously, the capturing and processing of images had to be completed separately, on a camera and a computer respectively. Since then, the option of creating HDR images has been implemented in DSLRs and, more recently, mobile phones.
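A bare-bones version of that blending step can be sketched as a weighted average, where each pixel is weighted by how far it sits from the clipped extremes. The three-exposure bracket below is made up for illustration, and real HDR pipelines also align the frames and apply tone mapping, which is left out here.

```python
import numpy as np

def blend_exposures(exposures):
    """Merge several exposures of the same scene into one image.

    Each exposure is a float array scaled to [0, 1].  Pixels near 0
    (crushed shadows) or 1 (blown highlights) get low weight, so each
    region of the result is drawn from the exposure that captured it
    best.  Frame alignment and tone mapping are omitted for brevity.
    """
    stack = np.stack([e.astype(np.float64) for e in exposures])
    # Hat-shaped weight: 1.0 at mid-grey, falling to 0 at the extremes.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-6, None)       # avoid divide-by-zero
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Hypothetical bracket: under-, normally and over-exposed captures.
dark, mid, bright = (np.random.rand(480, 640) ** g for g in (2.0, 1.0, 0.5))
hdr = blend_exposures([dark, mid, bright])
```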

Computational photography also helps reduce the amount of hardware needed to capture an image. One example is called “Extended Depth of Field” (EDoF) imaging. It’s essentially “digital auto-focus”: information in the captured image is used to make a shot from a fixed-focus camera appear all in focus, almost purely through software. It’s not a perfect technology, but it allows the auto-focus mechanism to be removed from a camera, which results in smaller and cheaper cameras. Today, this technology is not seen as superior to auto-focus, but it is a significant improvement over simple fixed-focus cameras.
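One common ingredient in this kind of processing is deconvolution: if the lens’s blur can be modelled, it can be partially inverted in software. The sketch below applies a basic Wiener filter with an assumed Gaussian point-spread function; the kernel width and noise level are illustrative guesses, and production EDoF systems are considerably more involved than this.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian point-spread function, centred and normalised to sum to 1."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deblur(image, sigma=2.0, noise=1e-2):
    """Sharpen a uniformly blurred image with a frequency-domain Wiener
    filter, assuming a Gaussian blur of width `sigma`."""
    psf = gaussian_psf(image.shape, sigma)
    H = np.fft.fft2(np.fft.ifftshift(psf))      # blur transfer function
    G = np.fft.fft2(image)
    # Wiener filter: inverse of H, damped where the signal is weak.
    F = np.conj(H) / (np.abs(H) ** 2 + noise) * G
    return np.real(np.fft.ifft2(F))

# Hypothetical use on a blurry fixed-focus capture (values in [0, 1]).
blurry = np.random.rand(256, 256)
sharper = wiener_deblur(blurry, sigma=2.0, noise=1e-2)
```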

One of the most recent examples of computational photography is something called “light-field imaging”, where instead of just capturing the intensity of light like a typical image sensor, the direction of the light rays is captured as well. This allows an image to be re-focused in software after it has been captured.
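Conceptually, software refocusing comes down to “shift and add”: each sub-aperture view of the scene is shifted in proportion to its position in the aperture and the views are averaged, so different shift amounts bring different depths into focus. The sketch below assumes the light field has already been decoded into a grid of sub-aperture images, which is itself a non-trivial step on real hardware.

```python
import numpy as np

def refocus(subapertures, alpha):
    """Synthetically refocus a light field by shift-and-add.

    subapertures : array of shape (U, V, H, W) holding one image per
                   (u, v) position in the camera's aperture.
    alpha        : refocus parameter; 0 keeps the original focal plane,
                   larger magnitudes move the plane nearer or farther.
    """
    U, V, H, W = subapertures.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture centre; np.roll stands in for sub-pixel warping.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(subapertures[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Hypothetical 5x5 grid of 200x300 sub-aperture views.
lf = np.random.rand(5, 5, 200, 300)
near_focus = refocus(lf, alpha=1.5)
far_focus = refocus(lf, alpha=-1.5)
```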

Lastly, at the beginning of this article I spoke of the levelling-off of pixel counts in the megapixel race. Those aware of Nokia’s announcement of a 41-megapixel camera phone this past week may be confused by that statement. Nokia’s “PureView” imaging system can be seen as computational photography as well. Although the pixel count of the image sensor is very high, it’s not about the quantity; it’s about how the pixels are used. The pixel size within Nokia’s new sensor is comparable to the current 8 MP sensors found in mobile phones. The novelty of Nokia’s high pixel count is that it is used to implement lossless zoom. Packing zoom optics into a size acceptable to consumers is a limiting factor, and a reason why optical zoom has not taken off in mobile devices. Instead of a low-resolution image sensor cropping down to an even lower, usually useless resolution, Nokia’s camera crops to a lower but still high-resolution image to implement zoom.
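In spirit, the idea can be sketched as a choice between two uses of the same oversized sensor readout: average (bin) blocks of pixels down to the output resolution for a wide shot, or crop the central region at full pixel density to zoom without interpolating. The sensor and output sizes below are illustrative only, not Nokia’s actual figures.

```python
import numpy as np

def downsample(sensor, factor):
    """'Oversampling' path: average factor x factor blocks of pixels to
    produce a lower-resolution, lower-noise wide-angle image."""
    h, w = sensor.shape
    h, w = h - h % factor, w - w % factor
    blocks = sensor[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def crop_zoom(sensor, out_h, out_w):
    """'Lossless zoom' path: read the central out_h x out_w pixels at
    native resolution instead of interpolating a smaller image up."""
    h, w = sensor.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return sensor[top:top + out_h, left:left + out_w]

# Hypothetical 24 MP-ish readout producing 6 MP-ish output images.
sensor = np.random.rand(4000, 6000)
wide = downsample(sensor, factor=2)            # 2000 x 3000, averaged
zoomed = crop_zoom(sensor, 2000, 3000)         # 2000 x 3000, 2x zoom
```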

The imaging world is changing at a rapidly accelerating pace, and we hope we’ve given you an idea of what has been done on the computational side to date, and with it a sense of what the future may hold. Although capturing an image will still be the ultimate goal of photography, how we reach that goal, and what we use to get there, may be vastly different.
