Science & Technology

What the Actual Fridge? – Digital Image Capture Technology

Note: This article is hosted here for archival purposes only. It does not necessarily represent the values of the Iron Warrior or Waterloo Engineering Society in the present day.

With the advent of the digital age, the ability to produce and distribute quality content has improved drastically. One large factor in this is the development of digital image capture and processing. These days, actors and crew do not need to wait until the film is developed to see whether a performance was adequate. Re-takes have become virtually free due to the lack of any physical recording medium. Furthermore, user-generated content has improved significantly in quality: a low-budget film has access to tools that weren’t available to the best directors ten to fifteen years ago. Consequently, movies have become cheaper to make. One example of this is the horror movie “Paranormal Activity”, which was made on a budget only slightly larger than the average engineering co-op pay for four months and went on to make over $100 million at the box office. Still photography has also enjoyed a massive growth in hobbyist photographers who sometimes produce content comparable to the best professionals (although very often, the dude with that Canon Rebel who doesn’t know what DSLR stands for can be fooled into falsely believing in the superiority of his own talent). The vast amount of entertainment we have to choose from is immensely gratifying and is often blamed for the rather short attention span that audiences currently possess. For all this, we have to thank the ability to digitally capture and store images (both stationary and moving).

Digital capture of images had rather humble beginnings at AT&T Bell Labs in 1969 (the same year man reached the moon). Charge-Coupled Devices (CCDs), which form the older and better-performing generation of image capture devices, were initially invented as a memory storage device. The idea was first applied to image capture two years later, and its inventors received the Nobel Prize in Physics in 2009. Without getting into the details, CCDs provide a good quality image with low noise due to their high quantum efficiency (the ability to convert photons into electronic signals). However, CCD sensors are more expensive to manufacture due to their relatively custom fabrication process, and they have problems transmitting high quality (low noise) images rapidly because the data is transferred off the chip in the form of analog signals.
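A rough way to see why quantum efficiency matters is to model a single pixel: photon arrivals are random (shot noise), only a fraction of them become electrons, and the readout adds its own electronic noise. The Python sketch below illustrates this with made-up numbers (1,000 photons per exposure, 5 electrons of read noise, two hypothetical QE values), not figures from any real sensor; it simply shows that a higher-QE pixel yields a better signal-to-noise ratio for the same light.

```python
import numpy as np

rng = np.random.default_rng(42)

def pixel_signal(photons, quantum_efficiency, read_noise_e, n_samples=100_000):
    """Simulate the electrons read out from one pixel over many exposures.

    Photon arrivals follow Poisson (shot noise) statistics; each photon is
    converted to an electron with probability equal to the quantum efficiency,
    and the readout adds Gaussian electronic noise.
    """
    arrived = rng.poisson(photons, size=n_samples)           # shot noise
    converted = rng.binomial(arrived, quantum_efficiency)    # photon -> electron
    read_out = converted + rng.normal(0, read_noise_e, n_samples)
    return read_out.mean(), read_out.std()

# Illustrative comparison: a high-QE pixel vs. a low-QE pixel.
for qe in (0.85, 0.35):
    mean_e, std_e = pixel_signal(photons=1_000, quantum_efficiency=qe, read_noise_e=5)
    print(f"QE={qe:.2f}: signal {mean_e:6.1f} e-, noise {std_e:5.1f} e-, SNR {mean_e / std_e:4.1f}")
```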

This is where the presently ubiquitous CMOS imager comes into play. The University of Waterloo has had a large impact on the development of these sensors. In 1969, University of Waterloo professor Dr. Savvas Chamberlain proposed the idea of integrating an amplifier and a photodetector into every pixel of an image sensor (hence the name Active Pixel Sensor, because each pixel consumes power to amplify its signal). For several years, this proved challenging to implement due to the large variance present in metal-oxide-semiconductor (MOS) production technology. In the 1980s, the advent of the microprocessor led to a boom and subsequent standardization of MOS technology in the form of CMOS (complementary metal-oxide-semiconductor). With the establishment of CMOS as a well-controlled and stable process, active pixel sensors built in the CMOS process flow became more reliable, viable and economical. Since then, CMOS has dominated the consumer-grade image capture market. Early CMOS image sensors suffered from a large amount of electronic noise, but the technology has come a long way since then.
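To get a feel for why process variance was such an obstacle, remember that every pixel has its own amplifier: if manufacturing spread gives each amplifier a slightly different gain and offset, even a perfectly uniform scene comes out speckled (so-called fixed-pattern noise). The small sketch below uses invented mismatch figures (a tiny 4×8 sensor, roughly ±5 % gain spread) purely to illustrate the effect and the kind of per-pixel calibration that removes it.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 4, 8                            # a tiny imaginary sensor
true_scene = np.full((H, W), 100.0)    # perfectly uniform light, 100 counts

# Assumed mismatch: each pixel's amplifier has its own gain and offset.
gain = rng.normal(1.0, 0.05, size=(H, W))
offset = rng.normal(0.0, 2.0, size=(H, W))

raw = gain * true_scene + offset
print("raw pixel-to-pixel spread:", raw.std())        # visible fixed-pattern noise

# Calibrating the per-pixel gain and offset (e.g. from dark and flat-field
# frames) lets the readout undo the mismatch.
corrected = (raw - offset) / gain
print("corrected spread:", corrected.std())           # ~0: uniform scene recovered
```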

In addition to these fundamental sensor inventions, there has been a lot of secondary development. With the rise of cell phone cameras, there has been a huge impetus to shrink cameras even further. Initially, cameras used tiny electromagnetic actuators to move the lens and bring the subject into focus, but these components caused electromagnetic interference with other parts of the camera, and power consumption and heat generation became important issues to address. The next generation of camera modules therefore used MicroElectroMechanical Systems (MEMS) based actuators to perform this function. With the advent of high megapixel count sensors on camera phones, there was also a need for larger optics. To strike a balanced compromise between performance and size, Extended Depth of Field (EDoF) was invented. EDoF cameras use an asymmetric lens to focus objects at varying distances onto the same sensor. Although this results in a jumbled spread of light on the sensor, smart electronics and algorithms ultimately undo it into a well-resolved picture.
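That "undoing" step is essentially deconvolution: if the camera knows how its lens spreads a point of light (the point spread function), it can numerically invert that spreading. The sketch below uses a plain Gaussian blur and classic Wiener deconvolution as a stand-in for the proprietary optics and algorithms in a real EDoF module; the scene, blur kernel and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "scene": a bright square on a dark background.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0

# Assumed point spread function (PSF): a small Gaussian blur standing in for
# the deliberately spread-out image formed by an EDoF-style lens.
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()

# Simulate the sensor: convolve the scene with the PSF (via FFT) and add noise.
psf_padded = np.zeros_like(scene)
psf_padded[:7, :7] = psf
psf_padded = np.roll(psf_padded, (-3, -3), axis=(0, 1))   # centre the kernel
H = np.fft.fft2(psf_padded)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
blurred += rng.normal(0, 0.01, blurred.shape)

# Wiener deconvolution: divide by the PSF in the frequency domain, regularised
# by a small constant so that noise is not amplified without bound.
k = 0.001
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H)**2 + k)))

print("mean error before deconvolution:", np.abs(blurred - scene).mean())
print("mean error after  deconvolution:", np.abs(restored - scene).mean())
```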

I am still amazed by all the development packed into a camera module. The future of camera technology might lie in something like the light field camera by Lytro. I hope this was informative, and the next time you make a Vine, think about all the years of technology behind it.