Once thought impossible, mobile phone cameras have become ubiquitous and are now the first camera people reach for when capturing images “in the moment”. Just as the megapixel race dominated traditional digital still cameras (DSCs) through the 2000s, the race has now moved to the world of mobile phones and devices.
For the last decade, phone cameras’ pixel counts have been increasing steadily. While most smartphones and other cameras on the market today sport 3- or 5-megapixel cameras, some phones on sale in Japan have broken the 16-megapixel barrier. It’s only a matter of time before those image sensors reach North American shores.
As some may know, a higher pixel count does not necessarily yield better images. There are enormous tradeoffs and physical limitations that camera design engineers grapple with daily in the attempt to deliver the best possible image quality to customers. The race to make mobile phones ever smaller and thinner, with complicated curvy designs, complicates this effort as well.
An image sensor is a physical device with millions of light-sensitive areas that collect photons and convert the intensity of light into an electrical signal, which is then converted into an image. With image sensors increasing in pixel count and cameras getting smaller, these physical photosites are shrinking at a rapid rate. As photosites get smaller, they collect fewer photons, which results in a weaker signal on a per-pixel basis. The best analogy I can give is two buckets of differing radii collecting raindrops. In a given time period, the larger bucket will collect more raindrops than the smaller bucket and so will contain more water. In the case of an image sensor, photons are the raindrops and pixels are the buckets.
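To make the bucket analogy concrete, here is a minimal sketch of how collected photons scale with pixel area. The photon flux and the pixel pitches are assumed numbers chosen for illustration, not measured sensor data.

```python
# Sketch: photons collected scale with pixel area (the "bucket" analogy).
# The flux and pixel pitches below are assumed, purely illustrative values.
photon_flux = 1000.0  # photons per square micron reaching the sensor during the exposure

def photons_collected(pixel_pitch_um):
    """Photons gathered by a square pixel with the given pitch (in microns)."""
    area = pixel_pitch_um ** 2
    return photon_flux * area

for pitch in (2.0, 1.4, 1.1):  # pitches roughly in the mobile-sensor range
    print(f"{pitch} um pixel collects ~{photons_collected(pitch):.0f} photons")
```

Shrinking the pitch from 2.0 µm to 1.1 µm cuts the collected photons by roughly a factor of three, which is the per-pixel signal loss the rest of this post is about.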
When pixels develop weaker signals from photons, the resulting final images – particularly low-light images – are worse due to poor signal-to-noise ratios (SNR). To limit this loss in quality, sensor and camera engineers try to maximize a metric called “quantum efficiency”, commonly known in the industry as QE. In short, QE is the ratio of photons hitting a sensor that result in an electron-hole pair conversion to the total photons incident on the sensor. As pixel area shrinks, QE tends to decrease as well, the opposite of what engineers want. As technology advances encroach on the physical limitations of silicon and light, sensor engineers are scrambling to develop new technology and processes to stretch the life of silicon as much as possible, and one of these processes is called “back-side illumination”, or BSI for short.
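As a rough illustration of how QE and noise interact, here is a small sketch of the shot-noise-limited SNR of a single pixel. The photon count and QE value are assumptions for illustration only.

```python
import math

# Sketch: quantum efficiency (QE) and shot-noise-limited SNR for one pixel.
# QE = electrons generated / photons incident; both numbers below are assumed.
incident_photons = 2000   # photons falling on the pixel during the exposure
qe = 0.45                 # fraction converted to electron-hole pairs

signal_electrons = qe * incident_photons
# In the shot-noise-limited case, noise ~ sqrt(signal), so SNR ~ sqrt(QE * photons).
snr = signal_electrons / math.sqrt(signal_electrons)
print(f"signal: {signal_electrons:.0f} e-, SNR: {snr:.1f}")
```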
In a typical “front-side illuminated” (FSI) CMOS active pixel sensor, pixels are built on silicon wafers in a multi-layer semiconductor process. First, the photodiode is formed in the bulk of the silicon wafer – this is the light-sensitive component of the pixel, which converts photons to an electrical signal. On top of the photodiode, the transistors that make up the pixel are built, along with the interconnect wiring known as the “metal layers”. On top of the metal layers lie colour filters, which only let red, green, or blue light pass to the photodiodes. These colour filters are necessary for software to recreate a colour image through an interpolation algorithm called “demosaicing”, which converts a matrix of colour-filtered light intensity values into red, green, and blue values at each pixel.
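As a rough sketch of what demosaicing does, the snippet below rebuilds RGB values from a Bayer-filtered mosaic by combining each 2x2 block of samples into one output pixel. The RGGB layout and the deliberately simplified, resolution-halving approach are assumptions for illustration; real pipelines interpolate the missing colours at every pixel location with far more sophisticated algorithms.

```python
import numpy as np

# Sketch: recover RGB from an RGGB Bayer mosaic, one output pixel per 2x2 block.
# Assumes an RGGB layout; actual colour filter arrangements vary by sensor.
def demosaic_rggb_blocks(raw):
    """raw: 2D array of intensities recorded behind an RGGB colour filter array."""
    rgb = np.zeros((raw.shape[0] // 2, raw.shape[1] // 2, 3), dtype=float)
    rgb[..., 0] = raw[0::2, 0::2]                          # red samples
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # average the two green samples
    rgb[..., 2] = raw[1::2, 1::2]                          # blue samples
    return rgb

mosaic = np.random.rand(4, 4)  # stand-in for raw sensor data
print(demosaic_rggb_blocks(mosaic))
```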
The issue with the FSI structure is that light travels through the colour filters and the metal layers before reaching the photodiode. Some light is reflected or scattered by the metal layers (a source of optical crosstalk) and never reaches the photodiode for conversion into electron-hole pairs. This reduces the sensor’s overall QE and the final image quality – so what’s the difference with back-side illuminated sensors?
With BSI sensors, the semiconductor process is mostly the same as for an FSI sensor – the photodiodes are laid out first, and then the metal layers. After the metal layers are complete, the silicon wafer is flipped over and the excess bulk silicon is ground down until the wafer is only about a dozen microns thick – just enough to expose the photodiodes from the back. After grinding, the colour filters are laid down on this back surface, just as they are with FSI sensors.
With this process, when light passes through the colour filter, it passes directly to the photodiode. There are no metal layers in the optical path to cause crosstalk, so more photons are available for electron-hole pair generation, which results in a much higher QE. A BSI imager with a given pixel size will therefore, in theory, produce better image quality than a similarly sized FSI imager – at least for now.
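To put a rough number on that claim, here is a small sketch comparing the shot-noise-limited SNR of an FSI and a BSI pixel of the same size under the same light. The QE values are assumed for illustration and vary widely between real sensors.

```python
import math

# Sketch: same pixel size, same exposure, different QE (values assumed, not measured).
incident_photons = 2000
qe_fsi = 0.40   # some light lost at the metal layers
qe_bsi = 0.70   # light reaches the photodiode directly from the back side

for label, qe in (("FSI", qe_fsi), ("BSI", qe_bsi)):
    signal = qe * incident_photons
    snr = math.sqrt(signal)  # shot-noise-limited SNR
    print(f"{label}: {signal:.0f} e-, SNR {snr:.1f}")
```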
Overall, the BSI process has extended the usefulness of small-pixel active pixel sensors, but the time will eventually come when a new image-sensor technology will be needed. Consumers are number-driven: they will always want more pixels in their cameras, and device manufacturers take advantage of this to increase the marketability of their products, even if it is not in the best interest of image quality in the long run.