
Behind the Aperture: All About the Image Sensor: From Photons to Bits to Photograph

Note: This article is hosted here for archival purposes only. It does not necessarily represent the values of the Iron Warrior or Waterloo Engineering Society in the present day.

In the last issue of The Iron Warrior, we talked about how the human visual system works and how industry attempts to mimic the eye in order to produce digital images. In this week’s article, we talk about one main component in creating a digital image – the image sensor.

In analog times, light-sensitive film was used as the transfer medium from the physical world to a two-dimensional continuous representation. This meant that the majority of photographic film development was performed by chemists rather than engineers. Due to the electronic and materials nature of modern “film,” engineers have taken over its research and development. Today, millions of individual light-sensitive pixels convert light into electrical signals, which are digitized into a matrix of intensity values. These intensity values are then converted to the final colour image by an “Image Signal Processor,” otherwise known as an ISP.

Complementary Metal-Oxide-Semiconductor (CMOS) sensors are the most common sensors today in consumer digital still cameras and mobile phones. CMOS sensors are typically built on a silicon substrate where, through a series of thin-film processes, millions of sub-micron (smaller than one micrometre) electrical components are fabricated to create the actual sensor. The major components built upon the silicon substrate are the array of pixels and the readout circuitry.

The array of pixels filters light into 3 different colours – red, green, and blue. In the most common CMOS image sensors, no single pixel receives all three colours that comprise light. Instead, a sensor’s pixels are divided among the colours in the following proportions – 25% red, 50% green, and 25% blue. If you remember last issue’s imaging article, this corresponds closely to the relative numbers and sensitivities of the long, medium, and short cones in the human visual system.
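
For readers who like to see this concretely, here is a minimal sketch of the most common such layout, the so-called Bayer pattern, in which a 2-by-2 tile of one red, two green, and one blue filter repeats across the array (the code and the 8-by-8 patch size are illustrative, not taken from any real sensor):

```python
import numpy as np

tile = np.array([["R", "G"],
                 ["G", "B"]])        # the repeating 2x2 unit of the filter
cfa = np.tile(tile, (4, 4))          # an 8x8 patch of the pixel array
for colour in "RGB":
    # Fraction of sites carrying each filter colour
    print(colour, f"{np.mean(cfa == colour):.0%}")   # R 25%, G 50%, B 25%
```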

When light falls on the sensor, it passes through a colour filter in front of each pixel, which allows only one colour of light through. This light is absorbed in the pixel’s photodiode, which creates an electrical signal proportional to the light’s intensity. The sensor reads the generated electrical signal from each pixel on a per-column or per-row basis. At this point, the sensor has what we call “raw” data or a “raw” image. If one were to view this image, it would look more or less like a dull black-and-white image, likely containing many defects, which will be fixed in the Image Processing Pipeline within an ISP.
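
As a toy illustration, the following sketch simulates this capture step, assuming a hypothetical RGGB filter layout and a made-up scene: each pixel keeps only the intensity of its own filtered colour, leaving a single matrix of intensity values – the “raw” image:

```python
import numpy as np

def capture_raw(scene):
    """Sample an RGB scene through an RGGB colour filter array."""
    h, w, _ = scene.shape
    rows, cols = np.indices((h, w))
    raw = scene[..., 1].copy()            # green sites (half the array)
    r = (rows % 2 == 0) & (cols % 2 == 0)
    b = (rows % 2 == 1) & (cols % 2 == 1)
    raw[r] = scene[..., 0][r]             # red sites keep only red light
    raw[b] = scene[..., 2][b]             # blue sites keep only blue light
    return raw

scene = np.random.rand(4, 6, 3)   # stand-in for incoming light intensities
raw = capture_raw(scene)          # a single M-by-N matrix: the "raw" image
```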

Many image processing operations happen within this ISP, which will be discussed in next issue’s imaging column. One of the most important operations within the ISP is converting the “black and white” raw image to a colour image. Using knowledge of which colour was filtered at each pixel, interpolation algorithms (also known as demosaicing) can be used to create a full colour image. Mathematically, through these interpolation algorithms, we are converting a grayscale M-by-N matrix of intensity values into an M-by-N-by-3 colour image matrix, where the 3 represents the individual colour channels of red, green, and blue.

The most basic interpolation algorithm, which we will discuss here, is called “bilinear interpolation.” This algorithm looks at the intensity values of the differently-coloured pixels surrounding a given pixel and averages them to approximate the colours missing at that pixel. Consider the sample colour pattern array image shown with this article. The red pixel R is neighboured by 4 blue and 4 green pixels. Using the average intensity values of these neighbours, the green and blue values can be interpolated. For example, the red value at pixel R is simply the intensity value measured at that pixel. The green value at pixel R will be (G1+G2+G3+G4)/4, and the blue value at pixel R will be (B1+B2+B3+B4)/4. Applying a similar procedure to all pixels generates a colour image. Note that this interpolation method is very basic and does not produce great results, so more sophisticated and proprietary methods are used in most pipelines. Since the demosaicing algorithm has roughly double the number of green pixels available to it, these are often used as the information for the luminance (brightness) of the image.
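
Below is a minimal sketch of bilinear demosaicing along these lines, assuming an RGGB layout; it expresses the neighbour-averaging as a convolution, which computes exactly the (G1+G2+G3+G4)/4-style averages described above at every pixel (the function name and array shapes are our own illustrative choices):

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """Interpolate a full-colour image from an RGGB Bayer mosaic."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)   # red sites
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)   # blue sites
    g_mask = ~(r_mask | b_mask)                  # green sites (half the array)

    # Averaging kernels: green is filled in from its 4 cross neighbours;
    # red and blue from 2 or 4 neighbours depending on the location.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    out = np.zeros((h, w, 3))                    # M-by-N-by-3 colour image
    out[..., 0] = convolve2d(raw * r_mask, k_rb, mode="same", boundary="symm")
    out[..., 1] = convolve2d(raw * g_mask, k_g,  mode="same", boundary="symm")
    out[..., 2] = convolve2d(raw * b_mask, k_rb, mode="same", boundary="symm")
    return out
```

At a red site such as pixel R, this reproduces the averages above: the green kernel sums the four cross neighbours, and the red/blue kernel sums the four diagonal neighbours.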

Image sensors also come in a variety of physical sizes, depending on the requirements of the system in which they will be included. Larger consumer cameras can use a larger sensor due to the space available, but smaller devices like cellular phones require much smaller sensors. The size of the sensor has various effects on the quality and cost of the final product. A sensor’s size and its pixel count together determine the surface area of each individual pixel. Pixels with a larger surface area can collect more light than smaller pixels. Think of it like buckets in the rain – if one were to place 10 buckets in a square, each would collect a certain amount of water. However, if the number of buckets is increased, the radius of each bucket must be decreased for them all to fit, and in turn each bucket will collect less water. The same is true for pixels on an image sensor – the more megapixels a sensor has, the smaller each of its pixels must be. Reducing the size of each pixel lowers the amount of light it can collect. The less light a pixel collects, the more the resulting signal must be amplified, which increases the noise in the resulting image.
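
To put rough numbers on the bucket analogy, here is a back-of-the-envelope calculation using the dimensions of a typical 1/2.3-inch compact-camera sensor (about 6.17 mm by 4.55 mm) and a few hypothetical megapixel counts not taken from the article:

```python
# Total sensor area converted to square microns (1 mm^2 = 1e6 um^2)
sensor_area_um2 = 6.17 * 4.55 * 1e6

for megapixels in (8, 12, 20):
    pixel_area = sensor_area_um2 / (megapixels * 1e6)
    print(f"{megapixels} MP -> {pixel_area:.2f} um^2 per pixel")
    # 8 MP -> 3.51, 12 MP -> 2.34, 20 MP -> 1.40: more pixels, smaller buckets
```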

Pixel design can also play a large role in the resulting image quality. The “quantum efficiency” (QE) of a pixel is the ratio of the number of photons that generate an electron-hole pair within the sensor to the total number of photons that enter the pixel. Obviously, a higher QE is preferred to maximize image quality – especially in low light. Over time, QE has improved steadily through optimizations in manufacturing processes, materials science, and the overall design of the pixels, allowing small, high-resolution sensors to improve in quality while maintaining the same physical pixel size. One of the major improvements is the back-side illuminated (BSI) sensor. See Angelo’s article regarding BSI image sensors in the November 16, 2011 issue of The Iron Warrior to learn more about these kinds of sensors.
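
As a quick illustration of the definition, with entirely made-up numbers:

```python
photons_in = 10_000       # photons striking the pixel during an exposure
electrons_out = 6_500     # electron-hole pairs those photons generate
qe = electrons_out / photons_in
print(f"QE = {qe:.0%}")   # prints "QE = 65%" – higher is better
```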

Although this article has presented a lot of information, it is only a small piece of the puzzle that comprises an imaging system. In the next issue of The Iron Warrior, we will discuss the Image Signal Processor pipeline and the important role it plays in bringing all of the different components together to produce great images.
