Image Sensor



Every electronic camera contains an image sensor, which converts the optical image into electronic signals, and a lens, which focuses the scene onto the image sensor.






Figure 1: Schematic of a typical camera module.




There are two types of image sensors:


  • CCD (Charge-Coupled Device)
  • CMOS (Complementary Metal-Oxide-Semiconductor)


Initially, charge-coupled devices (CCDs) were the only image sensors used in digital cameras. They had already been well developed through their use in astronomical telescopes, scanners, and video camcorders. However, there is now a well-established alternative: the CMOS image sensor. Both CCD and CMOS image sensors capture light using a grid of small photosites, often referred to as pixels, which convert the light into red, green, and blue groups of digital information. Where they differ is in how they process the image and how they are manufactured.


A CCD and a CMOS sensor may look the same, but their operational methods are very different.  


A charge-coupled device (CCD) gets its name from the way the charges on its pixels are read after an exposure. During the exposure, the CCD gathers light in a grid of red, green, and blue photodiodes. The built-up charge is then shifted into a readout register on the chip, passed to a charge-to-voltage converter, then an amplifier circuit, then an analog-to-digital converter, and finally sent to the camera's main processor for further color and sharpening work as needed. Once a row has been read, its charges in the readout register are cleared, the next row enters, and all of the rows above march down one row. With each row "coupled" to the row above in this way, the pixels are read one row at a time.
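
To make the row-by-row readout concrete, here is a minimal, hypothetical Python (NumPy) sketch that simulates this "bucket brigade" order; the function and array sizes are illustrative, not an actual CCD driver.

```python
import numpy as np

def ccd_readout(charge: np.ndarray) -> np.ndarray:
    """Simulate CCD readout: one row at a time is shifted into a
    readout register, then read out serially, pixel by pixel."""
    rows, cols = charge.shape
    frame = charge.copy()           # charge still sitting on the sensor
    output = np.empty_like(charge)  # digitized result

    for r in range(rows):
        readout_register = frame[0].copy()  # next row enters the register
        frame = np.roll(frame, -1, axis=0)  # rows above march down one row
        for c in range(cols):
            # Each charge packet would pass through the charge-to-voltage
            # converter, amplifier, and ADC at this point.
            output[r, c] = readout_register[c]
    return output

# Example: a tiny 4x4 "exposure" read out row by row.
exposure = np.arange(16, dtype=np.uint16).reshape(4, 4)
print(ccd_readout(exposure))
```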


The charge-coupled device's numerous readout steps demand high power consumption. The multiple steps also allow digital noise to be introduced, which requires additional circuitry and processing to correct and reduce it.


The CMOS sensor has the same grid of photosites, but each site contains its own circuitry for converting light into a digital signal before that signal is sent to the main camera processor. A CMOS sensor can therefore boost the signal value at the site (pixel) itself, reducing noise (static) in the image. CMOS sensors use less power, deliver a clean digital signal, and are slightly less costly, which is why they are used mainly in larger-sensor camera models.


The cost of fabricating a CMOS wafer is significantly less than the cost of fabricating a similar wafer using the specialized CCD process. Costs are lowered even further because CMOS image sensors can have processing circuits created on the same chip; with CCDs, these processing circuits must be on separate chips.


Digital images are formed from tiny dots or squares of color called pixels. A pixel is a discrete photosensitive cell that collects and holds photo-charge. The dots, usually many millions per image, are so small and close together that they blend into smooth, continuous tones. Each pixel is a sample of an original image, and more samples typically provide a more accurate representation of the original.


The intensity of each pixel is variable; in color systems, each pixel typically has three or four components, such as red, green, and blue, or cyan, magenta, yellow, and black. Because these images are captured directly by digital cameras, the end result is an image in a universally recognized format that can be easily manipulated, distributed, and used.
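
As a simple illustration, here is a small Python (NumPy) sketch of an image as a grid of three-component pixels; the 2×2 image and its values are made up for the example.

```python
import numpy as np

# A hypothetical 2x2 RGB image: each pixel holds three 8-bit samples.
image = np.array([
    [[255,   0,   0], [  0, 255,   0]],   # a red pixel, a green pixel
    [[  0,   0, 255], [255, 255, 255]],   # a blue pixel, a white pixel
], dtype=np.uint8)

print(image.shape)  # (2, 2, 3): rows, columns, color components
print(image[0, 0])  # [255   0   0] -> the first pixel's R, G, B values
```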


The word pixel is a contraction of pix ("picture") and el (for "element"). We said earlier that an image sensor is a grid of photosites, but looking closer we find a color filter and a microlens above each pixel.


An image sensor contains a light-sensitive area covered with a color filter array arranged according to the Bayer pattern, above which microlenses are placed to focus the incident light onto the light-sensitive area.












Figure 2: Microlenses focus the incident light on each pixel onto the light-sensitive area, increasing the low-light sensitivity of the image sensor.




Microlenses are small lenses, generally with diameters less than a millimetre (mm) and often as small as 10 micrometers (µm). A typical microlens may be a single element with one plane surface and one spherical convex surface to refract the light. Because microlenses are so small, the substrate that supports them is usually thicker than the lens.
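
For such a plano-convex lens, the focal length follows from the lensmaker's equation, which with one flat surface reduces to f = R / (n − 1). The Python sketch below is a small, hypothetical calculation; the radius of curvature and refractive index are assumed example values, not measurements from a real sensor.

```python
def plano_convex_focal_length(radius_m: float, n: float) -> float:
    """Thin plano-convex lens: f = R / (n - 1), the lensmaker's
    equation with one surface flat (infinite radius of curvature)."""
    return radius_m / (n - 1.0)

# Assumed example: 20 um radius of curvature, n = 1.5 (typical glass).
f = plano_convex_focal_length(radius_m=20e-6, n=1.5)
print(f"Focal length: {f * 1e6:.1f} um")  # -> 40.0 um
```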


Combinations of microlens arrays have been designed with novel imaging properties, such as the ability to form an image at unit magnification that is not inverted, as it would be with a conventional lens. Microlens arrays have been developed to form compact imaging devices for applications such as image sensors.



Microlenses and an opaque metal layer above the silicon funnel light to the photo-sensitive portion of each pixel. On the way, the photons pass through a color filter array (CFA), where the process of obtaining color from the inherently “monochrome” chip begins.







Figure 3: Microlens array used to focus the incident light on each pixel.


A Color Filter Array (CFA), or Color Filter Mosaic (CFM), is a mosaic of tiny color filters placed over the pixel sensors of an image sensor to capture color information. Color filters are needed because typical photosensors detect light intensity with little or no wavelength specificity and therefore cannot separate color information on their own: the sensors are semiconductors that respond to a broad range of wavelengths.


The color filters filter the light by wavelength range, so that the separate filtered intensities carry information about the color of the light. There are many color filter arrays; the basic and first available one is the Bayer filter. Others include the RGBE filter, the CYYM filter, the CYGM filter, the RGBW filter, and RGBW variants with panchromatic cells.











Figure 4: Bayer Filter Array




A Bayer filter mosaic is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. It is named after its inventor, Bryce E. Bayer of Eastman Kodak. Its particular arrangement of color filters is used in most single-chip digital image sensors used in digital cameras, camcorders, and scanners to create a color image. 
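
The Bayer pattern tiles a 2×2 cell of filters, two green, one red, and one blue, across the grid. Here is a hedged Python (NumPy) sketch that builds the mosaic a sensor would record from a full RGB image, assuming the common RGGB ordering (other orderings such as GRBG also exist).

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate a Bayer CFA (RGGB layout): each photosite keeps only
    the channel its filter passes. `rgb` has shape (H, W, 3) with
    even H and W; returns a single-channel raw image."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites on red rows
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites on blue rows
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return raw

# Example: mosaic a random 4x4 RGB image.
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(bayer_mosaic(rgb))
```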


A demosaicing algorithm is a digital image process used to reconstruct a full-color image from the incomplete color samples output by an image sensor overlaid with a color filter array (CFA). It is also known as CFA interpolation or color reconstruction. The image to be demosaiced is called a raw image. Most modern digital cameras acquire images using a single image sensor overlaid with a CFA, so demosaicing is part of the processing pipeline required to render these images into a viewable format. Many algorithms are available for demosaicing raw images; among them, nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, spline interpolation, and Lanczos resampling are the most commonly used.
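
As one concrete example of the simplest of these, here is a hedged Python (NumPy/SciPy) sketch of bilinear demosaicing for the RGGB mosaic built above: every missing channel value is estimated as the average of the nearest photosites that did sample that channel. It is a toy for illustration, not a production pipeline.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Bilinear demosaicing of an RGGB Bayer raw image."""
    h, w = raw.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = raw[0::2, 0::2]   # known red samples
    g[0::2, 1::2] = raw[0::2, 1::2]   # known green samples (red rows)
    g[1::2, 0::2] = raw[1::2, 0::2]   # known green samples (blue rows)
    b[1::2, 1::2] = raw[1::2, 1::2]   # known blue samples

    # Kernels that average the 2 or 4 nearest known samples of a channel;
    # at a site that already holds a sample, they return it unchanged.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    rgb = np.dstack([
        convolve2d(r, k_rb, mode="same", boundary="symm"),
        convolve2d(g, k_g,  mode="same", boundary="symm"),
        convolve2d(b, k_rb, mode="same", boundary="symm"),
    ])
    return np.clip(rgb, 0, 255).astype(np.uint8)

# Example: round-trip the mosaic from the earlier Bayer sketch.
# rgb_estimate = demosaic_bilinear(bayer_mosaic(rgb).astype(float))
```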



