Every electronic camera contains an image sensor, which converts the optical image into electronic signals, and a lens, which focuses light from the scene onto the image sensor.
Figure 1: Schematic of typical camera module.
There are two types of image sensors:
- CCD (Charge-Coupled Device)
- CMOS (Complementary Metal Oxide Semiconductor)
Initially, charge-coupled devices (CCDs) were the only image sensors used in digital cameras. They had already been well developed through their use in astronomical telescopes, scanners, and video camcorders. However, there is now a well-established alternative, the CMOS image sensor. Both CCD and CMOS image sensors capture light using a grid of small photosites, often referred to as pixels, which convert the light into red, green, and blue digital information. Where they differ is in how they process the image and how they are manufactured.
A CCD and a CMOS sensor may look the same, but their operational methods are very different.
A charge-coupled device (CCD) gets its name from the way the charges on its pixels are read after an exposure. The CCD gathers the light during the exposure into a grid of red, green, and blue photodiodes. The built-up charge resulting from the exposure is shifted into the readout register within the chip, then to the voltage converter, the amplifier circuit, the analog-to-digital converter, and finally to the camera's main processor for further color and sharpening work as needed. Once a row has been read, its charges in the readout register are cleared, the next row enters, and all of the rows above march down one row. With each row "coupled" to the row above in this way, the sensor is read out one row of pixels at a time.
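The row-by-row "bucket brigade" readout described above can be sketched in a few lines of Python. This is purely an illustrative model of the charge-transfer order, not real device physics:

```python
def ccd_readout(sensor):
    """Illustrative model of CCD row-by-row charge-transfer readout.

    sensor: 2-D list of accumulated charge values, one inner list per row.
    Each step, the bottom row enters the readout register and is read out
    serially; every remaining row then marches down one position.
    """
    charge = [row[:] for row in sensor]    # copy of the accumulated charge
    read_out = []
    while charge:
        readout_register = charge.pop()    # bottom row enters the register
        read_out.append(readout_register)  # serial read; register then cleared
    return read_out                        # rows, in the order they were read
```

Reading a small 3x2 "sensor" this way returns the bottom row first, matching the coupled row-shift behavior described above.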
The CMOS sensor has the same grid of photosites, but each site contains its own circuitry that converts the collected light into a digital signal before it is sent to the main camera processor. Because a CMOS sensor can boost the signal value at each site (pixel), it can reduce noise (static) in the image. The CMOS sensor uses less power, produces a clean digital signal, and is slightly less costly; it is therefore used mainly in larger-sensor camera models.
Figure 2: Microlenses are used to focus the incident light on each pixel onto the light-sensitive area, increasing the low-light sensitivity of the image sensor.
Microlenses are small lenses, generally with diameters less than a millimetre (mm) and often as small as 10 micrometers (µm). A typical microlens may be a single element with one plane surface and one spherical convex surface to refract the light. Because microlenses are so small, the substrate that supports them is usually thicker than the lens.
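For a thin plano-convex element like the one just described, the focal length follows from the lensmaker's equation with one flat surface: f = R / (n - 1). A minimal sketch, where the radius and refractive-index values are illustrative assumptions rather than figures from any particular sensor:

```python
def plano_convex_focal_length(radius, n=1.5):
    """Thin-lens focal length of a plano-convex lens: f = R / (n - 1).

    radius: radius of curvature of the convex surface (f comes out
            in the same unit);
    n:      refractive index of the lens material (1.5 is typical for
            glass or optical polymer).
    """
    return radius / (n - 1.0)

# A microlens with a 10 um radius of curvature in n = 1.5 material
# focuses at twice its radius of curvature:
focal_length_um = plano_convex_focal_length(10.0)  # -> 20.0 um
```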
Combinations of microlens arrays have been designed with novel imaging properties, such as the ability to form an image at unit magnification that is not inverted, as it would be with a conventional lens. Microlens arrays have been developed to form compact imaging devices for applications such as image sensors.
Microlenses and an opaque metal layer above the silicon funnel light to the photosensitive portion of each pixel. On the way, the photons pass through a color filter array (CFA), where the process of obtaining color from the inherently "monochrome" chip begins.
Figure 3: Microlenses array used to focus the incident light on each pixel
A Color Filter Array (CFA), or Color Filter Mosaic (CFM), is a mosaic of tiny color filters placed over the pixel sensors of an image sensor to capture color information. Color filters are needed because typical photosensors detect light intensity with little or no wavelength specificity, and therefore cannot separate color information on their own.
Figure 4: Bayer Filter Array
A Bayer filter mosaic is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. It is named after its inventor, Bryce E. Bayer of Eastman Kodak. Its particular arrangement of color filters is used in most single-chip digital image sensors used in digital cameras, camcorders, and scanners to create a color image.
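The Bayer arrangement repeats a 2x2 tile containing two green filters for every red and blue one. A short sketch of one common phase of the pattern, RGGB (real sensors may start the tile at a different phase, such as GRBG or BGGR):

```python
def bayer_color(row, col):
    """Channel sampled at a given photosite in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'  # even rows: R G R G ...
    return 'G' if col % 2 == 0 else 'B'      # odd rows:  G B G B ...

# Print the top-left 4x4 corner of the mosaic:
for r in range(4):
    print(' '.join(bayer_color(r, c) for c in range(4)))
```

Counting the channels over any 4x4 region confirms the 2:1:1 ratio of green to red to blue sites.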
A demosaicing algorithm is a digital image process used to reconstruct a full-color image from the incomplete color samples output by an image sensor overlaid with a color filter array (CFA). It is also known as CFA interpolation or color reconstruction. The image to be demosaiced is called a raw image. Most modern digital cameras acquire images using a single image sensor overlaid with a CFA, so demosaicing is part of the processing pipeline required to render these images in a viewable format. Many algorithms are available for raw-image demosaicing; among them, nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, spline interpolation, and Lanczos resampling are the most commonly used.
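As an illustration of the simplest of these methods, bilinear-style demosaicing can be sketched as follows: for each missing channel at a pixel, average the neighbouring photosites that did sample that channel. This is a minimal pure-Python sketch under that assumption; production pipelines use much faster, edge-aware implementations:

```python
def demosaic_bilinear(raw, color_at):
    """Minimal neighbour-averaging (bilinear-style) demosaicing sketch.

    raw:      2-D list of sensor samples (one value per photosite).
    color_at: function (row, col) -> 'R', 'G' or 'B', giving the CFA
              channel sampled at each site.
    Returns a 2-D list of [R, G, B] triples.
    """
    h, w = len(raw), len(raw[0])
    idx = {'R': 0, 'G': 1, 'B': 2}
    out = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for r in range(h):
        for c in range(w):
            for ch in 'RGB':
                if color_at(r, c) == ch:
                    # this site sampled the channel directly
                    out[r][c][idx[ch]] = float(raw[r][c])
                else:
                    # average the 8-neighbourhood sites of this channel
                    vals = [raw[rr][cc]
                            for rr in range(max(0, r - 1), min(h, r + 2))
                            for cc in range(max(0, c - 1), min(w, c + 2))
                            if (rr, cc) != (r, c) and color_at(rr, cc) == ch]
                    out[r][c][idx[ch]] = sum(vals) / len(vals) if vals else 0.0
    return out
```

On a uniformly lit patch, every pixel of the reconstructed image comes out with equal R, G, and B values, as expected.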