
Image Sensors

Definition: optoelectronic sensors which can be used for imaging

Alternative terms: imaging sensor, imager

German: Bildsensoren

Categories: photonic devices, optoelectronics, vision, displays and imaging

Cite the article using its DOI: https://doi.org/10.61835/83w


Summary: This in-depth article explains

  • what types of image sensors exist and how they work,
  • linear image sensors and two-dimensional image sensors of various types, e.g. based on photodiodes, CMOS, CCD etc.,
  • how color imaging can be achieved,
  • how intensified sensors, photon counting sensors and sensors for special wavelength regions work,
  • what the important parameters of image sensors are, such as light sensitivity, fill factor, quantum efficiency, sensor formats and resolution, dynamic range, noise, linearity, cross-talk, pixel defects, readout time and frame rate, etc., and
  • what compatibility issues with objectives may arise.

Image sensors are optoelectronic sensors which can measure light intensities in a spatially resolved manner for imaging applications. They are used in various kinds of cameras and for scanners, for example

  • in digital photo cameras,
  • in video cameras (for television, consumer devices, surveillance, industry, etc.),
  • for thermal imaging (thermography), and
  • for various kinds of scanners such as document scanners.

Some image sensors generate only one-dimensional images, but two-dimensional images can be assembled by combining multiple such line images recorded with consistent transverse spacings; that is often done in document scanners, for example. Other sensors directly produce two-dimensional images.
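As a simple illustration, the following Python sketch assembles a two-dimensional image by stacking successive line scans; the function scan_line() is only a hypothetical stand-in for the actual acquisition of one line of pixel values:

    import numpy as np

    def scan_line(position):
        """Hypothetical stand-in for reading one line of pixel values
        from a linear image sensor at a given scan position."""
        return np.random.default_rng(position).random(2048)  # 2048 pixels per line

    # Acquire 1000 line scans at equidistant transverse positions and stack
    # them into a two-dimensional image with 1000 x 2048 pixels.
    lines = [scan_line(pos) for pos in range(1000)]
    image = np.stack(lines, axis=0)
    print(image.shape)  # (1000, 2048)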

Image sensors are also called focal plane arrays (FPAs), indicating that they are detector areas which are placed in the focal plane of an imaging system.

Linear Image Sensors

Photodiode Arrays

If only a relatively small number of pixels is required, a photodiode array can be used. It contains one photodiode per pixel, and all those diodes can be addressed with separate wires. Usually, one has a line sensor, with all pixels arranged in one row.

This simple approach, however, is no longer practical for line sensors with thousands of pixels, since the required number of wire connections would become excessive. Even if suitable connectors could be made, further processing of the data, e.g. with a microprocessor, would be inconvenient.

Photodiode arrays are available with different kinds of photodiodes to be used for specific spectral regions. For example, there are silicon arrays for use with visible or near-infrared radiation, whereas with indium gallium arsenide devices one gets further into the infrared.

Line Sensors with Sequential Readout

For line sensors (as used in line scan cameras) with larger numbers of pixels, some fundamental operation principles need to be changed. Instead of a parallel readout of signals for all the pixels, one needs some method for realizing a serial readout: signal intensities related to different pixels are transmitted one after another, i.e., at slightly different times.

It would not be very practical to realize such a technique with photocurrents, i.e., providing an output current which at a given time corresponds to the photocurrent of a particular photodiode. The same holds for concepts based on electric voltages. Instead, the common method is to work with electric charges, which are accumulated within a certain exposure time (which may of course be adjusted to the measurement conditions).

Electronic image sensors are usually realized in the form of optoelectronic semiconductor chips, where the required structures for all pixels are fabricated in parallel. The structures used for the light-sensitive elements can differ substantially between different sensors, but they are usually based on the fundamental principle of having some kind of photodetector which charges a capacitor during exposure. Before an exposure begins, the capacitor is charged to reach some fixed bias voltage. After the exposure, the capacitor will have acquired some amount of charge (or change of charge), which reflects the total amount of light received during the exposure.

Eventually, the charge needs to be converted into a voltage signal. Different approaches are used in image sensors for that conversion; the CMOS and CCD sensor concepts are described in the following sections. They are all based on silicon technology, providing light sensitivity through the whole visible range and somewhat into the infrared.
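As a rough numerical illustration of this principle, the following Python sketch estimates the accumulated charge and the corresponding voltage change on the pixel capacitance; all values are assumed example numbers, not data from any particular sensor:

    # Rough numerical illustration of charge accumulation in a pixel
    # (all values are assumed example numbers, not from a real device).
    e = 1.602e-19               # elementary charge in coulombs
    photon_rate = 5e7           # photons per second hitting the pixel
    quantum_efficiency = 0.8    # fraction of photons generating a carrier
    exposure_time = 1e-3        # exposure time in seconds
    pixel_capacitance = 10e-15  # pixel capacitance in farads (10 fF)

    photoelectrons = photon_rate * quantum_efficiency * exposure_time
    charge = photoelectrons * e                  # accumulated charge
    voltage_change = charge / pixel_capacitance  # change of the capacitor voltage

    print(f"{photoelectrons:.0f} photoelectrons, {charge:.2e} C, "
          f"delta V = {voltage_change * 1e3:.0f} mV")  # 40000, 6.41e-15 C, 641 mV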

Linear CMOS Sensors

CMOS means complementary metal–oxide–semiconductor – a technology developed for making integrated electronic circuits such as microprocessors. Essentially the same technology is applied for CMOS image sensors. The photodetector can either be a photodiode or a photo gate.

In the early days of CMOS sensors, passive pixel sensors (PPS) were used, where each photodetector had only a single MOS transistor, through which the charge could flow via a bus wire to a charge amplifier (only one for the whole sensor), e.g. a highly linear capacitive transimpedance amplifier. This concept could be realized with relatively simple chip designs offering a good fill factor, i.e., the light-sensitive device covered much of the area per pixel.

In a modern CMOS sensor with active pixels (APS = active pixel sensor), there is one charge amplifier associated with each photodetector, so that a substantially better signal-to-noise ratio and a higher speed are achieved, but with less precise linearity. Additional transistors can be used for functions like realizing exposure control with a global shutter, noise suppression, etc. These electronics deliver a voltage value reflecting the received charge, which can be directed to a bus wire with one of the transistors. Even the analog-to-digital conversion may be done on the pixel level, resulting in a digital pixel sensor with five or more transistors per photodetector and no loss of signal quality in further processing.

Although in practice one usually reads data for all the pixels sequentially, a CMOS sensor would also allow one to address the pixels in arbitrary order, similar to the addressing of bytes in a random access memory (RAM). For example, one may in certain situations read out only some range of pixels, or use only every second pixel for quickly acquiring some limited amount of information.

A major advantage of the CMOS sensor technology is that it can be easily integrated with additional analog or digital circuits on a CMOS chip.

Linear CCD Sensors

CCD sensors are based on the principle of charge-coupled devices, which were originally developed for purely electronic applications, but have been found to be most useful for imaging. While the light-sensitive part can be of the same kind as in a CMOS sensor, the readout method is completely different. We first consider the simpler situation of a linear CCD sensor array and treat two-dimensional CCD sensors in a later section.

A common type of implementation involves a transfer gate, which is another array structure placed parallel to the MOS sensor pixels; it is made light-insensitive by some shielding and acts as an analog shift register. After exposure, one first shifts the charges of the photodetectors into the transfer gate. Thereafter, one sequentially reads out the signals from there based on the principle of the shift register. In each step, one transfers the charge from each cell of the shift register to the neighboring one – except for the last one, where the signal is read out with a charge amplifier (normally on a separate analog chip), producing a voltage signal. In the first step after exposure, the output reflects the amount of light received by one of the detectors; in further shifting cycles, one subsequently obtains the signals of all the other detectors. During the shifting procedure, the photodetectors may already do the exposure for the next image frame.

The time-dependent voltage signal is then converted to a digital signal with an analog-to-digital converter. Note that one requires only a single charge amplifier and analog-to-digital converter, which not only saves chip space, but also eliminates the problem of performance deviations between different pixels and reduces the frequency of pixel defects. The photodetectors themselves, having fairly simple structures, are more easily fabricated with homogeneous properties, compared with more complex multi-transistor CMOS designs.

The shift register for the charges is easy to implement with some arrangement of electrodes. Typically, it has three cells per detector pixel. Directly after transfer of the charges into the transfer gate, only every third cell contains a charge, held in a potential well created with a corresponding electrode. The potential wells can now be shifted by changing all the electrode voltages such that each charge flows into the neighboring cell, while avoiding any mixing of charges. There are various detailed realizations of shift registers, but the basic principle is always as explained above.
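The sequential readout can be illustrated with the following Python sketch – a strongly simplified model with assumed example values, not tied to any particular device: the pixel charges are first transferred in parallel into the shift register and then shifted out one by one towards the output amplifier:

    import numpy as np

    # Simplified model of the readout of a linear CCD sensor.
    # Assumed example values: charges accumulated by 8 photodetectors (arbitrary units).
    pixel_charges = np.array([120., 30., 250., 80., 0., 60., 200., 10.])

    # Step 1: transfer all charges in parallel into the light-shielded transfer gate.
    shift_register = pixel_charges.copy()

    # Step 2: shift the charges towards the output, one cell per clock cycle;
    # the charge leaving the last cell is converted to a voltage by the charge amplifier.
    output_samples = []
    for _ in range(len(shift_register)):
        output_samples.append(shift_register[-1])    # read out the last cell
        shift_register = np.roll(shift_register, 1)  # shift all charges by one cell
        shift_register[0] = 0.0                      # the first cell becomes empty

    print(output_samples)  # the pixel values appear sequentially, last pixel first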

It is also possible to combine the functions of photodetection and shift register, but then one requires an external shutter for preventing further illumination during the shifting operation – except perhaps if the shifting can be done much faster than the image exposure.

Usually, CCD sensor chips are complemented with additional chips for providing the required clock signal, A/D conversion, further signal processing, etc.

The importance of CCD sensor technology is underlined by the Nobel Prize in Physics 2009, one half of which was awarded to Willard S. Boyle and George E. Smith for their invention of the principle of charge-coupled devices.

Two-dimensional CMOS and CCD Image Sensors

For two-dimensional image sensors, which can easily have many thousands or even tens of millions of pixels, it would obviously not be practical to use one wire per detector pixel; the method of sequential readout, as explained above for linear detector arrays, is needed, just in a somewhat adapted form.

CMOS

Two-dimensional CMOS image sensors allow one to randomly address each pixel via its row and column number. (The number of rows or columns is often too large for addressing them with the same number of external wire connections; one instead uses a binary address code, transmitted over a few wires, as the input of a row or column demultiplexer.) In active pixel sensors, an analog voltage signal of the addressed pixel is sent to the bus without significant loss of signal quality. Digital pixel sensors transmit digital data instead, eliminating any loss of signal quality.
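As a small illustration (with hypothetical numbers, not taken from any specific sensor), the following Python snippet shows why a binary address code keeps the number of wires manageable and how a demultiplexer turns that code into a one-hot row-select signal:

    import math

    n_rows = 3000                      # assumed number of pixel rows
    address_wires = math.ceil(math.log2(n_rows))
    print(address_wires)               # 12 wires suffice to address 3000 rows

    def row_demultiplexer(address, n_rows):
        """Turn a binary row address into a one-hot row-select pattern."""
        return [1 if row == address else 0 for row in range(n_rows)]

    select = row_demultiplexer(1234, n_rows)
    print(select.index(1))             # only row 1234 is selected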

The exposure periods for the image rows are often staggered in the case of CMOS sensors; one has a rolling or scrolling shutter. However, it is also possible to realize a global shutter, which is better for use with moving objects, although it can reduce the available exposure time, e.g. in video cameras.

CCD

For CCD sensors, one can use an additional shift register for multiplexing the signals from different image columns. For each image row, one uses the vertical shift registers to feed the horizontal shift register with one value for each column and then shifts those values out to produce the output signal. Each further vertical shift provides data for another row. The order in which the image data for the pixels are obtained is therefore hard-wired and cannot be changed.
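That fixed readout order can be illustrated with the following Python sketch (a strongly simplified model with assumed example values): each vertical shift feeds one image row into the horizontal shift register, which is then shifted out column by column towards the single output amplifier:

    import numpy as np

    # Simplified model of a two-dimensional CCD readout (4 x 5 pixels, arbitrary units).
    image_charges = np.arange(20, dtype=float).reshape(4, 5)

    output_samples = []
    vertical_registers = image_charges.copy()
    for _ in range(vertical_registers.shape[0]):
        # One vertical shift: the bottom row drops into the horizontal shift register.
        horizontal_register = vertical_registers[-1].copy()
        vertical_registers = np.roll(vertical_registers, 1, axis=0)
        vertical_registers[0] = 0.0
        # Shift the horizontal register out towards the output amplifier.
        for _ in range(horizontal_register.size):
            output_samples.append(horizontal_register[-1])
            horizontal_register = np.roll(horizontal_register, 1)
            horizontal_register[0] = 0.0

    print(output_samples)  # the pixels appear row by row, in a hard-wired order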

There are actually different architectures of CCD sensors, e.g. interline transfer sensors, frame transfer sensors, full frame sensors and others, where the details of the multiplexing technique differ.

Comparison of CMOS and CCD

Due to the substantial technological developments in the areas of both CMOS and CCD sensor chips, their relative merits have changed with time and can depend substantially on which devices are chosen. For example, while CMOS sensors were originally known to be less sensitive and to offer lower image quality, there are now CMOS sensors which offer quite good image quality and quite similar fill factors and sensitivity. Some general differences can nevertheless be recognized:

  • CMOS sensors can be more easily integrated with additional microelectronics on the same chip, providing functionality like dark current compensation and other signal processing. For example, there are devices with a logarithmic response for covering very large dynamic ranges (sometimes >60 dB). Even single-chip digital camera sensors are possible; this allows the realization of extremely compact cameras.
  • CMOS cameras are generally cheaper to fabricate, particularly because less additional electronics are required.
  • CMOS technology requires only a single operating voltage (e.g. 2.5 V, 3.3 V or 5 V), while CCD chips normally require higher voltages and also significantly higher electrical power (although some lower-voltage devices have also been developed).
  • CMOS chips offer substantially faster readout.
  • The fixed pattern noise of CMOS sensors, resulting from deviations between the electronic parts of different pixels, still tends to be higher than for CCDs. Also, pixel defects are more frequent.

Charge Injection Devices

A variant of CCD sensors are charge injection devices (CIDs). They are fabricated with the same MOS technology and also use capacitors which are discharged through illumination. The difference from CCD sensors is essentially the readout method: the charges of the different pixels are directly read out through a bus signal, rather than being sequentially coupled to neighboring pixels. This substantially reduces cross-talk between pixels, e.g. blooming effects at high light intensity levels. Also, this approach enables random access to the pixels, i.e., it does not enforce sequential readout. Otherwise, the performance figures are similar.

CIDs are not as widely used as CCDs, but can be a favorable option for special applications, often with specially adapted designs. For example, there are devices with rather large pixel charge capacities, optimized for detection with a wide dynamic range and possibly offering quantum-limited noise. Also, there are image sensors with improved radiation tolerance.

Color Imaging

Monochrome cameras can simply use a single photodetector per pixel. For color images, several more sophisticated techniques have been developed:

  • One can use dichroic beam splitters for directing the red, green and blue components of light to three separate detector chips. Such three-CCD cameras provide color images at the full resolution and good color separation, also with optimum quantum efficiency, but at a substantial cost and with a less compact setup. That principle is used for some industrial cameras and professional video cameras, but usually not for consumer photo cameras.
  • One could use three different photodetectors, equipped with different color filters, for each pixel on a single chip. The substantial increase of the number of detectors is problematic, however; because the detector size cannot be arbitrarily reduced, or the chip size increased, one may get a reduced total number of pixels of the image sensor.
  • A better resolution is possible with a special pattern of color filters, e.g. in the form of the common Bayer filter (named after its inventor Bryce Bayer), containing red, blue and twice as many green parts. The actual color for each pixel is then obtained with an interpolation procedure, using a demosaicing algorithm (see the sketch after this list). One obtains one pixel per photodetector, but of course with some significant loss of resolution and color fidelity compared with a three-CCD device. This technique is used in most photo cameras and video cameras, and also in scanners.
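As an illustration of that interpolation step, the following Python sketch performs a very simple bilinear demosaicing of a raw image with an assumed RGGB Bayer pattern – a minimal sketch, not the algorithm used in any particular camera:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        """Very simple bilinear demosaicing of a raw image with an RGGB Bayer pattern."""
        h, w = raw.shape
        # Build the color masks for the 2x2 RGGB unit cell:
        # R at (0,0), G at (0,1) and (1,0), B at (1,1).
        masks = {c: np.zeros((h, w)) for c in "RGB"}
        for c, i, j in [("R", 0, 0), ("G", 0, 1), ("G", 1, 0), ("B", 1, 1)]:
            masks[c][i::2, j::2] = 1.0
        # Bilinear interpolation kernel; normalized convolution fills in the
        # missing samples of each color channel from the neighboring ones.
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5 ],
                           [0.25, 0.5, 0.25]])
        rgb = np.zeros((h, w, 3))
        for k, c in enumerate("RGB"):
            rgb[..., k] = (convolve(raw * masks[c], kernel, mode="mirror") /
                           convolve(masks[c], kernel, mode="mirror"))
        return rgb

    raw = np.random.default_rng(0).random((8, 8))  # assumed example raw data
    print(demosaic_bilinear(raw).shape)            # (8, 8, 3): one RGB value per pixel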

Intensified Sensors

There are image sensors which are combined with an image intensifier based on a microchannel plate detector (a kind of photomultiplier) in front of the CCD or CMOS chip. This allows the operation of such intensified sensors (e.g. ICCD = intensified CCD) under very low light level conditions. However, the quantum efficiency will normally be lower, and the image noise is increased compared with operation of an ordinary sensor at higher light levels.

Photon Counting Sensors

For imaging at extremely low light levels, one may also use single-photon avalanche photodiodes, operated in Geiger mode. They can now be made even in large silicon-based CMOS detector arrays. For example, they are suitable for single-photon 3D imaging via time-of-flight measurements.

Sensors for Other Spectral Regions

Although the technology of CCD and CMOS sensor chips has been driven to a very high level within several decades, it is essentially limited to silicon. Therefore, they are light-sensitive only for wavelengths roughly below 1 μm. Most devices are used with visible light, some also for the near infrared or for the ultraviolet region.

For infrared imaging at longer wavelengths, one requires different technologies:

  • There are modified kinds of CMOS detectors, where the photodetection is done based on indium gallium arsenide (InGaAs), while the electronic processing is done with traditional silicon-based CMOS technology. Unfortunately, the integration of different semiconductor technologies is difficult, resulting in high cost and a performance which is much reduced e.g. in terms of spatial resolution.
  • For still longer wavelengths, there are sensors based on micro-bolometers, which register slight heating of tiny parts caused by absorption of radiation. Such sensors are used for thermal imaging cameras. They are quite limited in resolution, sensitivity and speed, and are fairly expensive.

Important Parameters of Image Sensors and Their Optimization

Light Sensitivity, Fill Factor and Quantum Efficiency

It is often desirable to achieve sufficient signal strength with a limited amount of light in order to limit the necessary exposure time. Therefore, one tries to obtain a high quantum efficiency of the detection.

The light-sensitive parts of CMOS or CCD chips may have a quite high quantum efficiency, often around 80 or even 90% over the visible spectral range. However, some of the light is often lost because the light-sensitive parts do not cover the full pixel area. That problem of a limited fill factor can be reduced either by minimizing the size of the light-insensitive parts or by properly directing the incident light to the sensitive regions, e.g. using microlens arrays. The latter approach, however, can have detrimental side effects, such as an increased directionality of the sensitivity (the relevance of which depends on the optical design of the camera) and smear effects due to optical cross-talk between different pixels. Certain wedge structures have been developed which are better in that respect.
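A simple worked example (with assumed numbers, not from a real device) shows how the fill factor reduces the effective quantum efficiency and how microlenses can partly recover the loss:

    # Assumed example numbers, not from a real device.
    intrinsic_qe = 0.85   # quantum efficiency of the light-sensitive area
    fill_factor = 0.6     # fraction of the pixel area which is light-sensitive
    microlens_gain = 1.4  # assumed factor by which microlenses improve light collection

    effective_qe = intrinsic_qe * fill_factor
    effective_qe_with_lenses = min(intrinsic_qe, effective_qe * microlens_gain)

    print(f"effective QE without microlenses: {effective_qe:.2f}")              # 0.51
    print(f"effective QE with microlenses:    {effective_qe_with_lenses:.2f}")  # 0.71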

Another approach is back side illumination through a substrate of reduced thickness. That principle has been applied successfully both to CCD and CMOS sensors.

CMOS sensors are no longer necessarily worse in terms of sensitivity than CCD sensors, even though they tend to use a larger fraction of the chip area for non-light-sensitive parts.

Note that the term sensitivity is often erroneously used instead of responsivity. The sensitivity also depends on image noise, which can have different origins:

  • Shot noise related to photon statistics can play a role in sensitive applications. If a detector collects a certain number of carriers within the measurement time on average, there will be an uncertainty (standard deviation) which is the square root of that number.
  • Thermal noise not only causes a dark current (for operation with some bias voltage), but also affects the charge measurement: when the capacitor is reset at the beginning of the measurement period, its charge will not be perfectly defined, but exhibits random thermal fluctuations (reset or kTC noise), which add noise to the measurement result – unless the initial voltage is measured as well and subtracted from the result (which is sometimes done).
  • The charge amplifier may add some further noise, which is partially also thermal noise.
  • There can be systematic deviations between different pixels due to microscopic parameter variations; such fixed pattern noise may largely be eliminated with software after each measurement.

For highest sensitivities, e.g. in astronomy, image sensors often have to be cooled in order to reduce thermal noise. With proper optimization of the whole system, photon noise limited performance can be achieved.
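A simple numerical sketch (with assumed example values) shows how the different noise contributions combine and when the performance becomes photon noise limited:

    import math

    # Assumed example values for one pixel and one exposure, not from a real sensor.
    photoelectrons = 10_000   # mean number of collected photoelectrons
    read_noise = 5            # electrons r.m.s. (amplifier / readout noise)
    dark_current = 20         # electrons per second
    exposure_time = 0.1       # seconds

    shot_noise = math.sqrt(photoelectrons)                # photon shot noise
    dark_noise = math.sqrt(dark_current * exposure_time)  # shot noise of the dark charge
    total_noise = math.sqrt(shot_noise**2 + dark_noise**2 + read_noise**2)

    snr = photoelectrons / total_noise
    print(f"SNR = {snr:.0f}")  # close to sqrt(10000) = 100, i.e., nearly photon noise limited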

Sensor Formats

Image sensors are available with a wide range of formats. Sensors for miniature cameras as used in smartphones are only a few millimeters wide, while an SLR photo camera typically has a sensor with a width of the order of 30 mm. Frequently, the sensors are significantly smaller than the full format size of 36 mm × 24 mm (where the crop factor indicates the reduction in diagonal size), but there are also full-size sensors and even sensors of substantially larger sizes.
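For example, the crop factor of a sensor can be calculated from its dimensions as in the following Python snippet (using an assumed APS-C-like sensor size of 23.6 mm × 15.7 mm as an example):

    import math

    def crop_factor(width_mm, height_mm):
        """Crop factor = diagonal of the 36 mm x 24 mm full format
        divided by the diagonal of the given sensor."""
        full_frame_diagonal = math.hypot(36.0, 24.0)  # about 43.3 mm
        return full_frame_diagonal / math.hypot(width_mm, height_mm)

    print(f"{crop_factor(23.6, 15.7):.2f}")  # about 1.5 for an APS-C-sized sensor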

The ratio of width to height is often 4:3 or 16:9 corresponding to frequently used image formats. However, other formats like 1:1 and 2:1 are also available for special purpose cameras.

Spatial Resolution and Pixel Pitch

The resolution of an image sensor is simply specified by the number of pixels in the horizontal and vertical direction – for example, 1024 × 768 or 1600 × 1200.

The pixel spacing (pixel pitch) in CMOS or CCD sensors is typically somewhere between 2 μm and 30 μm. For example, if a consumer-type photo camera contains an image sensor with 3000 × 2000 pixels, which is 24 mm wide, the pixel spacing is 24 mm / 3000 = 8 μm. (The height and the width of the pixels should normally be identical.) The pixel size can be somewhat smaller than the pixel pitch; not the whole chip area is active area.
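That calculation can be written as a short Python snippet, using the example numbers from above:

    # Pixel pitch from sensor width and pixel count (example values from the text).
    sensor_width_mm = 24.0
    pixels_across = 3000
    pixel_pitch_um = sensor_width_mm / pixels_across * 1000
    print(pixel_pitch_um)  # 8.0 micrometers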

Obviously, the pixel spacing should be small enough to exploit the full resolution potential of the optical part, while on the other hand it does not make sense to make it significantly finer, since that would not only increase the fabrication cost but also unnecessarily increase the amount of data to be handled and possibly also reduce the fill factor and thus the efficiency.

Dynamic Range, Linearity, Overflow Effects

Image sensors which have an integrated analog-to-digital converter (e.g. CMOS sensors) have a dynamic range limited by the number of bits. For example, a 14-bit sensor can deliver 2^14 = 16,384 different intensity values, corresponding to a dynamic range of 42 dB. The actual dynamic range may be smaller if the lowest bits are meaningless. For sensor chips with analog output (CCDs), the dynamic range is limited by noise.
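The following short Python snippet reproduces that calculation (using the same 10 · log10 convention as the number quoted above):

    import math

    bits = 14
    levels = 2 ** bits                          # 16384 distinct intensity values
    dynamic_range_db = 10 * math.log10(levels)  # 10*log10 convention, as in the text
    print(levels, round(dynamic_range_db, 1))   # 16384, 42.1 dB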

Depending on the details of the electronics, CCD or CMOS chips can be highly linear within a certain range of light intensities, or exhibit substantial nonlinearities. The type and quality of the used charge amplifier can be important for that aspect.

For excessive illumination beyond the full well capacity of a pixel, there can be blooming effects caused by an overflow of carriers to neighboring pixels.

Cross-talk

Cross-talk means that light hitting one pixel also produces some response in other pixels. This may happen in the form of optical cross-talk, e.g. by scattering of light at microlenses. Also, cross-talk can occur in the electronics, particularly at high light levels.

Pixel Defects

Particularly for CMOS sensors, but also for CCD sensors, it can happen that certain pixels are defective, e.g. always delivering the maximum signal even without incident light, or always delivering zero signal. That may not always be immediately noticed, but even consumer cameras should of course not exhibit a substantial number of dead pixels.
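As a simple illustration (a minimal sketch with assumed test images, not a production-grade calibration routine), hot and dead pixels can be located by inspecting a dark frame and a uniformly illuminated frame:

    import numpy as np

    def find_defective_pixels(dark_frame, flat_frame, hot_threshold=0.5, dead_fraction=0.1):
        """Return boolean maps of hot pixels (high signal without light) and
        dead pixels (little signal under uniform illumination).
        The frames are assumed to be normalized to the range 0..1."""
        hot = dark_frame > hot_threshold
        dead = flat_frame < dead_fraction * np.median(flat_frame)
        return hot, dead

    # Assumed example frames: a dark exposure and a uniformly illuminated exposure.
    rng = np.random.default_rng(0)
    dark = rng.normal(0.01, 0.005, (100, 100)).clip(0, 1)
    flat = rng.normal(0.8, 0.02, (100, 100)).clip(0, 1)
    dark[10, 20] = 1.0   # simulate a hot pixel
    flat[30, 40] = 0.0   # simulate a dead pixel

    hot, dead = find_defective_pixels(dark, flat)
    print(np.argwhere(hot), np.argwhere(dead))  # [[10 20]] and [[30 40]]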

Readout Time and Frame Rate

The time for readout of a complete image frame can be substantial, particularly for a high-resolution CCD sensor with many millions of pixels. That limits the possible frame rate of a video camera, for example. Therefore, the multi-tap technique has been developed for CCD sensors, where different parts of the image are transmitted in parallel through two, four or even more outputs. However, this can lead to problems because one then requires multiple charge amplifiers and A/D converters, which may somewhat deviate in performance parameters, producing image artifacts.
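A rough estimate (with assumed example numbers) of how the readout time limits the frame rate, and how much multiple taps help, is given by the following Python snippet:

    # Rough estimate of CCD readout time and maximum frame rate (assumed example values).
    pixels = 4000 * 3000     # 12-megapixel sensor
    pixel_clock_hz = 40e6    # 40 MHz readout clock per output tap

    for taps in (1, 2, 4):
        readout_time = pixels / (taps * pixel_clock_hz)
        print(f"{taps} tap(s): {readout_time * 1e3:.0f} ms readout, "
              f"up to {1 / readout_time:.1f} frames/s")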

CMOS sensors are generally faster, and there are versions reaching several thousand images per second.

Compatibility with Objectives

For a photo camera, for example, it is important that the image sensor matches the photographic objective used. In particular, objectives are optimized for a certain image sensor format. Also, the incidence angle of light on the sensor can depend on the objective, and some sensors (e.g. with microlenses) may not work well with larger incidence angles; they should then be used in conjunction with telecentric lenses.


See also: cameras, photo cameras, imaging, photodiode arrays, focal plane arrays
