Electronic Imaging Detectors Engineering Essay

An imaging detector is a key component in tomography because it determines which structures can be resolved, and at what level specimen fluorescence can be detected, when energy waves are used to section the specimen and the result is captured by the sensor or photosensor. These devices are generally divided into two categories: tube-type and solid-state detectors. The vidicon tube camera (Figure 2.1) is an example of a tube-type detector.

Figure 2.1 Vidicon Camera Tube

An electron beam scans a photosensitive surface on which charge is stored. This approach has now been overshadowed by modern solid-state detectors.

Solid-state detectors consist of a dense matrix of photodiodes incorporating charge storage regions. There are several variations of this basic concept, including the charge-coupled device (CCD), the charge-injection device (CID), and the complementary metal-oxide-semiconductor (CMOS) sensor. All of these detectors contain a silicon photodiode sensor, also known as a pixel, which is coupled with a charge storage region that is in turn connected to an amplifier that reads out the accumulated charge. Both CMOS and CCD chips sense light through similar mechanisms, taking advantage of the photoelectric effect, which occurs when photons interact with crystallized silicon to promote electrons from the valence band into the conduction band.

The photodiode, often referred to as a pixel, is the key element of a digital image sensor. Sensitivity is determined by a combination of the maximum charge that can be accumulated by the photodiode, the conversion efficiency of incident photons to electrons, and the ability of the device to accumulate the charge in a confined region without leakage or spillover.
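As a rough illustration of how these factors interact, the sketch below models a single pixel under stated assumptions: the quantum efficiency, full-well capacity, and dark-current figures are hypothetical and do not describe any particular device.

# A minimal sketch of pixel signal formation, assuming idealized photon
# statistics. All numeric values are hypothetical, for illustration only.
import random

random.seed(0)

quantum_efficiency = 0.6   # fraction of incident photons converted to electrons
full_well = 20_000         # maximum charge (electrons) the pixel can hold
dark_current = 50          # thermally generated electrons per exposure

def expose(incident_photons: int) -> int:
    """Return the charge (in electrons) accumulated during one exposure."""
    # Photoelectric conversion: each photon frees an electron with
    # probability equal to the quantum efficiency.
    photoelectrons = sum(
        1 for _ in range(incident_photons) if random.random() < quantum_efficiency
    )
    # Charge beyond the full-well capacity spills over and is lost,
    # which limits the bright end of the dynamic range.
    return min(photoelectrons + dark_current, full_well)

for photons in (1_000, 10_000, 100_000):
    print(photons, "photons ->", expose(photons), "electrons")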

CMOS and CCD integrated circuits are inherently monochromatic (black and white) devices, responding only to the total number of electrons accumulated in the photodiodes, not to the color of light giving rise to their release from the silicon substrate. Color is detected either by passing the incident light through a sequential series of red, green, and blue filters, or with miniature transparent polymeric thin-film filters that are deposited in a mosaic pattern over the pixel array.

The charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) sensor are currently the two main technologies in the digital imaging market, each with its own strengths and weaknesses. While there are many technical arguments for one over the other, both technologies and their markets continue to mature.

Charge-Coupled Device (CCD)

History

On 17 October 1969, the basic structure, principles of operation, and fabrication physics of the CCD were first drawn up by George Smith and Willard Boyle, and in the 1970s Bell Laboratories built the first solid-state video camera incorporating a CCD. These two events had a dramatic influence on the development and evolution of CCD imaging sensors and technologies, which have since found their way into the majority of electronic image-acquisition systems. Astronomy would never be the same, for the CCD was about to revolutionize astronomical instrumentation much as film did nearly 100 years earlier. Within a few years the CCD became the sensor of choice at all major observatories. [3] Astronomical CCD Observing and Reduction Techniques, ASP Conference Series, Vol. 23, 1992, Steve B. Howell, ed.

Anatomy

Charge-coupled devices (CCDs) are silicon-based integrated circuits consisting of a dense matrix of photodiodes that operate by converting light energy, in the form of photons, into an electronic charge. [4] Electrons are generated by the interaction between silicon atoms and colliding photons; these electrons are stored in a potential well, transferred across the chip through shift registers, and then passed to an output amplifier. Below is a schematic diagram (Figure 2.2) of the CCD showing the various components and the interactions described above.

Figure 2.2 Anatomy of a Charge-Coupled Device (CCD)

The main architectural feature of a CCD is a vast array of serial shift registers constructed with a vertically stacked conductive layer of doped polysilicon separated from a silicon semiconductor substrate by an insulating thin film of silicon dioxide (Figure 2.2). [4] CCD versus CMOS - has CCD imaging come to an end?, 2001, N. Blanc

Figure 2.3 CCD Photodiode Array Integrated Circuit

This image shows how pixels (photosites) are arranged in rows and columns on a grid. As light hits the photosensitive area, sometimes called a photodiode, photogate, or photocapacitor, electrons are generated, accumulated, and held in a potential well. The amount of accumulated charge depends on the intensity of the light and the type of photodetector. The accumulated, or coupled, charge (hence the term charge-coupled device) is transferred row by row, in parallel, into the serial register, where it is moved down the column to the output amplifier, as illustrated by the old-fashioned fire department's bucket-brigade analogy below (Figure 2.4) and in the sketch that follows it.

Figure 2.4 Bucket Brigade CCD Analogy
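To make the bucket-brigade transfer concrete, here is a minimal sketch of the readout sequence, assuming an idealized sensor with perfect charge-transfer efficiency and a simple linear output amplifier; the gain and offset values are hypothetical.

# A minimal sketch of CCD readout: rows shift in parallel into the serial
# register, then packets shift serially to the output amplifier.
import numpy as np

rng = np.random.default_rng(seed=0)

rows, cols = 4, 5
full_well = 50_000                                       # electrons per pixel
pixels = rng.integers(0, full_well, size=(rows, cols))   # accumulated charge

gain = 0.25     # digital numbers per electron (hypothetical)
offset = 100    # amplifier bias level in digital numbers (hypothetical)

image = np.zeros((rows, cols))

for shift in range(rows):
    # Parallel shift: the bottom row drops into the serial register and
    # every remaining row moves one step toward it.
    serial_register = pixels[-1].copy()
    pixels = np.roll(pixels, 1, axis=0)
    pixels[0] = 0

    # Serial shift: charge packets pass one-by-one to the output amplifier,
    # like buckets handed down a fire-brigade line.
    for col in range(cols):
        charge = serial_register[-1]
        serial_register = np.roll(serial_register, 1)
        serial_register[0] = 0
        image[rows - 1 - shift, cols - 1 - col] = gain * charge + offset

print(image)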

Charge Injection Device (CID)

History

In the 1970s the basic structure and concept of the CID was first drawn up by scientists at the General Electric Company. While working to devise a semiconductor memory chip that exploited the photosensitive characteristics of silicon, they developed a simple X,Y-addressable array of photosensitive capacitor elements and produced the first CID camera in 1972. [6] CID overview provided by: CID Technology, 101 Commerce Blvd, Liverpool, NY 13088 (315) 451-9421

Anatomy

Charge-injection devices (CIDs) are silicon-based integrated circuits consisting of arrays in which each pixel is individually addressed by electrically indexing its row and column electrodes. Unlike in a CCD, where the collected charge is transferred out of the pixel during readout, charge is not transferred from site to site in CID arrays. Instead, a displacement current proportional to the stored signal charge is read when charge "packets" are shifted between capacitors within individually selected pixels. [7] This displacement current is then amplified, converted into a voltage, and fed to the output in the form of a digitised signal. The charge remains intact in the pixel after the output is determined and is reset by momentarily switching the row and column electrodes to ground, thus releasing the charge back to the substrate. This is called non-destructive readout using charge injection. By suspending the charge injection, the user initiates "multiple-frame integration" (time-lapse exposure) and can view the image until the optimum exposure develops. Integration may proceed for milliseconds or, with the addition of sensor cooling to retard the accumulation of thermally generated dark current, for up to several hours. [7] The non-destructive readout capability of CID cameras makes it possible to introduce a high level of exposure control, and controlled integration is useful for scientific and photographic applications, especially in astronomy. The image below (Figure 2.5) shows a SpectraCAM86, which uses the CID principle.

Figure 2.5 SpectraCAM86, purged camera head [7]
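The following sketch illustrates the idea of non-destructive readout with multiple-frame integration. It is only a behavioural model under stated assumptions: the photon rate, read noise, and target exposure level are hypothetical, not figures for any real CID camera.

# A minimal sketch of CID-style non-destructive readout for a single pixel
# that keeps integrating charge until the optimum exposure is reached.
import random

random.seed(1)

photon_rate = 800          # electrons accumulated per readout interval
read_noise = 5.0           # RMS noise of each non-destructive read (e-)
target_level = 10_000      # charge regarded as the optimum exposure (e-)

charge = 0.0
frame = 0
while charge < target_level:
    frame += 1
    charge += photon_rate * random.uniform(0.8, 1.2)   # keep integrating light

    # Non-destructive read: sense the displacement current without
    # disturbing the stored charge packet, so integration can continue.
    reading = charge + random.gauss(0.0, read_noise)
    print(f"frame {frame}: ~{reading:.0f} e-")

# Charge injection: reset the pixel by releasing its charge to the substrate.
charge = 0.0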

Complementary Metal Oxide Semiconductor (CMOS)

The word "complementary" refers to the fact that the chip design uses pairs of transistors for logic functions, only one of which is switched on at any time.

The phrase "metal-oxide-semiconductor" refures to the field effect transistors having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of a semiconductor material. The word metal is used for the modern descendants of the original process,which now uses different metals and polysilicon.

History

The CMOS image sensor was first invented in 1968, in the form of a silicon array combined with an "active buffer" transistor per pixel. This new way of fabricating image sensors on standard integrated-circuit lines enabled analogue and digital electronics on-chip. Below (Figure 2.6) is an example of a simple CMOS design performing amplification and charge detection in each pixel, which enables direct signal buffering and random-access readout of the array. This simple design can be further upgraded with timing and control electronics to provide a fully digital interface with per-pixel analogue-to-digital conversion.

Figure 2.6: Standard '3-T' pixel layout. The Reset transistor 'R' clears the pixel of integrated charge, the Source Follower transistor 'SF' amplifies/buffers the signal, and the Row Select transistor 'RS' connects the pixel output to the column bus.

Unfortunately, all this intelligence comes at a price. Introducing extra components and capabilities on-chip and in-pixel creates complications in the form of noise (fixed-pattern noise). This noise arises from differences in transistor performance at the pixel level and per column. This fixed-pattern noise, together with a large pixel size relative to the CCD, meant that CMOS sensors were initially rejected as scientific imaging devices. However, the limitations of the CCD have become more apparent as demand for high-speed, large-area, low-power, and low-cost imaging has increased since the 1990s. The performance of CMOS APS devices has been improving steadily over this period, and they are now gaining popularity as they can provide solutions for both scientific applications and industrial imaging. Also, an exciting prospect for the future is the ability to fabricate large-scale imagers, without loss of performance, whose size is limited only by the CMOS wafer scale (20"). []
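To illustrate what fixed-pattern noise looks like, here is a minimal sketch, assuming each pixel's amplifier has a slightly different gain and offset and each column amplifier has its own offset; all spread values are hypothetical. Subtracting a dark frame removes the offset component, which is one common correction.

# A minimal sketch of fixed-pattern noise from per-pixel gain/offset
# variation and per-column amplifier offsets. Spreads are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)
rows, cols = 6, 8

# Fixed (time-invariant) non-uniformities baked in at fabrication:
pixel_gain = rng.normal(1.0, 0.02, size=(rows, cols))    # per-pixel gain
pixel_offset = rng.normal(0.0, 5.0, size=(rows, cols))   # per-pixel offset (DN)
column_offset = rng.normal(0.0, 10.0, size=(1, cols))    # per-column offset

def read_out(scene: np.ndarray) -> np.ndarray:
    """Apply the sensor's fixed non-uniformities to an ideal scene."""
    return pixel_gain * scene + pixel_offset + column_offset

flat_scene = np.full((rows, cols), 100.0)   # uniform illumination
dark_scene = np.zeros((rows, cols))         # shutter closed

raw = read_out(flat_scene)
dark = read_out(dark_scene)

# Dark-frame subtraction removes the offset part of the fixed pattern;
# the residual spread is due to gain non-uniformity alone.
corrected = raw - dark
print(f"raw spread:       {raw.std():.2f}")
print(f"corrected spread: {corrected.std():.2f}")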

Anatomy

The complementary metal-oxide-semiconductor (CMOS) sensor uses a combination of p-type and n-type metal-oxide-semiconductor field-effect transistors (MOSFETs). Presented in Figure 2.7 is a three-dimensional cutaway drawing of a typical CMOS active pixel sensor illustrating the photosensitive area (photodiode), buses, microlens, Bayer filter, and three support transistors. The active pixel sensor element in a CMOS image sensor contains an amplifier transistor, which represents the input device of what is generally termed a source follower.

Figure 2.7 Anatomy of the Active Pixel Sensor Photodiode

A major advantage that CMOS image sensors enjoy over their CCD counterparts is the ability to integrate a number of processing and control functions, which lie beyond the primary task of photon collection, directly onto the sensor integrated circuit. These features generally include timing logic, exposure control, analog-to-digital conversion, shuttering, white balance, gain adjustment, and initial image processing algorithms. In order to perform all of these functions, the CMOS integrated circuit architecture more closely resembles that of a random-access memory cell rather than a simple photodiode array. The most popular CMOS designs are built around active pixel sensor (APS) technology in which both the photodiode and readout amplifier are incorporated into each pixel. This enables the charge accumulated by the photodiode to be converted into an amplified voltage inside the pixel and then transferred in sequential rows and columns to the analog signal-processing portion of the chip.

This design enables signals from each pixel in the array to be read with simple x,y addressing techniques, which is not possible with current CCD technology.

In operation, the first step toward image capture is to initialize the reset transistor in order to drain the charge from the photosensitive region and reverse-bias the photodiode. Next, the integration period begins, and light interacting with the photodiode region of the pixel produces electrons, which are stored in the silicon potential well lying beneath the surface. When the integration period has finished, the row-select transistor is switched on, connecting the amplifier transistor in the selected pixel to its load to form a source follower. The electron charge in the photodiode is thus converted into a voltage by the source-follower operation. The resulting voltage appears on the column bus and can be detected by the sense amplifier. This cycle is then repeated to read out every row in the sensor in order to produce an image.
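The sketch below walks through this reset/integrate/read cycle for a small array, assuming a highly simplified 3-T pixel model; the conversion gain and voltage levels are hypothetical, not taken from any datasheet. It also shows the row-by-row addressing described above.

# A minimal behavioural sketch of a 3-T active-pixel readout cycle.
import numpy as np

rng = np.random.default_rng(seed=1)
rows, cols = 3, 4

v_reset = 3.3               # photodiode voltage after reset (V), hypothetical
conversion_gain = 50e-6     # volts of photodiode swing per electron

# Step 1: reset - the reset transistor drains and reverse-biases each pixel.
photodiode_v = np.full((rows, cols), v_reset)

# Step 2: integration - photogenerated electrons discharge the photodiode.
photoelectrons = rng.integers(0, 30_000, size=(rows, cols))
photodiode_v -= conversion_gain * photoelectrons

# Step 3: readout - select one row at a time; the source follower buffers
# each pixel voltage onto its column bus (modelled here as a copy).
image = np.zeros((rows, cols))
for row in range(rows):                 # row-select transistor switched on
    column_bus = photodiode_v[row]      # source followers drive the columns
    image[row] = v_reset - column_bus   # sense amps measure the signal swing

print(image)  # larger values = more light collected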

The inset in Figure 2.7 reveals a high-magnification view of the filters and microlens array. Also included on the illustrated integrated circuit is analog signal-processing circuitry that collects and interprets signals generated by the photodiode array. These signals are then sent to the analog-to-digital conversion circuits, located adjacent to the photodiode array on the upper portion of the chip.

A closer look at the photodiode array reveals a sequential pattern of red, green, and blue filters that are arranged in a mosaic pattern named after Kodak engineer Bryce E. Bayer. This color filter array (a Bayer filter pattern) is designed to capture color information from broad-bandwidth incident illumination arriving from an optical lens system. The filters are arranged in quartets ordered in successive rows that alternate either red and green or blue and green filters.
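As a quick illustration of the quartet layout, the sketch below builds a Bayer mosaic for a small sensor; the particular row ordering (green/red rows alternating with blue/green rows) is one common convention, assumed here for illustration.

# A minimal sketch of a Bayer color-filter mosaic.
rows, cols = 4, 6

def bayer_filter(row: int, col: int) -> str:
    """Return the filter color over the pixel at (row, col)."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"   # green/red row
    else:
        return "B" if col % 2 == 0 else "G"   # blue/green row

mosaic = [[bayer_filter(r, c) for c in range(cols)] for r in range(rows)]
for line in mosaic:
    print(" ".join(line))
# Note that half of all filter sites are green.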

The heavy emphasis placed upon green filters is due to human visual response, which reaches a maximum sensitivity in the 550-nanometer (green) wavelength region of the visible spectrum.

The shape of the miniature lens elements approaches that of a convex meniscus lens and serves to focus incident light directly into the photosensitive area of the photodiode. Beneath the Bayer filter and microlens arrays are the photodiodes.

In photomicrographs of the pixel array, the photon-collection (photosensitive) and support-transistor areas of each pixel can be distinguished: the support transistors and bus lines occupy roughly 70 percent of the pixel, while the remaining 30 percent represents the photosensitive part. Because such a small portion of the photodiode is actually capable of absorbing photons to generate charge, the fill factor, or aperture, of such a CMOS chip represents only 30 percent of the total photodiode array surface area. The consequence is a significant loss in sensitivity and a corresponding reduction in signal-to-noise ratio, leading to a limited dynamic range. Fill-factor ratios vary from device to device, but in general they range from 30 to 80 percent of the pixel area in CMOS sensors.
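A back-of-envelope calculation shows why the fill factor matters; the quantum-efficiency figure below is hypothetical.

# Effective sensitivity = fill factor x quantum efficiency (hypothetical QE).
quantum_efficiency = 0.5   # fraction of photons converted where light lands
for fill_factor in (0.30, 0.55, 0.80):
    effective = fill_factor * quantum_efficiency
    print(f"fill factor {fill_factor:.0%} -> effective sensitivity {effective:.0%}")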

Compounding the reduced fill factor problem is the wavelength-dependent nature of photon absorption, a term properly referred to as the quantum efficiency of CMOS and CCD image sensors. Three primary mechanisms operate to hamper photon collection by the photosensitive area: absorption, reflection, and transmission. As discussed above, over 70 percent of the photodiode area may be shielded by transistors and stacked or interleaved metallic bus lines, which are optically opaque and absorb or reflect a majority of the incident photons colliding with the structures. These stacked layers of metal can also lead to undesirable effects such as vignetting, pixel crosstalk, light scattering, and diffraction.

Many CMOS sensors have a yellow polyimide coating applied during fabrication that absorbs a significant portion of the blue spectrum before these photons can reach the photodiode region. Reducing or minimizing the use of polysilicon and polyimide (or polyamide) layers is a primary concern in optimizing quantum efficiency in these image sensors.

CMOS pixel architectures are split into two types: the passive pixel sensor (PPS) and the active pixel sensor (APS).

Passive Pixel Sensors (PPS)

The PPS is composed of a photo-detector and a switching metal oxide semiconductor field effect transistor (MOSFET).

Active Pixel Sensors

An active pixel sensor (APS) is an imaging sensor consisting of an integrated circuit containing an array of pixel sensors and other components. The active pixel sensor descended from the MOS active-pixel imaging sensors created in the 1960s.

One comparative study measured the linearity response, modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) of both sensor types in order to determine whether the specified improvement in read noise leads to better imaging performance. The study evaluated APS and PPS CMOS detectors with similar designs, except for the fill factor and signal-amplification mechanism. Counter to expectations, the DQE of the APS was actually lower than that of the PPS. [6]

Ultrasound-Modulated Optical Tomography (UMOT)

In ultrasound-modulated optical tomography, a laser beam is shone on a biological-tissue sample and an ultrasonic beam is focused into the tissue to modulate the transmitted light. Light passing through the ultrasonic beam is modulated (also called tagged, or encoded) by the ultrasound. Because the location of the ultrasonic beam is known, the origin of the tagged light can be determined. By detecting the ultrasound-tagged light, the optical properties (absorption, scattering) in the ultrasonic column can be deduced. In this form of tomography, the imaging signal is the intensity of the modulated light. At each position of the ultrasound, a measurement is made to detect the signal; scanning the ultrasound yields position-dependent signal intensities, from which an image is obtained. At the position of an object, the signal intensity drops because of the absorption or scattering of the object. The scattered light from the tissue forms speckles in space, and each speckle spot is a coherence area; each bright or dark spot contains both modulated and unmodulated light. The aim of signal detection is therefore to extract the modulated components from the unmodulated background. The modulated components have frequencies equal to the ultrasonic frequency or its harmonics.
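As a toy illustration of extracting the tagged component, the sketch below synthesizes a detector signal containing a DC (unmodulated) background plus a small component at an assumed 1 MHz ultrasound frequency, then recovers the modulated amplitude with an FFT. All signal parameters are made up for the example.

# A minimal sketch of extracting the ultrasound-modulated component from a
# detected light signal. Frequencies and amplitudes are hypothetical.
import numpy as np

fs = 20e6      # sampling rate (Hz)
f_us = 1e6     # assumed ultrasound frequency (Hz)
n = 4000       # chosen so f_us falls exactly on an FFT bin (fs/n = 5 kHz)
t = np.arange(n) / fs

background = 10.0                                  # unmodulated light (DC)
modulated = 0.05 * np.cos(2 * np.pi * f_us * t)    # ultrasound-tagged light
noise = np.random.default_rng(0).normal(0.0, 0.01, n)
signal = background + modulated + noise

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(n, d=1 / fs)

# The imaging signal is the spectral amplitude at the ultrasound frequency.
bin_us = int(np.argmin(np.abs(freqs - f_us)))
amplitude = 2 * np.abs(spectrum[bin_us]) / n
print(f"recovered modulation depth: {amplitude:.3f} (true value: 0.050)")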

T - Ultrasound Transducer
FG - Function Generator
AMP - Power Amplifier
PC - Personal Computer

All of the imaging techniques applied in ultrasound-modulated optical tomography are based on the detection of ultrasound-tagged light. Among these techniques, parallel detection is the most efficient. However, because parallel detection needs four or two acquisitions to obtain the imaging information for one location in the ultrasonic column, and each acquisition has to collect sufficient photons to maintain an adequate SNR, the long acquisition time involved may lead to speckle decorrelation.

4.3 Conclusions

Using the speckle-contrast mechanism in ultrasound-modulated tomography, 2D images of biological-tissue samples up to 25 mm thick were obtained. The technique is superior to speckle-contrast-based purely optical imaging because it can discriminate absorbing objects by taking advantage of ultrasonic resolution. Comparison showed that images obtained with this technique had better contrast than those obtained with parallel detection, and that speckle decorrelation was less significant in the detection. In addition, the present technique was simple to set up. This technique, combining speckle-contrast detection with ultrasonic modulation, provides an efficient method for ultrasound-modulated tomography of biological tissues. [5] Ultrasound-Modulated Optical Tomography for Biomedical Applications, Jun Li.