Synthetic Aperture Radar (SAR) is an active remote sensing system with applications in oceanography, agriculture, ecology, geology, hydrology, the military, and more. SAR systems are mounted on an aircraft or satellite that moves in a particular direction at a particular speed, and this movement is used to increase the effective aperture of the SAR system. The main reason SAR systems have such diverse applications is their ability to take images in all weather conditions and in darkness. As SAR technology has improved, larger areas are being imaged and image resolution has increased, so larger images must be transmitted and stored. Due to the limited storage and/or down-link capacity on the aircraft or satellite, the data rate must be reduced. The data rate is proportional to the pulse repetition frequency (PRF), the number of samples taken in each echo, and the number of quantization bits.
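The data-rate relation above can be sketched numerically. The parameter values below are illustrative placeholders, not figures for any particular SAR system:

```python
# Illustrative SAR raw-data-rate estimate; the values below are
# placeholders, not parameters of any specific system.
def raw_data_rate(prf_hz, samples_per_echo, bits_per_sample):
    """Data rate [bit/s] = PRF x samples per echo x quantization bits."""
    return prf_hz * samples_per_echo * bits_per_sample

rate = raw_data_rate(prf_hz=1500, samples_per_echo=8192, bits_per_sample=8)
print(f"{rate / 1e6:.1f} Mbit/s")  # 98.3 Mbit/s for these example values
```

Reducing any one factor lowers the rate, but, as noted below, at the cost of system performance.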
It is possible to reduce the data rate by changing these parameters, but doing so degrades system performance. For example, reducing the PRF causes higher azimuth ambiguities, reducing the system bandwidth decreases the range resolution, and decreasing the number of quantization bits increases digitization noise.
The remaining choice is to compress the SAR image. SAR data is inherently complex, but it is frequently converted to real data for interpretation by human observers or machine algorithms. For interferometric purposes, however, the phase information is very important and must be preserved accurately.
Two approaches can be used to compress complex data. The first is to compress the magnitude and phase separately. In the second approach, the frequency spectrum of the image is shifted so that all frequencies are positive. After the inverse Fourier transform, the real part of the complex data carries both the phase and magnitude information of the original complex image. This real data can be compressed as usual, and after decompression an inverse procedure recovers the complex image.
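The second approach can be sketched for a one-dimensional complex signal. This is a minimal illustration, assuming band-limited data and NumPy's FFT conventions, not the exact procedure of any published compressor:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
# Band-limited complex "SAR" data: random spectrum with the Nyquist bin
# zeroed so the frequency shift below causes no wrap-around overlap.
C = rng.standard_normal(N) + 1j * rng.standard_normal(N)
C[N // 2] = 0.0
c = np.fft.ifft(C)

# Upsample by 2: zero-pad the centred spectrum on both sides.
Cs = np.fft.fftshift(np.fft.fft(c))
c_up = 2 * np.fft.ifft(np.fft.ifftshift(np.pad(Cs, N // 2)))

# Shift the spectrum to positive frequencies (carrier at fs/4) and keep
# only the real part; this real signal holds both magnitude and phase.
n = np.arange(2 * N)
real_signal = np.real(c_up * np.exp(1j * np.pi * n / 2))

# Inverse procedure: demodulate, keep the original band, transform back.
A = np.fft.fftshift(np.fft.fft(real_signal * np.exp(-1j * np.pi * n / 2)))
c_rec = np.fft.ifft(np.fft.ifftshift(A[N // 2:3 * N // 2]))
print(np.allclose(c_rec, c))  # True: the complex data survives the round trip
```

The real array `real_signal` can then be fed to any conventional (real-valued) image coder.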
SAR images differ in nature from optical images. The differences can be summarized as follows:
SAR images are larger in volume, typically consisting of 32-bit complex pixels with large dimensions.
The entropy of SAR images is higher than that of optical images.
SAR images carry information in low frequency bands as well as high frequency bands, whereas optical images are generally low-pass, with noise concentrated in high frequency regions.
SAR images have larger dynamic range than optical images.
Due to these differences, classical image compression techniques do not perform as well when applied to SAR images.
More appropriate approaches, which take these differences into account, have to be used for SAR image compression. The high entropy and large dynamic range of SAR images result in very low compression ratios when lossless compression techniques are used. Thus, if a small amount of information loss is acceptable, lossy compression techniques can be used.
Since some of the information is lost, it is important to decide which features of the image should be preserved. These features can be one of the following: certain pixels of point targets, edges, or areas and regions of common texture.
SAR designs and associated application systems have grown exponentially since the 1950s, when Carl Wiley of the Goodyear Aircraft Corporation observed that a one-to-one correspondence exists between the along-track coordinate of a reflecting object and the instantaneous Doppler shift of the signal reflected to the radar by that object.
TABLE 1- HIGHLIGHTS OF SAR HISTORY 
This observation implied that a frequency analysis of the reflected signals could enable finer along-track resolution than that permitted by the along-track width of the physical beam itself, which governed the performance of the real aperture radar (RAR) designs of that era.
This "Doppler beam-sharpening" concept was taken up by Goodyear Corporation and by a group at the University of Illinois. One major challenge was the implementation of a practical data processor that could accept wide-band signals from a storage device and carry out the necessary Doppler-frequency analysis at each resolvable picture element (pixel).
The University of Illinois group carried out an experimental demonstration of the beam-sharpening concept in 1952, using an airborne coherent X-band pulsed radar, "boxcar" circuitry, a tape recorder, and a frequency analyzer.
Table 1 provides a brief overview of SAR development and its timeline. Industrial and military developments using airborne platforms continued at Goodyear, Hughes, and Westinghouse. The Jet Propulsion Laboratory (JPL), the University of Michigan, the Environmental Research Institute of Michigan (ERIM), Sandia Laboratories, and others also began to explore this new technology. In 1974, engineers at JPL formed an alliance with a group of international ocean scientists led by the National Oceanic and Atmospheric Administration (NOAA) to determine whether an ocean application satellite featuring a space-based SAR could be achieved. Until this point, the major emphasis of space-based remote sensing had been on land applications using visible and infrared sensors. The resulting NASA/NOAA alliance assembled a multi-agency, interdisciplinary group of engineers and scientists that focused on ocean and ice applications using active and passive microwave sensors that could collect data day or night with a general disregard for cloud obscuration. From the very beginning, when SEASAT was but a future mission study, this group met regularly as the SEASAT User Working Group, chaired by NOAA's Dr. John Apel with JPL's Dr. Alden Loomis serving as his deputy and NASA's coordinator. The SEASAT User Working Group soon expanded to include international participation and continued seamlessly through the program, working diligently to gain the support and funding for such a mission, to define and guide the mission and systems development, and to establish the experiments that would validate the program. SEASAT [Lame and Born, 1982] operated successfully from late June to early October 1978, when it experienced a massive short circuit in the power system.
SEASAT was followed by the Shuttle Imaging Radar-A (SIR-A) and Shuttle Imaging Radar-B (SIR-B), flown in 1981 and 1984, respectively [Elachi et al., 1986; Way and Smith, 1991]. Both the SIR-A and SIR-B radars were variations on the SEASAT radar, operating at L-band and HH (horizontal transmit, horizontal receive) polarization. SIR-B had the added capability of operating at different incident angles (the angle of incidence is defined as the angle between the radar line-of-sight and the local vertical at the point where the radar intersects the earth or ocean surface). With the exception of the Soviet 1870 SAR (not widely distributed), the 1980s saw only Space Shuttle based, SEASAT-derivative, spaceborne SAR activity. The 1990s witnessed a significant expansion of SAR missions with the launch of five earth-oriented SAR satellites along with two more Shuttle Imaging Radar missions, as well as the pioneering interplanetary use of the Magellan SAR to map Venus. The satellite systems ALMAZ [Li and Raney, 1991], European Remote Sensing [ERS-1, ESA, 1991], the Japanese Earth Resources Satellite (JERS)-1 [Nemoto et al., 1991], ERS-2 [ESA, 1995], and RADARSAT-1 [Raney et al., 1991] each operated at a single frequency and single polarization, like SEASAT. ALMAZ and RADARSAT-1 had the added ability to operate at different incident angles. RADARSAT-1 also has a frequently used ScanSAR mode, where the coverage swath extends up to 500 km. One of the most advanced SAR systems, the Shuttle Imaging Radar-C/X-band SAR (SIR-C/X-SAR), was a joint NASA/German Space Agency/Italian Space Agency mission, flown in April and October 1994 on Endeavour [Jordan et al., 1991]. The system could be operated simultaneously at three frequencies (L, C, and X), with the C- and L-band having the ability to alternately transmit and receive at both horizontal and vertical polarization.
By collecting a near-simultaneous and mutually coherent version of the scattered field in a minimum basis set of polarizations, this quadrature polarimetry or "fully polarimetric" capability allows a more complete characterization of the target's scattering characteristics within the illuminated resolution cell [Zebker and Van Zyl, 1991]. The C- and X-band portions of the SIR-C radar were flown again in 2000 for the Shuttle Radar Topography Mission (SRTM). During this flight, a second receiving antenna was placed at the end of a 60-m mast, extended perpendicular to the main radar antenna. The purpose of the mast antenna was to provide a second receiving point in space for each radar pulse. The slight variations in phase between the receipt of the radar pulses at the two antennas were processed into a height measurement of the reflecting point on Earth's land surface [Rosen et al., 2000]. Future SAR missions are expected to provide enhanced capabilities, with radars that can be operated in several collection modes. ESA's ENVISAT, the follow-on to ERS-1 and ERS-2, was placed in orbit in March 2002. Like ERS, ENVISAT's ASAR (Advanced SAR) radar operates at C-band, with the added capability to collect data in selectable pairs of polarizations, as well as operate in a wide-swath (> 400 km) mode. The ENVISAT ASAR has polarimetric diversity, but is not capable of quadrature polarimetry. The upcoming RADARSAT-2 and Japan's ALOS Phased Array L-band SAR (PALSAR) instruments will both have fully polarimetric and ScanSAR operating modes. Table 1.2 provides a list of the earth orbital SAR missions and some of their characteristics. Additional information on these systems and their image products is given in Appendix A of the Manual.
SEASAT began the evolution of space-based SAR that continues to this day. The active international participation of the SEASAT User Working Group led to international cooperation in the form of data collection and processing facilities and experiments with ground truth. In retrospect, the most important result of these cooperative interactions was the rapid expansion of SAR technology in space. As a result, more than two decades of data collection have provided a rich data source, with each system adding unique characteristics in terms of applications, radar design, and mission data collection parameters.
PRINCIPLE OF SAR IMAGING
Assume a planar geometry as depicted in Figure 1 a), given in the coordinates x and y. A radar sensor at a known location on the x-axis transmits a short pulse and receives the echoes reflected by objects in the scene. The SAR system stores the received signals in a two-dimensional data array, parameterized by radar position and echo delay, which is denoted by t in the figure. The proportional relation between the delay t and the object distance r allows the distance parameter r to be used instead of t. The echoes from all scene elements superpose and form one recorded data column, which contains a range profile.
Figure 1: Two-Dimensional SAR Geometry: a) data column of a distributed scene for a fixed antenna, b) data of point targets for a moving antenna.
Below, we consider the simplified scene depicted in Figure 1 b). Three point targets are given at different positions, and the antenna moves in the direction of the x-axis. Since the velocity of the radar signals is enormous compared to the sensor velocity, the geometry can be assumed to be stable for one transmit-receive cycle; effectively, the antenna moves in discrete steps. A hyperbolic range history results in the data array for each reflector, as indicated by the curves in Figure 1 b). Points are generated in a second data array at the positions of the hyperbola vertices. If the signal intensity of the individual echoes, which results from the reflectivity of the scene points, controls the brightness of the points in the second data array, an image of the scene is displayed. A realistic scene can be imagined as being composed of innumerable point targets.
Optimized algorithms are formulated for better image generation; they focus the distributed energy of the raw data for all scene locations simultaneously. The achievable resolution is given by the minimal distance at which two identical point targets can still be separated in the SAR image. Slant range resolution (resolution in the y-direction) and resolution in the flight direction (x-direction) depend on different factors.
Using short unmodulated pulses leads to a range resolution proportional to the pulse duration. In the case of rectangular pulses, the pulse duration is inversely proportional to the signal bandwidth. A more precise analysis shows that the essential criterion for the achievable resolution is not the time duration but the bandwidth of the processed signal, provided the possibility of pulse compression is exploited.
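The bandwidth criterion can be made concrete: after pulse compression, the slant-range resolution is c/(2B), where B is the processed bandwidth. A minimal sketch, with an illustrative bandwidth value:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Slant-range resolution after pulse compression: c / (2 * B)."""
    return C_LIGHT / (2.0 * bandwidth_hz)

# A 100 MHz chirp (example value) resolves targets about 1.5 m apart,
# regardless of how long the transmitted pulse itself is.
print(f"{range_resolution_m(100e6):.2f} m")  # 1.50 m
```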
Today's radar systems commonly use long pulses with large bandwidth because they offer high resolution on one hand and a convenient power ratio on the other. The distance covered by a reflected radar signal can be determined very precisely, up to an ambiguity of a multiple of the wavelength. Thus, range variations caused by changes of the geometry can be measured by periodic pulse repetition; the accuracy of this measurement is better the smaller the signal wavelength. The thickness of the hyperbolas in Figure 1 b) can be chosen proportional to the wavelength to illustrate the influence of this measurement accuracy. Analyzing the hyperbolic range histories of two point targets slightly shifted in the x-direction, two factors can be identified that affect the separability of the hyperbolas. If the lines become too thick, the hyperbolas merge and cannot be divided.
On the other hand, limitations of the data array in the flight direction, depicted in Figure 1 b) by the dashed box, reduce the separability of the targets. A mathematical analysis confirms these considerations, with the result that the resolution in the flight direction is approximately proportional to the quotient of the wavelength and the aspect angle interval over which the radar signals are reflected by the scene.
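This relation can be sketched as follows; the wavelength and aspect-angle interval below are arbitrary example values, not taken from any particular system:

```python
def azimuth_resolution_m(wavelength_m, aspect_interval_rad):
    """Flight-direction resolution ~ wavelength / (2 * aspect angle interval)."""
    return wavelength_m / (2.0 * aspect_interval_rad)

# Example only: a 3 cm wavelength observed over a 0.1 rad aspect interval.
print(azimuth_resolution_m(0.03, 0.1))  # 0.15 m
```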
Two consequences follow. Since the observation angle cannot be increased arbitrarily, the resolution in the flight direction has a fixed limit. When a SAR system operates in the common strip-map mode with a fixed antenna look direction relative to the flight direction, the antenna beam pattern determines the observation angle (which equals the beam angle), and a range-independent resolution results. The observation time of an individual scene scatterer (reflecting element) increases proportionally to its distance.
Now we extend the model to a three-dimensional geometry and consider what happens when imaging a spatial scene reflectivity. Scatterers of the scene that appear to the SAR sensor at the same time at the same distance have identical range histories and therefore superpose in one image pixel. The interpretation of SAR images has to account for this effect.
The design of modern SAR systems demands ever-increasing resolution, and the limitation imposed by strip-map operation has become a problem. Using a small antenna increases the beam angle, and thus the observation angle, but the antenna gain decreases and the operational range recedes. Moreover, a short antenna demands short distances between subsequent pulses, which in turn leads to range limitations. Thus, the realization of long-range, high-resolution systems is impossible in this operational mode. Operation in spotlight mode solves the problem: while the sensor platform passes the scene, the antenna beam is steered so that the scene is illuminated over a large aspect angle. By this means both demands can be satisfied: the operation of a high-gain antenna and observation over a large aspect angle. The theoretical resolution that can be reached by observing the scene over a 180° aspect angle is a quarter of the wavelength. Consequently, most SAR systems currently under development incorporate the spotlight mode.
An illustration of a typical space-based, strip-map, monostatic SAR is shown in Figure 2 [McCandless, 1989]. Consider the string of dots in Figure 1 as a set of positions at which the SAR transmits a pulse. Each pulse travels to the target area, where the antenna beam intercepts the earth and illuminates targets at that location, and the reflected return pulses are in turn collected by the same antenna. SAR "works" because the radar pulse travels to and from the target at the speed of light, which is much faster than the speed of the spacecraft [Harger, 1970]. The SAR system saves the phase histories of the responses at each position as the real beam moves through the scene and then weights, phase shifts, and sums them to focus on one point target (resolution element) at a time and suppress all others. The SAR image signal processing system performs the weighting, shifting, and summing to focus on each point target in turn. It then constructs an image by placing the total energy response obtained in focusing on a particular target at the position in the image corresponding to that target. SAR achieves a very high signal processing gain because of coherent (in-phase) summation of the range-correlated responses of the radar. All of the signal returns that occur as the real beam moves through each target, as shown in Figure 1, can be coherently summed. In many instances, thousands of pulses are summed for each resolution cell, resulting in a tremendous increase of the target signal compared to that from a single pulse (a coherent benefit of approximately 4000 for the SEASAT SAR). The power from a given scatterer, spread across many pulses, is focused (concentrated) into a single location through processing.
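The coherent-summation gain described above can be sketched numerically. In-phase summation of N pulses raises signal power by N squared while uncorrelated noise power grows only by N, for an SNR gain of N; the value 4000 is the approximate SEASAT figure quoted above:

```python
import numpy as np

N = 4000                                  # order of SEASAT's coherent benefit

# Phase-aligned unit-amplitude echoes: coherent amplitudes add, so the
# summed signal power is N**2, while uncorrelated unit-power noise only
# adds up to an expected power of N. The SNR gain is therefore N.
echo = np.full(N, np.exp(1j * 0.7))       # arbitrary but common phase
signal_power = np.abs(echo.sum()) ** 2    # = N**2
expected_noise_power = N                  # E[|sum of unit-power noise|**2]
gain_db = 10 * np.log10(signal_power / expected_noise_power)
print(f"{gain_db:.1f} dB")                # 10 * log10(4000) = 36.0 dB
```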
Figure 2- Basic Principles of Aperture Synthesis
To get some idea of the scale of the data collection and processing challenges faced by spaceborne SAR system designers and operational planners, consider the general magnitude of these subjects for the systems listed in Table 2. These SARs transmit more than a thousand pulses per second; illuminate tens of millions of resolution cells (pixels) in the radar beam at each pulse time; utilize a spaceborne platform travelling in excess of seven thousand meters per second; and require thousands of processor operations per cell to resolve an image. When SAR was first introduced on a space platform by SEASAT, optical processing techniques were the touchstone of image generation. Digital processors were limited in numbers and capability, and the first digital processors used for SEASAT image generation required 20 hours of processing time to convert an 18-second data collection into a 100-kilometer by 100-kilometer image frame. Although this significant time delay has disappeared as digital technologies and processing skills have advanced, SAR data collection and processing still present great challenges to the satellite and mission designers and are one of the salient yardsticks of program performance and resource allocation. As indicated, SAR requires the collection and processing of phase-coherent data and, for an isolated target, the phase history during the integration time follows a complex (higher order) phase function. For descriptive convenience, it is assumed that the radar antenna beam axis is oriented at right angles (broadside) to the radar platform velocity vector. This is the basic orientation of the strip-map SAR systems listed in Table 2.
In Figure 2, the antenna beam illuminates the target when the platform reaches position t1, but not before. It continues to illuminate the target for a distance LSA (the synthetic aperture length) until it reaches t2. The time required to translate the along-track beam through a point target is called the integration time or dwell time and is defined in the figure. Figure 1 provides a heuristic derivation of the spatial performance result of this process: viz., the spatial resolution in the along-track direction approaches the physical length of the antenna divided by two. This result is, to first order, independent of the wavelength of the radar and the range from the radar to the target. Unlike the foregoing RAR along-track spatial resolution, the antenna dimension is now a direct, not an inverse, relationship. As a seeming paradox to diffraction-limited performance, smaller apertures produce better spatial resolution. All or part of the available phase history can be coherently processed. If all of the pulses are used, the result is referred to as single-look, one-look, or fully focused processing, achieving a spatial resolution approaching ½ DAT (half the along-track antenna dimension).
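The two along-track relations above, the synthetic aperture length (wavelength times range over antenna length) and the single-look resolution approaching half the antenna length, can be sketched with roughly SEASAT-like numbers. These inputs are approximations for illustration only, not official mission parameters:

```python
def synthetic_aperture_length_m(wavelength_m, slant_range_m, antenna_len_m):
    """L_SA = lambda * R / D: along-track distance over which the beam
    illuminates a given target."""
    return wavelength_m * slant_range_m / antenna_len_m

def single_look_resolution_m(antenna_len_m):
    """Fully focused along-track resolution approaches D / 2,
    independent of wavelength and range."""
    return antenna_len_m / 2.0

# Roughly SEASAT-like inputs: L-band (~0.235 m wavelength), ~850 km slant
# range, ~10.7 m antenna -- approximate values for illustration.
print(f"{synthetic_aperture_length_m(0.235, 850e3, 10.7) / 1e3:.1f} km")  # ~18.7 km
print(f"{single_look_resolution_m(10.7):.2f} m")                          # 5.35 m
```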
Although the along-track resolution is independent of target range, the next section makes clear that there are serious application constraints that limit taking along-track spatial performance too far. These constraints manifest themselves by placing unacceptable limits on important application goals such as area coverage and illumination geometry. There are also associated technology limitations that set application limits. Important technology limitations include data collection rate and volume, and limiting antenna design factors, such as pulse power, phase control, and calibration.
Figure 2 further illustrates the imaging geometry of a strip-mapping SAR using the illumination geometry of the first space-based SAR, SEASAT. As shown, the length of the synthetic aperture is a function of the beamwidth of the real aperture, 16 km for SEASAT. In the range direction, the width of the beam as it intercepts the earth is a function of the diffraction-limited beam in the range direction and the illumination geometry. The illumination geometry can be described or specified by either the look angle (nadir angle) at the radar, 23 degrees in the case of SEASAT, or by the grazing angle where the radar beam intercepts the earth's tangent plane. Sometimes the complement of the look angle, the depression angle, is used for definition or, in the case of the target plane, the complement of the grazing angle (referred to as the incident angle) is used as the reference.
Figure 3- SEASAT SAR Imaging Geometry
Using the grazing angle reference, Figure 3 provides an expression for swath-width first in terms of diffraction limited beam intersection with the earth and second in terms of the differential range of the target space (the far range minus the near range, Rf - Rn). To image the target space illuminated by the radar, each radar pulse will transit the differential range distance twice, first as a transmit-transit and then as a return-transit after target(s) interaction.
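The round-trip timing over the differential range can be sketched as follows; the near and far slant ranges are hypothetical values chosen for illustration:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def echo_window_s(near_range_m, far_range_m):
    """Receive-window length: each pulse transits the differential range
    (Rf - Rn) twice, once outbound and once on return."""
    return 2.0 * (far_range_m - near_range_m) / C_LIGHT

# Hypothetical slant ranges giving 40 km of differential range between
# the near and far edges of the swath.
print(f"{echo_window_s(850e3, 890e3) * 1e6:.0f} microseconds")  # 267 microseconds
```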
RESEARCH AND FUTURE TRENDS
In the 25 years since SEASAT, SAR observations of the world's oceans have progressed from a demonstration of capability to near real-time support for operational ice analyses and charting. ERS-1, ERS-2, and RADARSAT-1 have compiled vast archives covering all of the world's oceans, providing observations of surface waves, internal waves, currents, eddies, upwelling, shoals, sea ice, rainfall, atmospheric boundary layer phenomena, and ships. The evolution sparked by SEASAT continues with ever more advanced SAR satellites, beginning with the recent launch of ENVISAT (Europe), followed by ALOS (Japan) and RADARSAT-2 (Canada) in the 2004/2005 time period. These entries promise finer resolution and increased polarimetric capability, coupled with advanced data collection and processing techniques (e.g., interferometry), and they will expand SAR capabilities both for traditional SAR users and for an expanding community of users.
APPLICATION OF SAR TECHNOLOGY
The idea is to mount a radar on a car. Since the 1960s, various ideas and schemes have been developed and tested, but the devices were either too huge or too expensive for mass production. By now, several manufacturers around the world have developed small and lightweight devices, and efforts at making them affordable for the average driver are underway.
The main advantage of radar over other devices, such as laser or infrared vision equipment, is the radar's ability to look through environmental effects such as rain, fog, and snow. Current devices are more or less concentrating on Adaptive Cruise Control (ACC) and collision warning, with features such as 'collision avoidance' or 'autonomous driving' still on the list of things to come.
Functions of Automotive Radars
Navigation - The radar indicates road bends and intersections in bad weather. Coupled with an electronic road map and, if available, a GPS receiver, it will also give directions to your destination.
Collision warning - The radar continuously scans the area ahead of the car and takes appropriate action when a collision is imminent. A collision may occur with stationary objects on the road, with cars driving in the opposite direction, or with a car driving in the same direction but slowing down. 'Appropriate' action can be the activation of an alarm or even the application of the brakes. Implementation of the latter function depends on local legislation. Usually, all responsibility rests on the driver's shoulders, but if such a drastic intervention is performed without the driver's consent, drivers may plead 'not guilty' and file lawsuits against the car or radar manufacturer.
Cruise control - The radar maintains either a preset constant speed or a constant distance to the car ahead.
Airbag pre-crash sensing - If it is determined that a collision is imminent, the radar may set off the airbag in front of the occupants with optimised timing, even before the object has made contact with the car body.
'Stop and go' functionality - This is an adaptation of cruise control to low-speed driving in dense urban traffic. The driver may relax or read a newspaper while an observer would believe the car is being towed by the car in front.
Parking aid - The radar maintains a minimum clearance between the car and surrounding objects by buzzing and blinking well before contact is made. Depending on the implementation, even the car's sides can be put under surveillance.
There are two sets of requirements for an automotive radar. Cruise control and other 'highway' functions demand the capability to measure ranges of around 200 m and velocities from zero up to some 500 km/h, if a scenario of two Ferraris shortly before collision is set as the worst case. The radar should cover some 8° to 15° either side of the forward direction, to be aware of adjacent lanes and to follow road bends. Phased array antennae are still quite expensive; for this reason, arrangements with several fixed antenna beams, performing monopulse evaluation between the outputs, are also under consideration. Angular resolution is a critical parameter, and some 0.1° is deemed necessary.
The second set of requirements applies to traffic in urban environments. Velocities from zero to 140 km/h are in the range of expected values. Angular resolution is less critical, but the radar must cover significantly more than some 15° of azimuth when driving in a stop-and-go situation. Full 360° azimuth coverage is required when the radar is supposed to sense a side-crash too. The maximum range is of the order of 70 m, but distances of a few centimetres must be measurable when the radar is used as a parking aid.
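The velocity requirement translates into a Doppler bandwidth the radar front end must handle. A minimal sketch, assuming a 77 GHz carrier (a common automotive radar band, though no carrier frequency is stated above):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(closing_speed_mps, carrier_hz):
    """Two-way Doppler shift f_d = 2 * v / lambda for a closing target."""
    wavelength_m = C_LIGHT / carrier_hz
    return 2.0 * closing_speed_mps / wavelength_m

# Worst-case closing speed of 500 km/h; the 77 GHz carrier is an assumed
# value (a common automotive band), not one stated in the text.
v_mps = 500 / 3.6
print(f"{doppler_shift_hz(v_mps, 77e9) / 1e3:.1f} kHz")  # ~71.3 kHz
```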
Detection of Obstacles
At first glance, anything that features different reflection properties than the concrete of the road must be either another vehicle or something dangerous. Examples are sheets of ice, reindeer, rabbits, kangaroos, or goods that have fallen off a lorry and are lying on the road.
However, this approach is too simple. Road surfaces can exhibit many irregular features that a radar can easily detect, but that do not represent danger. Such non-obstacles include road markings, changes in the road surface material, duct covers, and metal bars that protect the expansion gap where a bridge begins or ends. A device that announces every such anomaly as an obstacle would not be successful in the market because of its high false alarm rate.
Layers of hot air and puddles of water reflect radar signals in the same way that they reflect light. Hence what the radar 'sees' is an upside-down mirror image of the scene further ahead, rather than the puddle itself. Therefore a puddle of mud or water can easily be mistaken for a very deep hole in the road.
On the flip side, the signal processing may take advantage of mirages because they provide a means to look at the car in front by examining the road under its belly from a grazing angle.
Bridges and Tunnels
Imagine a scene where the road runs over the top of a hill, with a bridge spanning the road at right angles. At first glance, the bridge appears like a wall placed across the road; the free space under the bridge becomes visible only at closer distance.
Synthetic Aperture Radar (SAR) is an all-weather imaging tool that achieves fine along-track resolution by taking advantage of the radar's motion to synthesize a large antenna aperture. SAR technology has provided terrain structure information to geologists, oil spill boundaries on water to environmentalists, mineral exploration data, sea state and ice hazard maps to navigators, and targeting information to military operations. This paper serves as a basic study of SAR: the principle of SAR is described, basic SAR signal processing concepts are discussed, and an overview of design considerations for SAR systems is also given.