# Air-Borne and Space-Borne SAR Data


The data acquired by air-borne and space-borne SAR sensors contain many ambiguities and must be properly calibrated (pre-processed) before being used for different applications. Pre-processing of SAR data includes conversion from slant range to ground range, radiometric calibration, speckle suppression and SAR image geocoding. The radar equation, explained in the previous chapter, is the basis for radiometric calibration.

The acquired air-borne and space-borne SAR datasets are converted to ground-range images using the incidence angle at acquisition and Eqn 1 discussed in Chapter 2. The data thus obtained are in ground range; calibration of the single look complex power to the backscattering coefficient (σ0) is then carried out.

The calibration parameters are provided in the leader file for some sensors (e.g. ALOS-PALSAR); in other cases they are annotated in ancillary files (e.g. the XCA file for ENVISAT ASAR data).

The calibration and analysis of the space-borne sensors were carried out using the SARscape software, which applies corrections for the scattering area, the antenna gain pattern and the range spread loss. Here, the calibration of SLC and power images is discussed separately for the airborne and space-borne sensors.

## RADIOMETRIC CALIBRATION OF SAR IMAGES

Radiometric calibration includes the conversion of the data into 32 bit signed integer format.

The radiometric calibration of the SAR images involves the corrections for:

Scattering area: Each output pixel is normalised for the actual illuminated area of each resolution cell, which may be different due to varying topography.

Antenna gain pattern: The effects of the variation of the antenna gain in range are corrected, taking topography into account via a DEM (the SRTM DEM resampled to a 25 m pixel size).

Range spread loss: Received power must be corrected for the range distance changes from near to far range.

The basis of the radiometric calibration is the radar equation. Following Freeman (1992) and van Zyl et al. (1993), the received power Pd for a distributed scatterer over a scattering area A can be written as:

Pd = [Pt × GAr(θel) × GAz(θaz) × GE × GP × λ² × σ0 × A] / [(4π)³ × R⁴ × Ls] ------------- (7.8)

with the scattering area A = (pr × pa) / (sin θir × cos θia), where:

Pt = transmitted power

GAr = transmitted and received antenna gain in range

GAz = transmitted and received antenna gain in azimuth

GE = electronic gain in radar receiver

GP = processor constant

R = range

Ls = system loss term

λ = wavelength

θel = antenna elevation angle

θaz = antenna azimuth angle

σ0 = radar reflectivity (backscattering coefficient)

θir = local incidence angle in range

θia = local incidence angle in azimuth

pr = image pixel dimension in range

pa = image pixel dimension in azimuth
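As an illustration of how Eq. 7.8 can be evaluated numerically, here is a minimal Python sketch. The function names are my own, all gains are assumed to be linear (not dB), and the scattering-area form A = pr·pa/(sin θir · cos θia) is taken as one common choice:

```python
import math

def received_power(pt, g_range, g_azimuth, g_e, g_p, wavelength,
                   sigma0, area, r, ls):
    """Received power Pd for a distributed scatterer (sketch of Eq. 7.8).

    All gains are linear (not dB); sigma0 is the backscattering
    coefficient in linear units; area is the scattering area A in m^2.
    """
    return (pt * g_range * g_azimuth * g_e * g_p
            * wavelength**2 * sigma0 * area) / ((4 * math.pi)**3 * r**4 * ls)

def scattering_area(pr, pa, inc_range, inc_azimuth):
    """Scattering area A from pixel dimensions (m) and local incidence
    angles in range and azimuth (radians)."""
    return (pr * pa) / (math.sin(inc_range) * math.cos(inc_azimuth))
```

Note how Pd scales linearly with transmitted power and with σ0, and falls off as R⁴, as the radar equation requires.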

Proper calibration requires determining θel, θaz, θir and θia. The first two angles allow, using the corresponding antenna diagrams, correction for the antenna gain, while the last two allow calculation of the scattering area A. These quantities depend upon:

real antenna position

real antenna pointing direction

pixel position on the ground

In the case of spaceborne SAR, due to the stability of the satellite, the ideal (processed) and real (actual) antenna positions coincide, while the actual antenna pointing is defined by the radar look angle (the antenna azimuth angle is usually set to zero) and the antenna depression angle.

In the case of airborne SAR, the data must be focused considering an ideal flight path. Each pixel is then terrain-geocoded by means of the usual range-Doppler equations referring to the ideal position. However, in radiometric terms, the pixel refers to the actual antenna position, i.e. to the location where the antenna effectively transmitted and received. For this reason, the angles mentioned above must be determined taking into account the actual antenna position and pointing. In this way, antenna attitude (roll, pitch and yaw) variations, whose radiometric effects can amount to several dB, are fully taken into account. Furthermore, since each pixel position on the ground is associated with an elevation, topographic effects are straightforwardly included in the radiometric calibration.

Single look complex (SLC) data of the ASAR images are converted to power images using the multi-looking technique. Multi-look processing refers to the division of the radar beam into several narrower sub-beams, each providing an independent look at the illuminated scene. Each of these looks is also subject to speckle, but summing and averaging them to form the final output image reduces the amount of speckle. Multi-looking can be performed during data acquisition or later during processing.

RELIEF CORRECTION: The radar platform was moving from the bottom to the top of the image and was looking towards the left side; the radar incidence angle varies from 24° to 71° from the right to the left of the image. The SRTM DEM, resampled to 25 m, enables compensation for the relief effect in the co-registered SAR data. Slope angles in the range direction were computed from the DEM.

All images in the dataset were re-projected to a common UTM Zone 43 / WGS84 coordinate system, orthorectified, and co-registered.

## MULTI-LOOKING OF SAR IMAGES

A multi-look image was generated from an SLC image by averaging the power (the square of the absolute value of the complex image) across a number of lines in both the azimuth and range directions. The output image has reduced spatial resolution, but the typical SAR image speckle noise is also reduced. The multi-look intensity image is created with a given number of looks in azimuth and range. The number of looks changes from sensor to sensor, depending on the pixel sampling and/or the incidence angle; for ASAR it is 7 azimuth looks and 2 range looks, and the multi-looking factors have to be changed accordingly for other sensors. The number of looks was chosen to obtain a sampling of the multi-look image that gives almost square pixels (similar ground-range and azimuth pixel size).
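The block-averaging step described above can be sketched as follows; the function name and the ASAR-like 7×2 default look factors are illustrative:

```python
import numpy as np

def multilook(slc, looks_az=7, looks_rg=2):
    """Average SLC power over looks_az x looks_rg blocks (ASAR-like 7x2).

    slc: 2-D complex array (azimuth rows, range columns).
    Returns the multi-looked intensity image.
    """
    power = np.abs(slc) ** 2                        # detect: P = r^2 + i^2
    rows = (power.shape[0] // looks_az) * looks_az  # trim to whole blocks
    cols = (power.shape[1] // looks_rg) * looks_rg
    blocks = power[:rows, :cols].reshape(
        rows // looks_az, looks_az, cols // looks_rg, looks_rg)
    return blocks.mean(axis=(1, 3))                 # average each block
```

Averaging N independent looks reduces the speckle standard deviation by roughly √N, at the cost of an N-fold coarser sampling.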

## SELECTING THE NUMBER OF LOOKS

The appropriate number of looks can be calculated from the SLC image parameters. Note that, assuming a flat earth, the use of the look angle is equivalent to the use of incidence angle to calculate the multi-looking factors.

The following formula is used to calculate the pixel spacing in ground range:

ground-range pixel spacing = slant-range pixel spacing / sin(incidence angle) ----------- (7.9)
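A hypothetical helper for this calculation: it applies Eq. 7.9 to obtain the ground-range spacing and then picks azimuth/range look factors that approach a target square pixel size (the `target_size` parameter is my own addition, not something the text specifies):

```python
import math

def ground_range_spacing(slant_spacing, incidence_deg):
    """Eq. 7.9: ground-range pixel spacing from slant-range spacing (m)."""
    return slant_spacing / math.sin(math.radians(incidence_deg))

def looks_for_square_pixels(az_spacing, slant_spacing, incidence_deg,
                            target_size):
    """Pick azimuth and range look factors so the multi-looked pixel
    approaches target_size x target_size on the ground."""
    grd = ground_range_spacing(slant_spacing, incidence_deg)
    looks_az = max(1, round(target_size / az_spacing))
    looks_rg = max(1, round(target_size / grd))
    return looks_az, looks_rg
```

For example, with a 4 m azimuth spacing, a 7.8 m slant-range spacing and a 23° incidence angle, a 25 m target pixel suggests roughly 6 azimuth looks and 1 range look.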

## PREPROCESSING STEPS:

The pre-processing of SAR data includes speckle suppression, conversion of DN values to backscattering co-efficient images, and reprojection of the SAR images. After correction of the SAR images for system errors, the data is further radiometrically processed.

In the case of SAR, speckle reduction is an elementary operation in many applications and can be performed at various stages of the processing, depending on the application. It is advisable to apply speckle filtering before geocoding, which improves object identification for GCP measurements (Dallemand et al. 1992). The filtering and resampling may also be carried out in one step during the geocoding process, to reduce the number of times the data are resampled.

All SAR images should be speckle-suppressed to derive meaningful information. Speckle arises from the variation in backscatter of non-homogeneous resolution cells and gives radar images a grainy appearance. It is caused by the high coherence of the illumination source, which produces random constructive and destructive interference among the multiple scattering returns within each resolution cell. As a dominant, unwanted noise it degrades SAR image products, although speckle statistics are also useful for describing image texture, identifying terrain features and examining reflectivity and system transformation processes.

Various filtering techniques are used for speckle suppression. Speckle reduction by spatial filtering is performed in a digital image analysis environment. Speckle reduction filtering consists of moving a small window (kernel) of a few pixels in dimension (e.g. 3x3 or 5x5) over each pixel in the image, applying a mathematical calculation using the pixel values under that window and replacing the central pixel with the new value. The window is moved along in both the row and column dimensions one pixel at a time, until the entire image has been covered. By calculating the average of a small window around each pixel, a smoothing effect is achieved and the visual appearance of the image improves.
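The moving-window operation just described can be sketched as a simple boxcar (mean) filter; the reflection padding at the edges is an implementation choice of this sketch, not something the text specifies:

```python
import numpy as np

def mean_filter(image, size=3):
    """Boxcar speckle smoothing: replace each pixel by the mean of the
    size x size window centred on it (edges handled by reflection)."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # window of the padded image centred on pixel (i, j)
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out
```

The adaptive filters discussed below follow the same moving-window pattern but replace the plain mean with statistics-dependent weighting.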

## SPECKLE SUPPRESSION

Both multi-look processing and spatial filtering reduce speckle at the expense of resolution, since both smooth the image. Therefore, the amount of speckle reduction desired must be balanced against the particular application and the information required. If fine detail and high resolution are required, then little multi-looking or spatial filtering should be done; if broad-scale interpretation and mapping are the application, then stronger speckle reduction may be appropriate and acceptable.

Different kinds of filters, such as the Lee filter, Enhanced Lee filter, Gamma filter and Frost filter, are used for speckle suppression.

Lee filter: The despeckled pixel value is a weighted sum of the observed (central) pixel value and the local mean value. The weighting coefficient is a function of the local target heterogeneity, measured by the coefficient of variation.

The Lee filter smooths the image data without removing edges or sharp features. Speckle in SAR images is generally assumed to follow a multiplicative noise model. In the Lee filter, the multiplicative model is first approximated by a linear model, and the minimum mean square error criterion is then applied to the linear model.

The resulting grey-level value R for the smoothed pixel is:

R = Mean + W × (Cp − Mean) ------------- (7.10)

where:

W = 1 − Cu²/Ci² is the weighting function

Cu = 1/√NLOOK is the estimated noise variation coefficient

Ci = SD/Mean is the image variation coefficient; Cp is the centre pixel of the filter window

Mean is the mean value of intensity within the window

SD is the standard deviation of intensity within the window
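A minimal Lee-filter sketch following this weighting scheme, assuming Cu = 1/√NLOOK and W = 1 − Cu²/Ci² (clipped at zero); the window size and reflection-based edge handling are illustrative choices:

```python
import numpy as np

def lee_filter(image, size=7, nlooks=1):
    """Lee speckle filter sketch: R = Mean + W*(Cp - Mean),
    with W = 1 - Cu^2/Ci^2, Cu = 1/sqrt(nlooks), Ci = SD/Mean."""
    cu = 1.0 / np.sqrt(nlooks)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = padded[i:i + size, j:j + size]
            mean, sd = win.mean(), win.std()
            ci = sd / mean if mean else 0.0
            # homogeneous window (Ci <= Cu): fall back to the mean
            w = max(0.0, 1.0 - cu**2 / ci**2) if ci > 0 else 0.0
            out[i, j] = mean + w * (image[i, j] - mean)
    return out
```

On a homogeneous area (Ci → 0) the weight goes to zero and the filter returns the local mean; on strong edges (large Ci) the weight approaches one and the observed pixel is preserved.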

Enhanced Lee filter (Lopes et al., 1990): The Enhanced Lee filter minimizes the loss of radiometric and textural information. Lopes modified the Lee filter by dividing the image into areas of three classes. The first class corresponds to homogeneous areas, in which speckle may be eliminated simply by applying a low-pass filter (or, equivalently, averaging or multi-look processing). The second class corresponds to heterogeneous areas, in which speckle is to be reduced while preserving texture. The third class contains isolated point targets, for which the filter should preserve the observed value.

The resulting grey-level value R for the smoothed pixel is:

R = Mean for Ci ≤ Cu -------- (7.11)

R = Mean × W + Cp × (1 − W) for Cu < Ci < Cmax ------ (7.12)

R = Cp for Ci ≥ Cmax ------- (7.13)

where W = exp(−DAMP × (Ci − Cu) / (Cmax − Ci)) and Cmax = √(1 + 2/NLOOK) is the heterogeneity threshold.
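A per-pixel sketch of the three-class Enhanced Lee decision rule; the thresholds Cu = 1/√NLOOK and Cmax = √(1 + 2/NLOOK) are common choices (e.g. in PCI/ENVI-style implementations) and are assumed here:

```python
import numpy as np

def enhanced_lee_pixel(cp, mean, sd, nlooks=1, damp=1.0):
    """Enhanced Lee rule: smooth homogeneous areas, damp-weight
    heterogeneous ones, preserve isolated point targets."""
    cu = 1.0 / np.sqrt(nlooks)            # noise variation coefficient
    cmax = np.sqrt(1.0 + 2.0 / nlooks)    # heterogeneity threshold
    ci = sd / mean if mean else 0.0       # image variation coefficient
    if ci <= cu:                          # class 1: homogeneous -> average
        return mean
    if ci >= cmax:                        # class 3: point target -> keep
        return cp
    w = np.exp(-damp * (ci - cu) / (cmax - ci))
    return mean * w + cp * (1.0 - w)      # class 2: weighted blend
```

The damping factor controls how quickly the filter hands over from the local mean to the observed pixel as heterogeneity grows.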

Frost filter (Frost et al., 1982): The Frost filter is an adaptive filtering algorithm that can be applied to any type of image data. It uses an exponentially damped convolution kernel which adapts itself to features based on local statistics. The Frost filter differs from the Lee filter in that the scene reflectivity is estimated by convolving the observed image with the impulse response of the SAR system, which is obtained by minimizing the mean square error between the observed image and the scene reflectivity model, assumed to be an autoregressive process.

The implementation of this filter consists of defining a circularly symmetric kernel with a set of weighting values M for each pixel:

M = exp(−DAMP × Ci² × T) ---------- (7.14)

where:

T = the absolute value of the pixel distance from the centre pixel to its neighbours in the filter window

Ci = SD/Mean, the image variation coefficient within the window

DAMP = exponential damping factor

The resulting grey-level value R for the smoothed pixel is:

R = (P1*M1 + P2*M2 + ... + Pn*Mn) / (M1 + M2 + ... + Mn)

where:

P1... Pn are grey levels of each pixel in filter window

M1… Mn are weights for each pixel
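The weighting and averaging steps can be sketched per window as follows, assuming the common kernel form M = exp(−DAMP × Ci² × T); the helper names are my own:

```python
import numpy as np

def frost_weights(window, damp=1.0):
    """Frost kernel (Eq. 7.14 sketch): M = exp(-damp * Ci^2 * T), where
    T is each pixel's distance from the window centre."""
    c = window.shape[0] // 2
    mean = window.mean()
    ci2 = (window.std() / mean) ** 2 if mean else 0.0
    yy, xx = np.indices(window.shape)
    t = np.hypot(yy - c, xx - c)          # distance from centre pixel
    return np.exp(-damp * ci2 * t)

def frost_filter_pixel(window, damp=1.0):
    """Weighted average of the window: R = sum(P*M) / sum(M)."""
    m = frost_weights(window, damp)
    return (window * m).sum() / m.sum()
```

On a homogeneous window (Ci ≈ 0) all weights approach 1 and the result is the plain mean; near edges the large Ci concentrates the weight on the centre pixel.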

Gamma MAP filter (Lopes et al., 1990): The MAP filter was first proposed by Kuan, with the scene reflectivity assumed to be Gaussian distributed. However, this is not quite realistic, since it implicitly allows negative reflectivity. Lopes modified the Kuan MAP filter by assuming a gamma-distributed scene and setting up two thresholds.

A priori knowledge of the probability density function of the scene is required to apply the MAP (Maximum a posteriori) approach to speckle reduction. With the assumption of a gamma distributed scene, the Gamma MAP filter is derived in the following form:

R = Mean for Ci ≤ Cu ------------- (7.15)

R = Rf for Cu < Ci < Cmax ----------- (7.16)

R = Cp for Ci ≥ Cmax ----------- (7.17)

where: Rf = (B × Mean + √D) / (2 × A)

Mean = mean value of intensity within the window

Cp = centre pixel in the filter window

Cmax = √2 × Cu

A = (1 + Cu²) / (Ci² − Cu²)

B = A − NLOOK − 1

D = Mean² × B² + 4 × A × NLOOK × Mean × Cp
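A per-pixel sketch of the Gamma MAP decision rule, assuming Cu = 1/√NLOOK and the common upper threshold Cmax = √2 × Cu; the function name is illustrative:

```python
import numpy as np

def gamma_map_pixel(cp, mean, sd, nlooks=1):
    """Gamma MAP rule (Eqs. 7.15-7.17 sketch): mean below Cu, MAP
    estimate between the thresholds, observed value above Cmax."""
    cu = 1.0 / np.sqrt(nlooks)
    cmax = np.sqrt(2.0) * cu
    ci = sd / mean if mean else 0.0
    if ci <= cu:                          # homogeneous: plain average
        return mean
    if ci >= cmax:                        # point target: keep value
        return cp
    a = (1.0 + cu**2) / (ci**2 - cu**2)
    b = a - nlooks - 1.0
    d = mean * mean * b * b + 4.0 * a * nlooks * mean * cp
    return (b * mean + np.sqrt(d)) / (2.0 * a)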

## Methodology:

## PRE-PROCESSING OF AIR-BORNE ESAR AND SPACE-BORNE ASAR DATA

## (A) Speckle suppression and radiometric calibration of airborne DLR-ESAR data:

Speckle suppression was carried out using different filtering techniques, namely the Frost, Gamma and Enhanced Lee filters, with window sizes of 3x3, 5x5, 7x7 and 11x11. It was observed that the Enhanced Lee filter with a 7x7 window was suitable for the Rajpipla study area, as the information loss and smoothing effect were comparatively smaller. The Frost filter did not suppress the speckle effectively; the areas remained brighter in the image. The Lee and Enhanced Lee filters performed similarly and suppressed the speckle effectively, and image sharpness was retained in the Enhanced Lee filtered image. Hence, the Enhanced Lee filter with a window size of 11x11 was chosen for the Rajpipla study area (figure 7.2).

The local incidence angle and the backscatter intensity are used to calibrate the backscattering coefficient by the following equation

σ0 = [10 × log10(DN²) − A0] × sin θ -------- (7.18)

where DN is the pixel value, A0 is the scaling factor and θ is the local incidence angle.

Fig 7.2: Different filters applied to C-band ESAR-VV data

The scaling factor is 60. The sigma nought values obtained are in decibels (dB).

Sigma nought (σ0), the backscattering coefficient, is the conventional measure of the strength of radar signals reflected by a distributed scatterer, usually expressed in dB. It is a normalized, dimensionless number comparing the strength observed to that expected from an area of one square metre. Sigma nought is defined with respect to the nominally horizontal plane and in general varies significantly with incidence angle, wavelength and polarization, as well as with the properties of the scattering surface itself. The SARscape calibrated value can be transformed into dB units by applying 10 × log10.
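The per-pixel calibration of Eq. 7.18 can be sketched in one line; the function name and the degree-based angle argument are my own conventions:

```python
import math

def sigma0_esar(dn, a0=60.0, local_inc_deg=45.0):
    """Eq. 7.18 sketch: sigma nought in dB from an ESAR pixel value (DN),
    a scaling factor A0 and the local incidence angle theta (degrees)."""
    return (10.0 * math.log10(dn ** 2) - a0) * math.sin(
        math.radians(local_inc_deg))
```

For instance, with A0 = 60 and θ = 90°, a pixel value of 1000 maps to σ0 = 0 dB.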

It is observed that the backscattering co-efficient of the vegetated areas is greater than the non-vegetated areas.

Fig 7.3: Different filters applied to ENVISAT-ASAR HH data

## (B) Preprocessing of ENVISAT-ASAR Precision Image (PRI) amplitude data:

Speckle suppression was carried out using the Gamma, Frost and Enhanced Lee filters. Backscattering coefficients were calibrated with the SARSIGM software (PCI Geomatica) using the following expression:

σ0(i,j) = 10 × log10[(DN(i,j)² + A0) / Aj] -------------- (7.19)

where σ0(i,j) is the output backscatter coefficient for scan line i, pixel j; DN(i,j) is the input pixel value; Aj is the gain scaling table value for column j; and A0 is the gain offset.

The Lee filter showed no effect on the image; some features were not visible in the Lee-filtered image but were clearly visible in the Enhanced Lee filtered image. Sharp edges were retained by the Enhanced Lee filter, and hence this filter was used for the speckle suppression. It was observed that the averaging of pixel values is stronger in the Gamma and Frost filters than in the Enhanced Lee filter. The discrimination of forest edges from agricultural areas was most appropriate in the Enhanced Lee filtered image. Hence, the Enhanced Lee filter with a window size of 7x7 was chosen for the Dandeli, Rajpipla and Bilaspur study areas for the ENVISAT-ASAR images. The different filters applied to the ENVISAT-ASAR data are shown in figure 7.3.

## GEOCODING OF SAR IMAGES:

Remotely sensed data usually contain both systematic and non-systematic geometric errors. Some of the important systematic errors are scan skew, mirror scan velocity, panoramic distortion, platform velocity, Earth rotation, perspective and altitude. Because of these geometric errors, satellite data immediately after acquisition are not planimetrically true to the ground features and a standard topobase. Hence, in order to measure or estimate areas from satellite data, the data must first be rectified to correct the geometric errors and made planimetrically true to a standard topobase. The systematic errors can be corrected through analysis of the sensor characteristics and satellite ephemerides; these are corrected in the preprocessing immediately after data acquisition. However, non-systematic errors caused by attitude (pitch, roll and yaw) can be corrected only through the use of common ground control points (GCPs).

Rectification and re-projection of satellite imagery to a standard coordinate system is performed on all scenes. The re-projection allows the determination of geographic coordinates for features identified in the analysis and facilitates integration with other geographic data sets.

## SAR image geometry:

The removal of topography-induced radiometric distortions requires high-precision geocoding of the image information. This geometric correction has to consider the sensor and processor characteristics and must be based on a rigorous range-Doppler approach. An algorithm for SAR image geocoding based on the range-Doppler approach makes it possible to geocode SAR data precisely by means of a single, accurately located GCP. The images were geocoded with pixel accuracy even without the use of GCPs, since the processing is performed using precise orbital parameters (so-called precise orbits, i.e. DORIS data in the case of ENVISAT ASAR products) where these are available.

For each picture element the following two relations must be fulfilled:

Rs = |S − P| --------- (7.20)

fD = −(2 × f0 / (c × Rs)) × (vs − vp) · (S − P) ---------- (7.21)

Rs = slant Range

S, P = spacecraft and backscatter element position

vs, vp = spacecraft and backscatter element velocity

f0 = carrier frequency

c = speed of light

fD = Doppler frequency
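The two relations can be evaluated directly from state vectors; the sign convention below (negative Doppler for increasing range) is one common choice, and the function name is illustrative:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def range_doppler(s, p, vs, vp, f0):
    """Slant range Rs (Eq. 7.20) and Doppler frequency fD (Eq. 7.21)
    from sensor/target positions and velocities (Cartesian, m and m/s)."""
    los = np.asarray(s, float) - np.asarray(p, float)   # line of sight S - P
    rs = np.linalg.norm(los)
    dv = np.asarray(vs, float) - np.asarray(vp, float)  # relative velocity
    fd = -2.0 * f0 / (C * rs) * np.dot(dv, los)
    return rs, fd
```

At the zero-Doppler position the relative velocity is perpendicular to the line of sight, so fD vanishes; geocoding exploits exactly this condition to locate each pixel in azimuth.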

Using these equations, the relationship between the sensor, each single backscatter element and their related velocities is calculated; therefore not only the illumination geometry but also the processor's characteristics are considered. Through this complete reconstruction of the imaging and processing geometry, the primary topographic effects (foreshortening, layover), as well as the influence of Earth rotation and terrain height on the Doppler frequency shift and the azimuth geometry, are accounted for.

SAR image geocoding: The principle of side-looking SAR is the measurement of the electromagnetic signal's round-trip time, to determine the slant range to objects, together with the strength of the returned signal. This principle causes several types of geometric distortion. Severe distortions occur if pronounced terrain relief is present in the imaged zone; the amount of distortion depends on the particular side-looking geometry and on the magnitude of the undulation of the terrain surface.

Geocoding an image consists of introducing spatial shifts on the original image so that the position of points on the final image corresponds to their location in a given cartographic projection. Radiometric distortions connected with terrain relief also exist and were not completely corrected; a Digital Elevation Model (DEM) was used to correct the terrain influence in the SAR images.

The location of the (i,j) pixel in a given image can be derived from knowledge of the sensor position and velocity. More precisely, the location of the antenna phase centre in an Earth referenced coordinate system is required. The target location is determined by the simultaneous solution of three equations:

Range equation,

Doppler equation,

Earth model equation

Thus the SAR pixel location is inherently more accurate than that of optical sensors, since the attitude sensor calibration accuracy does not contribute to the image pixel location error.

DLR-ESAR data were registered to IRS-P6 LISS-IV as the master image by selecting GCPs with an image-to-image method. ASAR data were rectified using the orbital parameters of the sensor, which gave sub-pixel accuracy. The geo-registered data were used for vegetation classification and above-ground biomass estimation. The ASAR data were orthorectified using the SRTM DEM resampled to a 25 m pixel size; orthorectification also minimized the terrain effect.

The images thus radiometrically processed, speckle-suppressed and geocoded were analysed for vegetation classification and estimation of above-ground biomass, as discussed in the next chapters.

Geocoding of the ASAR and PALSAR images was carried out with the orbital parameters of the respective satellites, giving an error of less than one pixel.

## Geocoding

Due to the completely different geometric properties of SAR imagery in the range and azimuth directions, the across- and along-track directions have to be considered separately to fully understand the acquisition geometry of SAR systems. By construction, SAR images are characterised by large distortions in the range direction; these are mainly caused by topographic variations and can be corrected relatively easily. The distortions in azimuth are much smaller but more complex.

A backward solution, which considers an input Digital Elevation Model, is used to convert the positions of the backscatter elements into slant-range image coordinates. The transformation of the three-dimensional object coordinates, given in a cartographic reference system, into the two-dimensional row and column coordinates of the slant-range image is performed by rigorously applying the range and Doppler equations. This requires knowledge of the position and velocity vectors of both the sensor and the backscatter elements, as well as the Doppler frequencies and pulse transit times used for SAR image processing. Using the satellite tracking data, sensor positions and velocity vectors (state vectors) are computed for each azimuth position of the SAR image. Knowing the Doppler centroid, which is used as the azimuth reference, the sensor position can be determined for any backscatter element; for each backscatter element with a corresponding estimated sensor position, the slant range and the Doppler frequency are computed from the range-Doppler equations, Eqns 7.20 and 7.21 (Meier et al., 1993), where Rs is the slant range, S and P the spacecraft and backscatter element positions, vs and vp their velocities, f0 the carrier frequency, c the speed of light and fD the processed Doppler frequency.

Note that data with both zero-Doppler and non-zero-Doppler annotations are supported.

If accurate orbital parameters are considered, no GCP is required and pixel accuracy is obtained.

During the geocoding procedure, geodetic and cartographic transformations (refer to the figure below) are applied in order to convert the geocoded image from the global Cartesian coordinate system (WGS-84) into the local cartographic reference system (e.g. UTM-32, Gauss-Krueger, Oblique Mercator, etc.).

In a first step, (i) range-dependent radiometric losses, (ii) dielectric-related effects on the radar backscatter and (iii) absolute radiometric variations are derived statistically from a multi-temporal, geocoded, radiometrically calibrated (and optionally normalized) data set. In a subsequent step, the estimated two-dimensional, image-dependent correction factors are applied to each image independently.

## Range correction

SAR images can remain affected by backscatter variations in range even after radiometric calibration and normalisation. The correction is done by identifying areas of the same land cover (in the form of a shape file) at different range positions, which are used as references in the correction process. The shape file can consist of a single large area or several small ones.

## Geocoding of ASAR data:

The precise orbital information for ENVISAT ASAR data is given by DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) data.

Using these orbital parameters, geocoding of ASAR data is carried out.

## From Single look complex to Detected (one look):

After look compression, each of the look images is detected, i.e. the data are converted from complex to real numbers (P = r² + i²): the power (or intensity) of each complex sample is calculated. Note that the pixels of the Single Look Complex (SLC) and power data do not have the same dimensions as the resolution cell during data acquisition, owing to the variation of range resolution with incidence angle.
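The detection step itself is a one-liner on the complex samples; the helper name is illustrative:

```python
import numpy as np

def detect(slc):
    """Detect an SLC sample: power (intensity) P = r^2 + i^2."""
    return slc.real ** 2 + slc.imag ** 2
```

Amplitude images would instead use the square root of this power.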

## Purpose of Multi-looking

The SAR signal processor can use the full synthetic aperture and the complete signal data history in order to produce the highest possible resolution, albeit very speckled, Single Look Complex (SLC) SAR image product. Multiple looks may be generated during multi-looking by averaging over range and/or azimuth resolution cells. The improvement in radiometric resolution obtained with multiple looks comes with an associated degradation in spatial resolution. Note that there is a difference between the number of looks physically implemented in a processor and the effective number of looks as determined by the statistics of the image data.

## Selection of appropriate looks for a sensor i.e., ASAR and PALSAR:

The number of looks is a function of

- pixel spacing in azimuth

- pixel spacing in slant range

- incidence at scene centre

The goal is to obtain approximately square pixels in the multi-looked image, considering the ground-range resolution (not the pixel spacing in slant range) and the pixel spacing in azimuth. In particular, in order to avoid over- or under-sampling effects in the geocoded image, it is recommended to generate a multi-looked image with approximately the same spatial resolution as foreseen for the geocoded image product.

Note that the ground resolution in range is defined as:

ground range resolution = pixel spacing in slant range / sin(incidence angle)