Information about land cover and land use is an important component of the planning process, whether as part of a regional plan, a resource development or management project, an environmental planning exercise, or a baseline study of a region. Remote sensing has long been an important technique for monitoring change and mapping land cover in the study of the physical characteristics of global change (Henderson-Sellers and Pitman, 1992). Since the launch of the first Earth Resources Technology Satellite in 1972 (ERTS-1, later renamed Landsat 1), there has been significant activity in mapping land cover and monitoring environmental change as a function of anthropogenic pressures and natural processes. Land cover is a fundamental variable that affects and links many parts of the human and physical environments. Despite the significance of land cover as an environmental variable, our knowledge of land cover and its dynamics is poor (Foody, 2002).
Remote sensing is an attractive source of thematic maps, such as those depicting land cover, as it provides a map-like representation of the Earth's surface that is spatially continuous and highly consistent, and available at a range of spatial and temporal scales (Treitz et al., 1992). Thematic mapping from remotely sensed data is typically based on an image classification. Information derived from remote sensing data has often been used to assist in the formulation of policies and to provide insight into land-cover and land-use patterns and multi-temporal trends in the planning process.
Land-cover mapping from remotely sensed imagery is commonly achieved through conventional hard image classification. With hard classification, each pixel in the image is allocated to the class with which it has the greatest spectral similarity. The effect of this allocation process is to constrain the prediction of inter-class boundaries to lie between pixels. Since the size and location of pixels in an image are independent of the ground cover mosaic, it is unlikely that the pixels are arranged in such a manner that their edges correspond closely enough to inter-class boundaries on the ground for hard classification to be an appropriate basis for mapping from many image data sets. In reality, the boundary between classes will generally run through the area represented by a pixel, giving the pixel a mixed class composition.
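A minimal sketch of this hard-allocation step, using a minimum-distance-to-mean rule (one common hard classifier); the class means and the small two-band image below are invented for illustration:

```python
import numpy as np

def hard_classify(image, class_means):
    """Allocate each pixel to the class whose mean spectrum is nearest
    (minimum-distance-to-mean hard classification).

    image:       (rows, cols, bands) array of pixel spectra
    class_means: (n_classes, bands) array of training-class mean spectra
    """
    # Squared Euclidean distance from every pixel to every class mean
    diff = image[:, :, None, :] - class_means[None, None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    # Each pixel receives exactly one label, regardless of mixing
    return dist.argmin(axis=-1)

# Two hypothetical spectral classes: "water" (class 0) and "land" (class 1)
means = np.array([[0.05, 0.02],
                  [0.30, 0.45]])
img = np.array([[[0.06, 0.03], [0.28, 0.44]],
                [[0.17, 0.23], [0.31, 0.46]]])  # bottom-left pixel is mixed
labels = hard_classify(img, means)
```

Note that the mixed pixel (0.17, 0.23) is still forced wholly into one class, which is exactly the limitation the text describes.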
Although the potential of land cover mapping was greatly enhanced in the late 1980s with the launch of fine-to-moderate spatial resolution sensors, SPOT HRV (20 m) and Landsat TM/ETM+ (30 m) (Baraldi and Parmiggiani, 1990; Treitz et al., 1992), there remains scepticism as to the operational capacity (i.e. robustness, reliability) of the data for mapping applications (Donnay et al., 2001). One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. New-generation fine spatial resolution images from satellite sensors such as IKONOS (1 m panchromatic, 4 m multispectral) and QuickBird (0.6 m panchromatic, 2.4 m multispectral) are now available. However, these images are not suitable for large-area studies, since a single image scene covers a small area (16.5 km x 16.5 km for QuickBird) and is therefore costly over large areas.
The earliest research on techniques for land cover classification from remotely sensed imagery focused on hard classification, whether by supervised or unsupervised techniques, in which each pixel is allocated to one class. As Adams et al. (1985) noted, many researchers began to realize that for most remotely sensed scenes hard classification is inappropriate. Many pixels in remotely sensed images represent more than one land cover class on the ground. Such mixed pixels occur where the frequency of spatial variation in land cover is greater than or equal to the frequency of sampling afforded by the sensor's spatial resolution (Woodcock and Strahler's (1987) coarse-resolution case). However, a proportion of pixels will be mixed even where the spatial resolution is fine relative to the land cover variation (the fine-resolution case), because some pixels inevitably straddle boundaries between land cover objects. Recently, super-resolution techniques were introduced for land cover mapping; a simple approach involves converting a hard-classified image into the vector data model by replacing class object boundaries with vectors. Generalizing these vectors produces sub-pixel spatial information on land cover.
1.2 Spatial Resolution
Remote sensing imagery derived from aircraft- or satellite-mounted sensors can provide spectral and spatial information on land cover. However, classifying remotely sensed imagery remains a challenge that depends on many factors, such as the complexity of the landscape in a study area, the remote sensing data selected, and the image processing and classification approaches (Settle et al., 1993). Spatial scale is a key factor in the interpretation of remotely sensed land cover data (Woodcock and Strahler, 1987), and the information obtainable from remotely sensed imagery can vary greatly depending on the spatial variation in the observed land cover and the specific terrain characteristics under consideration.
There are also practical limits to the level of detail that can be identified by each remote sensor, and these limits are defined by the resolutions of the remote sensing system. One of the most common measures of image character is spatial resolution, which determines the level of spatial detail depicted in an image. The pixel represents the smallest element of a digital image and has, therefore, traditionally represented a limit to the spatial detail obtainable in land cover feature extraction from remotely sensed imagery. Within remote sensing images, a significant proportion of pixels may be of mixed land cover class composition, and the presence of such mixed pixels can adversely affect the performance of image analysis and classification operations (Hansen et al., 1996).
This study evaluates techniques for mapping land cover features from coarse spatial resolution satellite sensor imagery. Initially, a soft classification was applied to coarse spatial resolution imagery, producing images of fraction values representing the thematic composition of image pixels. After assessing the accuracy of the soft classification, methods of mapping at a finer scale than the pixel's spatial resolution, or super-resolution mapping techniques, were applied to the imagery to predict the spatial distribution of land cover within pixels. In this way, land cover features were mapped at a sub-pixel resolution.
1.3 Aims of study
In remote sensing, obtaining highly accurate and precise spatial information may require fine resolution images. But fine spatial resolution imagery is often inappropriate and expensive, particularly if a large area is to be mapped. This research aims to investigate the potential of super-resolution mapping methods for mapping shoreline and land cover.
To evaluate the potential of super-resolution analysis for accurate shoreline mapping.
One of the issues of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Spatial resolution is the smallest area identifiable on an image as a discrete separate unit. In raster data, it is often expressed as the size of the raster cell. Fine spatial resolution images from satellite sensors such as IKONOS (1 m for panchromatic and 4 m for multispectral) and QuickBird (0.6 m for panchromatic and 2.4 m for multispectral) are available. However, these images are not suitable for large-area studies, since a single image scene covers only a small area (e.g. a 16.5 km swath width) and is therefore costly over large areas. The issue that forms the focus of the present research is that land cover data provided by remote sensing are limited by the spatial resolution of the sensor. Spatial resolution has been the subject of research in remote sensing for many years because it forms a fundamental scale of measurement (Woodcock and Strahler, 1987; Atkinson and Tatem, 2000).
Crucial information barely visible to the human eye is often embedded in a series of coarse spatial resolution images taken of the same scene. Super-resolution enables the extraction of this information by reconstructing a single image at a finer spatial resolution than is present in any of the individual images (Jain, 1989). This is particularly useful in forensic imaging, where the extraction of minute details in an image can be crucial in solving a crime (Douglas, 2003), and in remote sensing, where identification of objects at fine spatial resolution may give the most precise and reliable spatial information (Andrew, 2001).
Currently, charge-coupled devices (CCDs) are often used to capture fine spatial resolution images digitally. Although adequate for most of today's applications, in the near future this will not be acceptable: the technology of CCDs and high-precision optics cannot keep up with the demand for images of ever finer spatial resolution (Muresan and Parks, 2000).
In CCDs, the presence of noise, which is inevitable in any imaging system, imposes an upper limit on the spatial resolution. This limit arises because reducing the area of each CCD element to increase spatial resolution (more CCDs) correspondingly decreases the signal strength, while the noise strength remains the same (Brian et al., 1995). The limit on the size of each CCD element is roughly 50 μm², and current CCD technology has almost reached it (Mills and Newton, 1996). In addition, cost is another concern in using high-precision optics and CCDs. Launching a fine spatial resolution camera into space on board a satellite can be costly, and even risky. It is more cost-efficient to launch a cheaper camera with a coarse spatial resolution into orbit if finer spatial resolution images can be obtained on the ground through image processing (Jain, 1989).
The concept behind most super-resolution techniques is to combine several images of the same scene, each with coarse spatial resolution, in order to produce one or several images with a fine spatial resolution. Every coarse spatial resolution image samples the scene as a different projection of the same scene on a different sampling lattice, so each has a different profile in the aliased frequency range. Thus, none of the coarse spatial resolution images can be obtained from the others, because each contains a certain amount of differential information about the same scene, even though it may lie in the aliased frequency range. Most super-resolution approaches can be subdivided into two parts: image registration and image reconstruction. These two main stages are then followed by a third stage of image restoration for blur and noise removal. Figure 2.1 illustrates a scheme for super-resolution.
[Figure 2.1 shows the coarse inputs Y1, Y2, ..., Yn passing through blocks labelled "Registration or motion estimation", "Interpolation onto a fine resolution grid" and "Restoration for blur and noise removal" to yield the super-resolution image, where Y1, Y2, ..., Yn denote the coarse spatial resolution input images.]

Figure 2.1: Scheme for super-resolution from multi-frame and shifted observations.
The coarse spatial resolution images Y1, Y2, Y3, ..., Yn are input to the motion estimation or registration module, following which the registered image is interpolated onto a fine spatial resolution grid. Post-processing of the interpolated image through blur- and noise-removal algorithms results in the generation of a super-resolution image.
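The pipeline above can be sketched in its simplest "shift-and-add" form, assuming the registration stage has already recovered each frame's sub-pixel offset; the frames, offsets and zoom factor below are invented for illustration, and the restoration stage is indicated only by a comment:

```python
import numpy as np

def super_resolve(frames, shifts, zoom=2):
    """Shift-and-add sketch of the Figure 2.1 pipeline.

    frames: list of (r, c) coarse images of the same scene
    shifts: list of (dy, dx) offsets in coarse-pixel units,
            assumed known from the registration stage
    """
    r, c = frames[0].shape
    acc = np.zeros((r * zoom, c * zoom))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each coarse sample at its registered fine-grid position
        iy, ix = int(round(dy * zoom)), int(round(dx * zoom))
        acc[iy::zoom, ix::zoom] += frame
        cnt[iy::zoom, ix::zoom] += 1
    cnt[cnt == 0] = 1           # leave unobserved fine pixels at zero
    return acc / cnt            # blur/noise removal would follow here

# Four 1x1 coarse frames sampling a 2x2 fine scene at half-pixel shifts
frames = [np.array([[1.0]]), np.array([[2.0]]),
          np.array([[3.0]]), np.array([[4.0]])]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
sr = super_resolve(frames, shifts)
```

With these shifts the four coarse samples interleave exactly onto the fine grid; with real imagery the offsets are fractional estimates and interpolation plus restoration do the remaining work.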
2.2 Image Registration
Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, or by different sensors. It geometrically aligns two images: the reference image and the sensed image. Differences between the images are introduced by the differing imaging conditions. Image registration is a crucial step in all image analysis tasks in which the final information is gained from the combination of various data sources, as in image fusion, change detection, and multichannel image restoration.
Essential to a successful super-resolution algorithm is highly accurate point-to-point correspondence, or registration, between the images of the input sequence. This problem was addressed by Capel and Zisserman (2003): given two different views of the same scene, for each image point in one view, find the image point in the second view that has the same pre-image (i.e. corresponds to the same actual point in the scene). A planar projective transformation, or planar homography, enables the correspondence to be estimated from matched interest points using a geometric transform with few degrees of freedom. To locate these correspondence points, feature-based registration is applied. In addition, photometric registration must be performed to eliminate global illumination changes across the scene and intensity variations due to camera disturbances.
2.2.1 Image Registration Applications
Image registration applications can be divided into four main groups according to the manner of the image acquisition:
Different viewpoints (multiview analysis) - Images of the same scene are acquired from different viewpoints. The aim is to gain a larger 2D view or a 3D representation of the scanned scene. Example of application: remote sensing - mosaicing of images of the surveyed area.
Different times (multitemporal analysis) - Images of the same scene are acquired at different times, often on a regular basis, and possibly under different conditions. The aim is to find and evaluate changes in the scene which appeared between the consecutive image acquisitions. Examples of applications: monitoring of global land usage, landscape planning.
Different sensors (multimodal analysis) - Images of the same scene are acquired by different sensors. The aim is to integrate the information obtained from different source streams to gain a more complex and detailed scene representation. Examples of applications: fusion of information from sensors with different characteristics, such as panchromatic images offering better spatial resolution, colour/multispectral images with better spectral resolution, or radar images independent of cloud cover and solar illumination.
Scene to model registration - Images of a scene and a model of the scene are registered. The model can be a computer representation of the scene, for instance maps or digital elevation models (DEM) in GIS, another scene with similar content, an 'average' specimen, etc. The aim is to localize the acquired image in the scene/model and to compare them. Example of application: remote sensing - registration of aerial or satellite data into maps or other GIS layers.
The observed coarse spatial resolution images are regarded as degraded observations of a real fine spatial resolution image.
2.3 Image Reconstruction Technique
In this study, research on super-resolution has focused on the reconstruction constraints, and on various ways of incorporating a simple smoothness prior to allow the constrained problem to be solved.
2.3.1 Image Interpolation
The process of image interpolation aims at estimating intermediate pixels between the known pixel values in the available coarse resolution image. The image interpolation process can be considered an image synthesis operation. It is performed on a one-dimensional basis, row by row and then column by column. Suppose we have a discrete sequence f(xk) of length N, as shown in Figure 2.3-a, and this sequence is filtered and down-sampled by 2 to give another sequence g(xn) of length N/2, as shown in Figure 2.3-b. The interpolation process aims at estimating a sequence l(xk) of length N, as shown in Figure 2.3-c, which is as close as possible to the original discrete sequence f(xk).
Figure 2.3: Signal down-sampling and interpolation. a - Original data sequence. b - Down-sampled version of the original data sequence. c - Interpolated data sequence. d - Down-sampled version of the interpolated data sequence.
(Source: Chaudhuri, 2001)
This is known as ideal interpolation. From a numerical computation point of view, the ideal interpolation formula is not practical due to the slow rate of decay of the interpolation kernel sinc(x). So, approximations such as nearest-neighbour, linear, B-spline, Keys' (bicubic) and o-Moms kernels are used as alternative basis functions.
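The practical substitutes for the sinc kernel can be illustrated with a small one-dimensional sketch; the sample sequence below is invented, and the nearest-neighbour and linear kernels shown are the two simplest of the approximations listed:

```python
import numpy as np

def interp1d(g, x, kernel):
    """Estimate f(x) from coarse samples g[k] (taken at integer k) by
    weighting the samples with a finite interpolation kernel -- the
    practical substitute for the slowly decaying ideal sinc kernel.
    """
    ks = np.arange(len(g))
    w = kernel(x - ks)          # kernel evaluated at distances to samples
    return float((w * g).sum())

def nearest(t):
    """Nearest-neighbour kernel: a box of width 1."""
    return (np.abs(t) <= 0.5).astype(float)

def linear(t):
    """Linear kernel: a triangle of width 2."""
    return np.clip(1.0 - np.abs(t), 0.0, None)

g = np.array([0.0, 1.0, 0.0])        # coarse samples at x = 0, 1, 2
mid_nn = interp1d(g, 0.4, nearest)   # snaps to the closest sample
mid_lin = interp1d(g, 0.5, linear)   # averages the two neighbours
```

B-spline, bicubic and o-Moms kernels follow the same pattern, differing only in the (wider) kernel function used.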
2.4 Super Resolution Mapping
There is an increasing requirement for high spatial and spectral resolution remotely sensed imagery in a wide range of fields such as agriculture, urban planning, habitat management, and especially land cover mapping. To satisfy this requirement, several recently launched sensors provide images with a very high spatial resolution, such as IKONOS with 4-m multispectral (MS) and 1-m panchromatic (Pan) imagery and QuickBird with 2.4-m MS and 0.6-m Pan imagery. However, the spatial detail in such imagery is still limited by the pixel, which represents the smallest element in a remotely sensed image. Conventionally, hard-classification approaches provide thematic maps at the pixel level, in which each pixel is assigned to just one class in the thematic map. In most cases, the nature of the real landscape and the data acquisition process cause many "mixed pixels" in remotely sensed images (Schowengerdt, 1997). If these mixed pixels are assigned to just one class, as in hard classification, some information is lost.
Soft-classification approaches predict the proportional cover of each land cover class within each pixel. Several soft-classification approaches exist, such as spectral mixture modelling (Settle and Drake, 1993), fuzzy c-means classifiers (Bastin, 1997), artificial neural networks (Foody et al., 1997) and nearest neighbour classifiers (Schowengerdt, 1997). Soft classification produces a set of proportion images, each of which contains subpixel information on a given class. These images are more informative and appropriate depictions of land cover than those produced by conventional hard classification. However, the location of the land cover classes within the mixed pixels is still unknown. In other words, the spatial resolution of the thematic map produced by soft classification is not increased relative to that of hard classification.
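A minimal sketch of the spectral-mixture-modelling idea, assuming the pure-class (endmember) spectra are known; the two-band endmembers below are invented for illustration:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares linear spectral unmixing with a sum-to-one
    constraint: solve pixel ~= endmembers.T @ f for the fractions f.

    pixel:      (bands,) observed mixed spectrum
    endmembers: (n_classes, bands) pure-class spectra
    """
    n = endmembers.shape[0]
    # Append the sum-to-one constraint as a heavily weighted extra row
    A = np.vstack([endmembers.T, np.full(n, 1e3)])
    b = np.append(pixel, 1e3)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Hypothetical endmembers in two bands: water and land
E = np.array([[0.05, 0.02],
              [0.30, 0.45]])
mixed = 0.6 * E[0] + 0.4 * E[1]   # a pixel that is 60% water, 40% land
fractions = unmix(mixed, E)
```

A full implementation would also enforce non-negativity of the fractions; the unconstrained least-squares form is kept here for brevity.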
The idea proposed was to convert soft land cover proportions to hard land cover classes at a finer spatial resolution. The most intuitive solution is attained by maximizing the spatial correlation, or spatial dependence, between neighbouring sub-pixels. Spatial dependence is the likelihood that observations close together are more alike than those that are further apart (Matheron, 1965; Goovaerts, 1997). The basic idea was, therefore, to maximize the spatial correlation between neighbouring sub-pixels under the constraint that the original pixel proportions are maintained (Atkinson, 1997). Aplin and Atkinson (2001) developed a technique for converting the output of a per-pixel soft classification of land cover into a per-parcel hard classification of land cover objects. Land-Line vector data from the Ordnance Survey were used to constrain the placement of the soft proportions within each pixel. This requirement for vector data limits the technique's applicability in less developed areas of the world and ties it to updates of the vector database.
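The proportion-constrained maximization of spatial dependence can be sketched with a simple pixel-swapping procedure in the spirit of Atkinson's proposal; this is an illustrative simplification rather than the published algorithm, and the proportions below are invented:

```python
import numpy as np

def pixel_swap(prop, zoom=2, iters=50, seed=0):
    """Pixel-swapping sketch for one class: allocate each pixel's class
    proportion to zoom*zoom sub-pixels, then repeatedly swap a 1/0
    sub-pixel pair *within the same pixel* when doing so increases
    agreement with neighbouring sub-pixels.  The per-pixel proportions
    are preserved exactly by construction.
    """
    rng = np.random.default_rng(seed)
    r, c = prop.shape
    fine = np.zeros((r * zoom, c * zoom), dtype=int)
    # Initial random allocation honouring each pixel's proportion
    for i in range(r):
        for j in range(c):
            n1 = int(round(prop[i, j] * zoom * zoom))
            for k in rng.permutation(zoom * zoom)[:n1]:
                fine[i * zoom + k // zoom, j * zoom + k % zoom] = 1

    def attract(y, x):
        # Spatial dependence proxy: class-1 count among the 8 neighbours
        ys = slice(max(y - 1, 0), y + 2)
        xs = slice(max(x - 1, 0), x + 2)
        return fine[ys, xs].sum() - fine[y, x]

    for _ in range(iters):
        for i in range(r):
            for j in range(c):
                sub = [(i * zoom + a, j * zoom + b)
                       for a in range(zoom) for b in range(zoom)]
                ones = [p for p in sub if fine[p] == 1]
                zeros = [p for p in sub if fine[p] == 0]
                if not ones or not zeros:
                    continue
                # Swap the least-attracted 1 with the most-attracted 0
                lo = min(ones, key=lambda p: attract(*p))
                hi = max(zeros, key=lambda p: attract(*p))
                if attract(*hi) > attract(*lo):
                    fine[lo], fine[hi] = 0, 1
    return fine

# Water proportions for a 2x3 pixel image: all water, half, no water
prop = np.array([[1.0, 0.5, 0.0],
                 [1.0, 0.5, 0.0]])
fine = pixel_swap(prop)
```

Because swaps are confined to a pixel, the soft-classification proportions survive unchanged while the class-1 sub-pixels drift toward like neighbours.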
2.5 Spatial Variation
The spatial variation observed in remotely sensed imagery is a function of both the property of interest (i.e., the real world) and the sampling framework (i.e., the ensemble of sensor characteristics including the spatial resolution). Researchers have sought to evaluate the effect of spatial resolution on detectable spatial variation as characterised by functions such as the scale variance (Woodcock and Strahler, 1987) and variogram (Curran and Atkinson, 1998). Further, researchers have attempted to find a suitable means of selecting a spatial resolution given knowledge of functions such as the variogram (e.g., Curran and Atkinson, 1998). Just as spatial resolution affects the spatial variation observable in remotely sensed imagery, it also affects land cover classification based on that imagery. For example, researchers realized early on that an increase in spatial resolution may lead to a decrease in classification accuracy (Justice and Townshend, 1981). This paradoxical outcome may be explained by an increase in within-class variance (between ground spatial resolution elements) with an increase in spatial resolution. However, the result relates mainly to hard classification and assumes no post-classification analysis. The more important (and often overlooked) property is that the spatial information increases dramatically with spatial resolution, making spatial resolution the most fundamental limit to quantitative land cover information from remotely sensed imagery.
The new generation of fine spatial resolution multispectral satellite sensors such as IKONOS and QuickBird has opened a new era of highly precise mapping. Their fine spatial resolution and short revisit period (approximately 3 days) make the images very valuable for land cover mapping. The nominal absolute positioning accuracy (root mean square error, or RMSE) of the 1 m and 4 m IKONOS Geo products is 25 m (Muslim et al., 2002). A number of investigations have been reported into the photogrammetric processing of IKONOS imagery and into increasing the accuracy of the Rational Function (RF) model and IKONOS products (Li, 1998; Zhou and Li, 2000; Grodecki and Dial, 2001).
Spatial resolution enhancement is often required in remote sensing, especially for satellite sensor images taken with the aim of recognizing objects whose size approaches the limiting spatial resolution scale. The basic premise of most super-resolution techniques is to combine several coarse spatial resolution images of the same scene in order to produce one or several images with a fine spatial resolution. Of course, a fine spatial resolution image can only be obtained from coarse spatial resolution images if they are undersampled and suffer from aliasing, so that, as discussed in Section 2.1, each samples the scene on a different lattice and carries differential information about the scene.
Remote sensing has the potential to provide information on land cover features, but fine spatial resolution images are not suitable for large-area studies, since a single image scene covers a small area and is therefore costly over large areas. This chapter introduced super-resolution mapping as a means of increasing image spatial resolution for accurate shoreline mapping.
SUPER RESOLUTION MAPPING TO DETERMINE SHORELINE POSITION
Coastal zone and shoreline monitoring is an important task in sustainable development and environmental protection. For coastal zone monitoring, shoreline extraction at different times is fundamental. The spectral characteristics of water, vegetation and soil make images containing visible and infrared bands widely used for coastline mapping. Conventionally, photogrammetric techniques are employed to map the tide-coordinated shoreline from aerial photographs taken when the water level reaches the desired level. On-site surveys taken at these water levels are more expensive to obtain than remote sensing imagery. With the development of remote sensing technology, satellites can capture high-resolution imagery capable of yielding shoreline position.
In recent years, satellite remote sensing data have been used in automatic or semi-automatic shoreline extraction and mapping. Braud and Feng (1998) evaluated threshold level slicing and multi-spectral image classification techniques for detection and delineation of the Louisiana shoreline from 30 m spatial resolution Landsat Thematic Mapper (TM) imagery. They found that thresholding TM Band 5 was the most reliable methodology. Frazier and Page (2000) quantitatively analyzed the classification accuracy of water body detection and delineation from Landsat TM data in the Wagga region in Australia. Their experiments indicated that density slicing of TM Band 5 achieved an overall accuracy of 96.9 percent, as successful as a 6-band maximum likelihood classification. Besides multi-spectral satellite imagery, SAR imagery has also been used to extract shorelines at various geographic locations (Niedermeier et al., 2000; Schwäbisch et al., 2001). While very fine spatial resolution sensors (e.g. IKONOS) offer increased spatial resolution, the imagery from such systems is often inappropriate for many users, particularly if a large area is to be mapped (Mumby and Edwards, 2002). Thus, if constrained to use fine-to-moderate spatial resolution imagery, there is a desire to map the waterline at a subpixel scale. In such situations the aim is, therefore, to derive a map that depicts the feature of interest at a scale finer than the data set from which it was derived, which may be achieved through a super-resolution analysis (Tatem et al., 2001; Verhoeye and De Wulf, 2002).
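The density-slicing approach these studies found most reliable reduces, in essence, to a single threshold on an infrared band; the digital numbers and threshold below are invented for illustration, and in practice the threshold is chosen from the scene histogram:

```python
import numpy as np

def density_slice_water(band, threshold):
    """Density slicing of a single infrared band: water absorbs strongly
    in the infrared, so pixels below the threshold are labelled water (1)
    and the remainder land (0).  The threshold is scene-dependent and
    selected by the analyst, e.g. from the image histogram.
    """
    return (band < threshold).astype(int)

# Hypothetical infrared digital numbers: low over water, high over land
dn = np.array([[12, 14, 95],
               [11, 60, 102],
               [13, 15, 99]])
water = density_slice_water(dn, threshold=40)
```

The land-water boundary then runs between the 1 and 0 pixels, which is exactly the pixel-level limit the super-resolution analysis seeks to refine.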
3.2 Test site
The work focused on a 38 km stretch of coast off the North West Cape on the north-west coast of Western Australia (Figure 3.1). The shoreline was characterized by different beach types, such as sandy, muddy and cliffed shores, and faces Exmouth Gulf in the Indian Ocean. Exmouth Gulf is very shallow, with an average depth of about 10 m; it is a northward-facing drowned river valley and inverse estuarine embayment on the northwest shelf of Australia. The vertical tidal range is less than 2 m and varies little between neap and spring tides.
The Exmouth region is exposed to predominantly south to southeasterly winds throughout the year (Bureau of Meteorology, 1988; Lough, 1998). During spring and summer, generally moderate (21-30 km/h) southerly winds dominate, while autumn and winter records show generally lighter (11-20 km/h) wind speeds, with fluctuations between the dominant southeast wind and north to northeast winds. The wind regime is controlled primarily by the interplay of the southeasterly trade wind system and the west coast-generated sea breeze, in conjunction with a local sea breeze developed within the Gulf.
3.3 Data sets
The study used a series of coarse spatial resolution National Oceanic and Atmospheric Administration (NOAA) images over the study site to generate a super-resolution image. For this study, the shoreline was defined as the position of the boundary between water and land at the time of satellite image acquisition.
The NOAA series of satellites each carries the Advanced Very High Resolution Radiometer (AVHRR) sensor. These sensors collect global data on a daily basis for a variety of land, ocean, and atmospheric applications. Specific applications include forest fire detection, vegetation analysis, weather analysis and forecasting, climate research and prediction, global sea surface temperature measurement, ocean dynamics research, and search and rescue (CCRS, 1998). Fifty NOAA images acquired on different dates were used to create a super-resolution image from the series of coarse spatial resolution observations.
3.3.1 AVHRR Sensor Characteristics
The AVHRR data set comprises data collected by the AVHRR sensor and held in the archives of Geoscience Australia. Carried aboard the National Oceanic and Atmospheric Administration's (NOAA) Polar Orbiting Environmental Satellite series, the AVHRR sensor is a broad-band, 4- or 5-channel scanning radiometer, sensing in the visible, near-infrared, middle infrared and thermal infrared portions of the electromagnetic spectrum. It provides global on-board collection of data over a 2399 km swath. The sensor orbits the Earth 14 times each day from an altitude of 833 km. In this study, NOAA images were acquired from Geoscience Australia, whose NOAA antenna in Alice Springs permits acquisition of day and night-time passes; there are normally about two day-time and two night-time passes per satellite. The sensor parameters are shown in Table 3.1. Channel 2 (0.725 - 1.00 µm) was extracted from each image and used for this study, since land-water boundaries are clearly seen in this channel. Table 3.2 shows the AVHRR spectral characteristics.
Table 3.1: Spacecraft Parameters

Altitude: 833 km
Swath width: 2399 km
Resolution at nadir: 1.1 km
Number of orbits per day: 14
Table 3.2: AVHRR Spectral Characteristics

Channel | NOAA-15, 16, 17, 18 (µm) | Typical use
1       | 0.58 - 0.68              | Daytime cloud and surface mapping
2       | 0.725 - 1.00             | Land-water boundaries
3A      | 1.58 - 1.64              | Snow and ice detection
3B      | 3.55 - 3.93              | Night cloud mapping, sea surface temperature
4       | 10.30 - 11.30            | Night cloud mapping, sea surface temperature
5       | 11.50 - 12.50            | Sea surface temperature
3.3.2 Reference Data
Landsat TM data of the North West Cape, Australia, acquired on 24 August 2007 with a spatial resolution of 30 m, were used as reference (Figure 3.2). The scene (WRS Path 115, Row 075) was geometrically corrected and georeferenced to WGS 84. Georeferenced imagery is defined as imagery that has been corrected to remove geometric errors and transformed to a map projection. Georeferenced image correction can take one of two forms: systematic and precision. Systematic correction uses orbital models of the satellite plus telemetry data to find the approximate relationship between the image and map coordinates. Precision correction uses ground control points to register the image to absolute geographical coordinates. In other words, in a georeferenced image the pixels and lines are not aligned to the map projection grid.
A Landsat 5 TM scene has an instantaneous field of view (IFOV) of 30 m by 30 m (900 square metres) in bands 1 through 5 and band 7, and an IFOV of 120 m by 120 m (14,400 square metres) on the ground in band 6. Only band 4 (0.76 - 0.90 µm), in the infrared region, was used to delineate the shoreline.
In the context of super-resolution techniques, it is assumed that several coarse spatial resolution images can be combined into a single fine spatial resolution image. The coarse images cannot all be identical: there must be some variation between them, such as translational motion parallel to the image plane (most common), some other type of motion (rotation, movement away from or toward the camera), or different viewing angles. In general, super-resolution can be broken down into two broad parts: i) registration of the changes between the coarse spatial resolution images, and ii) restoration, or synthesis, of the coarse images into a fine spatial resolution image. This is a conceptual classification only, as sometimes the two steps are performed simultaneously.
In this study, the objective is to generate a fine spatial resolution image from multiple coarse resolution images. The fine spatial resolution image is constructed in two stages, image registration and super-resolution reconstruction, with the parameters estimated iteratively so that object identification is robust, accurate and precise.
3.4.1 Image Registration
Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, or by different sensors. It is a crucial step in all image analysis tasks in which the final information is gained from the combination of various data sources, as in image fusion. Image registration consists of the following four steps: feature detection, feature matching, transform model estimation, and image resampling and transformation.
3.4.2 Geometric Registration
The geometric distortions present in airborne remotely sensed images may be categorized into system-independent and system-dependent distortions. The system-independent distortions are caused by the motion of the sensor and by surface relief. Figure 3.3 shows two cases in which images are related by a planar projective transformation, or so-called planar homography: (a) images of a plane viewed under arbitrary camera motion, and (b) images of an arbitrary 3D scene viewed by a camera rotating about its optic centre or zooming. In both cases, the image points x and x' correspond to a single point X in the world.
Figure 3.3: Two imaging scenarios for which the image-to-image correspondence is captured by a planar homography (Capel and Zisserman, 2003)
Under a planar homography, points are mapped as x' = Hx, where x' is the point in the second image corresponding to the point x in the reference image, and H is a 3 x 3 projective transformation matrix, defined only up to scale and therefore having eight degrees of freedom. In homogeneous coordinates the mapping is:

(x', y', w')T = H (x, y, w)T,  where H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]

or equivalently, x' = Hx (3.2)

The equivalent non-homogeneous relationship is:

x' = (h11 x + h12 y + h13) / (h31 x + h32 y + h33),  y' = (h21 x + h22 y + h23) / (h31 x + h32 y + h33) (3.3)
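The mapping and its scale invariance can be checked with a short sketch; the homography values below are hypothetical, chosen only to exercise the projective division.

```python
import numpy as np

def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 homography H and
    return the non-homogeneous coordinates (x', y')."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# hypothetical homography combining scale, translation and a projective term
H = np.array([[1.2,   0.0,  5.0],
              [0.0,   1.2, -3.0],
              [0.001, 0.0,  1.0]])

xp, yp = apply_homography(H, 100.0, 50.0)
# H is defined only up to scale: 2H maps points identically
xs, ys = apply_homography(2.0 * H, 100.0, 50.0)
print(round(xp, 3), round(yp, 3))  # 113.636 51.818
```

Because the homogeneous result is divided by the third coordinate, multiplying H by any non-zero scalar leaves the mapped point unchanged, which is why H has eight rather than nine degrees of freedom.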
This scenario, in which a homography arises because a freely moving camera views a very distant scene, is typical of airborne remote sensing (Forte and Jones, 1999).
3.4.3 Photometric Registration
Photometric registration refers to the procedure by which global photometric transformations between images are estimated. The model used here allows an affine transformation (a contrast term α and a brightness term β) per RGB channel:

r2 = αr r1 + βr,  g2 = αg g1 + βg,  b2 = αb b1 + βb

where r1, g1, b1 are the RGB channels of image 1 and r2, g2, b2 are the RGB channels of image 2.
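The per-channel contrast and brightness terms can be recovered by ordinary least squares; the sketch below fits one channel of a synthetic image pair (the function name and data are illustrative assumptions).

```python
import numpy as np

def fit_photometric_affine(ch1, ch2):
    """Least-squares estimate of contrast (alpha) and brightness (beta)
    so that ch2 ~= alpha * ch1 + beta, for a single colour channel."""
    A = np.column_stack([ch1.ravel(), np.ones(ch1.size)])
    alpha, beta = np.linalg.lstsq(A, ch2.ravel(), rcond=None)[0]
    return alpha, beta

rng = np.random.default_rng(1)
r1 = rng.uniform(0.0, 1.0, (16, 16))   # red channel of image 1
r2 = 1.5 * r1 + 0.1                    # same channel after a photometric change

alpha, beta = fit_photometric_affine(r1, r2)
print(round(alpha, 3), round(beta, 3))  # 1.5 0.1
```

In the noiseless case the fit recovers the simulated contrast and brightness exactly; with real imagery the estimate would be computed over the geometrically registered overlap region.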
The image registration procedure for a homography is summarized in Figure 3.4; the last two steps are iterated until the estimate stabilizes.
Figure 3.4: Procedure to estimate a homography between two images (Capel and Zisserman, 2003)
In order to derive a super resolution image from a series of low resolution images, all images must first be registered. Block bundle adjustment is considered the best estimator for computing the registration of all pairs of consecutive frames in the input sequence. Parameters such as translation, rotation, scale, contrast and brightness can be estimated simultaneously for every image pair using feature-based registration with RANSAC (RANdom SAmple Consensus) matching.
The generative image formation model considers the geometric transformation of the n images, a point spread function combining the effects of optical blur and motion blur, a down-sampling operator by a factor S, scalar illumination parameters and observation noise. This model is generalized as follows (Capel and Zisserman, 2003):

gn = αn s↓( h * τn(f) ) + βn + ηn

where:
f = fine spatial resolution image
gn = nth observed coarse spatial resolution image
τn = geometric transformation of the nth image
h = point spread function
s↓ = down-sampling operator by a factor S
αn, βn = scalar illumination parameters
ηn = observation noise
The transformation τ is assumed to be a homography. The point spread function h is assumed to be linear and spatially invariant. The noise η is assumed to be Gaussian with zero mean.
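The forward model can be made concrete with a small simulation. In this sketch τn is restricted to an integer translation and h to a box blur, which are simplifying assumptions for illustration, not the kernels used in the study.

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box blur: a crude stand-in for the point spread function h."""
    pad = np.pad(img, k // 2, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def observe(f, shift, S, alpha, beta, sigma, rng):
    """One coarse observation gn = alpha * s_down(h * tau_n(f)) + beta + eta_n,
    with tau_n restricted to an integer translation for simplicity."""
    warped = np.roll(np.roll(f, shift[0], axis=0), shift[1], axis=1)
    down = box_blur(warped)[::S, ::S]          # blur, then downsample by S
    return alpha * down + beta + rng.normal(0.0, sigma, down.shape)

rng = np.random.default_rng(2)
f = rng.uniform(0.0, 1.0, (64, 64))            # fine-resolution scene
g = observe(f, shift=(1, -2), S=2, alpha=1.1, beta=0.05, sigma=0.01, rng=rng)
print(g.shape)  # (32, 32)
```

Each observed coarse image is a warped, blurred, down-sampled and photometrically scaled copy of the same fine image plus noise, which is exactly the structure the super-resolution step inverts.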
3.5 Hard Classification
Mapping from satellite sensor imagery is commonly achieved through hard classification methods, in which each pixel is allocated to one of a set of pre-defined classes on the basis of its spectral similarity to that class. To differentiate between land and water, a hard classifier was applied to the coarse spatial resolution (NOAA AVHRR) imagery. The supervised component of this classification methodology refers to the user-defined training classes. For each training class the multi-spectral (multi-band) pixel values are extracted and used to define a statistical signature. This signature is a statistical representation of a particular class, which is used by the decision rule to assign labels. It is important that these classes be a homogeneous sample of the respective class, yet at the same time capture the range of variability within that class; thus, more than one training area is used to represent a particular class. Figure 3.5a shows the classified NOAA image and Figure 3.5b shows a classification of the Landsat 5 TM image using the same training classes.
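A minimal sketch of signature-based hard classification follows, assuming a toy two-band, two-class setup with a minimum-distance decision rule; the function names and training values are illustrative, not the classifier actually applied to the NOAA AVHRR data.

```python
import numpy as np

def train_signatures(samples):
    """Compute a mean-vector 'signature' per class from training pixels.
    samples: {class_name: array of shape (n_pixels, n_bands)}"""
    return {c: pix.mean(axis=0) for c, pix in samples.items()}

def classify(pixels, signatures):
    """Hard classification: each pixel is allocated to the class whose
    signature is nearest in spectral (Euclidean) distance."""
    names = list(signatures)
    means = np.stack([signatures[c] for c in names])           # (k, bands)
    d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)
    return [names[i] for i in d.argmin(axis=1)]

# hypothetical 2-band training data: 'water' is dark, 'land' is bright
train = {'water': np.array([[0.05, 0.02], [0.07, 0.03]]),
         'land':  np.array([[0.30, 0.45], [0.35, 0.50]])}
sig = train_signatures(train)
labels = classify(np.array([[0.06, 0.02], [0.33, 0.48]]), sig)
print(labels)  # ['water', 'land']
```

A maximum-likelihood decision rule would additionally use the per-class covariance, but the minimum-distance rule suffices to show how signatures drive the hard allocation.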
Figure 3.5: (a) 1100 m spatial resolution and (b) 30 m spatial resolution classified imagery
3.6 Super Resolution Method
Super-resolution algorithms attempt to recover the fine resolution image beyond the limitations of the optical imaging system. This type of problem is an example of an inverse problem, wherein the source of information (the fine spatial resolution image) is estimated from the observed data (the coarse spatial resolution images). Solving an inverse problem in general requires first constructing a forward model. By far the most common forward model for the super-resolution problem is linear in form:
Y(t)= M(t)X(t) + V(t)
where Y is the measured data (a single image or a collection of images), M represents the imaging system, X is the unknown high-resolution image or images, V is the random noise inherent to any imaging system, and t represents the time of image acquisition. Underscore notation (e.g., X) indicates a vector.
Historically, the construction of such a cost function has been motivated from either an algebraic or a statistical perspective. Perhaps the cost function most common to both perspectives is the least-squares (LS) cost function, which minimizes the L2 norm of the residual vector:

X̂ = argmin_X || Y - MX ||^2
For the case where the noise V is additive, white, zero-mean Gaussian, this approach has the interpretation of providing the maximum likelihood estimate of X (Elad and Feuer, 1997). In this study, software developed by Farsiu et al. (2004) was modified and used for this work. Figure 3.6 shows a sample screenshot of the Super-Resolution tool.
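As a purely illustrative sketch of the least-squares formulation (not the MDSP implementation), the following 1-D toy example stacks two shifted, blurred-and-downsampled observations into a single linear system and solves min ||Y - MX||^2 with a standard solver; all names and data here are assumptions for illustration.

```python
import numpy as np

# Forward model for a 1-D toy problem: each coarse sample is the average of
# two adjacent fine samples (a crude blur followed by 2x downsampling).
n_fine = 8
M1 = np.zeros((n_fine // 2, n_fine))
for i in range(n_fine // 2):
    M1[i, 2 * i:2 * i + 2] = 0.5
M2 = np.roll(M1, 1, axis=1)            # a second observation, shifted by one

X_true = np.arange(1.0, n_fine + 1)    # unknown fine-resolution signal
M = np.vstack([M1, M2])                # stacked imaging system
Y = M @ X_true                         # noiseless coarse observations

# Least-squares estimate: X_hat = argmin_X ||Y - M X||^2
# NOTE: with pair-averaging alone the system is still rank-deficient, so
# lstsq returns the minimum-norm solution consistent with the data;
# real super-resolution adds regularization to resolve such ambiguity.
X_hat, *_ = np.linalg.lstsq(M, Y, rcond=None)
print(np.allclose(M @ X_hat, Y))       # True: the estimate explains the data
```

The point of the sketch is the structure of the problem: multiple coarse observations become extra rows of M, constraining the fine image, while noise and remaining rank deficiency motivate the regularized estimators used in practice.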
Figure 3.6. MDSP Resolution Enhancement Program screenshot.
The waterline was mapped from the super resolution image generated from the series of coarse spatial resolution images. The same training sites were used in all the classifications. As a benchmark, a conventional hard classification was used to predict the waterline from the coarse spatial resolution image, the waterline being fitted to the classification output by threading it between pixels allocated to the different classes. Figure 3.7(a) shows the super resolution image generated from the coarse spatial resolution images, Figure 3.7(b) shows the boundary between land and water, and Figure 3.7(c) shows the vector line of the shoreline.
Figure 3.7: Super resolution technique (a) single image (b) hard classification of super resolution image (c) waterline delineation.
The study also repeated the same process using different numbers of input images. Figure 3.8 shows the super resolution output using (a) 3 images, (b) 5 images, (c) 12 images, (d) 15 images, (e) 30 images and (f) 50 images.
Figure 3.8: Output of super resolution technique (a) 3 images (b) 5 images (c) 12 images (d) 15 images (e) 30 images (f) 50 images.
3.7 Positional Error Analysis
The accuracy of the shoreline maps generated at each spatial resolution by the hard classification and by the super resolution method applied to multiple images was analysed for the study area (Figure 3.1). For each output, the accuracy of the derived shoreline prediction was determined by comparing it to the shoreline derived from the Landsat 5 TM data at every metre along the shoreline (Table 3.3). The Root Mean Squared Error (RMSE), the square root of the average squared distance of the data points from the fitted line, was calculated as follows:

RMSE = sqrt( (1/n) Σ d_i^2 ),  i = 1, ..., n

where d_i is the distance of the ith shoreline point from the reference shoreline and n is the number of points.
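The RMSE computation itself is a one-liner; the offsets below are hypothetical values, not measurements from this study.

```python
import math

def rmse(distances):
    """Root mean squared error of the point-to-reference distances (metres)."""
    return math.sqrt(sum(d * d for d in distances) / len(distances))

# hypothetical offsets (m) between a predicted and the reference shoreline
offsets = [3.0, -4.0, 5.0, 0.0]
print(round(rmse(offsets), 3))  # 3.536
```

Squaring before averaging means large excursions dominate the score, so RMSE penalizes occasional gross shoreline errors more heavily than a mean absolute distance would.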
Table 3.3 shows the positional accuracy along the 38 km length of shoreline, and Figure 3.9 shows a graph of the positional accuracy.
Table 3.3: Positional accuracy (RMSE) of each method.
1 image: 14.8 m
3 images: 7.21 m
5 images: 6.25 m
12 images: 5.33 m
15 images: 5.17 m
30 images: 5.08 m
50 images: 5.07 m
Figure 3.9: Trends of positional accuracy using different images.
This chapter has given details of the study area and the data used throughout this report. The characteristics of the study area were explained to give an understanding of the conditions contributing to problems in the coastal area, and the data used throughout this thesis were described. The chapter has also demonstrated the potential of using coarse spatial resolution satellite sensor imagery to map the shoreline; even so, fine spatial resolution imagery remains useful for producing local shoreline maps.
4.0 Plan for next research period
Table 4.1 shows the research schedule for the next 24 months, intended to ensure that the study remains on track and is completed successfully.
Table 4.1: Planning of research activities towards the end of PhD
Super resolution mapping
Sub pixel mapping