The essay below has been submitted to us by a student in order to help you with your studies.

Under the FASAL project, the area under principal crops such as rice and wheat is estimated at the district level in India through remote sensing data analysis using a sampling approach. Single-date digital data and a supervised classification approach were adopted for wheat acreage estimation in Karnal district of Haryana (Dadwal and Parihar, 1985) and in Patiala tehsil, Punjab (Kalubarne and Mahey, 1986). Mahey et al. (1993) obtained wheat acreage estimates in Punjab, India using a single-date acquisition of IRS-1A LISS-1 data during the 1988-89 season. The methodology consisted of a stratified random sampling design, 10 × 10 km sample segments, a 10% sampling fraction and maximum likelihood (MXL) supervised classification. The estimated wheat acreage, which is mostly irrigated, was in agreement with government estimates. Data from the Indian Remote Sensing satellites IRS-1A and 1B have been used successfully to obtain the areas under rice and mustard crops in West Bengal (Panigray et al., 1993). Thus, numerous studies have concluded that remote sensing images can be used for crop area inventory and for delineating land use / land cover classes.
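The direct expansion estimate behind such a sampling design can be sketched as follows. The segment values and sampling fraction here are illustrative, not data from the cited studies:

```python
import numpy as np

# Classified wheat area (km^2) in each sampled 10 x 10 km segment;
# these figures are made up for illustration.
segment_wheat_km2 = np.array([42.0, 55.5, 38.2, 61.0, 47.3])
sampling_fraction = 0.10  # a 10% sample of segments, as in the design above

# Direct expansion estimator: scale the sampled total by 1 / sampling fraction.
district_estimate_km2 = segment_wheat_km2.sum() / sampling_fraction
print(round(district_estimate_km2, 1))  # 2440.0
```

In practice the stratified design would compute such an expansion per stratum and sum the strata, but the scaling step is the same.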

Similarly, in India, studies of urban transportation systems have been made easier through remote sensing data analysis using the sampling approach, providing a basis for future transport planning.

Sucharita Gopal (1998) worked on two neural networks for classifying data and estimating unknown functions, the Multi-Layer Perceptron (MLP) and fuzzy ARTMAP networks, and brought out the advantages of neural network techniques. The road network is important information for this study, as tower placement should exclude this network. Even though this information is available on maps, recently developed roads might not appear on them and have to be updated by ground-based techniques or from satellite data. Many researchers are working on deriving road networks automatically from satellite data. One such study was carried out by Jun Kumagai (2001) using high resolution data. Pattern groups of roads were examined through histograms of existing roads to characterize them, and image segments matching the histograms of the road pattern groups were extracted as roads. He explored the possibility of automating road extraction by assuming that the contrast in shade value between a road and its surroundings is quite strong. The results were close to those prepared by manual methods.

An approach to automated road network detection from high resolution digital imagery using mathematical morphology operations is discussed by Shunji Murai and Chunsun Zhang (1998). Their approach first classifies the image to find the road network region, and then applies a morphological trivial opening to suppress noise. The method was tested on a simulated image with 1 meter resolution. The results showed that mathematical morphology provides an effective tool for automated road network detection. They concluded that a combination of trivial opening and a new concept of granulometry can be used to automatically detect road networks of wider width from high resolution imagery.
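The opening step can be sketched on a toy binary mask. This is an assumption-laden illustration, not the authors' code: a wide "road" stripe survives a morphological opening while an isolated noise pixel is removed.

```python
import numpy as np
from scipy.ndimage import binary_opening

# Toy road-candidate mask: a road 3 pixels wide plus one noise pixel.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, :] = True      # the road stripe
mask[0, 0] = True        # an isolated noise pixel

# Opening = erosion then dilation; the default 3x3 cross structuring
# element removes features narrower than itself.
opened = binary_opening(mask)
print(opened[0, 0], opened[3, 3])  # False True
```

The noise pixel disappears because erosion removes it entirely, so dilation cannot restore it; the wide stripe largely survives both steps.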

Thierry Geraud (2000) presented a fast method to extract road networks from satellite images. A pre-processing stage relies on mathematical morphology to obtain a connected line that encloses the road network. A graph is then constructed from this line, and a Markov Random Field is defined to perform road extraction.

Another method is to enhance visual interpretation and thereby manually extract the road network from low spatial resolution multi-spectral data and high resolution panchromatic data. For this, a data fusion technique is used. The purpose of the fusion process is to synthesize a new multi-spectral image whose bands coincide spectrally as closely as possible with those of the original multi-spectral image, while having a spatial resolution comparable to the panchromatic image. Because of its elevated geometric and thematic information, the merged image is very useful in digital cartography.

Generally, a fusion process should not alter the basic radiometry of the multi-spectral data. Recent advances have produced fused data close to the multi-spectral data, such as the method demonstrated by Kumar, A.S. (2000), who discussed the use of cubic spline wavelets for merging the high spatial content of panchromatic (PAN) data with the spectral content of multi-spectral (LISS-3) data from the IRS-1C/1D spacecraft. It is shown that the method preserves the spatial content of the original PAN data and the spectral content of the LISS-3 data better than many conventional approaches. It is also suggested that the spatial content of the merged data can be further enhanced by first correcting the PAN data for the overall modulation transfer function (MTF) of the sensor. The overall MTF was realized with a piecewise linear model using the sensor's specified MTF at the Nyquist frequency.

Lau Wai Leung, Bruce King and Vijay Vohora (2001) worked on assessing image fusion by measuring the quantity of enhanced information in fused images. Two measures, entropy and the Image Noise Index (INI), were employed. Entropy can measure the information content of images, but it has a limitation: it cannot distinguish between information and noise. A solution to this limitation was discussed and a new entropy-based measure, the Image Noise Index (INI), was proposed. This method was applied to three commonly used image fusion techniques, Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA) and High Pass Filter (HPF), to compare which technique gives better results.

Zhou, J. (1998) proposed a wavelet transform method to merge high spectral resolution Landsat TM and high spatial resolution SPOT PAN data. Both datasets were decomposed into an orthogonal wavelet representation at a given coarser resolution, consisting of a low frequency approximation and a set of high frequency, spatially oriented detail images. An inverse wavelet transform was then performed using the approximation image from each TM band and the detail images from SPOT PAN. The spectral and spatial features were compared quantitatively with other fusion techniques, and the wavelet method was found to preserve both well.

Pierre Terrettaz (1998) evaluated the results of seven methods in the context of an urban and suburban area. A statistical and visual comparison was made for the overall area and for several zones representing different land cover types, including water, green area, forest, road network, urban and semi-urban. The mean and standard deviation of the XS bands were compared with those of the bands obtained after the merging process, and the differences and correlations between the XS and new bands were also calculated.


There are many actual and potential applications for spatial process modeling, and as such, research into the construction of generic process modeling tools and methods with maximum usability and flexibility is desirable. Parks (1993) recognized that the majority of recent spatial modeling research has focused on environmental issues. This appears to have resulted in a bias towards environmental modeling development in the literature. It is argued here that much of the work reported has general application, and thus no distinction is made.

There is great potential for modeling software that integrates the benefits of GIS with the process analysis capabilities of modeling software (Abel et al., 1997; Bennett, 1997). Parks (1993) argues that with appropriate planning, modeling and GIS technology may '...cross-fertilize and mutually reinforce each other' (p 31) and that both will be made more robust by 'their linkage and convolution' (p 33). According to Abel et al. (1997), this integration has in the past been technically difficult to achieve. Abel et al. (1997, p 5) argue that many examples of GIS and modeling systems integration 'are typically specific to the component subsystems and to the narrow application focus of the integrated system'.

Ball (1994, p346) defines a good model 'as one that is capable of reproducing the observed changes in a natural system, while producing insight into the dynamics of the system'. This implies that the model has two functions: first, to simulate and predict based on observed processes, and second, to provide detailed understanding of the inter-relationships among the variables and processes described by the model. Simulation modeling must 'describe, explain, and predict the behavior of the real system' (Hoover et al., 1989, p5) and 'requires that the model indicates the passage of time through the change in one or more variables as defined by the process description' (Ball, 1994, p347). Ideally, in an integrated geographical modeling system (GMS), as described by Bennett (1997, p337), 'users should be able to visualize ongoing simulations and suspend the simulation process to query intermediate results, investigate key spatial/temporal relations, and even modify the underlying models used to simulate geographical processes'.

The limited development of these models in the past is, according to Maxwell et al. (1995, p247), due to 'the large amount of input data required, the difficulty of even large mainframe serial computers in dealing with large spatial arrays, and the conceptual complexity involved in writing, debugging, and calibrating very large simulation programs'. An accepted method of reducing program complexity, argues Maxwell et al. (1995, p251), involves '...structuring the model as set of distinct modules with well-defined interfaces'.

Maxwell (1995) suggested that the use of a modular hierarchical approach permits collaborative model research, and simpler design, testing, and implementation. Bennett (1997) and Maxwell et al. (1995) advocate the use of model base management systems to store, manipulate, and retrieve models. Bennett (1997, p339) states that 'by managing models like data, model redundancy is reduced and model consistency is enhanced'.

Maxwell (1996) suggested that one way to develop simpler process model design tools is to construct suitable graphical interfaces for the display and manipulation of structure and dynamics. Albrecht et al. (1997, p158) suggest the use of a 'flow charting environment on top of existing standard GIS that allow the user to develop workflows visually.' In addition, Bennett (1997) and Parks (1993) assert the need for artificial intelligence, expert systems, and agents to guide non-expert users in the appropriate handling of these tools and reduce the need for writing complex computer code.

Narushige Shiode (2001) reviewed recent developments in the visualization of urban landscapes. There is growing interest in constructing 3D models of urban and built environments, for which a host of digital mapping and rendering techniques are being developed. He identified the range of data and techniques adopted for developing 3D content and how they could contribute to geographical analysis and planning of the urban environment. He also focused on the effectiveness of GIS and related methods, both for their capacity to accommodate the demands of visual representation of the urban environment and as a basis for analysis and simulation.

Young-Hoon Kim and Graham Clarke (2000), in their work on 'Integration of spatial models into GIS for health care facility site planning', considered spatial modeling methodologies such as maximizing patients' accessibility to their hospitals, minimizing accessibility costs, and reducing the uncertainty of patients' travel behavior. They addressed the site selection problem through an integrated spatial model combining spatial interaction and location-allocation modeling methodologies. To improve spatial analysis performance, they used Avenue scripts to enhance the coupling strategy and the data interaction process between the model and the GIS.

Gary and D. Phillip (1996) explained ancient settlement strategies using GIS-based environmental models. The study applied one such approach, visibility analysis, to data from the Umayri regional survey. The results suggested that visible communication played an important role in local settlement strategies throughout antiquity.

Sudhakar et al. (1999) documented the accuracy of three classification techniques, namely maximum likelihood, contextual and neural network, for land use / land cover mapping with special emphasis on forest type mapping, in the Jaldapara wildlife sanctuary area using IRS-1B LISS-II data of December 1994. The area was segregated into ten categories using all three classification techniques with the same set of training areas. Classification accuracy was evaluated from the error matrix of the same set of training and validation pixels. The analysis showed that the neural network achieved a maximum accuracy of 95 percent, the maximum likelihood algorithm 91.06 percent, and the contextual classifier 87.42 percent. It was concluded that the neural network classifier works better in heterogeneous forestland and the contextual classifier in homogeneous forestland, whereas maximum likelihood performs well in both conditions.

Benz et al. (2001) concluded that OSCAR enables important developments in eCognition, increasing its capability for synthetic aperture radar (SAR) data evaluation: complex data can be used, additional features for (polarimetric) SAR are integrated, despeckling can be performed, and some distortions due to SAR geometry are handled; these capabilities are extended by textural analysis and advanced during the remainder of the project. The developments within OSCAR are the first and basic steps on the eCognition path to SAR analysis; they prepare for further implementations, e.g. higher order object features. These features can include special polarimetric evaluation and models for the derivation of biophysical parameters.

Benz and Pottier (2001) reported that the new fuzzy, object-oriented classification approach in eCognition allows modeling of the relationship between most of the different scattering classes detectable in alpha-entropy-anisotropy feature space and the basic land cover classes. Additional use of geometric and context features reduces ambiguities, and aggregation into semantic groups gives flexible results for various applications. Thus, the high potential of alpha, entropy and anisotropy for extracting information from polarimetric SAR data can be significantly extended; further evaluation has to prove whether the use of backscatter power reduces the transferability of the rule base to other data sets.

Arun Kumar (2001) carried out image classification by a segment-based approach and evaluated its accuracy. In his thesis he made a land use / land cover classification at three levels, namely broad, medium and high. He concluded from the accuracy assessment results that segment-based nearest neighbourhood classification gives better results than pixel-based supervised classification and visual interpretation. This work was done on IRS-1C/1D LISS-III data.

Sanjeev Kumar (2002) carried out an approach to multi-resolution segmentation based fuzzy classification and evaluated its accuracy. In his thesis he made a land use / land cover classification at three levels, namely coarse, medium and micro. The classification was done on IRS-1D LISS-III and PAN data of 2001. He concluded that micro level classification is very difficult, but is possible with object-based segmentation using contextual information.

Lucieer et al. (2002) explained that an object-based approach instead of a pixel-based approach may help reduce uncertainty. Additionally, interpretation of the uncertainty of real world objects may be more intuitive than interpretation of the uncertainty of individual pixels.

A straightforward approach to identifying fuzzy objects is to apply a (supervised) fuzzy c-means classification, or a similar soft classifier. This classifier gives the most likely class for a pixel, together with possibility values of belonging to each other class. It does not, however, take spatial correlation between pixels, also known as pattern or texture, into account.
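The graded-membership idea behind fuzzy c-means can be sketched for a single spectral band with fixed class centres. The centres and pixel values are illustrative, and a full fuzzy c-means would also iteratively update the centres:

```python
import numpy as np

def fcm_memberships(pixels, centres, m=2.0):
    """Fuzzy c-means membership of each pixel in each class,
    given fixed class centres (fuzzifier m > 1)."""
    # distances of each pixel (rows) to each centre (columns)
    d = np.abs(pixels[:, None] - centres[None, :]).astype(float)
    d = np.maximum(d, 1e-12)          # guard against division by zero
    # u[k, i] = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

pixels = np.array([10.0, 50.0, 90.0])
centres = np.array([10.0, 90.0])      # two spectral classes
u = fcm_memberships(pixels, centres)
# Rows sum to 1; the middle pixel is an even 0.5 / 0.5 mix.
print(np.round(u, 3))
```

Each row of `u` is the soft label the text describes: the largest entry is the most likely class, while the remaining entries record the possibility of belonging to the other classes.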

They further explored the use of texture and generation of thematic and spatial object uncertainty in image segmentation to identify fuzzy objects.

The most well known statistical approach to texture description is the grey level co-occurrence matrix (GLCM) (Haralick et al., 1973). The co-occurrence matrix contains elements that are counts of the number of pixel pairs with specific brightness levels, when separated by some distance and at some relative inclination. Other well-known texture descriptors are Markov random fields (GMRF), Gabor filters, fractals and wavelet models. A comparative study of texture classification is given by Randen et al. (1999). Most approaches to texture analysis quantify texture measures by single values (means, variances, entropy, etc.).
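A minimal GLCM for the horizontal neighbour at distance 1 can be sketched as follows; the 4-level toy image is an assumption for illustration:

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Count co-occurrences of grey-level pairs (left, right)
    for horizontally adjacent pixels at distance 1."""
    m = np.zeros((levels, levels), dtype=int)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 3, 3]])
g = glcm_horizontal(img, 4)
print(g)
```

Texture measures such as contrast, homogeneity or entropy are then computed as single values from this matrix, which is exactly the "single value" summarization the text refers to.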

A fuzzy logic approach allowing partial memberships is a reasonable classification technique, given the limited resolution and accuracy, as well as the potentially contradictory information coming from different data sources. Thus, an interpretation key using combinations of information, such as a normalized Digital Surface Model (nDSM), spectral texture, and the area size of neighboring classes, has to be translated into rules and corresponding membership functions.

The underlying algorithm of eCognition joins neighboring regions that show a degree of fitting, computed with respect to their spectral variance and/or their shape properties, smaller than a pre-defined threshold (the so-called scale parameter). Classification of the extracted segments is performed using a fuzzy logic approach. Region-based approaches are suitable for the analysis of high-resolution remotely sensed data. Baatz et al. (2000) describe the algorithms in detail.

In per-pixel methods such as Maximum Likelihood (ML) classification, each pixel is labeled according to its own radiometric properties alone, with no account taken of topological information (this is referred to as the point spread function effect). Knowledge of neighborhood relationships is a rich source of information that is not exploited by ML classifiers.

In addition, as Abkar et al. (2000) note, an ML classifier selects the class with the maximum likelihood and assumes that the likelihoods of the other class memberships are zero or negligible. There is often radiometric overlap between classes: some samples in one class may be similar to samples in another class. Therefore, if we select the maximum likelihood class for such a sample, the decision may well be wrong. A successful way of reducing misclassification errors is object-based classification. ML classification is generally reported to give the highest classification accuracy quantitatively from remotely sensed data, so it was adopted as the per-pixel classifier. ML classification is a pixel-based (local) classification in that it labels a pixel on the basis of its radiometric properties alone, with no account taken of geometric or topological information.

Object-based classification integrates gridded or raster images from remote sensing with agricultural field boundaries, or vectors, from a GIS, resulting in improved land cover classification. The class label of each object is determined by the majority land cover class within the object boundary, assuming that a per-pixel classification is applied first. Object-based classification requires the existence of current object boundary data.
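The majority-vote relabelling described above can be sketched as follows; the per-pixel labels and object ids are illustrative:

```python
import numpy as np

def majority_per_object(pixel_classes, object_ids):
    """Replace each pixel's class by the majority class of its object."""
    out = np.empty_like(pixel_classes)
    for obj in np.unique(object_ids):
        inside = object_ids == obj
        classes, counts = np.unique(pixel_classes[inside], return_counts=True)
        out[inside] = classes[np.argmax(counts)]   # majority vote
    return out

# Per-pixel ML labels (left) and field boundaries rasterized as ids (right).
pixel_classes = np.array([[1, 1, 2],
                          [1, 2, 2],
                          [3, 2, 2]])
object_ids = np.array([[0, 0, 1],
                       [0, 1, 1],
                       [0, 1, 1]])
relabelled = majority_per_object(pixel_classes, object_ids)
print(relabelled)
```

The isolated class-3 pixel inside field 0 is overruled by that field's majority class, which is how object boundaries reduce the salt-and-pepper errors of a per-pixel classifier.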

The vast majority of applications rely on basic image processing concepts developed in the 1970s: per-pixel classification in a multi-dimensional feature space. It is argued that this methodology does not make use of any spatial concepts. Especially in high-resolution images, it is very likely that neighboring pixels belong to the same land cover class as the pixel under consideration. The authors argue for classification of homogeneous groups of pixels reflecting objects of interest in reality, and for the use of algorithms that delineate objects based on contextual information in an image, such as texture or fractal dimension.

Therefore, segments in an image will never represent meaningful objects at all scales and for every application, which argues for a multi-scale image segmentation approach. Some researchers are currently elucidating alternative ways towards the fuzzy delineation of objects or the delineation of fuzzy objects (e.g. Cheng, 1999), or a probability-based image segmentation approach (Abkar et al., 2000).

Image segmentation is one of the primary steps in image analysis for object identification (Pavlidis, 1982; Springer et al., 1982). The main aim is to recognise homogeneous regions within an image as distinct and belonging to different objects. The segmentation stage is not concerned with the identity of the objects; they can be labelled later. The segmentation process can be based on finding the maximum homogeneity in grey levels within the regions identified.

There are several issues related to image segmentation that require detailed review. One of the common problems encountered in image segmentation is choosing a suitable approach for isolating different objects from the background. The segmentation does not perform well if the grey levels of different objects are quite similar. Image enhancement techniques seek to improve the visual appearance of an image. They emphasize the salient features of the original image and simplify the task of image segmentation. The type of operator chosen has a direct impact on the quality of the resultant image. It is expected that an ideal operator will enhance the boundary differences between the objects and their background, making the image segmentation task easier. Issues related to segmentation involve choosing good segmentation algorithms, measuring their performance, and understanding their impact on the scene analysis system.
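A deliberately simple segmentation sketch illustrates the grey-level homogeneity idea: threshold the image to separate objects from the background, then label connected homogeneous regions. The toy image and threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import label

# Two bright "objects" on a dark background.
img = np.array([[9, 9, 0, 0],
                [9, 9, 0, 0],
                [0, 0, 8, 8],
                [0, 0, 8, 8]])

bright = img > 5            # isolate objects from the background
segments, n = label(bright) # 4-connected component labelling
print(n)                    # 2
```

As the text warns, this breaks down when object and background grey levels are similar, which is exactly when a threshold fails and enhancement or more sophisticated homogeneity criteria are needed.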

eCognition version 3.0 uses a new segmentation algorithm whose results do not depend on image size. This is an important improvement, because parameters are often tested on small subsets. Nevertheless, the old algorithm of version 2.1 can still be used in the new software. Altogether, eCognition has high potential due to its multi-scale segmentation and fuzzy logic based image classification capabilities. Because of its various interfaces to other GIS and remote sensing software systems, important user requirements are met.

Supervised classification is the procedure most often used for quantitative analysis of remote sensing image data (Richards, 1993). It rests upon using suitable algorithms to label the pixels in an image as representing particular ground cover types. Among the many available methods, maximum likelihood classification is the most common supervised classification method. The essential practical steps are as follows.

The first step is defining the set of land cover types into which the image is to be segmented. In this context, most classification methods assume that an image scene can be decomposed into a small number of spectrally separated classes, each of which can be allocated uniquely to one of the defined types of land cover. This corresponds to a model of the Earth's surface consisting of a collection of homogeneous patches of land with precise boundaries. In reality, however, changes in land cover are less abrupt and the definition of different land cover classes is more ambiguous.

For each of the desired classes, representative pixels called training data have to be chosen. With these training data, the statistical parameters of the classification algorithm can be estimated. In a last step, every pixel in the image is classified into one of the chosen land cover types using the decision rule of the trained classifier. In the case of the maximum likelihood classifier, the probabilities that a pixel belongs to each of the defined set of possible classes are estimated, and the class to which the pixel is finally assigned is that with the highest probability. The class membership probabilities on which the assignment is based are usually disregarded, so that after classification no information on the probabilities is available, and it is not possible to decide whether a membership is strong or weak.
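These steps can be sketched for a single band with Gaussian class models; the training values are illustrative, and an operational classifier would use multivariate Gaussians over several bands:

```python
import numpy as np

# Step 1-2: training pixels per class, and parameters estimated from them.
train = {"water": np.array([10.0, 12.0, 11.0]),
         "crop":  np.array([60.0, 62.0, 58.0])}
params = {c: (v.mean(), v.var(ddof=1)) for c, v in train.items()}

# Step 3: maximum likelihood decision rule for one pixel value x.
def classify(x):
    def log_lik(mu, var):
        # log of the Gaussian density; the constant terms matter when
        # class variances differ.
        return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
    return max(params, key=lambda c: log_lik(*params[c]))

print(classify(15.0), classify(55.0))  # water crop
```

Note that only the winning label is kept, mirroring the point made above: the likelihoods themselves are discarded, so a narrow win and a decisive one are indistinguishable afterwards.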

The outcome of the classification is a pixel-oriented representation of the different land cover classes and can be represented in the form of a thematic map. Errors in the definition of land cover types and errors in the assignment of pixels to a particular class result in classification uncertainty.

Classification as described above is commonly called crisp, meaning that a single class label is assigned to each pixel (Gorte et al., 1998). Ambiguous pixels, whether mixed pixels covering more than one class or pixels affected by spectral overlap, pose a problem for crisp classifiers. Soft classification approaches allow the assignment of more than one class label to an image pixel. Within soft classification, a distinction can be made between fuzzy and sub-pixel classification, depending on which of the two ambiguity problems is to be solved, as given below:

In order to handle the uncertainty originating from spectral overlap, fuzzy classification identifies the possible classes and assigns membership values. Depending on the underlying classification method, membership values can be, for example, Bayesian probabilities, neural network activation levels or fuzzy set possibilities.

In order to solve the mixed-pixel problem, sub-pixel classification identifies the participating classes and estimates their proportions.

Fuzzy classification methods are preferred in case of a lack of spectral image resolution, while spatial resolution is considered sufficient. An example might be classification of agricultural areas in SPOT imagery to estimate crop acreages. Sub-pixel classification, on the other hand, seems feasible when spatial resolution is the bottleneck, as is often the case in NOAA (AVHRR) imagery, but also in e.g. geological applications of hyper-spectral imagery, when sparsely occurring minerals are to be identified.
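The sub-pixel idea can be sketched as least-squares linear unmixing: a mixed pixel's spectrum is modelled as a proportional mixture of pure class ("endmember") spectra. The endmember spectra and fractions here are made up for illustration:

```python
import numpy as np

# Pure class spectra as columns: soil, vegetation; three spectral bands.
endmembers = np.array([[0.10, 0.40],
                       [0.20, 0.60],
                       [0.30, 0.20]])
fractions_true = np.array([0.3, 0.7])      # 30% soil, 70% vegetation
mixed = endmembers @ fractions_true        # the observed mixed-pixel spectrum

# Recover the class proportions by least squares.
est, *_ = np.linalg.lstsq(endmembers, mixed, rcond=None)
print(np.round(est, 3))  # [0.3 0.7]
```

Real unmixing additionally constrains the fractions to be non-negative and sum to one; the unconstrained least-squares fit shown here happens to recover them exactly because the toy system is noise-free.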

In many cases, the distinction between fuzzy and sub-pixel classification, from both a theoretical and a practical (i.e. application) point of view, is not as clear as in the above discussion. A calibration procedure may be required to translate membership values into proportions, although some techniques, particularly those based on Bayesian probabilities, assume a direct linear relationship between posterior probabilities and class area proportions.

This is also the case in probabilistic segmentation, a soft classification method which in this study has been adapted to vegetation classification in hyper-spectral imagery. Bayesian classification is based on:

P(Ci | x) = P(x | Ci) * P(Ci) / P(x)

The right hand side consists of the following terms:

P(x | Ci): the conditional probability of finding feature vector x within class Ci. This probability is estimated by analyzing samples of Ci, i.e. pixels that the user designated as belonging to class Ci during the training stage of the classification. The collection of estimates of this probability for each feature vector x determines the probability density function of the class.

P(Ci): the prior probability of class Ci, which is the relative area that Ci covers in the image, or in a designated region of the image containing the pixel under consideration.

P(x): the unconditional feature density, the probability that feature vector x occurs in the image (or in the above-mentioned region). P(x) is often not considered, since it is the same for all classes; when the possibility of an unknown class is not considered, P(x) can be replaced by a normalization factor, since at the end the posterior probabilities (one for each class) add up to 1.
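A worked numeric instance of the Bayes rule above, with made-up values for two classes C1 and C2 and a single feature vector x:

```python
# P(x | Ci): class-conditional densities from the training histograms.
p_x_given = {"C1": 0.60, "C2": 0.10}
# P(Ci): priors, i.e. relative class areas.
prior = {"C1": 0.30, "C2": 0.70}

# P(x) acts as the normalization factor mentioned above.
p_x = sum(p_x_given[c] * prior[c] for c in prior)
posterior = {c: p_x_given[c] * prior[c] / p_x for c in prior}
print(round(posterior["C1"], 3), round(posterior["C2"], 3))  # 0.72 0.28
```

Even though C2 is far more common a priori, the much higher likelihood of x under C1 makes C1 the maximum-posterior class, and the two posteriors add up to 1 exactly as stated.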

Object-oriented classification techniques, well known from radar analysis and GIS-based classification of raster images, can deal with topology descriptions and spatial object statistics (Baatz et al., 1999b). A proper solution is to use an advanced segmentation technique, such as the eCognition software, to allow advanced (semi-)automatic object building and object analysis.

In object-based image analysis, image fusion is a trivial issue, as the objects in the panchromatic band can be displayed with any given attribute, including the original pixel values of the multi-spectral bands. The object-oriented classification method allows a proper segmentation of the panchromatic data into a set of spatial objects, which makes a pre-selection of the 'objects of interest' possible. Object-based image analysis therefore offers the possibility of continuing the spectral analysis with 'fused' images.

The whole construction of pixel objects and object-based image analysis allows an image interpretation that surpasses traditional spectral analysis. Object topology and object texture allow new ways of defining mixed pixel analysis. It also becomes very interesting to redefine image texture analysis, not only as an analysis of variance among neighboring pixels (a filter operation), but also as spatial relationships among image objects at different levels of resolution (Baatz, 1999).

Classification consists of recognizing these features and patterns and assigning them to specified land cover classes (Andersen, 1998). Both spectral and spatial patterns are of interest, and the approach can be either manual or digital.

The visual/manual technique uses the skills of the human brain to recognize factors such as shape, size, pattern, shadow, tone/color, texture and site. Some of these factors can be limited by the ground resolution of the satellite. Manual interpretation is described by Borry et al. (1990). Roy et al. emphasize that visual interpretation is a subjective method and hence mapped borders will vary between interpreters; for detailed mapping, however, computer-aided techniques are suggested (Woodcock et al.). Digital classification methods normally use statistical or spectral decision rules to assign the unit considered to different classes. The unit considered can be either a pixel or a region. As opposed to the per-pixel approach, a region-based classification first requires segmentation.

The classification algorithms are executed on the images themselves or on partly processed images, for example a segmented image. However, the classification principles are the same whether one deals with segment-based or per-pixel classification.
