An Overview Of Classifiers And Accuracy Assessment Computer Science Essay


A classification accuracy assessment usually includes three essential components: sampling design, response design, and estimation and analysis procedures (Stehman and Czaplewski 1998). Selection of a suitable sampling strategy is a critical step (Congalton 1991). The major components of a sampling plan include the sampling unit (pixels or polygons), the sampling design, and the sample size (Muller et al. 1998).

Possible sampling designs include random, stratified random, systematic, double, and cluster sampling. A full description of these sampling techniques is given by Congalton and Green (1999).

The error matrix approach is the one most widely used in accuracy assessment (Foody 2002b). To generate an error matrix correctly, one must consider the following factors: reference data collection, the classification scheme, spatial autocorrelation, and the sample size and sampling unit (Congalton and Plourde 2002). Once an error matrix has been generated, other important accuracy assessment measures, such as overall accuracy, omission error, commission error, and the kappa coefficient, can be derived from it; their definitions and calculation methods are given by Congalton and Plourde (2002) and Foody (2002b, 2004a). Meanwhile, many authors, such as Smits et al. (1999) and Foody (2002b), have conducted reviews of classification accuracy assessment. They have assessed the status of accuracy assessment in image classification and discussed relevant issues. Congalton and Green (1999) systematically reviewed fundamental accuracy assessment concepts and some advanced topics involved in fuzzy-logic and multilayer assessments, and explained the principles and practical considerations in designing and conducting accuracy assessments of remote-sensing data. The kappa coefficient is a measure of the overall statistical agreement of an error matrix, which takes the non-diagonal elements into account. Kappa analysis is recognized as a powerful method for analysing a single error matrix and for comparing the differences between various error matrices (Congalton 1991, Foody 2004a); it will be used in this study.
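The overall accuracy and kappa coefficient described above can be computed directly from an error matrix. The following sketch uses an illustrative 3-class matrix (the counts are hypothetical, not from any cited study), with rows as classified labels and columns as reference labels:

```python
import numpy as np

# Hypothetical 3-class error matrix: rows = classified, columns = reference.
cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 39]])

n = cm.sum()                              # total number of samples
overall_accuracy = np.trace(cm) / n       # proportion on the diagonal

# Kappa corrects observed agreement (p_o) for the agreement expected
# by chance (p_e), computed from the row and column marginals.
p_o = overall_accuracy
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (p_o - p_e) / (1 - p_e)
```

For this matrix the overall accuracy is 0.86 while kappa is about 0.79, illustrating how kappa discounts the share of agreement attributable to chance.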

The modified kappa coefficient and the tau coefficient have been developed as enhanced measures of classification accuracy (Foody 1992, Ma and Redmond 1995). Furthermore, accuracy assessment based on a normalized error matrix has been conducted, which is regarded as a better representation than the conventional error matrix (Stehman 2004).
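Error-matrix normalization is typically done by iterative proportional fitting, which repeatedly rescales rows and columns until each sums to one, so that cell values become directly comparable across matrices. A minimal sketch (the function name `normalize_matrix` and the stopping rule are illustrative assumptions, not a cited algorithm):

```python
import numpy as np

def normalize_matrix(cm, iters=100):
    """Iterative proportional fitting: alternately rescale rows and
    columns of a positive error matrix until both sum to 1."""
    m = cm.astype(float)
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)  # rows sum to 1
        m /= m.sum(axis=0, keepdims=True)  # columns sum to 1
    return m

# Hypothetical error matrix (rows = classified, columns = reference).
cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 39]])
norm = normalize_matrix(cm)
```

After convergence the diagonal of the normalized matrix can be read as a per-class accuracy that accounts for both omission and commission patterns in the marginals.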

The traditional error matrix approach is not appropriate for evaluating soft classification results; accordingly, many new measures have been developed, such as conditional entropy and mutual information (Maselli et al. 1994), fuzzy-set approaches (Woodcock and Gopal 2000), a symmetric index of information closeness (Foody 1996), the Rényi generalized entropy function (Ricotta and Avena 2002), and a parametric generalization of Morisita's index (Ricotta 2004).

In summary, the error matrix approach is the most general accuracy assessment approach for categorical classes. Uncertainty and confidence analysis of classification results has gained some attention recently (McIver and Friedl 2001, Liu et al. 2004), and spatially explicit data on mapping confidence are regarded as an important aspect of efficiently employing classification results for decision making (McIver and Friedl 2001, Liu et al. 2004).

The most widely accepted method for assessing the accuracy of thematic maps derived from remotely sensed data has been the error matrix, or confusion matrix (Congalton and Green, 1999).

11.1 Evaluation of classifiers - accuracy assessment

Smith et al. (2002) have shown that classification accuracy decreases significantly with increasing heterogeneity of the landscape and, in contrast, improves with increasing area sizes. A divergence between the map and reference location information is a classification error (Foody, 2002). Beyond overall accuracy, a classification method can also be assessed in terms of reproducibility, robustness to noise, dependency on the training pattern, or computational advantages (DeFries and Chan, 2000). However, the main criterion, which is usually used in every evaluation, is classification accuracy. Classification errors have many sources (Foody, 2002). Spatial distortions, e.g., due to geometric correction and resampling of the image data, can be a main source of misclassifications (Muller et al., 1998).

Errors in the ground truth and reference data are another error source that affects the assessed classification accuracy (Foody, 2002). Manually collected ground truth may be biased and dependent on the field analysts. Additionally, the field mapping plan is frequently based on natural features and objects, mainly in regions subject to agricultural land use.

In general, the accuracy assessment is based on the accuracy or confusion matrix, which compares ground truth data with the corresponding classification for a given set of validation samples (Congalton and Green, 1999; Foody, 2002). The confusion matrix enables the derivation of the most common evaluation criteria: overall accuracy, producer's accuracy, and user's accuracy. A detailed overview is given by Foody (2002) and Congalton and Green (1999).
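The producer's and user's accuracies mentioned above follow directly from the column and row marginals of the confusion matrix, along with their complements, the omission and commission errors. A sketch, again using hypothetical counts:

```python
import numpy as np

# Hypothetical confusion matrix: rows = classified, columns = reference.
cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 39]])

# Producer's accuracy: correct pixels of a class divided by the
# reference (column) total -- how well the producer classified that class.
producers_accuracy = np.diag(cm) / cm.sum(axis=0)

# User's accuracy: correct pixels of a class divided by the
# classified (row) total -- how reliable the map is for a user.
users_accuracy = np.diag(cm) / cm.sum(axis=1)

omission_error = 1 - producers_accuracy    # reference pixels missed
commission_error = 1 - users_accuracy      # map pixels wrongly included
```

Note that producer's and user's accuracy generally differ for the same class whenever the row and column totals differ, which is why both are reported.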

There is no shortage of other accuracy measures, foremost of which are the chance-corrected measures such as kappa, and map users and producers need to be fully aware of the utility and limitations of these measures (Liu et al. 2007). The error matrix approach is readily adapted to summarize and clarify the accuracy of change detection (Biging et al. 1998), the main modification being that the "classes" are all possible types of change, as well as no change, for each class. Van Oort (2007) provides some useful additional interpretation tools for understanding the change-accuracy error matrix. Gopal and Woodcock (1994) introduced ground-breaking methods for quantifying accuracy when fuzzy classified reference data are compared to hard classified categorical map data. Pontius and Cheuk (2006) further extended the error matrix concept to give descriptive accuracy measures when both the map and reference classifications are fuzzy. Ji and Gallo (2006) reviewed many of the descriptive measures for describing accuracy when the map output is quantitative. The basic descriptive measures for quantitative outputs are mean error, mean absolute error, root mean square error, and correlation.
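The four descriptive measures for quantitative outputs listed above are straightforward to compute. A sketch with made-up predicted and reference values (the arrays are illustrative only):

```python
import numpy as np

# Hypothetical quantitative map output vs. reference values.
predicted = np.array([0.2, 0.5, 0.7, 0.9, 0.4])
reference = np.array([0.25, 0.45, 0.8, 0.85, 0.5])

mean_error = np.mean(predicted - reference)            # signed bias
mean_absolute_error = np.mean(np.abs(predicted - reference))
rmse = np.sqrt(np.mean((predicted - reference) ** 2))  # penalizes large errors
correlation = np.corrcoef(predicted, reference)[0, 1]  # Pearson's r
```

Mean error captures systematic bias (and can be near zero even for a poor map, since positive and negative errors cancel), while MAE and RMSE measure overall error magnitude, and correlation measures agreement in pattern rather than in value.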

For the evaluation of soft classifications in general, various suggestions have been made (Binaghi et al., 1999; Congalton, 1991; Foody, 1995; Gopal & Woodcock, 1994; Green & Congalton, 2004; Lewis & Brown, 2001; Pontius & Cheuk, 2006; Townsend, 2000), among which the fuzzy error matrix (Binaghi et al., 1999) is one of the most appealing approaches, as it represents a generalization (grounded on fuzzy set theory) of the traditional confusion matrix. In spite of its sound theoretical basis, the fuzzy error matrix has not been generally adopted as a standard accuracy report for soft classifications. Some reasons for this have been highlighted as counterintuitive characteristics (Pontius & Cheuk, 2006). Specifically, for a cross-comparison to be consistent with the traditional confusion matrix, it is desirable that the cross-comparison results in a diagonal matrix when a map is compared to itself, and that its marginal totals match the totals of membership grades. More importantly, a cross-comparison should convey readily interpretable information on the confusion among the classes. To date, the application of the fuzzy error matrix has mainly concentrated on generating accuracy indices such as the overall accuracy, the user and producer accuracy, the kappa, and the conditional kappa coefficients (e.g., Binaghi et al., 1999; Okeke & Karnieli, 2006; Shabanov et al., 2005).
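The fuzzy error matrix of Binaghi et al. (1999) cross-tabulates membership grades using the minimum operator: each cell accumulates, over all pixels, the minimum of the classified membership in class i and the reference membership in class j. A minimal sketch (the function name `fuzzy_error_matrix` is an illustrative assumption):

```python
import numpy as np

def fuzzy_error_matrix(reference, classified):
    """Fuzzy error matrix via the minimum operator (Binaghi et al., 1999).
    reference, classified: (n_pixels, n_classes) membership grades."""
    n_classes = reference.shape[1]
    m = np.zeros((n_classes, n_classes))
    for i in range(n_classes):          # rows: classified classes
        for j in range(n_classes):      # columns: reference classes
            m[i, j] = np.minimum(classified[:, i], reference[:, j]).sum()
    return m

# With hard (0/1) memberships the fuzzy matrix reduces to the
# traditional confusion matrix, diagonal when a map meets itself.
hard = np.array([[1., 0.], [0., 1.], [1., 0.]])
m_hard = fuzzy_error_matrix(hard, hard)

# With genuinely soft memberships, comparing a map to itself yields
# nonzero off-diagonal cells -- the counterintuitive behaviour that
# Pontius & Cheuk (2006) criticize.
soft = np.array([[0.7, 0.3], [0.2, 0.8]])
m_soft = fuzzy_error_matrix(soft, soft)
```

The second example makes the self-comparison criterion discussed above concrete: the minimum operator does not return a diagonal matrix for a soft map compared with itself.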

Recently, a composite operator was proposed for computing a cross-comparison matrix that exhibits some of the aforementioned desirable characteristics (Pontius & Cheuk, 2006). Pontius and Cheuk (2006) showed how the composite operator can be used for a multi-resolution assessment of raster maps and compared it with other alternatives, including the traditional hardening of pixels, the minimum operator (Binaghi et al., 1999), and the product operator (Lewis & Brown, 2001). This composite operator was also suggested as a viable tool for the sub-pixel comparison of maps (Pontius & Connors, 2006). Although several desirable properties are found in the composite operator, its utility has only been demonstrated on the use of traditional accuracy indices (Kuzera & Pontius, 2004; Pontius & Cheuk, 2006; Pontius & Connors, 2006), and neither has the use of the off-diagonal cells been demonstrated, nor is their interpretation