Pattern Classification Of Wheat Seeds Computer Science Essay


This paper demonstrates the implementation of an Artificial Neural Network (ANN) to examine a group of kernels belonging to two different varieties of wheat, Kama and Rosa. Pattern classification using a multi-layer perceptron (MLP) trained with backpropagation is applied in this project as the neural network classifier model. There are 70 instances of each seed type in this experiment. The internal kernel structure was captured using high-quality visualization with a soft X-ray technique, which is non-destructive and usually cheaper than more sophisticated imaging techniques such as scanning microscopy or laser technology. The two objectives of this mini project are to classify the two varieties of wheat and to develop an intelligent model using the artificial neural network (ANN) technique for pattern classification. The objectives were successfully achieved: the number of hidden neurons was set to an optimum value of 10, at which the performance error is minimised, while the MSE decreases as the number of epochs increases and converges at 15 epochs, as shown in Figure 4. The best threshold of 0.5 in Figure 5 was chosen to determine the type of wheat seed using the receiver operating characteristic (ROC), according to its true positive rate (TPR) and false positive rate (FPR). The training confusion matrix in Table III showed an accuracy of 99% with TPR and FPR values of 0.98 and 0 respectively, while the validation confusion matrix in Table VII showed an accuracy of 100% with TPR and FPR values of 1.0 and 0. All three confusion matrices showed high accuracy with maximum TPR and minimum FPR values. Thus, the developed model is relevant and able to classify wheat seeds as Kama or Rosa using the artificial neural network technique.

Keywords - Artificial Neural Network; Pattern Classification; Kernel Structure; Multi-Layer Perceptron (MLP)

Introduction

Neural networks (NNs) are simplified models of biological nervous systems. An NN can be described as a data processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons), in an architecture inspired by the structure of the cerebral cortex of the brain. The interconnected neural computing elements have the ability to learn and thereby acquire knowledge and make it available for use. NNs have found wide application in areas such as pattern recognition, image processing, optimization, forecasting, and control systems, to name a few.

Pattern classification using an MLP is applied here to recognise the difference between two seeds. The pattern recognition approach is based on the analysis of statistical parameters computed using image processing tools. The parameters can be compared with standard pattern parameters to identify the pattern; for the neural network to identify a given pattern, it must first be trained. Most such decision models are validated based on the receiver operating characteristic (ROC) curve, a popular tool in imaging research. It conveniently displays classification accuracy expressed as sensitivity (true positive rate) against 1-specificity (false positive rate) at all possible threshold values. An important aspect of ROC analysis concerns the comparison of two (or more) cases.

In this project, the input to the neural network is the seeds dataset obtained from the UCI Machine Learning Repository. The dataset contains 2 classes of 70 instances each, where each class refers to a variety of wheat, Rosa or Kama. One class is linearly separable from the other. The values were downloaded and then normalised to obtain a matrix with values ranging from 0 to 1 (a minimal loading sketch is given after the objectives list). Two inputs and one output were provided to the neural network for training, testing and validation. The objectives of this project are:

To differentiate the type of wheat seeds between Kama and Rosa.

To understand the application of pattern recognition using multi-layer perceptron.
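As a rough illustration of the data preparation described above, the following Python sketch loads the UCI seeds data and min-max normalises it to the 0-1 range. The file name, the use of numpy, and the 0/1 label coding are assumptions for illustration only, not the project's actual procedure.

```python
# Minimal sketch (not the authors' code): load the UCI seeds data and
# min-max normalise every column to the 0-1 range described in the report.
# File name and column layout are assumptions for illustration.
import numpy as np

data = np.loadtxt("seeds_dataset.txt")      # hypothetical local copy of the UCI file
X, y = data[:, :-1], data[:, -1]            # last column assumed to hold the class label

# Keep only the two classes used in the report (Kama = 1, Rosa = 2 in the UCI file).
mask = np.isin(y, (1, 2))
X, y = X[mask], y[mask]

# Min-max normalisation so every feature lies between 0 and 1.
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_norm = (X - X_min) / (X_max - X_min)

# Map labels to 0/1 targets for a single-output network (assumed coding).
t = (y == 2).astype(float)                  # 1 for Rosa, 0 for Kama
```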

Theoretical Background

Artificial Neural Network (ANN)

The most common type of ANN consists of three layers of units, as shown in Figure 1. The first layer of "input" units is connected to a layer of "hidden" units, which is in turn connected to a layer of "output" units. From this, the number of connections can be deduced as:

connections = (inputs × hidden size) + (hidden size × outputs) (1)


Figure 1. A three-layer ANN model.
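As a quick check of equation (1) as reconstructed above (weight connections only, no biases), the short helper below counts the connections for a given layout; the 2-10-1 layout in the call simply combines numbers mentioned elsewhere in this report and is illustrative only.

```python
# Minimal sketch: count the weighted connections of a three-layer MLP
# according to equation (1). Bias terms are not counted, matching (1).
def count_connections(n_inputs: int, n_hidden: int, n_outputs: int) -> int:
    return n_inputs * n_hidden + n_hidden * n_outputs

# Illustrative only: 2 inputs, 10 hidden neurons and 1 output
# would give 2*10 + 10*1 = 30 connections.
print(count_connections(2, 10, 1))  # -> 30
```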

The input given to the neural network represents the raw information. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. In a multi-layer perceptron network, the units are organised into layers. A supervised learning algorithm is used to train the neural network; at each training step it adjusts the network's weights and thresholds to reduce the error of its predictions on the training set. In this algorithm, the network is given a set of training data that contains example inputs together with the corresponding outputs, and it learns to infer the relationship between the two. An important issue concerning supervised learning is error convergence, i.e. the minimisation of the error between the desired and computed unit values. The aim is to determine a set of weights that minimises the error.
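A minimal sketch of this weight-adjustment idea is shown below: one gradient-descent step of backpropagation for a tiny 2-10-1 MLP with sigmoid units. The layer sizes, learning rate and random data are illustrative assumptions and do not reproduce the project's actual training run.

```python
# Minimal sketch (not the authors' code): one backpropagation step for a
# small 2-10-1 MLP with sigmoid units, showing how supervised learning
# adjusts the weights to reduce the mean square error on the training set.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 2))                     # 8 example inputs with 2 features (illustrative)
t = rng.integers(0, 2, size=(8, 1))        # matching 0/1 targets (illustrative)

W1 = rng.normal(scale=0.5, size=(2, 10))   # input-to-hidden weights
b1 = np.zeros((1, 10))
W2 = rng.normal(scale=0.5, size=(10, 1))   # hidden-to-output weights
b2 = np.zeros((1, 1))
lr = 0.1                                   # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass.
h = sigmoid(X @ W1 + b1)
y = sigmoid(h @ W2 + b2)
mse = np.mean((t - y) ** 2)
print(f"MSE before update: {mse:.4f}")

# Backward pass: error gradients for each layer.
d_out = (y - t) * y * (1 - y)
d_hid = (d_out @ W2.T) * h * (1 - h)

# Weight update: move each weight against its error gradient.
W2 -= lr * h.T @ d_out / len(X)
b2 -= lr * d_out.mean(axis=0, keepdims=True)
W1 -= lr * X.T @ d_hid / len(X)
b1 -= lr * d_hid.mean(axis=0, keepdims=True)
```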

Performance Indicators

Optimisation of the designed models for the best learning coefficients was based on performance indicators such as sensitivity, specificity, diagnostic accuracy and the receiver operating characteristic curve. Sensitivity and specificity are commonly used terms that describe the accuracy of a test. Sensitivity is the ratio or percentage of 'true' seeds (TP) that receive a positive test result. It represents the percentage of actual 'true' seeds identified by a positive test result and is also known as the true positive rate (TPR), defined as:

Confusion table

                            ACTUAL CLASS
                            A+        A-
PREDICTED CLASS     P+      TP        FP
                    P-      FN        TN

Sensitivity: TPR = TP / (TP + FN) (2)

Specificity measures the ratio or percentage of 'false' seeds (TN) that receive a negative test result. It represents the percentage of actual 'false' seeds identified by a negative test result. Specificity is also termed the true negative rate (TNR) and is given as:

Specificity: TNR = TN / (TN + FP) (3)

The diagnostic accuracy (DA) refers to the percentage of samples that have been correctly classified or diagnosed, i.e. whose output values fall within the predefined threshold range for the respective output level. It can be derived as:

Accuracy: DA = (1/N) Σ c_k × 100% (4)

The variable c_k serves as a counter for the proposed ANN model output at sample k, and is defined as:

c_k = 1 if the output at sample k lies within the predefined threshold range of its target, and c_k = 0 otherwise. (5)
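A minimal sketch of how equations (2)-(4) can be evaluated from the confusion-matrix counts is given below; the function and variable names are illustrative, not taken from the original project. The accuracy here counts correctly classified samples, which matches the counter-based DA of equations (4)-(5) when the counter marks correct classifications.

```python
# Minimal sketch: sensitivity, specificity and diagnostic accuracy from
# confusion-matrix counts, following equations (2)-(4). Names are illustrative.
def performance_indicators(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)                       # TPR, equation (2)
    specificity = tn / (tn + fp)                       # TNR, equation (3)
    accuracy = (tp + tn) / (tp + fp + fn + tn) * 100   # DA in %, correctly classified samples
    return sensitivity, specificity, accuracy

# Example using the training counts reported later (Table III):
# TP = 48, FP = 0, FN = 1, TN = 49 -> TPR = 0.98, TNR = 1.0, DA = 99%.
print(performance_indicators(48, 0, 1, 49))
```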

The performance of all the trained ANN models was analysed by observing the receiver operating characteristic (ROC) plot. The best threshold level for each plot was selected as the one that maximises classification accuracy (high TPR) while minimising classification error (low FPR).
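The following sketch illustrates one way such a threshold sweep can be done; the candidate thresholds, the TPR-minus-FPR selection rule and the variable names are assumptions for illustration, not the project's actual procedure.

```python
# Minimal sketch: sweep candidate thresholds over the network outputs and pick
# the one with the highest TPR and lowest FPR. Inputs are illustrative.
import numpy as np

def best_threshold(scores: np.ndarray, targets: np.ndarray, candidates=None):
    """scores: continuous network outputs in [0, 1]; targets: 0/1 labels."""
    if candidates is None:
        candidates = np.linspace(0.05, 0.95, 19)
    best, best_gap = None, -np.inf
    for thr in candidates:
        pred = (scores >= thr).astype(int)
        tp = np.sum((pred == 1) & (targets == 1))
        fn = np.sum((pred == 0) & (targets == 1))
        fp = np.sum((pred == 1) & (targets == 0))
        tn = np.sum((pred == 0) & (targets == 0))
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        if tpr - fpr > best_gap:            # favour high TPR and low FPR
            best, best_gap = thr, tpr - fpr
    return best

# Tiny illustrative call with made-up scores and labels.
scores = np.array([0.1, 0.4, 0.6, 0.9])
targets = np.array([0, 0, 1, 1])
print(best_threshold(scores, targets))
```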

Methodology

The methodology in this mini project consists of five stages namely literature review, data collection, data identification, data conversion and model designing. Figure 2 shows the flow chart of the methodology for this project.

Figure 2. Methodology flow chart: Start → Background of Study → Data Collection → Identify Data → Data Conversion → Model Designing (ANN) → End.

Background of study

A literature review gathers information about the project's subject area, sometimes within a certain time period. Information is collected from resources such as journals, articles, books and blogs, and then summarised according to the needs and objectives of the project. In this project, understanding the concept of the multi-layer perceptron (MLP), along with its structure, characteristics and applications, is important. An MLP is a network of multiple perceptrons integrated into a larger neural network.

Typically an MLP consists of three layers: input, output and hidden. The input layer accepts input from the data, the output layer delivers the output of the MLP, and the hidden layer is responsible for learning the relationship between inputs and outputs during training. An MLP can generally be used for two purposes: pattern classification and function approximation. In this mini project, the task is to examine a group of kernels belonging to two different varieties of wheat, Kama and Rosa. Pattern classification is used to determine whether a given sample belongs to one particular group or another.

Data collection

In this stage, the main process is to gather the information and measurements related to the studies. Data collection methods are usually used to prepare research tools, which are then used in an appropriate setting to collect the information. In this mini project the dataset was downloaded from the UCI website, which hosts a large collection of databases, domain theories and data generators. The dataset characteristics and its attribute characteristics have to be considered so that the data can be analysed with the chosen classification approach; in this case, the wheat seed dataset is compatible with the study.

Data conversion

The wheat seed data from UCI are provided in .txt format. The data were transferred to Microsoft Excel and saved in .xlsx format. The data from Microsoft Excel were then used to train the artificial neural network (ANN).
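A minimal sketch of such a conversion is shown below, assuming the UCI text file is whitespace-delimited and locally named `seeds_dataset.txt`; the column names are illustrative, and the original project may have performed this step manually in Excel.

```python
# Minimal sketch: convert the UCI seeds .txt file to .xlsx with pandas.
# File name and column names are assumptions for illustration.
import pandas as pd

columns = ["area", "perimeter", "compactness", "kernel_length",
           "kernel_width", "asymmetry", "groove_length", "variety"]
df = pd.read_csv("seeds_dataset.txt", sep=r"\s+", header=None, names=columns)

# Keep only the Kama (1) and Rosa (2) varieties used in this project.
df = df[df["variety"].isin([1, 2])]
df.to_excel("seeds_dataset.xlsx", index=False)
```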

Model designing

The ANN model is developed to differentiate measurements of geometrical properties of kernels belonging to two different varieties of wheat, named Kama and Rosa. For the ANN model, a multi-layer perceptron (MLP) network with one hidden layer was chosen, on the grounds that it has been widely applied in similar projects. Optimisation of the trained models was assessed using a confusion matrix, which, for a two-class classifier, records the predicted classifications made by the classification system against the actual classes.
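As an illustration of this design, the sketch below builds a one-hidden-layer MLP with 10 hidden neurons and reports its confusion matrix using scikit-learn; the library choice, the placeholder data, the split proportion and the random seed are all assumptions, since the report does not name its software.

```python
# Minimal sketch: one-hidden-layer MLP (10 neurons) and its confusion matrix.
# scikit-learn is an assumed tool; the data below are random placeholders
# standing in for the normalised seed features and 0/1 variety labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X_norm = rng.random((140, 7))            # placeholder normalised features
t = rng.integers(0, 2, size=140)         # placeholder 0/1 variety labels

X_train, X_test, t_train, t_test = train_test_split(
    X_norm, t, test_size=0.3, random_state=1)        # illustrative split

model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=1)
model.fit(X_train, t_train)

print(confusion_matrix(t_test, model.predict(X_test)))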

Results and Discussion

Determining the number of hidden neurons and the percentage division of training, testing and validation data.

The number of hidden neurons affects the output result. Therefore, an optimum number of hidden neurons should be determined to produce a good output.

Figure 3. Performance error versus number of hidden neurons.

The graph in Figure 3 shows the performance error of the output depending on the number of hidden neurons. From the graph, the maximum performance error is 0.034 when the number of hidden neurons is set to 5, and the optimum performance error is 0.001 when it is set to 10. Therefore, with 10 hidden neurons the error is minimised and the output accuracy is increased. Minimising the performance error enhances the ability of the model to differentiate the types of wheat seeds, so the optimum setting is 10 hidden neurons. Table I shows the division of data between training, testing and validation (a split sketch follows the table).

Table I. Division of data

Parameter      Percentage (%)    Number of samples
Training       70                98
Testing        15                21
Validation     15                21
Total          100               140
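A minimal sketch of one way to reproduce the 70/15/15 split in Table I is shown below; the use of scikit-learn's `train_test_split`, the placeholder data and the fixed random seeds are assumptions for illustration.

```python
# Minimal sketch: split 140 samples into 70% training, 15% testing, 15% validation
# using two successive splits. Proportions follow Table I; tooling is assumed.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((140, 7))                  # placeholder features
y = rng.integers(0, 2, size=140)          # placeholder 0/1 labels

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=1)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.50, random_state=1)

print(len(X_train), len(X_test), len(X_val))   # -> 98 21 21
```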

Observing the multi-layer perceptron performance.

Figure 4. MSE versus epochs.

Figure 4 shows the graph of mean square error (MSE) versus epochs. From the graph, the MSE of training, testing and validation decreases as the number of epochs increases, and stabilises when the epoch value reaches 15. The decrease in MSE enhances the ability of the model to differentiate the types of wheat seeds. Thus, training converges at 15 epochs.
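One way to observe this kind of convergence, assuming the scikit-learn model from the earlier sketch, is to inspect its per-epoch loss curve; the `loss_curve_` attribute and the placeholder data are specific to that assumed tooling, not to the original project.

```python
# Minimal sketch: inspect the per-epoch training loss of an assumed
# scikit-learn MLP to see where it stabilises (cf. Figure 4).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((98, 7))                   # placeholder training features
y = rng.integers(0, 2, size=98)           # placeholder 0/1 labels

model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=200, random_state=1)
model.fit(X, y)

# loss_curve_ holds one loss value per training epoch.
for epoch, loss in enumerate(model.loss_curve_[:20], start=1):
    print(f"epoch {epoch:2d}  loss {loss:.4f}")
```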

Determining the threshold by receiver operating characteristic (ROC).

Most decision models are validated based on the ROC curve. Therefore the ROC is used to determine the threshold value that the model uses as a reference when producing the output result.

Figure 5. TPR/FPR value versus threshold value.

Figure 5 shows the graph of true positive rate (TPR) and false positive rate (FPR) versus threshold value. The best threshold value is where the TPR is at its highest and the FPR is at its lowest. From the graph in Figure 5, the selected threshold value is 0.5, where the TPR is 1 and the FPR is 0. The threshold value is the reference the model uses to differentiate the types of wheat seeds, either Kama or Rosa: an output value above the threshold is converted to 1, and an output value below the threshold is converted to 0 (see the short sketch below). Therefore, selecting the best threshold value enhances the efficiency and reduces the error of the model.
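The thresholding step itself reduces to a single comparison, as in the minimal sketch below; the output values and variable names are illustrative.

```python
# Minimal sketch: convert continuous network outputs to 0/1 class labels
# using the 0.5 threshold selected from the ROC analysis.
import numpy as np

outputs = np.array([0.03, 0.48, 0.51, 0.97])      # illustrative network outputs
labels = np.where(outputs >= 0.5, 1, 0)           # 1 above threshold, 0 below
print(labels)                                     # -> [0 0 1 1]
```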

Training confusion matrix.

Table III. Confusion matrix of training (confusion table for the training set)

                            ACTUAL CASE
                            A+          A-
PREDICTED CASE      P+      49 (TN)     0 (FP)
                    P-      1 (FN)      48 (TP)

Table III shows the confusion matrix of training, obtained with the training data percentage of 70%, together with the details of TN, TP, FN and FP. The value of TP is 48, FP is 0, TN is 49 and FN is 1, so the TPR and FPR are calculated as 48/(48+1) = 0.98 and 0 respectively, and the accuracy is (48+49)/98 ≈ 99%. The high accuracy of the model enables it to learn and predict the output value accurately. Therefore, the trained model is relevant and can be used to recognise the types of wheat seeds with minimum error.

Testing confusion matrix.

Table V. Confusion matrix of testing (confusion table for the testing set)

                            ACTUAL CASE
                            A+          A-
PREDICTED CASE      P+      11 (TN)     0 (FP)
                    P-      0 (FN)      10 (TP)

Table V shows the confusion matrix of testing, obtained with the testing data percentage of 15%, together with the details of TN, TP, FN and FP. The value of TP is 10, FP is 0, TN is 11 and FN is 0, so the TPR and FPR are calculated as 1.0 and 0 respectively, and the accuracy is 100%. The high accuracy shows that the model is able to recognise the output with maximum performance. Therefore, the model can successfully recognise the type of seeds, between Kama and Rosa.

Validation confusion matrix.

Table VII. Confusion matrix of validation (confusion table for the validation set)

                            ACTUAL CASE
                            A+          A-
PREDICTED CASE      P+      9 (TN)      0 (FP)
                    P-      0 (FN)      12 (TP)

Table VII shows the confusion matrix of validation, obtained with the validation data percentage of 15%, together with the details of TN, TP, FN and FP. The value of TP is 12, FP is 0, TN is 9 and FN is 0, so the TPR and FPR are calculated as 1.0 and 0 respectively, and the accuracy is 100%. The high accuracy shows that the model has been validated and is able to recognise the output with maximum performance. Therefore, the model can successfully recognise the type of seeds, between Kama and Rosa.

Conclusion

The objective of this study, to differentiate the types of wheat seeds between Kama and Rosa using an ANN, was successfully achieved. According to Figure 3, the number of hidden neurons is set to the optimum value of 10, at which the performance error is minimised, while the MSE decreases as the number of epochs increases and converges at 15 epochs (Figure 4). Reducing the error maximises the efficiency of the model and yields good results. The best threshold of 0.5 is chosen as the reference value for determining the output result, selected from the ROC at the maximum value of TPR and the minimum value of FPR. The training confusion matrix in Table III showed an accuracy of 99% with TPR and FPR values of 0.98 and 0 respectively, whereas the validation confusion matrix in Table VII showed an accuracy of 100% with TPR and FPR values of 1.0 and 0. All three confusion matrices showed high accuracy with maximum TPR and minimum FPR values. Thus, in conclusion, the model is relevant and able to differentiate and recognise the wheat seeds, Kama and Rosa, using an artificial neural network.

Acknowledgment

First and foremost, we would like to thank Allah for giving us the opportunity to complete this Artificial Neural Network (ANN) mini project. Special thanks to our beloved parents, who always gave us full support, in both motivation and confidence, throughout the completion of this mini project. We would like to thank our lecturer, Mr. Ahmad Ihsan bin Mohd Yassin, for his valuable guidance and advice; he inspired us greatly to work on this project, and his willingness to motivate us contributed tremendously to it. We also thank him for showing us examples related to the topic of our project. The guidance and support received from all the members who contributed to this project was vital for its success. We are grateful for their constant support, guidance and help.
