# Character Recognition And The Neural Network Biology Essay



Character recognition is an area of pattern recognition in which a great deal of research is going on. Much work on character recognition has been done for several languages such as Tamil, Japanese, Arabic, Chinese, Hindi, and Farsi, but handwritten English character and word recognition using neural networks is still an open problem. In India, both English and Hindi are used in offices.

Neural networks play an important role in pattern recognition and can be used to recognize handwritten characters. In this work, the back-propagation method is used to train and test the network on handwritten characters.

In this work, a compass operator performs an operation on the collected character samples. The compass operator brightens each edge by moving in the anticlockwise direction, forming a brightened edge for each character by changing its gradient values. The next stage is feature extraction through the Fourier descriptor. Different writing styles have different angles and sizes, which creates discrepancy; this problem is solved by the Fourier descriptor, which forms a skeleton of a character. The style of the character that lies inside the skeleton is then extracted. The experimental results of this dissertation reveal that the Fourier descriptor feature extraction technique provides accuracy of around 96% while requiring less classification and training time.

## Motivation

Much work on character recognition has been done for several languages such as Tamil, Arabic, Chinese, Hindi, and Farsi, but recognizing characters of different languages remains a problem. Every country has its own language, but English is a global language. Therefore, developing a method for recognizing English characters with high accuracy and low training time has been taken up as a major challenge.

## Objective

The objectives of this dissertation are as follows:

A compass operator is used to brighten each edge of the character image. The compass operator works for black and white images as well as for colored images. This operator increases character recognition accuracy while reducing training and classification time.

The Fourier descriptor forms the skeleton of a character image. The features are extracted from a closed boundary trace. Of the many features that can describe a closed boundary trace, Fourier coefficients were chosen because they are invariant with respect to translation, rotation, and size of similar characters.

Using the compass operator together with the Fourier descriptor enhances character recognition accuracy and reduces training and classification time.

## Contribution

## Compass operator with Fourier descriptor

In this approach, a 30×30 black and white image in binary format is taken as input. The pixels covering the shape of the character have value 1 and the rest have value 0. The compass operator changes the pixel values along the shape of the character one by one through the values 1, 2, 3, …, 8, leaving the remaining pixels unchanged. In other words, the compass operator changes the gradient values of each edge that is not yet brightened. The output of the compass operator is the input to the Fourier descriptor, which forms a boundary around the input character image. This combined approach is not limited to a single language but applies to any language on which the neural network is trained.

The scanned characters are then stored in separate 30×30 arrays. Hence, by combining recognized individual characters, we can create a recognized word.

## CHAPTER 2

## NEURAL NETWORK


Neural networks are composed of simple elements operating in parallel. These elements are inspired by biological nervous systems. The connections between elements largely determine the network function. We can train a neural network to perform a particular function by adjusting the values of the connections (weights) between elements. Typically, neural networks are adjusted, or trained, so that a particular input leads to a specific target output. The network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically, many such input/target pairs are needed to train a network. Neural networks have been trained to perform complex functions in various fields, including pattern recognition, identification, classification, speech, vision, and control systems. Neural networks can also be trained to solve problems that are difficult for conventional computers or human beings.

## Figure 2.1 Basic concept of a Neural Network (inputs, weighted connections between neurons, outputs, and weight adjustment)

A neural network has a parallel, distributed architecture with a large number of nodes and connections. A node represents a neuron, and an arrow represents the direction of signal flow. The nodes are connected to one another, and each connection is associated with a weight.

## 2.1 Artificial Neuron

Artificial neuron models are based on biological characteristics. An artificial neuron receives a set of inputs, and each input is multiplied by a weight; this is analogous to synaptic strength. The sum of all weighted inputs determines the degree of activation, called the activation level. In a neural network, connection weights are referred to as LTM (long-term memory) and activations as STM (short-term memory). Each input Xi is modulated by a weight Wi, and the total input is expressed as

## Figure 2.2 Artificial Neuron

NET = Σ XiWi

where X = [X1, X2, …, Xn] and W = [W1, W2, …, Wn].
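As an illustrative sketch, the net input computation NET = Σ XiWi can be written in a few lines of Python; the input and weight values below are hypothetical, chosen only to show the calculation:

```python
# Net input of an artificial neuron: NET = sum over i of X_i * W_i.

def net_input(x, w):
    """Weighted sum of the inputs -- the neuron's activation level."""
    return sum(xi * wi for xi, wi in zip(x, w))

x = [1.0, 0.5, -1.0]    # inputs X1..X3 (hypothetical)
w = [0.2, 0.4, 0.1]     # weights W1..W3 (hypothetical)
print(net_input(x, w))  # 1.0*0.2 + 0.5*0.4 + (-1.0)*0.1 ≈ 0.3
```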

## 2.2 Activation function

The basic operation of an artificial neuron involves summing its weighted input signals and applying an output, or activation, function. Typically the same activation function is used for all neurons in any particular layer of a neural network, although this is not required. In most cases a nonlinear activation function is used; the advantage of a multilayer network over a single-layer network depends on using nonlinear functions.

## 2.3 Identity function

F(x) = x for all x

Single-layer networks often use a step function to convert the net input to an output unit into a binary (1 or 0) or bipolar (1 or -1) signal. The binary step function is also known as the threshold function or Heaviside function.

## Figure 2.3 Identity function

## 2.4 Binary step function (with threshold θ)

F(x) = 1 if x ≥ θ

F(x) = 0 if x < θ

Sigmoid functions (S-shaped curves) are useful activation functions. The logistic function and the hyperbolic tangent function are the most common. They are especially advantageous for neural networks trained by backpropagation, because the simple relationship between the value of the function at a point and the value of its derivative at that point reduces the computational burden during training.

The logistic function is a sigmoid function with range from 0 to 1. It is often used as the activation function for neural networks in which the desired output values are in the interval between 0 and 1. It is known as the binary sigmoid or logistic sigmoid.

## Figure 2.5 Binary sigmoid function

## 2.6 Signum function

This is also known as the quantizer function. It is a hard-limiting function whose output switches between -1 and +1 at the threshold θ. The function F is defined as

F(x) = +1 if x > θ

F(x) = -1 if x ≤ θ

## Figure 2.6 Signum function (output +1 above the threshold θ, -1 below it)

## 2.7 Hyperbolic tangent function

The function is given by

F(x) = tanh (x)

It can produce negative output values.
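The activation functions discussed above can be sketched in Python as follows; the threshold defaults and the sample values are illustrative only:

```python
import math

def identity(x):
    return x

def binary_step(x, theta=0.0):
    # threshold / Heaviside function: 1 or 0
    return 1 if x >= theta else 0

def binary_sigmoid(x):
    # logistic function, range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def signum(x, theta=0.0):
    # quantizer function: +1 above the threshold, -1 otherwise
    return 1 if x > theta else -1

def hyperbolic_tangent(x):
    # tanh, range (-1, 1); can produce negative output values
    return math.tanh(x)

print(binary_sigmoid(0.0))  # 0.5
print(signum(-2.0))         # -1
```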

## 2.2 Neural Network Architecture

A neural network architecture refers to its framework as well as its interconnection scheme. The framework is often specified by the number of layers and the number of nodes per layer. Inputs to the network are presented to the input layer. Input units do not process information; they simply distribute it to other units. Outputs are generated as signals of the output layer. An input vector applied to the input layer generates an output signal vector across the output layer. These signals may pass through one or more intermediate, or hidden, layers, which transform the signals depending on the neuron signal function.

## 2.2.1 Single layer neural network

This is the basic architecture of the simplest possible neural network that performs pattern classification. It consists of a layer of input units and a single output unit. In a single-layer neural network, a bias acts exactly like a weight on a connection from a unit whose activation is always 1; increasing the bias increases the net input to the unit. When a bias is included, the activation function is taken as

F(net) = 1 if net ≥ 0

F(net) = -1 if net < 0

where net = b + Σ xiwi

Now consider the separation of the input space into the region where the response of the network is positive and the region where it is negative. The boundary between the values of the inputs x1 and x2 for which the network gives a positive response and those for which it gives a negative response is the separating line

b + x1w1 + x2w2 = 0

x2 = (-w1/w2)x1 - (b/w2)

The requirement for a positive response from the output unit is that its net input, b + x1w1 + x2w2, be greater than 0. During training, the values of w1, w2, and b are determined so that the network gives the correct response for the training data.
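The decision rule and its separating line can be sketched in Python; the weight and bias values below are hypothetical, chosen so the boundary is x2 = -x1 + 1.5 (an AND-like separation), not values from the dissertation's training:

```python
def response(x1, x2, w1, w2, b):
    """Positive response iff the net input b + x1*w1 + x2*w2 exceeds 0."""
    return 1 if b + x1 * w1 + x2 * w2 > 0 else -1

def boundary_x2(x1, w1, w2, b):
    """The separating line x2 = (-w1/w2) x1 - (b/w2)."""
    return (-w1 / w2) * x1 - (b / w2)

w1, w2, b = 1.0, 1.0, -1.5           # hypothetical trained values
print(response(1, 1, w1, w2, b))     # 1  (net input 0.5 > 0)
print(response(0, 0, w1, w2, b))     # -1 (net input -1.5)
print(boundary_x2(0.0, w1, w2, b))   # 1.5
```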

## Figure 2.2.1 Single layer neural network (input units, bias b, output unit)

## 2.2.2 Algorithm for Single layer neural network

Step 0. Initialize all weights:

wi = 0 (i =1 to n)

Step 1. For each input training vector and target output pair, s : t, do steps 2-4.

Step 2. Set activations for input units:

xi = si (i =1 to n)

Step 3. Set activations for output units:

y = t

Step 4. Adjust the weights:

wi(new) = wi(old) + xiy (i = 1 to n)

Adjust the bias:

b(new) = b(old) + y

The bias is adjusted exactly like a weight from a unit whose output is always 1.
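As a sketch of Steps 0-4 above, the following pure-Python code applies the Hebb-style update to the bipolar AND function, a standard textbook illustration; the training pairs here are hypothetical, not the dissertation's character data:

```python
def hebb_train(samples):
    """Steps 0-4: w_i = 0 initially, then w_i(new) = w_i(old) + x_i*y, b(new) = b(old) + y."""
    n = len(samples[0][0])
    w = [0] * n          # Step 0: initialize all weights
    b = 0
    for s, t in samples:  # Step 1: for each training pair s : t
        x, y = s, t       # Steps 2-3: input and output activations set from the pair
        for i in range(n):
            w[i] += x[i] * y   # Step 4: weight update
        b += y                 # bias treated as a weight from a unit whose output is 1
    return w, b

# bipolar AND function (illustrative data)
samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = hebb_train(samples)
print(w, b)  # [2, 2] -2
```

The resulting weights w = [2, 2], b = -2 give net = -2 + 2x1 + 2x2, which is positive only for x1 = x2 = 1, the correct AND response.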

## 2.3 Multilayer Neural Network

The multilayer architecture has, in addition to an input and an output layer, one or more intermediate layers called hidden layers. The hidden layers perform computation before directing the input to the output layer. The input-layer neurons are connected to the hidden-layer neurons, and each connecting link has an associated weight. This network is used in this dissertation for training the data.

## Figure 2.3 Multilayer Neural Network (input layer, hidden layer, output layer)

## CHAPTER 3

## Backpropagation Neural Network

## 3.1 Introduction

A multilayer network is a network with one or more layers of nodes between the input units and the output units. There are weights between each two adjacent layers of units. Multilayer networks can solve more complicated problems than single-layer networks, but training them is more difficult; when training succeeds, however, the results can be much better than a single-layer network can achieve.

An effective general method of training a multilayer neural network is a backpropagation neural network. A backpropagation network can be used to solve problems in many areas. The training of a network by backpropagation involves three stages:

The feedforward of the input training pattern,

The calculation and backpropagation of the associated error,

The adjustment of the weights.

Once training is complete, application of the network involves only the feedforward phase; even if training is slow, a trained network produces its output very rapidly. Numerous variations of backpropagation have been developed to speed up the training process.

## 3.2 Training algorithm for backpropagation

Step 0 Initialize weights.

Step 1 While stopping condition is false , do step 2-9.

Step 2 For each training pair, do step 3-8.

Feedforward:

Step 3 Each input unit (Xi, i = 1, …, n) receives input signal xi and broadcasts this signal to all units in the hidden layer.

Step 4 Each hidden unit (Zj, j = 1, …, p) sums its weighted input signals,

z_inj = v0j + Σ xivij,

applies its activation function to compute its output signal,

zj = f(z_inj),

and sends this signal to all units in the layer above.

Step 5 Each output unit (Yk, k = 1, …, m) sums its weighted input signals,

y_ink = w0k + Σ zjwjk,

and applies its activation function to compute its output signal,

yk = f(y_ink).

Backpropagation of error:

Step 6 Each output unit (Yk, k = 1, …, m) receives a target pattern corresponding to the input training pattern and computes its error information term,

δk = (tk - yk) f'(y_ink),

calculates its weight correction term,

Δwjk = αδkzj,

calculates its bias correction term,

Δw0k = αδk,

and sends δk to units in the layer below.

Step 7 Each hidden unit (Zj, j = 1, …, p) sums its delta inputs,

δ_inj = Σ δkwjk,

multiplies by the derivative of its activation function to calculate its error information term,

δj = δ_inj f'(z_inj),

calculates its weight correction term,

Δvij = αδjxi,

and calculates its bias correction term,

Δv0j = αδj.

Update weights and biases:

Step 8 Each output unit (Yk, k = 1, …, m) updates its bias and weights (j = 0, …, p):

wjk(new) = wjk(old) + Δwjk

Each hidden unit (Zj, j = 1, …, p) updates its bias and weights (i = 0, …, n):

vij(new) = vij(old) + Δvij

Step 9 Test stopping condition.
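Steps 0-9 above can be sketched in pure Python for a network with one output unit and logistic activations, whose derivative is f'(x) = f(x)(1 - f(x)). The XOR data, layer sizes, learning rate, and epoch count below are illustrative choices, not the dissertation's configuration:

```python
import math, random

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(data, n_in, n_hid, alpha=0.5, epochs=5000, seed=1):
    random.seed(seed)
    rnd = lambda: random.uniform(-0.5, 0.5)
    # v: input-to-hidden weights; row n_in holds the biases v0j
    v = [[rnd() for _ in range(n_hid)] for _ in range(n_in + 1)]
    # w: hidden-to-output weights; entry n_hid holds the bias w0k
    w = [rnd() for _ in range(n_hid + 1)]
    for _ in range(epochs):
        for x, t in data:
            # feedforward (Steps 3-5)
            z = [logsig(v[n_in][j] + sum(x[i] * v[i][j] for i in range(n_in)))
                 for j in range(n_hid)]
            y = logsig(w[n_hid] + sum(z[j] * w[j] for j in range(n_hid)))
            # backpropagation of error (Steps 6-7)
            dk = (t - y) * y * (1 - y)
            dj = [dk * w[j] * z[j] * (1 - z[j]) for j in range(n_hid)]
            # update weights and biases (Step 8)
            for j in range(n_hid):
                w[j] += alpha * dk * z[j]
                for i in range(n_in):
                    v[i][j] += alpha * dj[j] * x[i]
                v[n_in][j] += alpha * dj[j]
            w[n_hid] += alpha * dk

    def predict(x):
        z = [logsig(v[n_in][j] + sum(x[i] * v[i][j] for i in range(n_in)))
             for j in range(n_hid)]
        return logsig(w[n_hid] + sum(z[j] * w[j] for j in range(n_hid)))
    return predict

# XOR is a small classic test problem (not the dissertation's character data)
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
predict = train_backprop(xor, n_in=2, n_hid=4)
print([round(predict(x)) for x, _ in xor])
```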

## 3.3 Multilayer Neural Network Architecture in MATLAB

## 3.3.1 Neuron Model (logsig, tansig, purelin)

Each input is weighted with an appropriate w. The sum of the weighted inputs and the bias forms the input to the transfer function f. Neurons can use any differentiable transfer function f to generate their output.

## Figure 3.3.1 Neural network model

## 3.3.2 Log-Sigmoid Transfer Function

Multilayer networks often use the log-sigmoid transfer function logsig, which generates outputs between 0 and 1 as the neuron's net input goes from negative to positive infinity.

## Figure 3.3.2 Log-sigmoid transfer function

## 3.3.3 Tan-Sigmoid Transfer Function

Multilayer networks can also use the tan-sigmoid transfer function tansig, which generates outputs between -1 and +1 as the neuron's net input goes from negative to positive infinity.

## Figure 3.3.3 Tan-sigmoid transfer function

## 3.3.4 Linear Transfer Function

Sigmoid output neurons are often used for pattern recognition problems, while linear output neurons are used for function fitting problems. The linear transfer function purelin is

## Figure 3.3.4 Purelin transfer function

## Feedforward Network

A single-layer network of S logsig neurons having R inputs is shown below in full detail on the left and with a layer diagram on the right. Feedforward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear relationships between input and output vectors. The linear output layer is most often used for function fitting (or nonlinear regression) problems. On the other hand, if you want to constrain the outputs of a network (such as between 0 and 1), then the output layer should use a sigmoid transfer function (such as logsig). This is the case when the network is used for pattern recognition problems (in which a decision is being made by the network).
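The dissertation uses MATLAB's toolbox, which performs this computation internally; as an illustrative sketch only, a layer of S logsig neurons computing a = logsig(Wp + b) can be written as follows. The weight matrix W, bias vector b, and input p below are hypothetical:

```python
import math

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(W, b, p):
    """One layer of S logsig neurons with R inputs: a = logsig(W p + b)."""
    return [logsig(sum(wij * pj for wij, pj in zip(row, p)) + bi)
            for row, bi in zip(W, b)]

# hypothetical 2-neuron layer (S = 2) with 3 inputs (R = 3)
W = [[0.5, -0.2, 0.1],
     [0.3,  0.8, -0.5]]
b = [0.0, 0.1]
p = [1.0, 2.0, 3.0]
print(layer(W, b, p))  # each output lies in (0, 1)
```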

## Figure 3.3.4.1a Single-layer feedforward network (a = logsig(Wp + b))

## Figure 3.3.4.1b Layered feedforward network (a = logsig(Wp + b))

## Figure 3.3.4.2 Multilayer feedforward network (input, hidden layer a1 = tansig(IW1,1 p1 + b1), output layer)

## CHAPTER 4

## Feature Extraction

## Introduction

Input variables combined to form a smaller number of new variables are referred to as features, and the process of generating them is known as feature extraction. Feature extraction is used in handwritten character recognition in order to improve recognition performance. When the processed input data from a character image is too large, the input data is transformed into a reduced set of features.

In this dissertation report, various feature extraction techniques have been studied, for example conventional feature extraction, gradient feature extraction, and directional distance feature extraction. The new feature extraction technique developed in this dissertation is the compass operator with Fourier descriptor feature extraction. This technique has been implemented for English character recognition and provides high recognition accuracy with reduced training time.

## 4.1 Conventional Feature Extraction

In the conventional, or global pixel, method a region is constituted by the character of an image. An image is represented by an array of pixels, each carrying an associated value X ranging from 0 for a completely white pixel to 1 for a completely black pixel. It is necessary to store the pixel values for each individual character. The major drawback of the conventional method is that the same character written by different persons is different, and the same character written by the same person at different times is also different, so a particular character cannot be reliably recognized. The method also requires large memory space to store the pixel values.

## 4.2 Gradient Feature Extraction

The gradient operator used in this dissertation to calculate gradient values is the compass operator. The compass edge detector is an appropriate way to estimate the magnitude and orientation of an edge. While differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x and y directions, compass edge detection obtains the orientation directly from the kernel with the maximum response. The method rotates the Prewitt or Sobel mask through all possible directions, in the anticlockwise direction.

## Figure 4.1. Compass masks

This operator, known as the compass operator, is very useful for detecting weak edges and gives equal brightness all over the edges, which is its major advantage over the Sobel and Prewitt operators. The compass operator works on black and white images as well as on colored images. A black and white image may appear blank in the graphics window, but its gradient values are nevertheless gradually changed.

M(x,y) = a(1)*I(x-1,y-1) + a(2)*I(x-1,y) + a(3)*I(x-1,y+1) + a(4)*I(x,y-1) + a(5)*I(x,y) + a(6)*I(x,y+1) + a(7)*I(x+1,y-1) + a(8)*I(x+1,y) + a(9)*I(x+1,y+1);

N(x,y) = b(1)*I(x-1,y-1) + b(2)*I(x-1,y) + b(3)*I(x-1,y+1) + b(4)*I(x,y-1) + b(5)*I(x,y) + b(6)*I(x,y+1) + b(7)*I(x+1,y-1) + b(8)*I(x+1,y) + b(9)*I(x+1,y+1);

O(x,y) = c(1)*I(x-1,y-1) + c(2)*I(x-1,y) + c(3)*I(x-1,y+1) + c(4)*I(x,y-1) + c(5)*I(x,y) + c(6)*I(x,y+1) + c(7)*I(x+1,y-1) + c(8)*I(x+1,y) + c(9)*I(x+1,y+1);

P(x,y) = d(1)*I(x-1,y-1) + d(2)*I(x-1,y) + d(3)*I(x-1,y+1) + d(4)*I(x,y-1) + d(5)*I(x,y) + d(6)*I(x,y+1) + d(7)*I(x+1,y-1) + d(8)*I(x+1,y) + d(9)*I(x+1,y+1);

Q(x,y) = e(1)*I(x-1,y-1) + e(2)*I(x-1,y) + e(3)*I(x-1,y+1) + e(4)*I(x,y-1) + e(5)*I(x,y) + e(6)*I(x,y+1) + e(7)*I(x+1,y-1) + e(8)*I(x+1,y) + e(9)*I(x+1,y+1);

R(x,y) = f(1)*I(x-1,y-1) + f(2)*I(x-1,y) + f(3)*I(x-1,y+1) + f(4)*I(x,y-1) + f(5)*I(x,y) + f(6)*I(x,y+1) + f(7)*I(x+1,y-1) + f(8)*I(x+1,y) + f(9)*I(x+1,y+1);

S(x,y) = g(1)*I(x-1,y-1) + g(2)*I(x-1,y) + g(3)*I(x-1,y+1) + g(4)*I(x,y-1) + g(5)*I(x,y) + g(6)*I(x,y+1) + g(7)*I(x+1,y-1) + g(8)*I(x+1,y) + g(9)*I(x+1,y+1);

T(x,y) = h(1)*I(x-1,y-1) + h(2)*I(x-1,y) + h(3)*I(x-1,y+1) + h(4)*I(x,y-1) + h(5)*I(x,y) + h(6)*I(x,y+1) + h(7)*I(x+1,y-1) + h(8)*I(x+1,y) + h(9)*I(x+1,y+1);

W = max(max(max(max(max(max(max(M,N),O),P),Q),R),S),T);

## Fig.4.2 Experimental result of colored image
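The same idea can be sketched in pure Python: eight compass masks are built by rotating the Prewitt east kernel, and the maximum of the eight responses is kept at every pixel, mirroring W = max(M, N, …, T) above. The tiny 0/1 test image is a hypothetical example, not the dissertation's MATLAB implementation:

```python
PREWITT_E = [[-1, 0, 1],
             [-1, 0, 1],
             [-1, 0, 1]]

def rotate45(m):
    """Shift the outer ring of a 3x3 mask by one position (a 45-degree rotation)."""
    ring = [m[0][0], m[0][1], m[0][2], m[1][2], m[2][2], m[2][1], m[2][0], m[1][0]]
    ring = ring[1:] + ring[:1]
    return [[ring[0], ring[1], ring[2]],
            [ring[7], m[1][1], ring[3]],
            [ring[6], ring[5], ring[4]]]

def compass(img):
    masks = [PREWITT_E]
    for _ in range(7):
        masks.append(rotate45(masks[-1]))   # the eight compass directions
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # maximum response over all eight directional masks
            out[y][x] = max(sum(mk[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                                for dy in range(3) for dx in range(3))
                            for mk in masks)
    return out

img = [[0, 0, 1, 1, 1]] * 5    # vertical edge between columns 1 and 2
g = compass(img)
print(g[2][2], g[2][3])        # -> 3 0 (strong response on the edge, zero in the flat region)
```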

## 4.3 Fourier Descriptor Feature Extraction

The Fourier descriptor operates on a K-point digital boundary in the xy-plane. Starting at an arbitrary point (x0, y0), the coordinate points (x0, y0), (x1, y1), (x2, y2), …, (xK-1, yK-1) are encountered in traversing the boundary in the counterclockwise direction. These coordinates can be expressed in the form x(k) = xk and y(k) = yk, so the boundary can be represented as the sequence of coordinates s(k) = [x(k), y(k)], for k = 0, 1, 2, …, K-1. Each coordinate pair can be treated as a complex number so that

s(k) = x(k) + jy(k) …(1)

where k = 0, 1, 2, …, K-1. The x-axis is treated as the real axis and the y-axis as the imaginary axis of the sequence of complex numbers. Although the interpretation of the sequence is recast, the nature of the boundary itself is not changed. This recasting has one great advantage: it reduces a 2-D problem to a 1-D problem.

The discrete Fourier transformation (DFT) of s(k) is

a(u) = (1/K) Σ s(k) e^(-j2πuk/K) …(2)

where u = 0, 1, 2, …, K-1. The complex coefficients a(u) are called the Fourier descriptors of the boundary. The inverse Fourier transform of these coefficients restores s(k):

s(k) = Σ a(u) e^(j2πuk/K) …(3)

where k = 0, 1, 2, …, K-1. Suppose that, instead of all the Fourier coefficients, only the first P coefficients are used. This is equivalent to setting a(u) = 0 for u > P-1 in equation (3). The result is the following approximation to s(k):

ŝ(k) = Σ a(u) e^(j2πuk/K), u = 0, 1, …, P-1 …(4)

where k = 0, 1, 2, …, K-1. Although only P terms are used to obtain each component of ŝ(k), k still ranges from 0 to K-1. The same number of points exists in the approximate boundary, but not as many terms are used in the reconstruction of each point.

## Figure 4.3 Reconstruction of a boundary using Fourier coefficients (original K = 64; reconstructions with P = 2, 8, and 64)

The figure shows a square boundary consisting of K = 64 points and the results of using equation (4) to reconstruct this boundary for various values of P. When P = 61 the curves begin to straighten, and one additional coefficient leads to an almost exact replica of the original. Thus a few low-order coefficients are able to capture gross shape, but many more high-order terms are required to define accurately sharp features such as corners and straight lines. This result is not unexpected in view of the roles played by low- and high-frequency components in defining the shape of a region.
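Equations (2)-(4) can be sketched directly in Python with the standard `cmath` module; the 8-point square boundary below is a hypothetical example, not the K = 64 boundary of Figure 4.3:

```python
import cmath

def fourier_descriptors(boundary):
    """Equation (2): a(u) = (1/K) * sum_k s(k) e^(-j2*pi*u*k/K)."""
    K = len(boundary)
    s = [complex(x, y) for x, y in boundary]   # s(k) = x(k) + j*y(k), equation (1)
    return [sum(s[k] * cmath.exp(-2j * cmath.pi * u * k / K) for k in range(K)) / K
            for u in range(K)]

def reconstruct(a, P):
    """Equation (4): keep only the first P coefficients; k still ranges over 0..K-1."""
    K = len(a)
    return [sum(a[u] * cmath.exp(2j * cmath.pi * u * k / K) for u in range(P))
            for k in range(K)]

# small square boundary traced counterclockwise (illustrative K = 8)
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
a = fourier_descriptors(square)
full = reconstruct(a, P=len(a))   # all K coefficients: exact restoration of s(k)
coarse = reconstruct(a, P=2)      # few low-order terms: only the gross shape
print(max(abs(f - complex(x, y)) for f, (x, y) in zip(full, square)) < 1e-9)  # True
```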


## CHAPTER 5

## Implementation of Feature Extraction Techniques

This chapter describes the various feature extraction techniques namely Compass Gradient Feature Extraction, Fourier Descriptor Feature Extraction and Compass Gradient Feature Extraction with Fourier Descriptor for handwritten English character recognition.

## 5.1 Compass Gradient Feature Extraction

## Fig. 5.1 Flowchart

## Procedure

## 5.1.1 Perform The Normalization Process On Scanned Character

I am planning to scan each character at 300 pixels per inch using an HP-Scan Jet 11 scanner. The scanned character will be converted into 4096 (64×64) binary pixels. The skeletonization process will then be applied to the binary pixel image: extra pixels that do not belong to the backbone of the character will be deleted, and broad strokes will be reduced to thin lines. Skeletonization is illustrated in Figure 5.1.1.

## Figure 5.1.1 Skeletonization of an English character

There are many variations in the handwriting of different persons. After the skeletonization process, a normalization process is needed; it normalizes the character into a 30×30 pixel window and shifts it to the top-left corner of the window.

## 5.1.2 Perform Binarization On Captured Image Of 30 × 30 Pixels

After the skeletonization and normalization processes are applied to each character, the normalized image is binarized into a 30 × 30 matrix. Black pixels contain the value 1 and white pixels the value 0.
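A hedged sketch of this binarization and normalization step follows: a grayscale character image is thresholded so ink pixels become 1 and background 0, and the character is shifted to the top-left corner of a fixed window. The threshold value, the small window size, and the sample image are illustrative assumptions, not the dissertation's exact procedure:

```python
def binarize(img, threshold=0.5):
    """Dark (ink) pixels become 1, light (background) pixels become 0."""
    return [[1 if p < threshold else 0 for p in row] for row in img]

def shift_top_left(img, size=30):
    """Move the character's bounding box to the top-left corner of a size x size window."""
    ink = [(r, c) for r, row in enumerate(img) for c, v in enumerate(row) if v]
    out = [[0] * size for _ in range(size)]
    if ink:
        r0 = min(r for r, _ in ink)   # top edge of the bounding box
        c0 = min(c for _, c in ink)   # left edge of the bounding box
        for r, c in ink:
            if r - r0 < size and c - c0 < size:
                out[r - r0][c - c0] = 1
    return out

gray = [[0.9, 0.9, 0.9],
        [0.9, 0.2, 0.1],
        [0.9, 0.9, 0.9]]
bw = binarize(gray)            # ink at positions (1,1) and (1,2)
norm = shift_top_left(bw, 4)   # 4x4 window for the example (30x30 in the text)
print(norm[0][:3])             # [1, 1, 0]
```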

## 5.1.3 Implementation of Compass Operator

The compass operator is implemented on the 30 × 30 matrix for gradient calculation. The gradient values are decomposed over the mask directions, giving 9 discrete gradient values.

## Fig. 5.1.2 Experimental result of colored image

## 5.1.4 Training data on Neural Network

The 30 × 30 output of the compass operator is reshaped into a 900 × 1 column vector, which is given as input to the feedforward neural network. The command used to train the feedforward network is net = train(net, I, g), where I is the input matrix and g is the goal (target). The goal for each training input is a 2 × 1 matrix, set in [0 1]' or [1 0]' form.

## Figure 5.1.3 Experimental result of Training data

## 5.1.5 Goal Met

Training stops when the network training goal of 10^-5 is met or when the gradient of successive training iterations falls below 10^-10.

## 5.1.6 Testing data on Neural Network

The command used to simulate (test) the trained network is out = sim(net, I).

## 5.2 Fourier Descriptor Feature Extraction

## Fig. 5.2.1 Flowchart

## Procedure

## 5.2.1 Perform The Normalization Process On Scanned Character

I am planning to scan each character at 300 pixels per inch using an HP-Scan Jet 11 scanner. The scanned character will be converted into 4096 (64×64) binary pixels. The skeletonization process will then be applied to the binary pixel image: extra pixels that do not belong to the backbone of the character will be deleted, and broad strokes will be reduced to thin lines. Skeletonization is illustrated in Figure 5.2.2.

## Figure 5.2.2 Skeletonization of an English character

There are many variations in the handwriting of different persons. After the skeletonization process, a normalization process is needed; it normalizes the character into a 30×30 pixel window and shifts it to the top-left corner of the window.

## 5.2.2 Perform Binarization On Captured Image Of 30 × 30 Pixels

After the skeletonization and normalization processes are applied to each character, the normalized image is binarized into a 30 × 30 matrix. Black pixels contain the value 1 and white pixels the value 0.

## 5.2.3 Feature Extraction by Fourier Descriptor

In feature extraction, the boundary of a given character is first extracted; the Fourier descriptor is then calculated on the extracted boundary.

## Fig. 5.2.3 Experimental result

## 5.2.4 Training data on Neural Network

The resulting feature values are reshaped into a 900 × 1 column vector, which is given as input to the feedforward neural network. The command used to train the feedforward network is net = train(net, I, g), where I is the input matrix and g is the goal (target). The goal for each training input is a 2 × 1 matrix, set in [0 1]' or [1 0]' form.

## Figure 5.2.4. Experimental result of Training data

## 5.2.5 Goal Met

Training stops when the network training goal of 10^-5 is met or when the gradient of successive training iterations falls below 10^-10.

## 5.2.6 Testing data on Neural Network

The command used to simulate (test) the trained network is out = sim(net, I).

## 5.3 Combined analysis of Fourier Descriptor and Compass operator

When the Fourier descriptor is combined with the compass operator, accuracy increases and the time needed to recognize a character decreases. First, the handwritten scanned character is normalized. Second, the normalized character is given as input to the compass operator, which brightens the edges one by one until every edge of the character is brightened. Finally, the boundary of the character is taken and the Fourier descriptor is computed.

## Fig. 5.3.1 Flowchart

## Fig. 5.3.2 Experimental result of combined approach

## 5.3.1 Training data on Neural Network

The 900 × 1 column vector produced by the combined compass operator and Fourier descriptor stage is given as input to the feedforward neural network. The command used to train the feedforward network is net = train(net, I, g), where I is the input matrix and g is the goal (target). The goal for each training input is a 2 × 1 matrix, set in [0 1]' or [1 0]' form.

## Figure 5.3.3. Experimental result of Training data

## CHAPTER 6

## Result and Discussions

## 6.1 Introduction

This chapter compares the combined approach of the compass operator with Fourier descriptor against Fourier descriptor feature extraction and compass operator feature extraction alone. The comparison has been made in terms of training time, classification time, recognition accuracy, and number of iterations.

## 6.1.1 Compass operator Feature Extraction

An analysis of the experimental results has been performed and is shown in Table 6.1.1.

| Input to MLPN | No. of Hidden Units | No. of Iterations | Training Time (sec) | Classification Time (ms) | Performance on Training Set (%) | Performance on Test Set (%) |
| --- | --- | --- | --- | --- | --- | --- |
| 30×30 gradient input | 12 | 50 | 34.813 | 0.125 | 100 | 95 |

## Table 6.1.1 Result of handwritten English character recognition using backpropagation network

This technique requires more time to train the network and also more classification time. The structure analysis shows that as the number of hidden nodes increases, the number of iterations taken to recognize the handwritten characters also increases.

## 6.1.2 Fourier Descriptor Feature Extraction

Five hundred samples were collected from 10 persons, 50 samples each, of which 250 were used for training (training data) and 250 for testing (test data).

An analysis of the experimental results has been performed and is shown in Table 6.1.2.

| No. of Hidden Nodes (Neurons) | Learning Rate | Momentum Factor | No. of Epochs | Recognition % (Training Set) | Recognition % (Test Set) |
| --- | --- | --- | --- | --- | --- |
| 12 | 0.2 | 0.8 | 50 | 100 | 89 |
| 24 | 0.2 | 0.8 | 100 | 100 | 94 |
| 36 | 0.2 | 0.8 | 200 | 100 | 94 |

## Table 6.1.2 Result of handwritten English character recognition using MLP

## 6.1.3 Fourier Descriptor Feature Extraction with Compass operator

Five hundred samples were collected from 10 people (male and female), 50 samples each, of which 300 were used for training (training data) and 200 for testing (test data).

An analysis of the experimental results has been performed and is shown in Table 6.1.3.

| No. of Hidden Nodes (Neurons) | Training Time | Classification Time (ms) | No. of Epochs | Recognition % (Training Set) | Recognition % (Test Set) |
| --- | --- | --- | --- | --- | --- |
| 12 | 0.1406 | 59.922 | 89 | 100 | 96 |

## Table 6.1.3 Result of handwritten English character recognition using the combined approach

The bar-chart representation of the comparative analysis of the three techniques in terms of recognition accuracy is shown below in Figure 6.1.