Recognition of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), the character or symbol to be recognized may be a machine-printed or handwritten character/numeral. Several approaches deal with the problem of numeral/character recognition, depending on the type of features extracted and the different ways of extracting them.
This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, among the most widely used in the Indian subcontinent. Our work focuses on a global feature-extraction technique using end-point information, which is extracted from images of isolated numerals. These feature vectors are fed to a neuro-memetic model that has been trained to recognize Hindi numerals. The prototype system has been tested on a variety of numeral images. In the proposed scheme, data sets are fed to a neuro-memetic algorithm, which identifies the rule with the highest fitness value (nearly 100%); the template associated with this rule is the identified numeral. Experimental results show a recognition rate of 92-97% compared to other models.
Keywords: OCR, Global Feature, End-Points, Neuro-Memetic
Categories and Subject Descriptor
Image processing and computer vision
Measurements, Performance, Design, Experimentation
In Optical Character Recognition (OCR), the character or symbol to be recognized may be a machine-printed or handwritten character/numeral. Handwritten numeral recognition is an exigent task due to shape variation, different script styles, and different kinds of noise that break the strokes of a numeral or change its topology. Since handwriting varies even when a person writes the same character twice, one can expect enormous dissimilarity among people. These are the reasons that have led researchers to seek techniques that improve the ability of computers to characterize and recognize handwritten numerals, as presented in . Offline recognition is reviewed in [7, 10, 12, 15] and online recognition in [16, 17]. Some progress can be observed for isolated digit recognition, as many researchers [8, 9, 11, 13] across the globe have chosen handwritten numeral/character recognition as their field.
Systems based on Optical Character Recognition (OCR) are now available commercially at affordable cost and can be used to recognize many printed fonts. Even so, it is important to note that in some situations this commercial software is not always satisfactory, and problems still exist with unusual character sets, fonts, and documents of poor quality. Unfortunately, the success of OCR has not extended to handwriting recognition, due to the large degree of variability in people's handwriting styles. Diverse algorithms and schemes for handwritten character recognition have evolved.
A Handwritten Character Recognition (HCR) system typically involves two steps: feature extraction, in which the patterns are represented by a set of features, and classification, in which decision rules for separating pattern classes are defined.
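As a rough illustration of this two-step pipeline (not the scheme proposed in this paper), the sketch below uses simple quadrant-density features and a nearest-mean decision rule; the feature choice and the class means are illustrative only:

```python
import numpy as np

def extract_features(image):
    """Feature extraction: map a raw numeral image to a fixed-length
    feature vector (here, the mean ink density of each quadrant)."""
    h, w = image.shape
    quadrants = [image[:h//2, :w//2], image[:h//2, w//2:],
                 image[h//2:, :w//2], image[h//2:, w//2:]]
    return np.array([q.mean() for q in quadrants])

def classify(features, class_means):
    """Classification: assign the class whose stored mean feature
    vector is nearest to the input (a minimal decision rule)."""
    distances = {label: np.linalg.norm(features - mu)
                 for label, mu in class_means.items()}
    return min(distances, key=distances.get)

# Toy usage: two hypothetical classes with made-up mean vectors.
class_means = {0: np.array([0.1, 0.1, 0.1, 0.1]),
               1: np.array([0.8, 0.8, 0.8, 0.8])}
img = np.ones((8, 8))  # a dense (ink-heavy) dummy image
print(classify(extract_features(img), class_means))  # -> 1
```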
Features can be broadly classified into two categories: statistical features, derived from the statistical distribution of points (e.g. zoning, moments, n-tuples, characteristic loci), and structural features (e.g. strokes of line segments, loops, and stroke relations). Statistical and structural features appear to be complementary, as they highlight different properties of the characters. The statistical approach represents a pattern as an ordered, fixed-length list of numerical values, whereas the structural approach describes the pattern as an unordered, variable-length list of simple shapes. Script dependency further divides features into global and local features.
In the character recognition problem, the description phase plays a fundamental role, since it defines the set of properties considered essential for characterizing the pattern.
Moments and functions of moments have been utilized as pattern features in a number of applications. Hu  first introduced moment invariants in 1962, based on the theory of algebraic invariants. Using non-linear combinations of geometric moments, a set of moment invariants has been derived; these moments are invariant under image translation, scaling, rotation and reflection. A number of papers describing applications of invariant moments [4, 5] and their variants (e.g. complex moments, rotational moments, Zernike moments, Legendre moments) have been published. Recently, specialists have made use of moments for feature extraction in different ways. A few reports present comparative studies [6, 7] of Fourier Descriptors and Hu's seven Moment Invariants (MIs), showing comparatively better results with MIs. A comparison is also made in  with Affine Moment Invariants (AMIs). A. G. Mamistvalov  presented the proof of the generalized fundamental theorem of moment invariants for n-dimensional pattern recognition and formulated the correct fundamental theorem of Moment Invariants. Using these moments, a conceptual mathematical theory of recognition of geometric figures, solids and their n-dimensional generalizations was worked out. A considerable amount of work has been carried out with MIs on English script  and other subcontinental languages such as Farsi  and Chinese , and even Indian scripts such as Devanagari  and Kannada .
In the field of handwriting recognition, it is now established that a single feature-extraction method combined with a single classification algorithm generally cannot yield a very low error rate. It is therefore proposed that certain combinations of features can produce better success rates. Three major factors justify such an approach: (i) the use of several types of features ensures an accurate description of the characters; (ii) the use of a single classifier preserves fast and flexible learning; (iii) the tedious tuning of combination rules is avoided.
In the present work, a standardized database has first been created with respect to variety in handwriting style. The results reported in the paper are more reliable and satisfactory compared to existing techniques in terms of features, classifiers and environment. The paper is organized as follows. Section 2 introduces Devanagari characters and the method of sampling. In Section 3, the fundamental theorem of invariant moments is presented. The theory of the different methods experimented with is discussed in Section 4. Section 5 gives details of the Gaussian distribution method for recognition. Section 6 provides a discussion of results, and the conclusion is summarized in Section 7.
2. DEVANAGARI NUMERALS
India is a multilingual country of more than 1 billion people, with 18 constitutional languages and 10 different scripts. Devanagari, an alphabetic script, is used by a number of Indian languages. It was developed to write Sanskrit but was later adapted to write many other languages such as Marathi, Hindi, Konkani and Nepali. As no standardized database of handwritten Devanagari characters is available, the relevant database was first created. Data were collected manually from the general public: 10 samples of each numeral from 20 persons of different fields and ages. Some handwritten samples written by three different persons are shown below.
Fig. 1. Numeral String Samples (Phone Numbers)
3. MOMENT INVARIANTS (MIS)
The moment invariants (MIs) are used to evaluate seven distributed parameters of a numeral image. In any character recognition system, the characters are processed to extract features that uniquely represent properties of the character. The MIs are well known to be invariant under translation, rotation, scaling and reflection. They are measures of the pixel distribution around the center of gravity of the character and capture the global shape information of the character. In the present work, the moment invariants are evaluated using central moments of the image function f(x, y) up to third order. Regular moments are defined as

    Mpq = ∫∫ x^p y^q f(x, y) dx dy    (1)
where p, q = 0, 1, 2, … and Mpq is the (p+q)th-order moment of the continuous image function f(x, y). If the image is represented by a discrete function, the integrals are replaced by summations, and Equation (1) can be written as

    Mpq = Σx Σy x^p y^q f(x, y)    (2)
The central moments of f(x, y) are defined by the expression

    μpq = Σx Σy (x − X)^p (y − Y)^q f(x, y)    (3)
where X = m10 / m00 and Y = m01 / m00 are the coordinates of the centroid of the image. The central moments of order up to 3 follow by expanding Equation (3).
The normalized central moment of order (p+q), which accounts for shape and size, is defined as

    ηpq = μpq / μ00^γ,  where γ = (p + q)/2 + 1    (4)
Based on the normalized central moments, a set of seven moment invariants [13,14] can be derived as follows:

    Φ1 = η20 + η02
    Φ2 = (η20 − η02)^2 + 4η11^2
    Φ3 = (η30 − 3η12)^2 + (3η21 − η03)^2
    Φ4 = (η30 + η12)^2 + (η21 + η03)^2
    Φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)^2 − 3(η21 + η03)^2] + (3η21 − η03)(η21 + η03)[3(η30 + η12)^2 − (η21 + η03)^2]
    Φ6 = (η20 − η02)[(η30 + η12)^2 − (η21 + η03)^2] + 4η11(η30 + η12)(η21 + η03)
    Φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)^2 − 3(η21 + η03)^2] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)^2 − (η21 + η03)^2]    (5)
It has been shown that the normalized moments are invariant under translation, rotation, scale change and reflection. In this work each numeral is scanned as a 40 × 40 pixel image. The image obtained represents the numeral in black on a white background; the image matrix f(x, y) is complemented to obtain the character in white on a black background. The expressions given by Equations (5) are used to evaluate the seven central moment invariants (Φ1-Φ7), which are used as features. Further, the mean and standard deviation of each feature are determined over 200 samples. Thus we have 14 features (7 means and 7 standard deviations), which are applied as features for recognition using a Gaussian distribution function. To increase the success rate, new features need to be extracted based on divisions of the images and other methods.
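The moment computations described in this section can be sketched directly from the standard definitions. The following fragment is an illustrative implementation (not the authors' code); it evaluates normalized central moments and the first two of Hu's invariants, and checks their translation invariance on toy 40 × 40 images:

```python
import numpy as np

def raw_moment(f, p, q):
    # M_pq = sum_x sum_y x^p * y^q * f(x, y), the discrete regular moment
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]
    return (x**p * y**q * f).sum()

def central_moment(f, p, q):
    # mu_pq: moments taken about the image centroid (X, Y)
    m00 = raw_moment(f, 0, 0)
    X = raw_moment(f, 1, 0) / m00
    Y = raw_moment(f, 0, 1) / m00
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]
    return ((x - X)**p * (y - Y)**q * f).sum()

def eta(f, p, q):
    # normalized central moment: mu_pq / mu_00^gamma, gamma = (p+q)/2 + 1
    gamma = (p + q) / 2 + 1
    return central_moment(f, p, q) / central_moment(f, 0, 0)**gamma

def hu_first_two(f):
    # First two of Hu's seven invariants (the rest follow the same pattern)
    phi1 = eta(f, 2, 0) + eta(f, 0, 2)
    phi2 = (eta(f, 2, 0) - eta(f, 0, 2))**2 + 4 * eta(f, 1, 1)**2
    return phi1, phi2

# The same white-on-black blob at two positions gives identical invariants.
img = np.zeros((40, 40)); img[10:20, 5:15] = 1.0
shifted = np.zeros((40, 40)); shifted[22:32, 20:30] = 1.0
print(np.allclose(hu_first_two(img), hu_first_two(shifted)))  # -> True
```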
4. THEORY OF METHODS
Principal Components Analysis (PCA):-
Principal Components Analysis (PCA) is a way of identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences. Since patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for analysing data.
The other main advantage of PCA is that once you have found these patterns in the data, you can compress the data, i.e. reduce the number of dimensions, without much loss of information. This technique is also used in image compression.
This section takes you through the steps needed to perform a Principal Components Analysis on a set of data. We do not describe exactly why the technique works, but we explain what is happening at each step so that you can make informed decisions when you apply the technique yourself.
Step 1: Get some data
Step 2: Subtract the mean
For PCA to work properly, you have to subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension: all the x values have x̄ (the mean of the x values of all the data points) subtracted, and all the y values have ȳ subtracted from them. This produces a data set whose mean is zero.
Step 3 : Calculate the covariance matrix
Step 4 : Calculate the eigenvectors and eigen values of the covariance matrix
Step 5: Choosing components and forming a feature vector. Here is where the notion of data compression and reduced dimensionality comes in. If you look at the eigenvectors and eigenvalues from the previous step, you will notice that the eigenvalues are quite different in magnitude. In fact, the eigenvector with the highest eigenvalue is the principal component of the data set: it points along the direction in which the data vary most, and represents the most significant relationship between the data dimensions.
Step 6: Deriving the new data set
This is the final step in PCA, and it is also the easiest. Once we have chosen the components (eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the transpose of the vector and multiply it on the left of the original data set, transposed.
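The six steps above can be sketched as follows. This is an illustrative NumPy implementation with made-up toy data, not the configuration used in the reported experiments:

```python
import numpy as np

def pca_transform(data, k):
    """Steps 2-6: mean-subtract, covariance, eigendecomposition,
    keep the k eigenvectors with the largest eigenvalues, project."""
    mean = data.mean(axis=0)
    adjusted = data - mean                    # Step 2: subtract the mean
    cov = np.cov(adjusted, rowvar=False)      # Step 3: covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # Step 4: eigenvalues/vectors
    order = np.argsort(vals)[::-1]            # sort by decreasing eigenvalue
    feature_vector = vecs[:, order[:k]]       # Step 5: form feature vector
    return adjusted @ feature_vector, feature_vector, mean  # Step 6: project

# Toy data: 2-D points scattered mainly along one direction.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
data = np.hstack([t, 2 * t + 0.1 * rng.normal(size=(100, 1))])

projected, fv, mean = pca_transform(data, k=1)
print(projected.shape)  # -> (100, 1)
```

With k=1 the 2-D data collapse onto the single direction of maximum variance, which is the dimensionality reduction described above.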
Getting the old data back
Recall that the final transform is this:

    FinalData = FeatureVector^T × AdjustedData^T

which can be turned around so that, to get the original data back,

    AdjustedData^T = (FeatureVector^T)^-1 × FinalData

When all the eigenvectors are kept, the feature vector is orthonormal, so its inverse is simply its transpose. This makes the return trip to our data easier, because the equation becomes

    AdjustedData^T = FeatureVector × FinalData

But, to get the actual original data back, we need to add on the mean of that original data (remember we subtracted it right at the start). So, for completeness,

    OriginalData^T = (FeatureVector × FinalData) + OriginalMean
This formula also applies when you do not have all the eigenvectors in the feature vector. Even when some eigenvectors are left out, the equation still makes the correct transform; the result is then an approximation of the original data rather than an exact recovery.
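A minimal round-trip check of this reconstruction, under the assumption of orthonormal eigenvectors (illustrative code with random toy data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 3))

mean = data.mean(axis=0)
adjusted = data - mean
vals, vecs = np.linalg.eigh(np.cov(adjusted, rowvar=False))
order = np.argsort(vals)[::-1]

# Keep ALL eigenvectors: the reconstruction is exact, because the
# transpose of an orthonormal feature vector equals its inverse.
fv_all = vecs[:, order]
final = adjusted @ fv_all
restored = final @ fv_all.T + mean
print(np.allclose(restored, data))  # -> True

# Keep only the top 2 eigenvectors: the same formula still applies,
# but the reconstruction is now an approximation of the original data.
fv2 = vecs[:, order[:2]]
approx = (adjusted @ fv2) @ fv2.T + mean
print(approx.shape)  # -> (50, 3)
```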
Principal Components Analysis (PCA) is a multivariate procedure which rotates the data such that the directions of maximum variability are projected onto the axes. The main use of PCA is to reduce the dimensionality of a data set while retaining as much information as possible; it computes a compact and optimal description of the data set. Fig. 5 shows a coordinate system (X1, X2). Basis vectors are chosen such that they point in the directions of maximum variance of the data, say (Y1, Y2), in which the data can then be expressed.
When the PCA features are combined with the original MI features and the image-partition feature sets (1 and 2) and applied to the recognition system, the success rate is 85.85%; this method, however, gives better performance on printed Devanagari numerals.
5. NEURAL NETWORK AS CLASSIFIER
Classification is a process in which the features of an object are used by a classifier to map the object into the proper object class. ANN-based classifiers appear to be the most general and least cumbersome. The back-propagation neural network is used in this research to recognize Devanagari characters. Back-propagation is one of the most popular supervised training methods for ANNs. It is based on the gradient-descent technique for minimizing the squared error between the desired output and the actual output. The network has no feedback connections, but errors are propagated backward during training.
In the learning procedure the network undergoes supervised training with a finite number of patterns, each consisting of an input pattern and a desired output pattern. One cycle of learning consists of two phases.
1. The first phase is called the forward pass. The input pattern is presented to the network, and activation values from each unit are propagated forward via the hidden layer until the output of the network is computed in the output layer. The output is then compared with the desired output pattern to compute the error signal.
2. The second phase is called the reverse pass. Error signals are propagated backward from the output layer via the hidden layer to the input layer. During this phase the error signals for all non-output units are computed recursively. After all error signals have been computed, the weight changes can be calculated and the weights updated accordingly. This process is repeated until the total error falls below some tolerance level.
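The two-phase learning cycle can be sketched as follows. This is a minimal NumPy illustration on a toy problem (XOR), not the network configuration used in this work; the layer sizes, learning rate and epoch count are arbitrary choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy task (XOR), standing in for numeral feature vectors and class labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # hidden -> output
lr = 0.5

losses = []
for epoch in range(5000):
    # Phase 1 (forward pass): propagate activations to the output layer.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    losses.append(np.mean((Y - T) ** 2))  # squared error vs desired output

    # Phase 2 (reverse pass): propagate error signals backward.
    dY = (Y - T) * Y * (1 - Y)        # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)    # hidden-layer error signal (recursive)

    # Update the weights by gradient descent on the squared error.
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0, keepdims=True)

print(f"error: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In a real recognizer the training loop would instead stop once the total error falls below the chosen tolerance, as described above.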
In this paper, an attempt is made to apply different techniques based on invariant moments for feature extraction. All methods have their respective results, which are found to be promising when combined. It was found that the recognition rate could be enhanced if a character is divided in a systematic manner and the features of each divided part are used in the recognition system. The PCA method works by balancing the pixel distribution across all regions of the divided image, and it increases the success rate over the correlation-coefficient method. The three feature sets of division suggested in the paper lead to a 92% success rate in the worst case.
Variations in writing are covered by three feature sets based on partition of an image; the key to the recognition system is how to divide a character. The perturbed moments give very good performance on handwritten basic geometric shapes, but for numeral images the success rate is about 74%. The four methods suggested in the paper are useful, as they help enhance the success rate in spite of great variation in characters due to different handwriting styles.