This paper introduces the concept of moment invariance into a classification algorithm based on morphological boundary extraction and the generalized skeleton transform, and proposes a new shape-classification method in this field. The method first extracts the boundary of the object, then extracts the skeleton using a main-skeleton extraction algorithm based on visually important parts, and finally extends the Hu moments, which are traditionally applied to regions, to compute invariants of the skeleton. The present paper describes the theory and application of the method in four parts: boundary extraction, skeleton extraction, calculation of moment invariants, and the classification algorithm.
Keywords: Moment invariance, boundary extraction, generalized skeleton transform, shape classification
One major task in shape analysis is to study the underlying statistics of a shape population and to use this information to extract, recognize, and understand physical structures and biological objects. Shape classification is one of the central topics in computer vision for the recognition of objects in images. Efficient shape representation provides the foundation for efficient algorithms for many shape-related processing tasks, such as image coding, shape matching and object recognition, content-based video processing, and image data retrieval. Mathematical morphology is a shape-based approach to image processing. A number of morphological shape representation schemes have been proposed. Many of them use the structural approach, i.e., a given shape is described in terms of simpler shape components and the relationships among those components.
Basic morphological operations can be given interpretations using geometric terms of shape, size, and distance. Therefore, mathematical morphology is especially suited for handling shape-related processing and operations. Mathematical morphology also has a well-developed mathematical structure, which facilitates the development and analysis of morphological image processing algorithms. Several groups of features have been used for this purpose, such as simple visual features (edges, contours, textures, etc.), Fourier and Hadamard coefficients, differential invariants, and moment invariants, among others. The theory of moment invariants derives from analytic geometry and was first proposed by Cayley and Sylvester. Hu later introduced the concept of algebraic moment invariants and gave a group of such invariants based on combinations of general moments. This set of moments, known as the Hu moments, is invariant under scaling, translation, and rotation of objects. However, the approach must process every pixel of the target image region, which is time-consuming and therefore relatively inefficient. To address this, much work has focused in particular on computing moment invariants from the object boundary [18, 19].
Shape parameters are extracted from objects in order to describe their shape, to compare it to the shape of template objects, or to partition objects into classes of different shapes. In this respect, an important question is how shape parameters can be made invariant under certain transformations. Objects can be viewed from different distances and from different points of view. It is therefore of interest to find shape parameters that are scale- and rotation-invariant, or that are even invariant under affine or perspective projection.
This work is organized as follows. Section II provides the necessary definitions used throughout this work and briefly introduces the methods that form the basis of our approach. Section III gives a detailed description of the proposed unifying approach. Section IV presents the experimental setup and results, and conclusions are discussed in Section V.
HU'S MOMENT INVARIANTS
In this section, a brief review of Hu's invariant moments is presented. For a digital image, the traditional two-dimensional geometric moments of order p+q of a density distribution function f(x,y) are defined as

mpq = ∫∫ x^p y^q f(x,y) dx dy,   p, q = 0, 1, 2, … (1)
and they are not invariant. The double integrals are taken over the whole area of the object, including its boundary, which implies a computational complexity of order O(N^2). For a discrete function f(x,y) defined over a domain of M×N discrete points, equation (1) takes the form

mpq = Σx Σy x^p y^q f(x,y) (2)
The (p+q)-order central moment is defined as

μpq = Σx Σy (x − x̄)^p (y − ȳ)^q f(x,y),   where x̄ = m10/m00 and ȳ = m01/m00 (3)
The total area of the object is given by the zeroth moment m00 = μ00. Scale-invariant features can be obtained from the scaled central moments ηpq. The normalized central moment of order (p+q) is given by equation (6)
ηpq = μpq / μ00^γ,   where γ = (p+q)/2 + 1 (6)
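As a concrete illustration, the geometric, central, and normalized central moments defined above can be computed in a few lines (a minimal Python/NumPy sketch; the function name is ours, not part of the paper):

```python
import numpy as np

def normalized_central_moment(f, p, q):
    """eta_pq = mu_pq / mu_00^gamma with gamma = (p+q)/2 + 1, as in equation (6)."""
    y, x = np.mgrid[:f.shape[0], :f.shape[1]].astype(float)
    m00 = f.sum()                                           # total "area" of the object
    xbar, ybar = (x * f).sum() / m00, (y * f).sum() / m00   # centroid m10/m00, m01/m00
    mu_pq = ((x - xbar) ** p * (y - ybar) ** q * f).sum()   # central moment (3)
    gamma = (p + q) / 2.0 + 1.0
    return mu_pq / m00 ** gamma
```

For example, η00 is always 1, and ηpq is unchanged when the object is translated within the frame.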
The present paper applies region-based moments, derived from boundary-based moments, to the classification of various shape patterns. The boundary-based moments initially use an erosion-residue edge detector, since the morphological edge detector is a basic tool for shape detection. The advantage of mathematical morphology is that its geometry-oriented nature provides an efficient framework for analyzing object shape characteristics such as size and connectivity, which are not easily accessed by linear approaches. Morphological operations take into account the geometrical shape of the image objects to be analyzed.
3.1 Method-1: Boundary based moments using Morphological Boundary Extraction
A boundary point of an object in a binary image is a point whose 4-neighborhood (or 8-neighborhood, depending on the boundary classification) intersects both the object and its complement. Boundaries of binary images are classified by their connectivity and by whether they lie within the object or its complement. All information on the shape of an object is, for example, contained in its boundary pixels. The steps involved in boundary extraction, calculation of moment invariants, and classification are shown in Fig. 1.
Fig.1 Classification of Boundary Moments
For this, we first need to acquire the boundary of the image. The boundary of an image I can be expressed in terms of erosion: it is obtained by first eroding I with a structuring element (SE) consisting of all ones in a 3×3 matrix, and then taking the set difference between I and its erosion, as given by equation (7)

β(I) = I − (I ⊖ SE) (7)
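A minimal sketch of this erosion-residue step (Python, using SciPy's `binary_erosion`; the library choice is ours):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def morphological_boundary(I):
    """Equation (7): boundary = I minus the erosion of I by a 3x3 all-ones SE."""
    I = np.asarray(I, dtype=bool)
    se = np.ones((3, 3), dtype=bool)
    return I & ~binary_erosion(I, structure=se)
```

The erosion strips one layer of pixels from the object; the set difference leaves exactly that stripped layer, i.e. the boundary.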
Secondly, we compute a set of invariant functions based on the boundary of the object. Hu introduced seven nonlinear functions that are invariant under translation, scaling, and rotation of the object. The set of seven third-order boundary central moments is given by equations (8)-(14):
BM1 = η20 +η02 (8)
BM2 = ( η20 - η02)2 + 4 η112 (9)
BM3 = (η30 - 3η12)2 + (3η21 - η03)2 (10)
BM4 = (η30 + η12)2 + (η21 + η03)2 (11)
BM5 = (η30 - 3η12)( η30 + η12)[( η30 + η12)2 - 3(η21 + η03)2]+3(η21 - η03)
(η21 + η03)[3(η30 + η12)2 - (η21 + η03)2] (12)
BM6 = (η20 - η02)[(η30 + η12)2 - (η21 + η03)2] + 4η11(η30 + η12)(η21 + η03) (13)
BM7= (3η21-η03)(η30+η12)[(η30+η12)2-3(η21+η03)2]+(3η12-η30)(η21+η03)[3(η30+ η12)2 - (η21+η03)2] (14)
The set of seven boundary moments (BM's) are treated as seven feature vectors. These features are used for the classification problem.
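As an illustration, BM1-BM7 can be computed directly from the boundary pixels (a Python sketch; ηpq is evaluated inline as in equation (6), and the function name is ours):

```python
import numpy as np

def boundary_moments(B):
    """Seven Hu-type invariants BM1..BM7 of the nonzero (boundary) pixels of B."""
    ys, xs = np.nonzero(B)
    x, y = xs - xs.mean(), ys - ys.mean()      # centre coordinates on the centroid
    n = float(len(xs))                         # mu00 for a binary image
    eta = lambda p, q: (x ** p * y ** q).sum() / n ** ((p + q) / 2.0 + 1.0)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        e20 + e02,                                                        # BM1
        (e20 - e02) ** 2 + 4 * e11 ** 2,                                  # BM2
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,                      # BM3
        (e30 + e12) ** 2 + (e21 + e03) ** 2,                              # BM4
        (e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),  # BM5
        (e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
        + 4 * e11 * (e30 + e12) * (e21 + e03),                            # BM6
        (3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        + (3 * e12 - e30) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),  # BM7
    ])
```

Rotating or translating the binary image leaves the seven values unchanged up to floating-point error, which is what makes them usable as feature vectors.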
3.2 Method-2: Region-based Moments using the Morphological Skeleton Transform
Generalized Skeleton Transform
For this, we first need to acquire the skeleton of the image. The skeleton of an image A can be expressed in terms of erosions and openings. The generalized skeleton transform is defined by the following equation

Sk(A) = E(A, kB) − O(E(A, kB), B),   SK(A, B) = ∪k Sk(A) (15)
Skeletonization, a global spatial-domain technique for shape representation, is carried out according to Algorithm 1.
Algorithm 1: To find Skeleton of an image
1. Let k = 0 and let A be the image.
2. Compute E(A, kB), the erosion of A by k iterations of the skeleton primitive B.
3. Compute O(E(A, kB), B), the opening of the eroded image with B.
4. Find the k-th skeleton subset Sk(A) = E(A, kB) − O(E(A, kB), B).
5. Increment k; if E(A, kB) ≠ φ, go to step 2.
6. SK(A, B) is the union of all the skeleton subsets Sk(A).
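Algorithm 1 is the classical morphological (Lantuéjoul) skeleton, and can be sketched as follows (Python with SciPy morphology; the 3×3 all-ones primitive B is our assumption):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening

def morphological_skeleton(A):
    """Union over k of the skeleton subsets S_k = E(A,kB) - O(E(A,kB), B)."""
    B = np.ones((3, 3), dtype=bool)        # skeleton primitive
    eroded = np.asarray(A, dtype=bool)     # E(A, kB) for k = 0, 1, 2, ...
    skel = np.zeros_like(eroded)
    while eroded.any():                    # stop when E(A, kB) is empty
        opened = binary_opening(eroded, structure=B)
        skel |= eroded & ~opened           # accumulate the k-th skeleton subset
        eroded = binary_erosion(eroded, structure=B)
    return skel
```

At each step, the opening residue captures the part of the eroded image that the primitive B cannot reproduce; the union of these residues is the skeleton.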
Finally, region-based moments (RMs) are derived from boundary-based moments for the classification of various shape patterns. On this basis, 10 region-based moments, invariant to translation, rotation and scaling, can be derived for the object skeleton as follows.
RM1 = (16)    RM6 = (21)
RM2 = (17)    RM7 = (22)
RM3 = (18)    RM8 = (23)
RM4 = (19)    RM9 = (24)
RM5 = (20)    RM10 = (25)
Here RM1 through RM10 are the skeleton moment invariants derived from the image according to our approach; they satisfy the invariance properties under scale, translation, and rotation changes.
The entire process for calculating the percentage of correct classification (PCC) of the boundary and region moments of a sample sub-image is given below in Algorithm 2.
Algorithm 2: Proposed method for evaluating percentage of correct classification on boundary and region based moments.
Take input textures Tk, k = 1 to 20.
Subdivide each Tk into 16 equal-sized blocks, named sub-images TkSi, k = 1 to 20 and i = 1 to 16.
Select at random a training sample sub-image from each Tk, k = 1 to 20, and denote it TkSj, where j is any of the 16 sample pieces of the particular Tk.
Calculate boundary and region moments of each sample.
To classify a sub-image, the distance D(k) between the training set and the testing sample is calculated for each class using the distance function defined in Section 4.
The tested sub-image is assigned to the class k, k = 1 to 20, for which D(k) is minimum among all D(k).
Finally, for each texture Tk, k = 1 to 20, evaluate the PCC and list the output in a table.
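The subdivision and scoring steps of Algorithm 2 can be sketched as follows (illustrative Python; the function names are ours):

```python
import numpy as np

def subdivide(T, n=4):
    """Split a texture image into n x n equal-sized sub-images (16 blocks for n = 4)."""
    h, w = T.shape[0] // n, T.shape[1] // n
    return [T[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(n) for j in range(n)]

def pcc(predicted_labels, true_label):
    """Percentage of correct classification for one texture class."""
    predicted = np.asarray(predicted_labels)
    return 100.0 * (predicted == true_label).mean()
```

For a 256×256 texture, `subdivide` yields sixteen 64×64 blocks; one is used for training and the other fifteen are classified and scored by `pcc`.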
4. Texture training and classification
Each texture image is subdivided into 16 equal-sized blocks, of which one randomly chosen block is used as the training sample and the remaining blocks are used as test samples for that texture class.
In the texture training phase, the texture features are extracted from the randomly selected sample of each texture class using the proposed boundary-moment methods. The features for each texture class are stored in the feature library, which is then used for texture classification.
In the texture classification phase, the texture features are extracted from the test sample x using the proposed method, and then compared with the corresponding feature values of all texture classes k stored in the feature library using the distance vector formula

D(k) = Σj |fj(x) − fj(k)|,   j = 1 to N
where N is the number of features, fj(x) represents the jth texture feature of the test sample x, and fj(k) represents the jth feature of the kth texture class in the library. The test texture is then classified as the kth texture if the distance D(k) is minimum among all the texture classes in the library.
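A minimal sketch of the minimum-distance decision, assuming an L1 (sum of absolute differences) distance:

```python
import numpy as np

def classify(test_features, library):
    """Return the class k in the feature library with minimum distance D(k)."""
    # library: dict mapping class label k -> stored feature vector f(k)
    D = {k: np.abs(test_features - fk).sum() for k, fk in library.items()}
    return min(D, key=D.get)
```

The test sample is assigned to whichever stored class vector it lies closest to; ties resolve to the first class encountered.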
4.1 Experimental Results and Discussions
The above classification scheme based on the extraction of boundary and region moments is applied to the 20 textures shown in Fig. 3, each with a resolution of 256×256, obtained from VisTex, a standard database of marble and granite textures. Classification is performed with the distance function. The results in Table 1 show that the proposed BM and RM descriptors classify object shapes rapidly and accurately, with better classification ability than the Hu invariant moments. The experiments also show that these methods are strictly invariant for continuous functions and require less computational time. The percentage of correct classification is above 91% for the region moments, whereas the boundary moments achieve a classification rate of 86%.
Fig. 4 shows that BM and RM clearly distinguish all the texture images and hence can be used in a shape-based classification process.
Fig.3. Original Textures of resolution 256x256.
Table 1: Percentage of Correct Classification using BM and RM methods.
Fig. 4. Percentage of Correct Classification graph on BM and RM methods.
In this paper, a new method for computing boundary and region moment invariants was presented. The methods are based on the Hu moment invariants: BM1, BM2, …, BM7 are obtained by extending the Hu moment invariants to the boundary, and RM1, RM2, …, RM10 are in turn an extension of the boundary moments. The geometric shape of objects can be described by RM1, RM2, …, RM10, and it is shown that these quantities satisfy translation, scaling, and rotation invariance. The present study concludes, first, that classification based on boundary moments (86%) has good discrimination power for classifying textures and object patterns; second, that region moments classify better than boundary moments, attaining 91% on the textures; and finally, that both methods have strong discrimination power and show no ambiguity in classification.