
Detection of Invariant Features in Image Processing

1. ABSTRACT

This paper presents methods for detecting invariant features in image processing. The whole concept of invariant feature detection rests on two basic properties: repeatability and robustness.

Repeatability is essential for feature detection, while robustness is equally important for feature description. Descriptors encode two- and three-dimensional pixel information that supports matching and classification.

This paper explains the difference between feature detection and description and surveys the different types of detectors and descriptors, with well-known examples and a brief account of how detectors and descriptors such as MSER, SIFT, and SURF work. The emphasis is on interest point and interest region detectors and descriptors.

2. INTRODUCTION

Many problems in computer vision rely on feature extraction as the primary step for analysis and processing, including object recognition, robotic mapping, 3D modelling, video tracking, and match moving. Good image features remain stable under image scaling and rotation and change only slightly with illumination and 3D camera viewpoint. [1]

Features should be detected in both the spatial and frequency domains in order to reduce disruption from noise and occlusion.

A feature-based matching algorithm is only as good as its feature detector. Highly distinctive features allow every single feature to be compared against a large database of features, which in turn supports image recognition and detection [2].

Different invariant features are characterised in order to make image matching easier and more reliable. The characteristics of distinctive features are as follows:

· Distinctness: the feature clearly stands out from its background.

· Invariance: the feature is invariant to geometric and radiometric disturbances.

· Interpretability: the feature yields interpretable values for further processing.

· Stability: the feature is able to withstand noise disturbance.

· Uniqueness: the feature distinguishes a point from all other points. [8,2]

3. FEATURE DETECTION

A feature is considered the starting point of a computer vision algorithm and is normally defined as an interesting point of an image.

The most convenient way to evaluate invariant feature detection is to compare complete algorithms end to end. Feature detection depends on the concept of repeatability, and the operators used to detect features are known as detectors.

Detectors search for 2D locations, such as a particular point or region, that are invariant under different transformations and carry a sufficient amount of information. A detector examines every pixel to see whether a feature is present.

The other important aspect of feature detection is robustness, and the operators concerned are known as descriptors. Descriptors characterise regions and are used for object detection, categorisation, and classification of the extracted points, which can then be used for matching and object recognition. These descriptors are robust to changes in illumination, noise, and viewpoint.

There are different types of features present in an image, for example:

· Edges: points that lie on a boundary between two image regions.

· Corners: also known as interest points; corners are point-like structures in a 2D image.

· Blobs: commonly known as interest regions; a blob contains a preferred point [1, 2, 4].

3.1 DETECTORS

As noted above, detectors are operators that search for 2D locations, such as a particular point or region, that are invariant under different transformations and carry enough information, examining every pixel for the presence of a feature. The performance of interest point detectors and interest region detectors is usually evaluated against common criteria and measures. [4]

3.2 DIFFERENT TYPES OF DETECTORS:-

1) Point Detector

Main properties of point detector are

• Accuracy

Accuracy is the ability to identify the exact pixel location of a feature.

• Stability

Stability is the ability to identify the same features even after a geometric transformation.

• Sensitivity

Sensitivity is the ability to detect invariant features even in low-light conditions.

• Controllability

2) Region Detectors

• Scale is constant in all directions, and the detector considers positions in 3D scale-space (x, y, and scale).

3) Blob Detectors

Blob detection means detecting regions and points that are either darker or brighter than their surroundings. There are two main types of blob detectors:

· Based on Local extrema.

· Based on differential methods.
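The differential approach can be sketched in a few lines of Python/NumPy (used here purely for illustration, since this paper's own code is MATLAB): the scale-normalised Laplacian of Gaussian responds strongly at blobs whose radius matches the scale sigma, and blobs are found as local extrema of that response.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable Gaussian smoothing built from a 1-D kernel
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, tmp)

def log_response(img, sigma):
    """Scale-normalised Laplacian of Gaussian: a differential blob detector.
    Bright blobs of radius ~ sigma*sqrt(2) give strong negative responses,
    dark blobs strong positive ones; blobs are the local extrema."""
    s = gaussian_blur(img.astype(float), sigma)
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
           np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4.0 * s)  # discrete Laplacian
    return sigma ** 2 * lap                                  # scale normalisation
```

On a synthetic image containing a single bright disk, the most negative response lands at the disk centre when sigma is matched to the disk radius.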

4) Corner Detectors

· Moravec Detector

· Harris detector

5) Edge Detectors [2,4,8]

3.3 MOST COMMONLY USED DETECTORS:-

· Maximally Stable Extremal Regions (MSER)

· Moravec Detector

· Harris Detector

· Affine-adapted differential detector

· Features from Accelerated Segment Test (FAST) [4,3,8]

3.3.1. Maximally Stable Extremal Regions (MSER)

MSERs are regions that are comparatively darker or brighter than their immediate surroundings. MSER detection depends on a threshold: pixels above the threshold are white and pixels below it are black.

This technique views two images of a scene from two different viewpoints to find correspondences between image elements, and is normally used in object recognition and stereo matching algorithms. The main property of MSERs is that they are invariant under continuous projective transformations; equally importantly, they are robust to photometric changes.
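The stability criterion behind MSER can be illustrated with a small Python/NumPy sketch (an illustration of the idea only, not the actual MSER algorithm): sweep a threshold over the image and track the area of the extremal region containing a seed pixel. A maximally stable extremal region is one whose area barely changes over a wide range of thresholds.

```python
import numpy as np

def region_area_over_thresholds(img, seed, thresholds):
    """For each threshold t, measure the area of the connected dark
    ("extremal") region containing `seed`. A maximally stable extremal
    region is one whose area stays nearly constant across thresholds."""
    h, w = img.shape
    areas = []
    for t in thresholds:
        mask = img <= t                       # extremal (dark) pixels
        seen = np.zeros((h, w), dtype=bool)
        stack = [seed] if mask[seed] else []  # flood fill from the seed
        while stack:
            y, x = stack.pop()
            if not (0 <= y < h and 0 <= x < w) or seen[y, x] or not mask[y, x]:
                continue
            seen[y, x] = True
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        areas.append(int(seen.sum()))
    return areas
```

For a dark square on a bright background, the measured area is identical across a wide band of thresholds, which is exactly what makes such a region "maximally stable".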

Detection of MSERs using MATLAB

· Extracting MSERs

Each MSER is uniquely identified by one of its pixels 'y' and the connected component of the same level M(y).

The MATLAB command used for MSER detection (from the VLFeat toolbox) is vl_mser.

Loading an image 'M':

pfy = fullfile(vl_root, 'data', 'spots.jpg');

M = imread(pfy);

image(M);

Test input image

Now the image should be converted into a suitable format:

M = uint8(rgb2gray(M));

Compute the region seeds and elliptical frames by:

[s, g] = vl_mser(M, 'MinDiversity', 0.7, ...

'MaxVariation', 0.2, ...

'Delta', 10);

Plot the region frames:

g = vl_ertr(g);

vl_plotframe(g);

O = zeros(size(M));

for y = s'

t = vl_erfill(M, y);

O(t) = O(t) + 1;

end

Each entry of the matrix O counts the number of overlapping extremal regions covering that pixel.

figure(2);

clf; imagesc(M);

[c, h] = contour(O, (0:max(O(:))) + .5);

set(h, 'color', 'y', 'linewidth', 3);

By default MSERs are extracted both bright-on-dark and dark-on-bright; to extract only bright-on-dark regions:

[s, g] = vl_mser(M, 'MinDiversity', 0.7, ...

'MaxVariation', 0.2, ...

'Delta', 10, ...

'BrightOnDark', 1, 'DarkOnBright', 0);

[5,8,2,9]

3.3.2. Moravec Detector

The Moravec detector introduced the idea of a "point of interest". It considers the correlation in four directions and takes the lowest value as the interest measure, so it identifies points where the intensity changes strongly in every direction. The Moravec detector is a corner-type detector: it checks every pixel for the presence of a corner by comparing a patch with nearby shifted patches. If the pixel lies on an edge, patches shifted perpendicular to the edge look different while patches shifted parallel to it look similar; by considering all four directions, corners are easily distinguished. This is the whole idea behind the Moravec detector. [1,2,8]
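The idea above can be sketched in Python/NumPy (an illustrative implementation, not Moravec's original code): for each pixel, compute the sum of squared differences against patches shifted in four directions and keep the minimum.

```python
import numpy as np

def moravec_response(img, window=3):
    """Moravec corner response: for each pixel, compare a patch with the
    patches shifted in four directions and keep the *lowest* SSD.
    Corners have a high minimum response (large change in every direction);
    edge pixels score zero because the shift along the edge matches."""
    img = img.astype(float)
    h, w = img.shape
    r = window // 2
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]  # the four test directions
    response = np.zeros((h, w))
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            ssds = []
            for dy, dx in shifts:
                shifted = img[y + dy - r:y + dy + r + 1,
                              x + dx - r:x + dx + r + 1]
                ssds.append(np.sum((patch - shifted) ** 2))
            response[y, x] = min(ssds)  # lowest of the four directions
    return response
```

On a bright square against a dark background, the response is positive at the square's corner but exactly zero along an edge and in flat regions, which is the behaviour the text describes.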

3.3.3. Harris Detector

The Harris detector is based on Moravec's detector; the difference is that Harris considers the differential of the corner score with respect to direction. It uses the autocorrelation of the image and computes a matrix from it.

The weighted sum of squared differences between two patches is computed, and the eigenvalues of the resulting matrix are the principal curvatures of the autocorrelation function. An interest point is detected where both curvatures are high.

Let the given image be I, let (u, v) range over the patch area, and let (x, y) be the shift. Then the weighted sum of squared differences between the two patches is given by:

E(x, y) = Σ_u Σ_v w(u, v) [ I(u + x, v + y) − I(u, v) ]²

where w(u, v) is the weighting window (typically Gaussian).

By analysing E (in practice, via the eigenvalues of its second-moment matrix), detection of invariant features can be done easily. [8,2,1]
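A minimal Python/NumPy sketch of the Harris response follows (an illustration only: it uses a simple box window in place of the Gaussian weighting w(u, v), and the common empirical constant k = 0.05).

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response from the structure tensor (the matrix of
    smoothed derivative products). Returns R = det(A) - k*trace(A)^2 for
    every pixel; large positive R means both principal curvatures are
    high, i.e. a corner; negative R indicates an edge."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)  # image derivatives (rows, then columns)

    def box(a, r=2):
        # box-window smoothing of the derivative products
        out = np.zeros_like(a)
        h, w = a.shape
        for y in range(r, h - r):
            for x in range(r, w - r):
                out[y, x] = a[y - r:y + r + 1, x - r:x + r + 1].sum()
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2   # product of the two eigenvalues
    trace = Sxx + Syy            # sum of the two eigenvalues
    return det - k * trace ** 2
```

On a bright square, the response is strongly positive at a corner, negative along an edge (only one curvature is high), and zero in flat regions.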

3.3.4. Affine-adapted differential Blob detectors

The affine-adapted differential detector is used to obtain a detector that is more robust to perspective transformations.

A blob detector invariant to affine transformations can be devised by applying affine shape adaptation to a blob descriptor: the affine-invariant interest points are obtained either by matching the shape of the smoothing kernel to the image structure surrounding the blob, or by iteratively warping a nearby patch while the shape of the smoothing kernel remains the same. [6,2,4,1,8]

3.3.5. Features from Accelerated Segment Test (FAST)

FAST can run in a mode designed to reduce the detection of poorly matching features, and it can serve as a fast alternative to many other feature detectors.

3.3.6. Scale Invariant Feature Transform (SIFT)

For a given image I, suppose we have to find the features F(i), i in [1...n].

The MATLAB command vl_sift requires a single-precision grayscale image, so the input image has to be converted into the appropriate format by:

I= single(rgb2gray(I));

Now the SIFT frames (keypoints) and descriptors are computed by the command:

[g, d] = vl_sift(I);

The matrix g has a column for every frame

Now we randomly select 30 features by:

perm = randperm(size(g, 2));

sel = perm(1:30);

h1 = vl_plotframe(g(:, sel));

h2 = vl_plotframe(g(:, sel));

set(h1, 'color', 'k', 'linewidth', 3);

set(h2, 'color', 'y', 'linewidth', 2);

A few detected SIFT frames

Descriptors can also be overlaid by:

h3 = vl_plotsiftdescriptor(d(:, sel), g(:, sel));

set(h3, 'color', 'g');

Test image for peak threshold

Matching of images in MATLAB

Let Ii and Ij be two images of the same scene; they can be matched with the help of their descriptors by the algorithm vl_ubcmatch:

[gi, di] = vl_sift(Ii);

[gj, dj] = vl_sift(Ij);

[matches, scores] = vl_ubcmatch(di, dj);

Matching of images
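The matching step can be illustrated in Python/NumPy with a nearest-neighbour search plus a distance-ratio test in the spirit of Lowe's matcher (a sketch, not the VLFeat implementation; the 0.8 ratio is an assumed default):

```python
import numpy as np

def ratio_test_match(d1, d2, ratio=0.8):
    """Match descriptor sets d1 (n1 x k) and d2 (n2 x k): accept the
    nearest neighbour of each descriptor in d1 only when it is clearly
    closer than the second-nearest one, rejecting ambiguous matches."""
    matches = []
    for i, d in enumerate(d1):
        dists = np.linalg.norm(d2 - d, axis=1)  # distances to every d2 row
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:  # Lowe-style ratio test
            matches.append((i, int(best)))
    return matches
```

The ratio test is what suppresses false matches in cluttered scenes: a descriptor whose two closest candidates are nearly equidistant is simply discarded.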

Comparing MATLAB and David Lowe's method

MATLAB assumes that the origin of the image is at (1,1) instead of (0,0); the reference system used by David Lowe's original implementation is different.

MATLAB convention (top) and David Lowe's convention (bottom)

Comparison of MATLAB and Lowe's SIFT. [2,8,9,7]

3.3.7. Speeded Up Robust Features (SURF)

SURF is based on properties similar to those of the SIFT descriptor, but with considerably reduced complexity. The first step of SURF is to fix a reproducible orientation using information from the region surrounding the interest point; a square region aligned to the selected orientation is then used to extract the SURF descriptor.

The upright version of the descriptor, which drops invariance to image rotation, is known as U-SURF.

· Orientation Assignment

To make the descriptor robust to changes in orientation, a reproducible orientation is assigned to each interest point. With s the scale at which the point was detected, responses are computed in the x and y directions within a circular region of radius 6s around the interest point, using wavelets of side length 4s.

· Descriptor Components

A square region is constructed around the interest point and aligned along the selected orientation. The region is then split into smaller sub-regions.

[5,8]

5. APPLICATIONS

· For extracting and matching the information of a particular point or region in an image.

· Object recognition.

· SURF can be implemented as an image-processing plug-in with a convenient GUI and statistical output.

· Feature detection is normally used in image processing tasks where the reference point changes.

· It can be used in digital cameras.

6. CONCLUSION

Various detectors and descriptors are useful for distinctiveness and robustness. This paper presented a classification of detectors according to the area in which they work best, for example corner detection, edge detection, and blob detection, together with the working principles of some widely used detectors and descriptors such as MSER, SIFT, the Harris detector, FAST, and SURF.

This paper also walked through the MATLAB programming for SIFT and MSER as an additional way of detecting invariant features of an image, and compared feature description with feature detection. The principle of repeatability was used to explain feature detection, and the principle of robustness to explain feature description.

The properties of a detector were explained in terms of its ability to detect invariant features reliably even under unfavourable conditions; sensitivity, controllability, and stability are a few of these properties. The methods used for object recognition and information extraction depend on conditions such as weather, illumination, and darkness.

The most convenient way to evaluate feature detection remains comparing complete algorithms end to end. Nevertheless, these descriptors and detectors provide enough information for object recognition, computer vision, information extraction, and three-dimensional modelling.

As a final remark, it should be mentioned that each operator has its own requirements and parameters for feature detection.