Detection Of Invariant Features In Image Processing Computer Science Essay


This paper presents methods for detecting invariant features in image processing. The study of invariant feature detection rests on two basic concepts, "repeatability" and "robustness", on which the whole idea of feature detection depends.

Repeatability is very important for feature detection, and robustness is equally important for feature description. Descriptors give two- and three-dimensional pixel information, which helps in matching and classification.

This paper explains the difference between feature detection and feature description and surveys different types of detectors and descriptors with some well-known examples. The paper also briefly describes the working of detectors and descriptors such as MSER, SIFT and SURF, with emphasis on interest point and interest region detectors and descriptors.



Many problems in computer vision rely on feature extraction as a primary step for analysis and processing, including object recognition, robotic mapping, 3D modelling, video tracking and match moving. Good image features are invariant to image scaling and rotation and change only slightly with illumination and 3D camera viewpoint. [1]

Features should be well localised in both the spatial and frequency domains in order to reduce disruption by noise and occlusion.

A feature-matching pipeline is only as good as its feature detector. Highly distinctive features in an image allow every single feature to be compared against a large database of features, which in turn helps in image recognition and detection [2].

Different invariant features are characterised in order to make image matching easier and more reliable. The characteristics of distinctive features are as follows:

Distinctness: the feature clearly stands out from its background.

Invariance: the feature is invariant to geometric and radiometric disturbances.

Interpretability: the feature yields values that can be interpreted for further processing.

Stability: the feature's ability to withstand noise disturbances.

Uniqueness: the feature distinguishes a point from the rest of the points. [8,2]


A feature is considered the starting point of a computer vision algorithm and is normally defined as an interesting point of an image.

The easiest and most convenient way to detect invariant features is to compare the full algorithm. Feature detection depends on the concept of repeatability; the operators used to detect features are known as detectors.

Detectors search 2D locations, such as a particular point or region, that are invariant under different transformations and carry a sufficient amount of information. A detector examines each and every pixel to see whether a feature is present or not.

The other important aspect of feature detection is robustness, and the operators used are known as descriptors. Descriptors characterise regions and are used for object detection, categorisation and classification of extracted points, and can further be used for matching. These descriptors are robust to changes in illumination, noise and viewpoint.

Different types of features are present in an image, for example:

Edges: points where a boundary between two image regions can be found.

Corners: also known as interest points; corners are point-like two-dimensional structures.

Blobs: commonly known as interest regions; a blob contains a preferred point [1, 2, 4].


As described above, detectors search 2D locations, such as a particular point or region, that are invariant under different transformations and carry sufficient information, examining each pixel to see whether a feature is present. The performance of interest point detectors and interest region detectors is usually evaluated against specific criteria and measures. [4]


1) Point Detectors

The main properties of a point detector are:

Accuracy: the ability to identify the exact pixel location.

Stability: the ability to identify the same features even after geometric transformations.

Sensitivity: the ability to detect invariant features even in low light.


2) Region Detectors

The scale is constant in all directions, and the detector considers the position in 3D scale space (x, y and scale).

3) Blob Detectors

Blob detection means detecting regions or points that are either darker or brighter than their surroundings. There are two main types of blob detectors:

Detectors based on local extrema.

Detectors based on differential methods.
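
The differential approach is typically built on the Laplacian of Gaussian (LoG). The sketch below, in Python with NumPy (the essay's own code examples use MATLAB), approximates the LoG by a difference of Gaussians on a synthetic bright blob; the image, sizes and sigmas are illustrative assumptions, not part of any particular detector implementation.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur via 1-D convolution along each axis.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, out)
    return out

def dog_response(img, sigma=2.0, k=1.6):
    # Difference of Gaussians approximates the Laplacian of Gaussian.
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)

# A bright blob on a dark background: |DoG| is extremal near its centre,
# and the response at the centre of a bright blob is negative.
img = np.zeros((31, 31))
img[13:18, 13:18] = 1.0
resp = dog_response(img)
cy, cx = np.unravel_index(np.abs(resp).argmax(), resp.shape)
```

The sign of the response distinguishes bright-on-dark blobs (negative DoG at the centre) from dark-on-bright ones.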

4) Corner Detectors

Moravec Detector

Harris detector

5) Edge Detectors [2,4,8]


Maximally Stable Extremal Regions (MSER)

Moravec Detector

Harris Detector

Affine-adapted differential detector

Features from Accelerated Segment Test (FAST) [4,3,8]

3.3.1. Maximally Stable Extremal Regions (MSER)

MSERs are normally either darker or brighter than their immediate surroundings. MSER detection depends on a threshold: pixels above the threshold are white and pixels below it are black.

This technique compares two images of a scene taken from different viewpoints in order to find correspondences between image elements. The method is normally used in object recognition and stereo matching algorithms. The main property of MSERs is that they remain invariant under continuous projective transformations; the other important property is that they are robust to photometric changes.
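
The thresholding idea can be sketched in plain Python (standard library only): an extremal region is a connected component of pixels above a threshold, and a maximally stable one is a component whose area stays nearly constant as the threshold varies. This toy example only illustrates the stability criterion, not the full MSER algorithm:

```python
from collections import deque

def components_above(img, t):
    """Sizes of 4-connected components of pixels >= threshold t."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] >= t and not seen[sy][sx]:
                size, q = 0, deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] >= t and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                sizes.append(size)
    return sizes

# A bright 3x3 blob (value 200) on a dark background (value 10):
img = [[10] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        img[y][x] = 200
# The blob's area stays stable over a wide range of thresholds -- the MSER criterion.
areas = [components_above(img, t) for t in (50, 100, 150)]
print(areas)  # [[9], [9], [9]]
```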

Detection of MSERs using MATLAB

Extracting MSERs

Every MSER is uniquely identified by one of its pixels 'y', as the connected component of the level set at the level of y that contains y.

The MATLAB command used for MSER detection is vl_mser, from the VLFeat toolbox; the snippets below follow standard VLFeat usage, reconstructed here because the original listing was lost.

Loading an image I:

I = imread(pfx) ;   % pfx is the path to the test image
(Figure: test input image)

The image must now be converted into a suitable format (vl_mser expects a grayscale uint8 image):

I = uint8(rgb2gray(I)) ;
Computing the region seeds and the elliptical frames:

[r, f] = vl_mser(I, 'MinDiversity', 0.7, 'MaxVariation', 0.2, 'Delta', 10) ;
Plotting the region frames (vl_ertr transposes the frames into image coordinates):

f = vl_ertr(f) ;
vl_plotframe(f) ;
Each region can be filled from its seed; for y = r', vl_erfill returns the pixels of the extremal region grown from seed y:

O = zeros(size(I)) ;
for y = r'
    s = vl_erfill(I, y) ;
    O(s) = O(s) + 1 ;
end
The value of the matrix O at a pixel equals the number of overlapping extremal regions that contain it.

MSERs are extracted both for bright-on-dark and for dark-on-bright regions; the dual set is obtained by running vl_mser on the complemented image:

[r, f] = vl_mser(uint8(255 - I), 'MinDiversity', 0.7, 'MaxVariation', 0.2, 'Delta', 10) ;

3.3.2. Moravec Detector

The Moravec detector introduced the idea of a "point of interest". The detector considers the correlation in four directions and uses the lowest value as the interest measure; it therefore identifies points where the intensity changes comparatively strongly in every direction. The Moravec detector is a corner-type detector: it checks each pixel to see whether a corner is present by comparing a patch with nearby, shifted patches. If the pixel lies on an edge, nearby patches shifted perpendicular to the edge will look different while patches shifted parallel to it will look similar; considering all four directions therefore makes the differentiation easy, and this was the whole idea behind the Moravec detector. [1,2,8]
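
The four-direction comparison can be sketched as follows, in Python with NumPy (the essay's other examples use MATLAB); the synthetic test image, patch size and shift set are illustrative assumptions:

```python
import numpy as np

def moravec_response(img, y, x, size=3):
    """Minimum SSD between the patch at (y, x) and copies of it shifted
    one pixel in four directions: the Moravec corner measure."""
    h = size // 2
    patch = img[y - h:y + h + 1, x - h:x + h + 1]
    ssds = []
    for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):  # four shift directions
        shifted = img[y - h + dy:y + h + 1 + dy, x - h + dx:x + h + 1 + dx]
        ssds.append(np.sum((patch - shifted) ** 2))
    # A corner changes in every direction, so even the smallest SSD is large;
    # on an edge, the shift parallel to the edge gives a near-zero SSD.
    return min(ssds)

# Synthetic image: a bright square whose top-left corner sits at (8, 8).
img = np.zeros((16, 16))
img[8:, 8:] = 1.0
corner = moravec_response(img, 8, 8)
edge = moravec_response(img, 12, 8)    # on a vertical edge
flat = moravec_response(img, 4, 4)     # in a flat region
```

As expected, the measure is largest at the corner and zero both on the edge (the parallel shift matches) and in the flat region.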

3.3.3. Harris Detector

The Harris detector is based on Moravec's detector, the difference being that Harris considered the differential of the corner score with respect to direction. It uses the autocorrelation of the image and computes a matrix from it.

The eigenvalues of the matrix derived from the weighted sum of squared differences between two patches are the principal curvatures of the auto-correlation function. An interest point is detected where both curvatures are found to be high.

Let the given image be I. Consider an image patch over the area (u, v) and a shift (x, y). The weighted sum of squared differences between the two patches is then given by:

S(x, y) = Σu Σv w(u, v) (I(u + x, v + y) − I(u, v))²

By analysing S(x, y), detection of invariant features can be done easily. [8,2,1]
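
Expanding S(x, y) to second order yields the 2x2 structure matrix of image gradients, from which the Harris response R = det(M) − k·trace(M)² is computed. A Python/NumPy sketch follows; the 3x3 box window stands in for the weighting w(u, v), and the test image is an illustrative assumption:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from the 2x2 structure tensor of image gradients.
    Gradients via central differences; window sums via a simple 3x3 box filter."""
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    def box(a):  # 3x3 window sum via shifted copies of a zero-padded array
        p = np.pad(a, 1)
        return sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # R = det(M) - k * trace(M)^2; both eigenvalues large => corner.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

img = np.zeros((16, 16))
img[8:, 8:] = 1.0                     # square with a corner at (8, 8)
R = harris_response(img)
# R is positive at the corner, negative on the edge, zero in flat regions.
```

The sign pattern follows directly from the eigenvalues: two large curvatures give a large determinant (corner), one large curvature gives a large trace but near-zero determinant (edge).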

3.3.4. Affine-adapted differential Blob detectors

An affine-adapted differential detector is used to obtain a detector that is more robust to perspective transformations.

A blob detector invariant to affine transformations can be devised by applying affine shape adaptation to a blob descriptor. These affine-invariant interest points are obtained either by matching the shape of the smoothing kernel to the image structure surrounding the blob, or by iteratively warping a nearby patch while the shape of the smoothing kernel remains the same. [6,2,4,1,8]

3.3.5. Features from Accelerated Segment Test (FAST)

FAST can run in a specific mode designed to reduce the detection of poorly matching features; this type of detector can be an alternative to many other feature detectors.

For a given image I, the features F(i), i ∈ [1...n], are to be found.

(Figure: features highlighted manually vs. features highlighted using FAST)
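
At the heart of FAST is a segment test on a 16-pixel circle of radius 3 around the candidate pixel. The sketch below, in Python with NumPy rather than the essay's MATLAB, assumes the FAST-9 variant (at least 9 contiguous circle pixels all brighter or all darker than the centre by a threshold t); the image and threshold are illustrative:

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, t=0.2, n=9):
    """Segment test: (y, x) is a corner if n contiguous circle pixels are all
    brighter than img[y, x] + t or all darker than img[y, x] - t."""
    p = img[y, x]
    ring = np.array([img[y + dy, x + dx] for dy, dx in CIRCLE])
    for states in (ring > p + t, ring < p - t):
        s = np.concatenate([states, states])  # doubled, so runs wrap the circle
        run = best = 0
        for v in s:
            run = run + 1 if v else 0
            best = max(best, min(run, 16))
        if best >= n:
            return True
    return False

img = np.zeros((16, 16))
img[8:, 8:] = 1.0                       # bright square, corner at (8, 8)
```

On this image the test fires at the square's corner but not on its straight edge, where only 7 contiguous circle pixels differ from the centre.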


Fast Radial Blob Detector

The FRBD takes the LoG approximation further by using second-order finite differences to measure how the smoothed image L(x, y, z) changes at a particular pixel.

Using sample points P[0...8], the average pixel difference around the centre pixel P[0] is computed as:

F(x, y, z) = abs( Σ (P[0] − P[i]) ), where 1 ≤ i ≤ 8. [4]
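The measure above can be sketched directly. This Python/NumPy toy takes the 8-neighbourhood around the centre pixel as the sample points P[1..8], which is an assumption about the sampling geometry rather than the paper's exact scheme:

```python
import numpy as np

def frbd_response(img, y, x):
    """Absolute sum of differences between the centre pixel P[0] and its
    eight neighbours P[1..8] -- the average-difference measure above."""
    p0 = img[y, x]
    neigh = [img[y + dy, x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    return abs(sum(p0 - p for p in neigh))

img = np.zeros((5, 5))
img[2, 2] = 1.0                  # a single bright pixel: a tiny "blob"
print(frbd_response(img, 2, 2))  # 8.0 -- strong response at the blob centre
print(frbd_response(img, 1, 1))  # 1.0 -- weak response next to it
```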


The main property underlying feature description is robustness.

Descriptors are used to characterize the regions.

Descriptors are used for classification of extracted points, matching, object detection and categorisation, as well as wide-baseline image orientation.

Descriptors are robust to changes in illumination, noise and viewpoint.



Descriptors are operators used for feature description; a descriptor considers a certain region around an interest point. It gives 2D pixel information which helps in matching and classification. Once a feature invariant to a class of transformations has been extracted, descriptors capture its characteristics. This simplifies object recognition, robot localisation, etc.

4.2. Types of descriptors:

SIFT (Scale Invariant Feature Transform)

Speeded Up Robust Features (SURF)

4.2.1. Scale invariant feature transform (SIFT)

The scale invariant feature transform is used to extract features which provide a description of interesting points; it can be used to find a test image among all stored images. SIFT features can withstand the effects of image scaling, illumination changes and other imaging conditions. SIFT is robust to geometric changes, affine distortion and orientation.

The main functions of SIFT are feature matching and indexing; indexing means storing SIFT keys and then comparing the keys of new images against the stored ones.

SIFT is a combination of a feature detector and a feature descriptor. In SIFT, an image is processed a number of times with respect to changes in viewpoint, illumination and other viewing conditions.

The SIFT feature descriptor detects invariant features either by David Lowe's method or using MATLAB programming. [2,7,8,9]

Extraction of frames and descriptors using MATLAB:

The detector and descriptor can be accessed via the vl_sift MATLAB command (part of the VLFeat toolbox).

pfx = fullfile(vl_root, 'data', 'b.jpg') ;

I = imread(pfx) ;

(Figure: input image)

The MATLAB command vl_sift requires a single-precision grayscale image, so the input image has to be converted into the appropriate format:

I = single(rgb2gray(I)) ;

The SIFT frames and descriptors are now computed by the command:

[g, d] = vl_sift(I) ;

The matrix g has a column for every frame.

We now randomly select 30 features (the selection code below is reconstructed following standard VLFeat usage, since the original lines were lost):

perm = randperm(size(g, 2)) ;
sel = perm(1:30) ;

H1 = vl_plotframe(g(:, sel)) ;
H2 = vl_plotframe(g(:, sel)) ;
set(H1, 'color', 'k', 'linewidth', 3) ;
set(H2, 'color', 'y', 'linewidth', 2) ;



(Figure: a few detected SIFT frames)

Descriptors can also be overlaid using:

vl_plotsiftdescriptor(d(:, sel), g(:, sel)) ;

(Figure: test image for peak threshold)

Matching of images using MATLAB

Let Ii and Ij be two images of the same scene; they are matched with the help of their descriptors by the algorithm vl_ubcmatch:

[gi, di] = vl_sift(Ii) ;

[gj, dj] = vl_sift(Ij) ;

[matches, scores] = vl_ubcmatch(di, dj) ;

(Figure: matching of the two images)
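
vl_ubcmatch accepts a match only when the nearest descriptor is significantly closer than the second nearest, following Lowe's ratio test. A sketch of that test in Python with NumPy, on hypothetical random descriptors (the 0.8 ratio is Lowe's commonly quoted value, not necessarily VLFeat's default):

```python
import numpy as np

def ratio_test_match(di, dj, ratio=0.8):
    """Match descriptor columns of di to dj, keeping a match only when the
    nearest neighbour is clearly closer than the second nearest."""
    matches = []
    for a in range(di.shape[1]):
        # Euclidean distances from descriptor a to every descriptor in dj.
        d = np.linalg.norm(dj - di[:, a:a + 1], axis=0)
        order = np.argsort(d)
        best, second = order[0], order[1]
        if d[best] < ratio * d[second]:
            matches.append((a, int(best)))
    return matches

rng = np.random.default_rng(0)
dj = rng.random((128, 5))           # five 128-dim "descriptors" in image j
di = dj[:, [3, 1]] + 0.01           # two noisy copies -> should match 3 and 1
print(ratio_test_match(di, dj))     # [(0, 3), (1, 1)]
```

Rejecting ambiguous nearest neighbours in this way removes most false matches at the cost of discarding some correct ones.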

Comparing MATLAB and David Lowe's method

MATLAB places the origin of an image at (1,1) instead of (0,0); the reference system used by David Lowe's original implementation is therefore different.

(Figure: MATLAB convention (top) and David Lowe's convention (bottom))

Comparison of MATLAB and Lowe's SIFT. [2,8,9,7]

4.2.2. Speeded Up Robust Features (SURF)

SURF is based on similar properties to the SIFT descriptor, but its complexity is reduced to a considerable level. The first step of SURF is fixing a reproducible orientation using information from the region surrounding the interest point; a square region aligned to the selected orientation is then used to extract the SURF descriptor.

The upright version of the descriptor, which skips orientation assignment and is therefore not invariant to image rotation, is known as U-SURF; it is faster and suited to applications where the camera stays roughly upright.

Orientation Assignment

To make the descriptor robust to changes in orientation, an orientation is assigned to each interest point: Haar wavelet responses in the x and y directions are computed within a circular region of radius 6s around the interest point, where s is the scale at which the point was detected; the side length of the wavelets is 4s.

Descriptor Components

A square region is constructed around the interest point and oriented along the selected orientation. The region is then split into 4x4 smaller sub-regions.
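
Each sub-region contributes the sums Σdx, Σdy, Σ|dx| and Σ|dy| of its wavelet responses, giving 4x4x4 = 64 descriptor components. The Python/NumPy sketch below uses simple image gradients as a stand-in for SURF's Haar wavelet responses and omits the Gaussian weighting of the real descriptor:

```python
import numpy as np

def surf_descriptor(patch):
    """Split a square patch into 4x4 sub-regions; for each, sum the horizontal
    and vertical derivative responses and their magnitudes, giving the
    64-dimensional SURF descriptor layout."""
    dy, dx = np.gradient(patch)   # stand-in for SURF's Haar wavelet responses
    n = patch.shape[0] // 4
    desc = []
    for i in range(4):
        for j in range(4):
            sx = dx[i * n:(i + 1) * n, j * n:(j + 1) * n]
            sy = dy[i * n:(i + 1) * n, j * n:(j + 1) * n]
            desc += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    v = np.array(desc)
    return v / np.linalg.norm(v)  # normalise for illumination invariance

patch = np.random.default_rng(1).random((20, 20))
d = surf_descriptor(patch)
print(d.shape)  # (64,)
```

Keeping both the signed sums and the sums of magnitudes lets the descriptor distinguish, for example, a uniform gradient from an alternating pattern with the same total magnitude.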



Extracting and matching the information of a particular point or region in an image.

Object recognition.

SURF implementation as an image plug-in with a convenient GUI and output of statistics.

Feature detection is normally used in image processing applications where the reference point changes.

It can be used in digital cameras.


The various detectors and descriptors are useful for distinctiveness and robustness. This paper presented a classification of detectors according to the area in which they work best, for example corner detection, edge detection and blob detection, and briefly described the working principles of some widely used detectors and descriptors such as MSER, SIFT, the Harris detector, FAST and SURF.

This paper also presented the MATLAB programming for SIFT and MSER, an additional way of detecting invariant features of an image, and compared feature description with feature detection. The principle of repeatability was used to explain feature detection, and the principle of robustness to explain feature description.

The properties of a detector were explained as its ability to detect invariant features reliably even under unfavourable conditions; sensitivity, controllability and stability are a few of these properties. The methods used for object recognition and information extraction depend on conditions such as weather, illumination and darkness.

The best and most convenient method for feature detection is comparing the full algorithm. Nevertheless, these descriptors and detectors provide enough information for object recognition, computer vision, information extraction and three-dimensional modelling.

As a final remark, it should be mentioned that each operator has its own requirements and parameters for feature detection.