Automatic Number Plate Recognition (ANPR) Computer Science Essay



Only recently have mobile phones become powerful enough that there is very little difference left between computers and mobiles. Robust handsets with powerful processors and large memories have entered the market, and the technology advances daily. It is the need of the hour to use such powerful platforms to implement imaging techniques, thereby contributing a small amount to the vast and diverse field of imaging.

The main goal of this project is to create a mobile application capable of detecting and extracting the information on a license plate in real time using the camera of the mobile phone itself. Although ANPR has been implemented before, those implementations are limited to non-portable static devices such as CCTV networks. Our goal is to take this application to the next level, i.e. to a portable device such as a mobile phone.

Image processing applications such as Automatic Number Plate Recognition (ANPR) are fairly new to the mobile phone area, mainly because phones powerful enough to handle the complexity of such applications have only recently been introduced to the market. Image processing techniques are nowadays commonly implemented on portable devices, so this research explores the use of such powerful mobile phones for image processing applications.

In our project we have chosen the Android platform, as it is open source and its SDKs are easily available. Mobile phones running Android currently have some of the highest hardware specifications available in the market. We have chosen Android hardware, namely an HTC mobile phone, as it is user friendly and can be programmed easily with Java running at the back end. The HTC can also support image libraries with its 528 MHz processor.

This project includes the development of the license plate detection and recognition algorithm: implementation of the algorithm in MATLAB, porting the method to Java using Eclipse, and then testing it on the chosen hardware, i.e. an HTC mobile phone.

Through this project we are able to detect and recognize the license plate of a moving vehicle from streaming video at runtime.

1.2 Background

ANPR is a mass surveillance method that uses character recognition on images to read the license plates on vehicles. Closed-circuit television (CCTV) cameras and road-rule enforcement cameras, specifically designed for this task, are used by security institutions and police forces to implement it and to improve traffic conditions.

ANPR systems can store the image and characters of vehicle license plates and, in some configurations, a photograph of the driver. ANPR tends to be region specific, owing to plate variation from place to place.

Recent advances in technology have taken automatic number plate recognition (ANPR) systems from fixed applications to mobile ones. Scaled-down, lower cost components have led to a record number of deployments by law enforcement agencies around the world. Smaller cameras with the ability to read license plates at high speeds, along with smaller, more durable processors that fit in the trunks of police vehicles, allow law enforcement officers to patrol daily with the benefit of license plate reading in real time, so that they can respond to an illegal act immediately.

Automatic License Plate Detection is also known by various other names:

Automatic License Plate Recognition (ALPR)

Automatic Vehicle Identification (AVI)

Car plate recognition (CPR)

License Plate Recognition (LPR)

1.3 Goals and objectives

The goal of this project is to create an Android application capable of capturing an image from the camera at runtime, detecting the license plate in it, and recognizing the plate's characters. The objectives are:

Developing a method on MATLAB for License Plate detection and recognition.

Extracting the license plate from real-time images and videos.

Porting the method for ANPR to Java, on Android platform.

ANPR through a mobile phone on real-time images.

1.4 Methodology

This method uses a real-time image of a car with its license plate. The user takes a picture of a car through the mobile phone running the ANPR software. The image is processed by the software, and within 5 seconds the output is generated, containing the recognized characters of the stated car's license plate.

Flow diagram







1.5 Chapter Overview

The second chapter, "Literature Overview", contains the background of the project. It covers the related works that have been done so far. The background material and literature are reviewed to develop a better understanding of the project.

The third chapter discusses the system framework, which includes the various algorithms and the block diagrams of the methods and frameworks we shall be following, presented so that the reader can develop a clear understanding easily.

The fourth chapter exhibits the results of the frameworks and methods used in the previous chapters.

Fifth chapter "Conclusion and Future enhancement" consists of a brief conclusion and a single page for future enhancement.



2.1 Literature Review

License plate detection has been a main area of study in vehicle-related research, with plate identification and verification as the major goal, and many methods have been deployed to achieve it. Detection is done through a step-by-step procedure of identifying the license plate and then verifying it. The first method, proposed by Cláudio Rosito Jung and Rodrigo Schramm [1], is the detection of the rectangle (the shape of the LP) that confirms the existence of the license plate. It is done through a windowed Hough transform that takes the edges, finds the peak points, and searches out orthogonal pairs of parallel lines; their intersections generate the rectangle. This method had some flaws, as it introduced duplicate rectangles and works unsatisfactorily in the presence of noise. After this shape detection, the next step is detection of the vehicle's LP based on sliding concentric windows and a well-defined histogram method, proposed by Kaushik Deb, Hyun-Uk Chae and Kang-Hyun Jo [2]. It was done through image segmentation that analyzes vehicles moving on roads, extracts their LPs from vertical and horizontal regions to compose a candidate region, verifies their color through the HSI model (in which the image's RGB representation is transformed to HSI), and finally decomposes the candidate region to figure out the alphanumeric characters written on it. This method was applied to both green-and-yellow and white LPs with almost the same procedure. The technique gained success but was quite sensitive to viewing angles, physical appearance, and environmental conditions. It was further improved by a methodology proposed by Huaifeng Zhang, Wenjing Jia, Xiangjian He, and Qiang Wu [3], which performs real-time LP detection under various conditions using statistical and Haar-like features.
In the statistical analysis, gradient density and variance are found, which simplifies the algorithm, while the Haar-like features include an AdaBoost algorithm that improves detection and reduces the false detection rate. This method surpassed the previous one [2], as it accounted for varying climate, color, position, and viewing-angle effects.

2.1.1 Rectangle Detection based on a Windowed Hough Transform

The Hough transform is used to detect a rectangle of unknown size and orientation in its own domain. It does so by converting the edge points of the xy-plane to an angle θ (theta) and a distance ρ (rho), forming a function C(ρ, θ). The local maxima of this function detect the line segments passing through the edge points. The method is elaborated as under.

Consider a rectangle centered at the origin, with its two pairs of opposite sides parallel to each other. In Hough space these 4 sides are treated as 4 peaks that satisfy certain geometric relations:

The sides appear in pairs: two peaks with the same angle correspond to a pair of parallel sides.

Peaks of the same pair are symmetric about the θ-axis (ρ = 0).

The two pairs are separated by an angle of 90°.

Peaks of the same pair have the same height and the same vertical distance between them.
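These pairing relations can be sketched as predicates in Java (the porting language of this project); all thresholds here are illustrative assumptions, not values from [1]:

```java
// Sketch of the peak-pairing checks used by the windowed Hough transform
// rectangle detector. A peak is (theta in degrees, rho, height h).
// The thresholds are assumed for illustration only.
public class PeakPairing {
    static boolean isParallelPair(double t1, double r1, double h1,
                                  double t2, double r2, double h2) {
        double angTh = 3.0, distTh = 5.0, heightTh = 0.25; // assumed thresholds
        return Math.abs(t1 - t2) < angTh                    // same orientation
            && Math.abs(r1 + r2) < distTh                   // symmetric about rho = 0
            && Math.abs(h1 - h2) < heightTh * Math.max(h1, h2); // similar heights
    }

    // Two parallel pairs form a rectangle if their orientations differ by ~90 deg.
    static boolean isOrthogonal(double thetaA, double thetaB) {
        return Math.abs(Math.abs(thetaA - thetaB) - 90.0) < 3.0;
    }
}
```

Four peaks are accepted as a rectangle only when both `isParallelPair` tests and the `isOrthogonal` test succeed.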


Fig 2-1 Hough transform of Rectangle Centered at the origin

The main algorithm works precisely on the above results: the Hough transform of each edge pixel is calculated to find the peaks, which are then validated against the relations above.

To compute the Hough transform, take a pixel together with a neighboring center pixel. A region is chosen with some minimum and maximum distance such that the rectangle has all of its edges inside that region; the Hough transform is then found using various equations. The next step is to find the local maxima of that Hough transform. For this, the Hough image is first enhanced using a butterfly evaluator. Each local maximum found is then compared against the function C(ρ, θ) and, if it satisfies the relations, is chosen as a peak.

To find the 4 rectangle peaks among all peaks, the peaks are paired one by one and compared against the four relationships above. Different parameters, such as an angular threshold and a distance threshold, are defined to determine whether the chosen peaks are parallel or symmetric, respectively. Extended peaks corresponding to a pair of peaks are then calculated, and from them only those forming orthogonal pairs of parallel lines are chosen. Finally, the vertices are obtained through the intersection of the two pairs of parallel lines. In this way a rectangle is detected.
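The vertex step can be illustrated concretely: a detected line in Hough form satisfies x·cos θ + y·sin θ = ρ, and a vertex is the solution of the 2×2 linear system formed by two such lines. The helper below is a hypothetical sketch, not code from [1]:

```java
// Intersect two lines given in Hough (theta, rho) form:
//   x*cos(theta) + y*sin(theta) = rho
// Solving the 2x2 system by Cramer's rule yields the vertex (x, y).
public class LineIntersect {
    static double[] intersect(double theta1Deg, double rho1,
                              double theta2Deg, double rho2) {
        double a1 = Math.cos(Math.toRadians(theta1Deg)), b1 = Math.sin(Math.toRadians(theta1Deg));
        double a2 = Math.cos(Math.toRadians(theta2Deg)), b2 = Math.sin(Math.toRadians(theta2Deg));
        double det = a1 * b2 - a2 * b1;      // zero when the lines are parallel
        if (Math.abs(det) < 1e-12) return null;
        double x = (rho1 * b2 - rho2 * b1) / det;
        double y = (a1 * rho2 - a2 * rho1) / det;
        return new double[]{x, y};
    }
}
```

Intersecting the four sides pairwise in this way gives the four corners of the detected rectangle.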


As has been seen, the method is very efficient so far, but due to the presence of various thresholds, duplicate rectangles can be generated from neighboring centers. To remove them, we need to compute the error for each rectangle and choose the one with the smallest degree of error. The method is very efficient, as it can find a rectangle of unknown dimensions and orientation, but it has various flaws:

When applied to natural and synthetic images, it showed good results, but only when an efficient edge detector is used.

It produces duplicate rectangles; these can be minimized but not completely eliminated.

In the presence of heavy noise, uneven noisy edges add unwanted (false) peaks to the Hough image, so line segments may not be correctly detected.

It can produce false results if aligned rectangles are too close to each other. This can be overcome if the average gray level of the desired rectangles is known, in which case false rectangles can easily be removed by comparing the intensity inside the rectangle with the expected average gray value.


Fig 2-2 left to right (a) Multiple Detection of same Rectangle (b) Rectangle with minimal Error

2.1.2 Real-Time License Plate Detection under Various Conditions

In this technique, two main kinds of features are used in the algorithm: statistical and Haar-like features. Classifiers based on statistical features simplify the algorithm, whereas classifiers based on Haar-like features improve detection and reduce the false detection rate. The main focus of this method is to detect LPs under varying environmental effects, color, size, position, and viewing angle.


The method starts with the construction of a cascade classifier that enhances the detection speed of the LPs. This classifier consists of a number of layers, of which the first 2 are based on statistical features and the rest work on Haar-like features. Both are explained below.

Statistical Features Determination

In this step, two types of samples are taken, scaled to 48×16 for convenience:

Positive samples: license plate regions labeled from vehicle images.

Negative samples: images that exclude the license plate. They can be vehicles that do not contain an LP (license plate) at all, or randomly taken snaps.

After this, all statistical features are calculated. All positive samples are then tested and confirmed positive by comparison with a selected threshold; on this basis the first classifier is obtained. All positive and negative samples are then subjected to the next (2nd) layer, where another parameter, density variance, is found. In this way, at each layer the samples are tested and the positively classified ones are passed on to train the next layer. The process continues until the desired false positive rate is reached.

Fig 2-3 The working flow of the cascade filter, where 1, 2 and 6 represent the layers


In this stage, the size of the block is changed depending on the scale of the searching block. Each pixel of the image is tested using a 48×16 mask in order to verify the existence of the license plate. As shown in the figure above, positive outcomes are forwarded to the next layer whereas negative ones are rejected.


All LPs share some common characteristics, such as well-defined, vertical edges distributed uniformly over the whole plate. Based on this, various statistical features are defined, as elaborated below:

Vertical Gradients Image

It is generated by convolving the original image with the x-direction Sobel operator. The main focus is thus on the vertical edges only, since they carry much of the information; background regions are mostly eliminated.

Gradient Density

It gives the density of edges within a block. The x-direction Sobel gradient operator, as calculated above, produces a gradient map whose values are normalized.

Density Variance

This parameter is defined in order to discriminate license plates from background regions. The LP is divided into 12 equal-sized sub-blocks, and on this basis the density variance is defined as a ratio to the mean gradient strength of the block. Its value is confined to the range 0 to 1 and remains low as long as the gradient over the whole block is uniform (whether strong or weak).
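One plausible formulation of this ratio can be sketched in Java; the exact normalisation used in [3] may differ, so treat this as an assumption:

```java
// Illustrative density-variance feature: the block is split into sub-blocks
// and the spread of their gradient densities is normalised by the mean
// density, so a uniformly strong (or uniformly weak) block scores near 0.
public class DensityVariance {
    static double densityVariance(double[] subBlockDensities) {
        int n = subBlockDensities.length;        // 12 sub-blocks in the paper
        double mean = 0;
        for (double d : subBlockDensities) mean += d;
        mean /= n;
        if (mean == 0) return 0;
        double spread = 0;
        for (double d : subBlockDensities) spread += Math.abs(d - mean);
        return spread / (n * mean);              // 0 when all densities are equal
    }
}
```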

Haar-like Feature

This feature consists of rectangles over adjacent image regions. Its value is the difference between the gradient magnitude summed over the white rectangles and over the grey rectangles.

Fig 2-5 Four types of Haar like Features
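The white-minus-grey computation can be sketched for a two-rectangle (edge-type) feature; the rectangle layout below is an illustrative assumption:

```java
// Minimal Haar-like feature on a gradient-magnitude map: the value is the
// sum over the "white" rectangle minus the sum over the "grey" rectangle.
public class HaarFeature {
    static double rectSum(double[][] img, int r0, int c0, int h, int w) {
        double s = 0;
        for (int r = r0; r < r0 + h; r++)
            for (int c = c0; c < c0 + w; c++) s += img[r][c];
        return s;
    }

    // Two-rectangle (edge) feature: left half white, right half grey.
    static double twoRectFeature(double[][] img, int r0, int c0, int h, int w) {
        return rectSum(img, r0, c0, h, w / 2)
             - rectSum(img, r0, c0 + w / 2, h, w / 2);
    }
}
```

In practice such sums are computed in constant time with an integral image; the direct loops here keep the sketch short.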

The Haar-like feature captures the size, type, and position of the rectangles, and can capture the interior structure of objects invariant to certain transformations. However, it yields too many features, which increases the complexity of the method; hence the following selection step is used.


AdaBoost selects only a small number of features. A weak classifier is made that includes only one feature; samples are passed and re-weighted through each weak classifier, and training this way ends when a target false positive rate is reached. After that, a strong classifier is made by combining all the weak ones.


The method was a great success. Various images taken at different angles/viewpoints, conditions, colors, and styles, when subjected to this algorithm, gave a success rate of 92%.

Fig 2-6 Detection Result of some Vehicle Images


With negligible flaws, this algorithm has worked well so far. Its advantage is that it can work in any type of complex environment. Training and testing with statistical and Haar-like features not only excluded background regions, but also enhanced the efficiency and simplified the structure of the method.

Sliding Concentric Windows and Histogram

This is an enhanced method to detect a vehicle license plate, employing a new way of image segmentation. It comprises the following three stages:

Introducing SCW (sliding concentric windows) as an image segmentation technique for analyzing road images containing vehicles, then extracting the license plate by finding vertical and horizontal edges in the vehicle region and composing a candidate region.

Verifying its color on the basis of the HSI model.

Decomposing the candidate region and detecting the vehicle license plate region.

C:\Users\Abdul Rehman Rashid\Desktop\Thesis\VLPD\General Scheme for detecting License Plate Region.PNG

Fig 2-7 General Scheme for detecting license plate Region

As seen from the figure above, the input image is a grey image (the colored RGB image is converted to grayscale), since this improves image processing speed. It is an 8-bit image, often referred to as monochrome, and contains only brightness information, not color. The three stages of this algorithm are explained as follows:


Here a new technique of sliding concentric windows is introduced, which first calculates the regions of interest (ROI). It works by the measurement of standard deviation, as given below:

Formation of two concentric windows

Calculation of the standard deviation of each pixel of both windows

Comparing the deviation ratio of the two windows to a threshold set by the user. If the value exceeds the threshold, the central pixel of the windows is considered to belong to a vertical or horizontal edge region, and the corresponding pixel of the new image is set to 1, otherwise to 0.

In this way the window moves over all the pixels of the image and the vertical and horizontal regions are found. After SCW, the resultant image is a binary image. Then a connected-components labeling technique is used: it scans the image and groups pixels into connected components, assigning each pixel the label of its component; a recursive method is used for the labeling. Through this labeling we find the candidate regions that may include LP regions, as calculated in the previous step.
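The per-pixel test described above can be sketched as follows; the window sizes, the direction of the ratio, and the threshold are illustrative assumptions rather than values from [2]:

```java
// Sketch of sliding-concentric-window segmentation: for each pixel, the
// standard deviations of a small inner window and a larger concentric outer
// window are compared; when their ratio exceeds a user threshold the output
// pixel is set to 1. Window sizes (3x3 and 5x5) are assumed.
public class SCW {
    static double stdDev(int[][] img, int r, int c, int half) {
        double sum = 0, sq = 0;
        int n = 0;
        for (int i = r - half; i <= r + half; i++)
            for (int j = c - half; j <= c + half; j++) {
                if (i < 0 || j < 0 || i >= img.length || j >= img[0].length) continue;
                sum += img[i][j];
                sq += (double) img[i][j] * img[i][j];
                n++;
            }
        double mean = sum / n;
        return Math.sqrt(Math.max(0, sq / n - mean * mean));
    }

    static int[][] segment(int[][] img, double threshold) {
        int[][] out = new int[img.length][img[0].length];
        for (int r = 0; r < img.length; r++)
            for (int c = 0; c < img[0].length; c++) {
                double inner = stdDev(img, r, c, 1);   // 3x3 window
                double outer = stdDev(img, r, c, 2);   // 5x5 concentric window
                out[r][c] = (inner > 0 && outer / inner > threshold) ? 1 : 0;
            }
        return out;
    }
}
```

A perfectly uniform region yields zero deviation in both windows and is therefore never marked.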

In this step another important parameter is used, i.e. the aspect ratio. This ratio depends upon the maximum and minimum values of the rows and columns. Only those regions are considered candidate plate regions that have:

Aspect ratio greater than 1 but less than 2 [for green LP]

Aspect ratio greater than 1 but less than 3 [for white LP]

Aspect ratio greater than 1 but less than 3 [for yellow LP]

The basic steps for candidate region detection for both white and green backgrounds are:

Initial image.

Converted to gray image.

Detecting vertical and then horizontal edges.

Applying image masking operation.

Applying inverse operation.

Detecting sub-candidate region and then labeling it.

Candidate region detection.


For this we use the HSI model, in which histogram operations and intensity transformations are performed on an image in the HSI color space. Each colored image is in RGB format, in which the primary colors are represented on the corners of the three axes of a 3-D cube in the RGB model, while in the HSI model colors are described on the basis of their hue, saturation, and intensity, and the color space takes the form of a diamond.

In this model:-

H is an angle: by adjusting it, the hue varies from 0° (red) through 60° (yellow), 120° (green), and 240° (blue), and back to red at 360°.

S is the saturation, corresponding to the radius and varying from 0 to 1. When S = 0, the color is a grey value of intensity 1; when S = 1, the color lies on the boundary of the top cone base.

The intensity I varies along the z-axis, with 0 being black and 1 being white.
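For reference, the standard RGB-to-HSI conversion consistent with this description can be written as follows (inputs normalized to [0,1], H in degrees); this is the textbook formula, not code from [2]:

```java
// Standard RGB-to-HSI conversion for the HSI colour model described above.
// H is returned in degrees, S and I in [0, 1].
public class Hsi {
    static double[] rgbToHsi(double r, double g, double b) {
        double i = (r + g + b) / 3.0;
        double min = Math.min(r, Math.min(g, b));
        double s = (r + g + b == 0) ? 0 : 1.0 - 3.0 * min / (r + g + b);
        double num = 0.5 * ((r - g) + (r - b));
        double den = Math.sqrt((r - g) * (r - g) + (r - b) * (g - b));
        double h = (den == 0) ? 0 : Math.toDegrees(Math.acos(num / den));
        if (b > g) h = 360.0 - h;   // hue wraps past 180 degrees when blue dominates
        return new double[]{h, s, i};
    }
}
```

Pure red maps to H = 0°, S = 1; pure green to H = 120°, matching the angles listed above.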


In this stage we only consider the area of interest; regions of no interest are ignored. When green and yellow LPs are under consideration, the following steps are taken:

Move horizontally in the histogram; 2 rows will be present, which are processed separately.

For the upper row: reject the regions of no interest and consider the vertical position histogram. In the upper row, 2 plate-fixing dots are usually encountered; since the right dot is not visible (it is green in color), the left dot is also ignored, and the individual alphanumeric characters are then found from this vertical position histogram.

For the lower row: a vertical position histogram is computed, and from it the alphanumeric characters are extracted.

For white LPs the same procedure is applied; the only difference is that there is only 1 row to be processed by the vertical histogram, and the rest of the method is the same.


The method has so far been quite fruitful and gives satisfactory results, but some of its drawbacks are:

Sensitive to angle views.

Sensitive to physical appearance (it gave false results with stickers or stamps attached to the plate surface).

Affected by environment conditions.


Every technique is unique with respect to its functionality, and every technique has some limitations and drawbacks. Our objective is to overcome as many of the issues faced in previous research as possible. Our project also has some weaknesses, but most of the problems faced in previous projects have been solved. We mainly focus on efficiency: all the previous algorithms give the result in more than 10 seconds, whereas we have developed an algorithm which gives the result within 1 second.


2.2.1 What is Google Android?

"Android is a software stack for mobile device that includes an operating system and key applications". It is a mobile platform that is open and free. Android was co-founded by Andy Rubin and Waslater acquired by Google. It was founded to promote and support the open source operating system based on Linux. The third party developers can create applications, which are written in java programming language based on Linux Kernel, using Android SDK, JDK and Ellipse IDE version 3.2 or any latest version of Ellipse IDE, with the rich set of Google Android API (Application Programming Interface). Android Market is an open that allows consumers to search, purchase, download and install various types of contents Features of Android


Android supports connectivity technologies including:





Media Support

Android supports the following audio, video, and still-image media formats:






Language Support

Android applications are written in the following languages:




Android Market is an open content distribution system that allows consumers to search, purchase, download and install various types of contents.

Android has the following advantages over other operating systems:

Android is the only free, open source operating system that allows access to all levels of the OS, which distinguishes it from other operating systems.

It does not require redevelopment for porting among different handsets, since Java is used as the programming language.

The development process is relatively faster than other mobile OS since Java is easier to code compared to others like C/C++.

Android offers higher value to consumers in terms of security, as it uses permissions and user authentication.

Android Architecture

The main components of the Android architecture are:

Applications

Application Framework

Libraries

Android Runtime

Linux Kernel

1. Applications: These are applications written in Java. Some of the basic applications include a calendar, an email client, an SMS program, maps, a Web browser, and others.

2. Application Framework: This is the skeleton or framework which all Android developers have to follow. Developers can access all framework APIs and manage a phone's basic functions like resource allocation, switching between processes or programs, telephony applications, and keeping track of the phone's physical location.

3. Libraries: This layer consists of Android libraries written in C and C++ and used by various system components. These libraries tell the device how to handle different kinds of data and are exposed to Android developers via the Android application framework. They include media, graphics, SQLite, etc.

4. Android Runtime: This layer includes a set of core libraries required by the Java libraries. Every Android application gets its own instance of the Dalvik virtual machine. Dalvik has been written so that a device can run multiple VMs efficiently; it executes files in the Dalvik Executable (.dex) format, optimized for a minimal memory footprint.

5. Linux Kernel: This layer includes Android's memory management programs, security settings, power management software, and several drivers for hardware, file system access, networking, and inter-process communication.

The block diagram of Android Architecture is given below

Fig 2-8 Android Architecture

2.2.2 MATLAB

"MATLAB is a user-friendly, matrix-based numerical programming tool that eases the computation and visualization of data and functions in a much faster and convenient way"

MATLAB is an abbreviation of Matrix Laboratory. It is a popular Mathematical Programming Environment used extensively in Education as well as in Industry. The trick behind MATLAB is that everything is represented in the form of arrays or matrices. Mathematical Operations starting from simple algebra to complex calculus may be conveniently carried out using this environment. The main use of MATLAB in Software Development is Algorithm Design and Development. Code developed in MATLAB can be converted into C, C++ or Visual C++.

It finds vast application in the following fields:

Mathematical data

Statistical data analysis

Image processing

Control system design and automation

Signal processing

Image Processing in MATLAB

MATLAB handles images in a step-by-step procedure: first loading an image, storing it in a defined data type/format, displaying it, converting it from one form to another, etc. An image is composed of tiny pixels; each image has dimension m (rows) × n (columns), a general matrix format. If the dimensions are too large, the image can be compressed using various Fourier analysis techniques. Some of MATLAB's features regarding image processing are as follows:

Images can be conveniently represented as matrices in MATLAB.

One can open an image as a matrix using "imread" command.

The matrix may simply be of m × n form, or a 3-dimensional array, or an indexed matrix, depending upon the image type.

The image processing may be done simply by matrix calculation or matrix manipulation.

Image may be displayed with "imshow" command.

The changed image may then be saved with the "imwrite" command.

Some of the basic image processing commands in MATLAB are mentioned below:

imread - Read an image.

imwrite - Write an image to a file.

imshow - Display an image represented as the matrix X.

imcrop - Crop the image with the given dimensions.

rgb2gray - Convert the image to grayscale.

imresize - Resize the image with the given dimensions.

im2bw - Convert the image to a binary image, based on a threshold.

Image Types in MATLAB Image Processing Toolbox

The Image Processing Toolbox in MATLAB supports four basic types of images:

Gray Scale Images

Binary Images

Indexed Images

Intensity Images



3.1 Methodology

This project is software based: an Android application is created which extracts the number plate from still images/video frames and recognizes its number. The application runs on Android mobiles; a user who has an Android phone with the application installed can easily use it by pressing its icon and taking a still picture or recording video. The application also extracts the plate in real time.

3.2 Design Description

3.2.1 Image Capturing

An HTC Desire mobile phone camera is used to take images of vehicles after a specific time interval. The HTC Desire camera is shown in Fig 3.1 [1].

The camera resolution is set to 1 megapixel for fast processing. The captured images are used in developing the MATLAB and Android code. To access the HTC Desire camera in real time, code was written to interface with the Android hardware.


Fig 3.1 HTC Desire Camera Image

3.3 Architecture Overview

The design of the application is explained graphically with the help of a diagram shown in Fig 3-2. The diagram explains the overall interactions of the application.

Video/still image

Edge detection using the Sobel operator

Connected component analysis

Check aspect ratio of objects

Find vertical edge density

Check uniform distribution

Plate detection

If the car is moving, start tracking it using background subtraction

Compute the global threshold

Segmentation of the number plate

Convert to binary image

Remove small obstacles

Apply connected component analysis

Check Euclidean distance

Recognition of characters

Segmentation and recognition

Fig 3.2 Basic Architecture

3.4 Algorithm

In this project we reviewed much literature to select the best method, but all of the methods give a result only after several seconds, so through experiments we developed our own algorithm which gives a result within a second. This method is therefore more efficient than the others, which matters because our requirements are efficiency and low processing time: we have to port the same algorithm to Android. In this method we first convert the still image of size 640×380 to grayscale and then find its edges using the Sobel operator. As the result will be thin edge lines, we dilate the edged image, and the resultant image gives us thick edges. After that, we label the connected components, so all connected components are labeled, and check whether a component's size lies between 700 and 2500. If it does, we check whether the component lies in the aspect ratio range of 1.6 to 2.5 (which we found by trial and error); if so, we find the vertical edge density compared to object size, check its uniform distribution, and compute the global threshold. The result is sent for segmentation, in which the input image is converted to binary, small objects are removed, and connected component analysis is applied. The result is then sent for recognition, where the Euclidean distance is calculated and recognition of the characters is achieved. The methodology of the project is shown in Figure 3.3; details of the algorithm are discussed in Chapter 4.
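The final recognition step can be sketched as a nearest-template match by Euclidean distance. The tiny feature vectors and templates below are purely illustrative; the project's actual character templates are not shown here:

```java
// Sketch of Euclidean-distance character recognition: each segmented glyph,
// flattened to a feature vector, is assigned the label of the template at
// the smallest Euclidean distance.
public class Recognizer {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    static char recognize(double[] glyph, double[][] templates, char[] labels) {
        int best = 0;
        for (int i = 1; i < templates.length; i++)
            if (dist(glyph, templates[i]) < dist(glyph, templates[best])) best = i;
        return labels[best];
    }
}
```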


4.1 Plate Extraction

4.1.1 Image Enhancement

The image must be enhanced enough to give maximum detail and to minimize environmental effects. Images of vehicles are taken from the Android camera, and that RGB image is used in MATLAB/Android. It is a 3-dimensional 8-bit image with 256 levels in each dimension. In image processing, each pixel has a value according to its color.

The number plate detection method is explained below with the help of images taken with the camera used in our project.

4.1.2 Grayscale Conversion

First of all, the RGB image is converted into a grayscale image. There is a built-in function in MATLAB, but for Android we wrote our own function which converts an RGB image into grayscale using the formula:

Gray value = 0.3*R+0.59*G+0.11*B.

A grayscale image is a 2-dimensional 8-bit image containing 256 gray levels. In an 8-bit grayscale image, 0 represents a pure black pixel and 255 a pure white pixel. As the pixel value increases, its color becomes lighter, so a black pixel becomes gray and then white. An RGB image and a grayscale image are shown below:


Fig 4-1 Left to right (a) RGB image (b) Gray Scale image
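For illustration, the per-pixel formula above ports to Java roughly as follows (the rounding choice is ours):

```java
// The project's grayscale formula (Gray = 0.3R + 0.59G + 0.11B) applied
// per channel triple, as one might port it from MATLAB to Java for Android.
public class Grayscale {
    static int toGray(int r, int g, int b) {
        return (int) Math.round(0.3 * r + 0.59 * g + 0.11 * b);
    }
}
```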

4.1.3 Edge Detection

The grayscale image is converted into an edge image using the Sobel operator. This returns a binary image of the same size as the input, with 1's where the function finds edges and 0's elsewhere. In other words, the edge image contains edges wherever the color intensity changes abruptly in the input image, and nothing where the intensity is constant.

There are many techniques to find the edges of an image, like Sobel, Prewitt, Roberts, and Canny. We have used the Sobel method. The Sobel edge detector is a first-derivative operator; the Sobel operators are shown in Figure 4.2 [2].

G_x = [ -1  0  +1 ;  -2  0  +2 ;  -1  0  +1 ] * A    and    G_y = [ -1  -2  -1 ;  0  0  0 ;  +1  +2  +1 ] * A

where A is the source image and * denotes 2-D convolution.

FIG: 4-2 Sobel operators
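Applying the two 3x3 kernels of Figure 4-2 can be sketched as follows. This is an illustrative Java version assuming 2-D integer gray images; the threshold parameter and names are our own choices, not the project's code.

```java
public class Sobel {
    // The 3x3 Sobel kernels from Fig 4-2.
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Returns a binary edge map: 1 where the gradient magnitude
    // exceeds the threshold, 0 elsewhere. Border pixels stay 0.
    public static int[][] edges(int[][] gray, double threshold) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++) {
                        gx += GX[j + 1][i + 1] * gray[y + j][x + i];
                        gy += GY[j + 1][i + 1] * gray[y + j][x + i];
                    }
                double mag = Math.sqrt((double) gx * gx + (double) gy * gy);
                out[y][x] = mag > threshold ? 1 : 0;
            }
        }
        return out;
    }
}
```

A vertical black-to-white step in the input produces a strong Gx response, so the corresponding pixels are marked 1 in the edge map; regions of constant intensity produce 0, as described above.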

The edge map of the picture obtained using the Sobel operators is shown below.


Fig: 4-3 Edged Image

4.1.4 Dilation

The edge image is then dilated, which returns a dilated binary image of the same size as the input image. The main rule of dilation is that the value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood [i]. Since the edge image shown in Figure 4-3 contains thin edge lines, dilating with a 3x3 window makes the edge lines thicker. The resultant dilated image is shown below.
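The max-of-neighborhood rule with a 3x3 window can be sketched as below; this is an illustrative Java version for binary images stored as 0/1 integer arrays, not the project's actual implementation.

```java
public class Dilate {
    // Binary dilation with a 3x3 window: each output pixel is the
    // maximum of its 3x3 neighborhood, so one-pixel-wide edge lines
    // grow thicker. Out-of-bounds neighbors are treated as 0.
    public static int[][] dilate3x3(int[][] bin) {
        int h = bin.length, w = bin[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int max = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++) {
                        int yy = y + j, xx = x + i;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                            max = Math.max(max, bin[yy][xx]);
                    }
                out[y][x] = max;
            }
        return out;
    }
}
```

A single foreground pixel grows into a 3x3 block after one pass, which is exactly the thickening effect described above.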


Fig 4-4 Dilated Image

4.1.5 Connected Components and Estimation

The dilated image is taken as input to this function, in which the different connected objects in the image are labeled separately; for example, the front light will be labeled 1, a tire labeled 2, and similarly the number plate will be labeled with some number. By doing experiments on different images, we found that the number plate and plate-like objects lie in an estimate between 700 and 2500. We found this estimate using the formula


where 'r' represents the rows of the labeled image. The results of some labeled pictures are shown below.
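The labeling step can be sketched as a flood fill over the binary image. This Java version is a minimal illustration (BFS over 8-connected neighbors); it only assigns the component labels and does not reproduce the project's 700-2500 estimation formula, which is not shown above.

```java
import java.util.ArrayDeque;

public class Label {
    // Label 8-connected foreground components: background stays 0,
    // components are numbered 1, 2, 3, ... in scan order.
    public static int[][] connectedComponents(int[][] bin) {
        int h = bin.length, w = bin[0].length;
        int[][] labels = new int[h][w];
        int next = 0;
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (bin[y][x] == 0 || labels[y][x] != 0) continue;
                next++;                        // start a new component
                labels[y][x] = next;
                queue.add(new int[]{y, x});
                while (!queue.isEmpty()) {     // flood fill (BFS)
                    int[] p = queue.poll();
                    for (int j = -1; j <= 1; j++)
                        for (int i = -1; i <= 1; i++) {
                            int yy = p[0] + j, xx = p[1] + i;
                            if (yy >= 0 && yy < h && xx >= 0 && xx < w
                                    && bin[yy][xx] == 1 && labels[yy][xx] == 0) {
                                labels[yy][xx] = next;
                                queue.add(new int[]{yy, xx});
                            }
                        }
                }
            }
        return labels;
    }
}
```

Two separated blobs receive the labels 1 and 2, so a front light, a tire and the number plate would each get their own label, as described above.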


Fig 4-5 Different Labeled Components

4.1.6 Aspect Ratio and Density

Figure 4-5 shows that many false regions are also detected along with the number plate. To remove these false regions, the vertical edge density and the vertical edge uniform distribution are calculated to filter them out. Furthermore, an aspect ratio range is defined that removes the remaining false regions so that only the number plate is detected. The aspect ratio range was defined by doing experiments on different kinds of images captured with the HTC Desire mobile phone, which are shown in Figure 5.1.1.

To calculate the aspect ratio, we first find the smallest component index (minimum number) in the rows and columns, and then the largest (maximum number) in the rows and columns:

rmin = min(r); rmax = max(r);

cmin = min(c); cmax = max(c);

Then 'a' and 'b' are found using

a = cmax - cmin + 1; b = rmax - rmin + 1;

To find the aspect ratio, we divide 'a' by 'b':

aspect ratio = a / b

Connected components whose aspect ratio lies between 1.6 and 2.1 are taken to be the number plate. The result is shown in Figure 4.6.
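The bounding-box and aspect-ratio test above can be sketched as follows. This is an illustrative Java version assuming each component's pixel coordinates are given as row and column index arrays; the names are our own.

```java
public class AspectRatio {
    // rows[] and cols[] hold the row and column indices of one
    // labeled component's pixels. a = width, b = height, ratio = a/b.
    public static double aspectRatio(int[] rows, int[] cols) {
        int rmin = rows[0], rmax = rows[0], cmin = cols[0], cmax = cols[0];
        for (int r : rows) { rmin = Math.min(rmin, r); rmax = Math.max(rmax, r); }
        for (int c : cols) { cmin = Math.min(cmin, c); cmax = Math.max(cmax, c); }
        int a = cmax - cmin + 1;   // bounding-box width
        int b = rmax - rmin + 1;   // bounding-box height
        return (double) a / b;
    }

    // Keep only components whose ratio lies in the plate range [1.6, 2.1].
    public static boolean isPlateCandidate(int[] rows, int[] cols) {
        double ar = aspectRatio(rows, cols);
        return ar >= 1.6 && ar <= 2.1;
    }
}
```

For example, a component spanning 10 rows and 18 columns has a ratio of 1.8 and survives the filter, while a square region (ratio 1.0) is rejected as a false detection.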


Fig 4-6 Extracted Number Plate

The block diagram of the methodology for "Number Plate Extraction" is given below.










Fig 4-7 Block Diagram of Number Plate Extraction Methodology

4.2 Segmentation

The detected number plate shown in Figure 4-6 is used for segmentation, which extracts the characters and numbers from the input number plate. The block diagram for segmentation is given below.













Fig 4-8 Block Diagram of Segmentation

The number plate segmentation method is explained below with the help of images.

4.2.1 Image Enhancement

The number plate image must be enhanced enough to give the maximum detail and to minimize environmental effects. The plate image is first converted into grayscale, and the further techniques are then applied to that grayscale image.

4.2.2 Global Threshold

The grayscale image of the number plate to be segmented is passed through a phase in which we compute the threshold of the input number plate using Otsu's method. The algorithm assumes that the image to be thresholded contains two classes of pixels (e.g. foreground and background) and then calculates the optimum threshold separating them.

4.2.3 Binary Conversion

The second step is to convert the input image to a binary image by applying the threshold value computed earlier. The output image replaces all pixels in the input image with 1 (white) if their values are greater than the threshold, and otherwise with 0 (black). The output image is shown in Fig 4.9.
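The two steps, Otsu threshold selection followed by binarization, can be sketched as below. This is a textbook Java rendering of Otsu's method (maximizing between-class variance over a 256-bin histogram), offered as an illustration rather than the project's code.

```java
public class OtsuThreshold {
    // Otsu's method: pick the threshold that maximizes the
    // between-class variance of background vs. foreground pixels.
    public static int otsu(int[][] gray) {
        int[] hist = new int[256];
        int total = 0;
        for (int[] row : gray) { for (int v : row) hist[v]++; total += row.length; }
        double sumAll = 0;
        for (int t = 0; t < 256; t++) sumAll += t * (double) hist[t];
        double sumB = 0, best = -1;
        int wB = 0, bestT = 0;
        for (int t = 0; t < 256; t++) {
            wB += hist[t];                    // background pixel count
            if (wB == 0) continue;
            int wF = total - wB;              // foreground pixel count
            if (wF == 0) break;
            sumB += t * (double) hist[t];
            double mB = sumB / wB, mF = (sumAll - sumB) / wF;
            double between = (double) wB * wF * (mB - mF) * (mB - mF);
            if (between > best) { best = between; bestT = t; }
        }
        return bestT;
    }

    // Binarize: 1 (white) where the pixel exceeds the threshold, else 0 (black).
    public static int[][] binarize(int[][] gray, int t) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = gray[y][x] > t ? 1 : 0;
        return out;
    }
}
```

On an image with two clearly separated intensity clusters (e.g. dark characters on a bright plate), the returned threshold falls between the clusters, so binarization cleanly splits characters from background.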


Fig 4.9 Binary Converted Number Plate

4.2.4 Connected Components Analysis

Connected component analysis is then applied to the binary plate image so that each character and digit is isolated with its own bounding box, as shown below.

Fig 4-8 Number Plate with Boundary Box on Each Element

4.3 Character recognition

For character recognition, a template matching technique has been used. This process involves the use of a database of characters and is known for its accurate results. As the characters on number plates are the same for a specific location/country, template matching is the most suitable and efficient technique for recognizing them.

A simple flowchart is given below to describe the outline of this process:

The process of character recognition shown by the above flowchart is explained in detail below:

4.3.1 Loading of the templates

A "template" is a small image of a character, which is used as a model by the program to recognize an input character and establish its identity.

For example a template "M" is shown in the figure below.


A cell array of 62 such templates is loaded, covering the digits 0 to 9 and all the uppercase and lowercase letters.

4.3.2 Resizing the images

An image of a character to be recognized is fed into the program function as input. Both the input image and the template cell array are converted into grayscale images and resized to the same size so that the Euclidean distance can be calculated later.


Fig 4-9 left to right (a) Resized Input image (b) Resized templates

4.3.3 Computing the Matching Range via Euclidean Distance

For recognition to occur the current input character is compared to each template and either the exact match or the closest representation of an input character is found.

The calculation of the Euclidean distance between the input image and the template is done as follows:

Let the input image be represented as I(x,y) and the template as T(x,y). The output s(I,T) is the returned value showing how well the input image I(x,y) matches the template T(x,y). It is calculated by the following formula:

s(I,T) = \sqrt{\sum_{x}\sum_{y} \left( I(x,y) - T(x,y) \right)^2}

After the calculation of s(I,T), the template with the closest representation of an input character is determined as the best matching character with the least distance or distance value equal to "0" i.e. a perfect match.
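The distance computation and the best-match search can be sketched as below. This is an illustrative Java version assuming equally sized 2-D integer images; the names and the index-based template array are our own simplifications of the 62-template cell array described above.

```java
public class TemplateMatch {
    // Euclidean distance between an input character image and a
    // template of the same size: the square root of the sum of
    // squared pixel differences. A perfect match returns 0.
    public static double distance(int[][] img, int[][] tpl) {
        double sum = 0;
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++) {
                double d = img[y][x] - tpl[y][x];
                sum += d * d;
            }
        return Math.sqrt(sum);
    }

    // Compare the input against every template and return the index
    // of the closest one (the smallest distance).
    public static int bestMatch(int[][] img, int[][][] templates) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < templates.length; i++) {
            double d = distance(img, templates[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }
}
```

The index returned by `bestMatch` plays the role of the stored index described in the next step: it identifies which of the 62 characters the input most closely resembles.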

4.3.4 Output the recognized character

The best value of the resultant Euclidean distance determines whether the character is a match or not. If an appropriate match is found, its respective index is set accordingly, and the index of the best match is stored as the recognized character.


4.4 Tracking

If the car is moving, tracking starts. Tracking displays on the axes the locations to which the vehicle goes. The purpose of doing this is to find the location of the car and its direction. Tracking is achieved by background subtraction. The overall working of this method is discussed below.

A real-time or stored video is a sequence of multiple images, so frames are read from the video one by one and converted into grayscale, as shown in Figure 4-10.


Fig 4-10 Grayscale image

The grayscale images are stored in a cell array one by one. In our method we store the first 400 frames in the cell array and then take their arithmetic mean by adding all 400 frames and dividing the result by 400. The result is shown in Figure 4-11.

µ= sum/400


Fig 4-11 Result of Mean

The per-pixel standard deviation is computed by subtracting the mean from each frame, averaging the squared differences over the 400 frames, and taking the square root. The result is shown in Figure 4-12.

σ = sqrt( sum_k ( frame{k} - µ ).^2 / 400 )


Fig 4-12 Standard Deviation

The minimum and maximum values are computed by the formula µ ± 3σ; the result is shown in Figure 4-13.


Fig 4-13 left to right (a) Minimum (b) Maximum

Pixel values lying between the minimum and the maximum are set to zero, leaving only the moving vehicle, and tracking is thereby achieved. The results are shown in Figure 4-15.
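The background model above (per-pixel mean, standard deviation, and the µ ± 3σ band) can be sketched as below. This is an illustrative Java version assuming grayscale frames as 2-D double arrays; it computes the statistics per pixel rather than caching Figure 4-11/4-12 style images, and the names are our own.

```java
public class BackgroundModel {
    // Build a per-pixel mean and standard deviation from the stored
    // frames, then classify a new frame: pixels inside the band
    // [mu - 3*sigma, mu + 3*sigma] are background (0); pixels
    // outside the band belong to the moving vehicle (1).
    public static int[][] foreground(double[][][] frames, double[][] frame) {
        int n = frames.length, h = frame.length, w = frame[0].length;
        int[][] mask = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double sum = 0;
                for (double[][] f : frames) sum += f[y][x];
                double mu = sum / n;                  // per-pixel mean
                double var = 0;
                for (double[][] f : frames) var += (f[y][x] - mu) * (f[y][x] - mu);
                double sigma = Math.sqrt(var / n);    // per-pixel std deviation
                double v = frame[y][x];
                mask[y][x] = (v >= mu - 3 * sigma && v <= mu + 3 * sigma) ? 0 : 1;
            }
        return mask;
    }
}
```

In the project, `frames` would hold the first 400 grayscale frames; any pixel of a new frame that falls outside its learned µ ± 3σ band is flagged as part of the moving vehicle, which is what the tracking display plots.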


Fig 4-15 Tracking Results

Chapter #5



Fig 5.1.1 Images taken with the HTC Desire
