Image processing techniques invented and discussed over the past several years have found their way into real-life applications, most notably Intelligent Transportation Systems (ITS). One of the most important ITS applications is Licence Plate Recognition (LPR). LPR systems are generally used in applications such as parking management, security control of restricted areas, and automatic toll collection. In the world's big cities, traffic has been increasing rapidly because of the rapid growth in the number of vehicles, and licence plate recognition has therefore become one of the most important digital image processing systems in use. LPR has solved many problems for big cities that would be very hard for humans to manage on their own.
LPR systems are used in traffic management to identify vehicles involved in traffic violations, such as entering a restricted area without permission, driving in streets reserved for public transport, breaking speed limits, and running red lights. LPR is implemented as a real-time application and relies on advanced digital image processing techniques: the characters of the licence plate are recognised by pattern recognition. Many earlier applications and research systems placed restrictions on their working conditions, limiting them to indoor scenes, fixed backgrounds, fixed illumination, a single type of licence plate, particular driveways, limited vehicle speeds, or a narrow range of distances between the camera and the car. Varying lighting conditions and changing backgrounds in outdoor scenes are two factors that influence the quality of the acquired image and the complexity of the technique needed. Outdoor conditions involve changes in weather and illumination and passing objects (e.g. vehicles, clouds, and overpasses). The camera itself can also introduce variation when it zooms or moves. A dynamic scene image may contain several licence plates or no licence plate at all; plates also appear at different sizes, positions, and orientations, against simple or very complex backgrounds. All of this makes detecting the licence plate a quite challenging task for an LPR system.
According to the security journal literature, LPR systems can also be used in e-government, where they are applied for vehicle identification. In recent years a large number of cameras have been installed by governments and police forces, and these cameras record a great many images. If the police install an LPR system, a network can connect the cameras directly to the police office: when an image is captured it is sent straight to the office, where the LPR system identifies the number plate along with the capture time, area, and other information. LPR systems therefore offer e-government great convenience and efficiency. The next section, Background, describes the methods used to implement character recognition.
1.2 LPR Advantages and Applications:
One advantage of licence plate recognition is that every car is already required to carry its identification: the registration plate. Every car is registered with the government, which holds a database of all vehicles, so there is no need to fit a transmitter or special sign to the vehicle. Another advantage is that the driver's photograph can be captured, stored, and retrieved as evidence in criminal investigations. LPR has many applications, from controlling car park entry to detecting stolen cars. In a car park, the LPR system compares the recognised number plate against a list of authorised users and allows entry if a match is found; the recognised plate and the time of entry are stored in the database. When the user leaves the car park, the plate is read again and the system calculates both the entry and exit times. If a user loses the ticket for a long-stay car park, the LPR system can recover the lost ticket by checking the actual entry time. In border control, LPR is used to double-check the car's licence plate: each vehicle is registered in a central database with full information about the car and the driver, and all border crossings can be recorded automatically for law enforcement. LPR is also used for directing traffic at organisation property entrances, where the number plate of an entering vehicle is read and the vehicle is directed to the appropriate lane according to the user rights specified in the database.
More complex LPR systems can also monitor traffic on the roads to detect stolen cars or cars with unpaid fines. After the number plate is recognised, it is compared against a database list, and if a match is found the authorities are notified. The database contains the registration numbers of vehicles linked to illegal activities and can be updated in real time, providing effective assistance to the authorities. For this purpose, a camera is installed in every lane to photograph passing cars.
The licence plate recognition system in this project consists of three main stages: licence plate extraction, character segmentation, and character recognition, as shown in figure 1.1. Licence plate extraction covers both detecting the licence plate and extracting it from the image. Extraction is the process of determining the attributes and properties associated with a region or object, operating mainly on the abstracted image information obtained through segmentation. For character recognition, an image processing algorithm generates a set of character attributes from the binary image, presenting the features needed to recognise the desired character. In this project most of the work is based on the segmentation concept: plate extraction and character segmentation. The captured source image is a colour image, and reflections may occur in it depending on the lighting and illumination conditions. The input to plate extraction is an RGB image, and the idea is to extract the plate features for the subsequent segmentation process. The licence plate region can be extracted because the combination of colours in the plate (background) and characters (foreground) is unique and occurs only in the number plate region. In the plate extraction algorithm, the yellow region is first found using the CIE XYZ colour space. Next, the yellow region of the binary image is dilated; dilation identifies the yellow region better than the previous binary image by grouping the yellow pixels into separate filled components and emphasising the separation between them, which is useful for the next step.
The image is then separated into connected components, and the method chooses the region with the maximum area as the detected licence plate region, returning a rectangular area with minimum spacing around the detected yellow region. The Radon transform is used to deskew the licence plate into a horizontal position without losing any characters: it returns the angle of the most visible line, and the licence plate region generally contains parallel lines at the same angle as the plate. Using the horizontal pixel sums, the licence plate is then extracted. Equalisation and quantisation yield a greyscale image with improved contrast between the digits and the background, which gives better performance in the binarisation step, which in turn uses an adaptive threshold. Finally, noise is removed from the plate image. Character segmentation is then applied to the number plate to outline the individual characters; accurate segmentation gives high recognition accuracy, whereas an inaccurate character contour leads to errors in the recognition stage and may cause recognition to fail entirely. This method focuses on isolating the character images: the characters are isolated using connected component labelling and then passed directly to the recognition stage. The recognition process does not attempt to group characters of similar shape together; it moves directly to recognising each isolated character through its image shape features. The final stage of licence plate recognition is character recognition, in which every character must be recognised. Template matching is used to match the characters.
In template matching, sub-images of all the characters are created; these are the template images. After character segmentation, all the characters of the licence plate image are brought to the same size as the templates. Cross-correlation is used to match each character against the templates; correlation is an effective technique for character recognition, developed by Horowitz. The correlation method compares an image against a set of known images of the same size, and the pair with the highest correlation produces the best match. The cross-correlation function measures the similarity, or shared properties, between two sets of data. Finally, the system displays all the characters of the licence plate image in the output.
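As a sketch of this matching step, the normalised cross-correlation between a segmented character and each template can be computed and the best-scoring template chosen. This is only an illustrative Python/NumPy implementation under stated assumptions: the template set, image sizes, and function names are not the report's actual code.

```python
import numpy as np

def normalized_cross_correlation(img, template):
    """Correlation score between two equal-size character images."""
    a = img.astype(float) - img.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def recognize(char_img, templates):
    """Return the label of the template with the highest correlation."""
    scores = {label: normalized_cross_correlation(char_img, t)
              for label, t in templates.items()}
    return max(scores, key=scores.get)
```

For example, with two toy 5x5 templates for 'I' (a vertical bar) and 'O' (a ring), a segmented 'I' correlates perfectly with its own template and poorly with the other, so `recognize` returns 'I'.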
1.4 Image Acquisition:
There are several ways of acquiring a photograph of a car and transferring it to a computer for further processing:
- Capturing the image by using the Digital Camera.
- Capturing the image by using the conventional analogue camera and scanner.
- Recording the image by using a video camera.
The second method is not appropriate for licence plate recognition, because an analogue camera and scanner do not give a good-quality image of a moving object. The third option is the one used in real life: a real-time system records a video sequence with a camera connected to a computer for further processing. For this project we use the first method, capturing the image at high resolution.
The image is captured with a digital camera, whose sensor is arranged as a 2-D array. In this array format, numerous electromagnetic (and a few ultrasonic) sensing elements are arranged; this is the predominant arrangement found in digital cameras. The camera contains CCDs (Charge-Coupled Devices), which are manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000x4000 elements or more. Digital cameras come with CCD sensors, and CCD sensors are widely used in light-sensing instruments. The image captured by a digital camera has a certain megapixel (MP) resolution.
Whatever the lighting conditions when the image is captured, the digital camera delivers it in JPEG format (Joint Photographic Experts Group).
1.5 Joint Photographic Experts Group (JPEG):
The JPEG group was organised in 1986, and the standard was approved as ISO 10918-1 in 1994. The JPEG standard defines how an image is compressed into a stream of bytes and decompressed back into an image, and JPEG compression is used in several image file formats. JPEG is the most common image format used by digital cameras and other photographic capture devices, and JPEG/JFIF is a common format for storing and transmitting photographic images on the World Wide Web. The differences between these JPEG variants are often not well known, and they are all simply called JPEG.
The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and colour, which also suits web usage, where bandwidth matters. JPEG is the most common format for digital cameras. It is not well suited to line drawings or textual and iconic graphics, where the sharp contrast between adjacent pixels causes visible artefacts; such images are better stored in a lossless graphics format such as TIFF, PNG, or GIF. Image quality is usually lost when an image is decompressed and recompressed, particularly if the image is cropped or shifted or if the encoding parameters are changed. To avoid degrading the quality, an image that is to be modified should be saved in a lossless format such as PNG.
1.6 Digital Image Representation:
An image may be defined as a two-dimensional function, f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point. The term grey level is often used to refer to the intensity of monochrome images. Colour images are formed by a combination of individual 2-D images. For example, in the RGB colour system, a colour image consists of three individual component images (red, green, and blue). For this reason, many of the techniques developed for monochrome images can be extended to colour images by processing the three component images individually.
An image can be continuous with respect to the x- and y-coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates, as well as the amplitude, be digitised. Digitising the coordinate values is called sampling; digitising the amplitude values is called quantisation. Thus, when x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
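The two digitisation steps can be made concrete with a short NumPy sketch (an illustration only; the function names and the choice of levels are assumptions): sampling keeps every k-th pixel, and quantisation maps the amplitudes into a smaller number of grey levels.

```python
import numpy as np

def sample(image, step):
    """Spatial sampling: keep every `step`-th pixel in each direction."""
    return image[::step, ::step]

def quantize(image, levels):
    """Amplitude quantisation: map values in [0, 255] to `levels` grey levels."""
    width = 256 / levels   # width of each quantisation bin
    # floor into a bin, then map to the bin's midpoint
    return (np.floor(image / width) * width + width / 2).astype(np.uint8)
```

For a 16x16 ramp image of values 0-255, `sample(img, 4)` gives a 4x4 image, and `quantize(img, 4)` leaves only four distinct grey levels.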
1.7 Layout of the Licence Plate:
Licence plates use different formats in different countries, such as a white plate with black characters or a yellow plate with black characters; some countries, such as China, use a blue plate with white characters. The letters and numbers on the plate also differ: different countries use different arrangements of alphabetic characters and digits. In the United Kingdom, the licence plate consists of four letters and three numbers, all written in black on white and yellow plates.
2.1 Plate Extraction:
Before segmenting the characters of the licence number plate in the image, the licence plate must first be extracted. The plate region can be extracted because the combination of colours in the plate (background) and the characters (foreground) is unique and occurs only in the number plate region. In the plate extraction algorithm, the yellow region is first found using the CIE XYZ colour space. The yellow region extraction algorithm returns a binary image in which the yellow pixels are set; in this binary image, white pixels mark the yellow region. Next, the yellow region of the binary image is dilated; dilation identifies the yellow region better than the previous binary image. The algorithm continues by grouping the yellow region into separate filled components and emphasising the separation between them, which is useful for the next step. The image is then separated using connected components, and the method chooses the region with the maximum area as the detected licence plate region, returning a rectangular area with minimum spacing around the detected yellow region.
In the next step, after detecting the licence plate with a minimum spacing around it, the Radon transform is applied to find lines in the image and return the angle of the most visible one; the licence plate region generally contains parallel lines at the same angle as the plate. The algorithm thus determines the angle of the plate and rotates the licence plate into a horizontal position without losing any characters; using the horizontal projection, the plate features can then be extracted. The complete plate extraction method is shown in the block diagram of figure 2.2.
2.1.1 Yellow Region Extraction:
“In 1666, Isaac Newton performed his famous experiment showing that white light can be separated by a prism to form a strip of light, named the visible spectrum.” The visible spectrum includes all the visible colours, ranging from red, orange, yellow, and green through blue to violet. These spectral colours are the basic components of white light. More generally, light is a form of electromagnetic radiation and can be described in terms of its wavelength (λ). The visible spectrum ranges from about 380 nm to 780 nm, a very small part of the electromagnetic spectrum.
Colour science has developed several branches concerned with colour stimuli, such as radiometry, colorimetry, photometry, psychophysics, and colour vision. Colour science is a wide-ranging discipline concerned with how a human being perceives colour. Colorimetry is used in many colour industries and has recently become increasingly important in image processing due to the field's widespread development. Achromatic (void of colour) light is characterised only by its intensity; it is the light a viewer sees in black and white. Chromatic light spans the electromagnetic spectrum from approximately 400 nm to 700 nm, and three basic quantities describe a chromatic light source: radiance, luminance, and brightness. Radiance is the total amount of energy that flows from the light source, measured in watts. Luminance is the amount of energy an observer perceives from the light source. Brightness is practically impossible to measure, yet it is one of the key factors in describing colour sensation.
Consider how the human eye is responsible for colour vision: detailed experiments show that the 6 to 7 million cones in the human eye can be divided into three principal sensing categories. Approximately 65% of all cones are sensitive to red (R), 33% to green (G), and 2% to blue (B); the average absorption curves are shown in figure 2.3a. Because of this absorption behaviour, colours are seen as variable combinations of the so-called primary colours red (R), green (G), and blue (B).
In 1931, the Commission Internationale de l'Eclairage (CIE) designed three primaries, X, Y, and Z, to replace red, green, and blue (RGB) so that all colours could be matched with positive weights. To see how the three RGB primaries are replaced by CIE XYZ: the CIE standard assigns specific wavelength values to the three primary colours, B = 435.8 nm, G = 546.1 nm, and R = 700 nm, set according to the experimental curves in figure 2.3a. These fixed RGB components acting alone, however, cannot generate all spectrum colours, which is why the XYZ primaries were introduced.
The standard primaries (RGB), mixed in various intensity proportions, can produce a wide range of colours. Adding pairs of primaries produces the secondary colours of light: magenta (red + blue), cyan (green + blue), and yellow (red + green). Mixing primary and secondary light in the right intensities produces white light.
The characteristics generally used to distinguish one colour from another are brightness, hue, and saturation. As discussed earlier, brightness embodies the achromatic notion of intensity. Hue is associated with the dominant wavelength in a mixture of light waves; it represents the dominant colour as perceived by an observer (hues include yellow, red, orange, etc.). Saturation refers to relative purity, or the amount of white light mixed with a hue. Pure spectrum colours are fully saturated, while colours such as lavender (violet mixed with white) have low saturation.
Hue and saturation taken together are called chromaticity, so a colour may be characterised by its brightness and chromaticity. The amounts of red, green, and blue needed to form a particular colour are called the tristimulus values and are denoted X, Y, and Z respectively. A colour is then specified by its trichromatic coefficients, defined as
x = X / (X + Y + Z) ...... [2.1]
y = Y / (X + Y + Z) ...... [2.2]
z = Z / (X + Y + Z) ...... [2.3]

From the above equations, x + y + z = 1 ...... [2.4]
For any wavelength of light in the visible spectrum, the tristimulus values needed to produce the colour corresponding to that wavelength can be obtained.
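Equations 2.1-2.4 can be checked with a few lines of Python (an illustrative sketch; the function name is an assumption):

```python
def trichromatic_coefficients(X, Y, Z):
    """Trichromatic coefficients x, y, z (equations 2.1-2.3)."""
    total = X + Y + Z
    return X / total, Y / total, Z / total
```

By construction the three coefficients sum to 1, which is equation 2.4.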
In the CIE system, colours are specified using the chromaticity diagram shown in figure 2.3a. The chromaticity diagram shows colour composition as a function of x (red) and y (green); for any values of x and y, the value of z (blue) is obtained from equation 2.4, i.e. z = 1 - (x + y). The diagram traces the spectrum colours from violet at 380 nm to red at 780 nm around a tongue-shaped boundary, with standard CIE white light at the centre. Points on the boundary are fully saturated, and as white light is added to a colour it becomes less saturated.
The input image of the car is in JPEG format, which carries its colour information in RGB. The values in the image range from 0 (unsaturated) to 255 (fully saturated). If the image is calibrated to the RGB primaries, a simple matrix conversion can be used to obtain the XYZ tristimulus values. First the RGB values are normalised by 255 so that they lie in the range 0 to 1; the following matrix then performs the conversion.
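A hedged sketch of this conversion in Python with NumPy follows. The report's own conversion matrix appears only as a figure and is not reproduced here, so the widely used linear-sRGB (D65) RGB-to-XYZ matrix is substituted as an assumption; the function name is also illustrative.

```python
import numpy as np

# Standard linear-sRGB (D65) RGB -> XYZ matrix (an assumption: the report's
# own matrix figure is not reproduced in this text).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(image):
    """Convert an H x W x 3 RGB image with values 0-255 to CIE XYZ."""
    rgb = image.astype(float) / 255.0   # normalise to [0, 1]
    return rgb @ RGB_TO_XYZ.T           # per-pixel matrix multiply
```

As a sanity check, a pure white pixel (255, 255, 255) maps to a Y (luminance) value of 1.0 with this matrix.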
Using the above matrix, the RGB image is converted to CIE XYZ. The ranges of yellow colours found in an RGB colour image under different lighting conditions are shown in the table below.
Colour name        Red   Green   Blue
Yellowish orange   231   224     0
Yellow             234   231     94
Greenish yellow    235   233     0
Yellow green       185   214     4
Yellowish green    170   209     60
The binary image shows the yellow region identified according to the range of possible colour combinations in the table above.
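A minimal sketch of this yellow-region thresholding is given below in Python with NumPy. The bounds are assumptions derived loosely from the shades in the table above, and the function name is illustrative; the report's actual implementation works in CIE XYZ rather than directly on RGB bounds.

```python
import numpy as np

def yellow_mask(image):
    """Binary mask: 255 where a pixel falls in an assumed yellow RGB range."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # bounds chosen so every yellow shade in the table above is accepted
    yellow = (r >= 170) & (g >= 200) & (b <= 100)
    return np.where(yellow, 255, 0).astype(np.uint8)
```

For example, a pixel with RGB (234, 231, 94) ("Yellow" in the table) is set to 255 in the mask, while a pure blue pixel is set to 0.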
2.1.2 Morphological Operation:
Morphological operations are tools for extracting image components that are useful for representing and describing region shape. There are two basic forms of morphological image processing: binary and greyscale. Binary morphological processing operates on binary images; in this project the output of the yellow region extraction is a binary image, shown in figure 2.4.
Binary morphological processing works much like a spatial convolution group process. Spatial convolution computes a resulting pixel brightness value based on the spatial frequency activity of neighbouring pixels. Spatial frequency describes how brightness transitions change across the image: rapid brightness transitions correspond to high spatial frequencies, and slowly changing transitions to low spatial frequencies.
Spatial convolution is a method of calculating what is happening with the brightness around the pixel being processed; it is a mathematical method used in signal processing. The brightness of each output pixel depends on a group of input pixels surrounding the pixel being processed. Using the brightness of the centre pixel's neighbours, spatial convolution measures the spatial frequency activity in the area, computing the output pixel brightness as a weighted average of the input pixel and its immediate neighbours. Morphological processing, instead of multiplying pixel brightnesses by weights and summing the results as spatial convolution does, uses set theory operations such as intersection (AND), union (OR), and complement (NOT) to combine the pixels logically into the resulting pixel value.
The input image for the subsequent operation (dilation) is a binary image, composed of pixels that take one of two brightness values, black (0) or white (255), and the output image is also binary. Like spatial convolution, the morphological process moves across the input image pixel by pixel, placing the resulting pixel in the output image. At each input pixel location, the pixel and its neighbours are logically compared against a structuring element, or morphological mask, to determine the output pixel's logical value. The structuring element is analogous to the convolution mask used in spatial convolution; it is generally a square, rectangle, or diamond of size 3x3, 5x5, or greater.
In this project, the structuring element used for the dilation operation is diamond-shaped. The diamond structuring element is shown in figure 2.5; the distance from the origin of the structuring element to each point of the diamond is a non-negative scalar integer.
The two most fundamental morphological operations are erosion and dilation. Dilation expands the size of objects relative to their background, while erosion is the inverse of dilation and uniformly reduces the size of objects. Erosion and dilation are used to eliminate small image features such as noise spikes and ragged edges.
The implementation of a binary morphological process is commonly referred to as a hit-or-miss transform. When the mask (structuring element) values match the corresponding input pixel values, the evaluation is a "hit"; otherwise it is a "miss". This is a very convenient way to define numerous morphological operations. With the convention that objects in a binary image are white (1) pixels and the background is black (0) pixels, the generalisation of dilation is as follows.
Consider, for example, a detection mask where
O(x,y) = 0 (black) for a hit
       = 1 (white) for a miss
This mask has the effect of adding a single pixel to the perimeter of every white object. The dilation operation has four input pixel cases to consider:
Input pixel = 0, neighbours = 0 → hit; O(x,y) = 0
Input pixel = 1, neighbours = 1 → miss; O(x,y) = 1
Input pixel = 1, neighbours = mix of 0 & 1 → miss; O(x,y) = 1
Input pixel = 0, neighbours = mix of 0 & 1 → miss; O(x,y) = 1
Performing the dilation operation on a binary image makes white objects grow in size. Small features are enlarged, exaggerating their shapes; if dilation is applied repeatedly, the white objects continue to expand until they fill the whole frame with white pixels. Dilation is useful for joining objects that have been broken apart by clutter and junk in the image. The result of dilating the yellow region is shown in figure 2.6.
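The dilation described above can be sketched in a few lines of NumPy (an illustration, not the report's code): a pixel becomes white whenever any neighbour under the diamond structuring element is white, which is exactly the "miss" cases in the table above producing O(x,y) = 1.

```python
import numpy as np

# 3x3 diamond (cross) structuring element, as used in the plate extraction
DIAMOND = np.array([[0, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]], dtype=bool)

def dilate(binary, selem=DIAMOND):
    """Binary dilation: OR together copies of the image shifted under selem."""
    h, w = binary.shape
    padded = np.pad(binary.astype(bool), 1)
    out = np.zeros((h, w), dtype=bool)
    for dy in range(3):
        for dx in range(3):
            if selem[dy, dx]:
                out |= padded[dy:dy + h, dx:dx + w]
    return out.astype(np.uint8)
```

A single white pixel in a black image grows into the five-pixel diamond shape after one dilation, illustrating how white objects expand.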
2.1.3 Plate Selection:
The dilation operation allows the yellow region to be identified better, ignoring, for example, holes (small dots in the image). The dilated yellow regions of the plate are then separated into connected components, accentuating the separation between them, which is very useful for the subsequent steps.
Next, a connected component algorithm is applied to the output binary image: the dilated binary image is decomposed into 4- or 8-connected components (described briefly in section 2.3.1). A filter then removes small components and keeps only the components that are required, according to specified limits. The algorithm depends on the bounding box and the area of the white pixels, so it is very important to compute an accurate bounding box along with the area.
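The component labelling and largest-area selection can be sketched as follows. This is a simple 4-connected flood-fill labelling in plain Python/NumPy for illustration only; the function names are assumptions and a real system would use an optimised labelling routine.

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labelling of a binary image via flood fill."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not labels[y, x]:
                count += 1
                labels[y, x] = count
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

def largest_component_bbox(binary):
    """Bounding box (top, left, bottom, right) of the largest white region."""
    labels, n = label_components(binary)
    if n == 0:
        return None
    areas = [(labels == k).sum() for k in range(1, n + 1)]
    best = int(np.argmax(areas)) + 1
    ys, xs = np.nonzero(labels == best)
    return ys.min(), xs.min(), ys.max(), xs.max()
```

Given an image containing a 2x2 blob and an isolated single pixel, the bounding box of the 2x2 blob is returned, mirroring how the plate region with maximum area is selected.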
2.1.4 Edge Detection:
Edge detection is very important for the Radon transform: its main purpose here is to find the edges of the licence plate so that the angle of the rectangular region can be detected by the Radon transform from the horizontal and vertical lines. Edge detection is the most common process for detecting significant discontinuities, i.e. abrupt changes in intensity. In image processing such discontinuities are detected with the first-order derivative, the gradient.
The gradient of a 2-D function f(x,y) is defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ ...... [2.5]
The magnitude of this vector is

∇f = mag(∇f) = [Gx² + Gy²]^(1/2) ...... [2.6]
which is often approximated using absolute values:

∇f ≈ |Gx| + |Gy| ...... [2.7]
These approximations still behave like derivatives: they are zero in areas of constant intensity, and their values are proportional to the degree of intensity change in areas whose pixel values vary.
The gradient vector points in the direction of the maximum rate of change of f at coordinates (x,y). The angle at which this maximum rate of change occurs is

α(x,y) = tan⁻¹(Gy/Gx) ...... [2.8]
In this project the Canny edge detector was used; it is a very powerful edge detector, provided by the edge function. The image is first smoothed with a Gaussian filter. The local gradient (equation 2.7) and edge direction (equation 2.8) are then computed at each point. An edge point is a point whose gradient strength is a local maximum in the direction of the gradient; these points give rise to ridges in the gradient magnitude image. The algorithm then sets to zero all pixels that are not on top of a ridge, leaving only thin lines in the output, a process known as nonmaximal suppression. Finally, the ridge pixels are thresholded using two thresholds.
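The gradient computation of equations 2.7 and 2.8 can be sketched with Sobel operators in NumPy. This is only the gradient stage, not the full Canny pipeline (no Gaussian smoothing, nonmaximal suppression, or hysteresis); the helper names are assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def convolve2d(image, kernel):
    """Minimal 'valid' 2-D correlation used for the Sobel responses."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + h, x:x + w] * kernel).sum()
    return out

def gradient(image):
    """Magnitude |Gx| + |Gy| (eq. 2.7) and direction (eq. 2.8)."""
    gx = convolve2d(image.astype(float), SOBEL_X)
    gy = convolve2d(image.astype(float), SOBEL_Y)
    return np.abs(gx) + np.abs(gy), np.arctan2(gy, gx)
```

On a vertical step edge the magnitude peaks along the edge while the direction is 0 (the gradient points horizontally), matching the behaviour described by equations 2.7 and 2.8.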
2.1.5 Radon Transform:
Because of the position and orientation of the licence plate with respect to the camera, the plate may appear skewed in the image, so the skew must be corrected. To achieve this, the Radon transform determines the angle of the plate edges with respect to the horizontal and vertical for the given input edge image. The Radon transform is used to find lines in the image; it returns the angle of the most visible line and a matrix representation of the strongest lines in the image.
The Radon transform is an important concept in integral geometry, which deals with the problem of expressing a function on a manifold in terms of its integrals over certain sub-manifolds. The Radon transform maps a two-dimensional image containing lines into the domain of possible line parameters; each line in the image produces a peak positioned at the matching line parameters. A projection of a two-dimensional function f(x,y) is a set of line integrals in a certain direction. The radon function computes these line integrals from multiple sources along parallel paths, or beams, in a given direction; the beams are spaced one pixel unit apart. To represent an image, the radon function takes multiple parallel-beam projections of the image from different angles by rotating the source around the centre of the image; a single projection at a specified rotation angle is shown in figure 2.8.
The two-dimensional line integral of f(x,y) in the vertical direction is the projection of f(x,y) onto the x-axis, and the two-dimensional line integral in the horizontal direction is the projection of f(x,y) onto the y-axis; figure 2.9 shows the horizontal and vertical projections of a simple two-dimensional function.
Figure 2.9 The horizontal and vertical projections of a simple two-dimensional function (left) and the method of the Radon transform (right)
By calculating the projection of a function along any angle θ (0° to 180°), the Radon transform can be performed at any angle. Figure 2.9 illustrates the theory of the Radon transform. The formula of the Radon transform is given in equation 2.9:
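For reference, the standard form of the Radon transform, which equation 2.9 is presumed to state, expresses the projection as a line integral selected by a delta function:

```latex
% Standard Radon transform of f(x, y): the delta function selects the
% line at angle \theta and signed distance \rho from the origin.
R(\rho, \theta) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
  f(x, y)\, \delta(x\cos\theta + y\sin\theta - \rho)\, dx\, dy
```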
The Radon transform was used to draw parallel lines along the horizontal edges of the licence plate, detect their angle relative to the horizontal, rotate the image by that angle, and then crop the licence plate according to the parallel lines. The deskewed licence plate image is shown in figure 2.10; cropping the licence plate from the image is discussed in the next section.
2.1.6 License Plate Crop:
To extract the licence plate from the deskewed image, a segmentation process was used: the horizontal and vertical projections were obtained by summing the pixel intensities over the image rows and columns, and the result was then convolved with a minimum-size filter to crop the licence plate. The projection analysis is discussed briefly in the next section.
2.1.6.1 Projection Analysis:
In the deskewed image, the licence number plate lies in a horizontal position, so the plate is revealed by the horizontal projection: the image is decomposed into segments at the breaks of the horizontal projection.
In a binary image, the projection onto a line is obtained by partitioning the line into bins and counting the 1-valued pixels that lie on lines perpendicular to each bin. Projections are a compact representation of an image, since the most useful information is retained in the projection. Projections are not unique, in the sense that more than one image can have the same projection. The horizontal and vertical projections can be easily obtained by counting the number of 1-pixels for each bin in the vertical and horizontal directions.
Vertical projection V[i]: the vertical projection represents the column sums of the image; the vertical projection histogram is the frequency of white pixels in each column of the normalised licence plate.
Horizontal projection H[i]: the horizontal projection represents the row sums of the image; the horizontal projection histogram is the frequency of white pixels in each row of the normalised licence plate.
The projection H[i] along the rows and the projection V[j] along the columns of a binary image B are given by H[i] = Σj B[i, j] and V[j] = Σi B[i, j].
A general projection onto any line can also be defined. The first moment of an image is equal to the first moment of its projection, so calculating the position of an object requires only the first moments: the position can be computed from the horizontal and vertical projections.
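The row and column projections, and a simple projection-based crop, can be sketched as follows. This is a minimal sketch: the min_count parameter and the bounding-box cropping rule are illustrative assumptions, not the report's exact procedure.

```python
import numpy as np

def projections(binary):
    """Horizontal and vertical projections of a binary plate image:
    H[i] counts the 1-pixels in row i, V[j] counts them in column j."""
    b = (binary > 0).astype(int)
    H = b.sum(axis=1)   # one bin per row
    V = b.sum(axis=0)   # one bin per column
    return H, V

def crop_by_projection(binary, min_count=1):
    """Hypothetical crop: keep the bounding rows/columns whose projection
    reaches min_count, as a stand-in for the segmentation step."""
    H, V = projections(binary)
    rows = np.nonzero(H >= min_count)[0]
    cols = np.nonzero(V >= min_count)[0]
    return binary[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

The breaks in H and V mark where the plate (and later the individual characters) begin and end.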
The method computes the sums of the rows and of the columns in the image, obtaining one vector for each direction, as shown in figures 2.11 and 2.12.
For each of these two directions, it finds the first point, starting from the left and the right side of the vector respectively, that is greater than or equal to the average of the vector, thus obtaining a rectangle to be used for cropping the licence plate.
The image obtained at the preceding step is very close to the real contours of the licence plate. This step eliminates local noise at the border of the plate, thus obtaining a precise licence plate contour, as shown in figure 2.13.
2.1.7 Plate Quantization and Equalization:
First, the image intensity values are adjusted: the histogram of the image is computed and the adjustment limits “lower” and “higher” are determined; then the values in the supplied intensity image are mapped to new values such that values between lower and higher map to values between 0 and 1, values below lower map to 0, and values above higher map to 1.
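This imadjust-style mapping can be sketched as follows (a minimal sketch; lower and higher are assumed to be given on the same scale as the image):

```python
import numpy as np

def adjust_intensity(gray, lower, higher):
    """Map values so [lower, higher] stretches linearly to [0, 1];
    values below lower clip to 0 and values above higher clip to 1."""
    g = gray.astype(float)
    out = (g - lower) / (higher - lower)
    return np.clip(out, 0.0, 1.0)
```

For example, with lower = 50 and higher = 150, an input value of 100 maps to the midpoint 0.5, while 0 and 200 clip to 0 and 1.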
Then, the optimal adaptive threshold corresponding to the intensity image obtained by the above mapping is found. If the histogram of the image is purely bimodal, the threshold takes a value in the middle of the valley between the two modes (the logical choice). In more difficult cases, when the modes overlap, the threshold minimizes the error of interpreting background pixels as object pixels, and vice versa. This algorithm is a simplified version of a more complex statistical method, offering good results and normally requiring a reduced number of iterations:
Initially, each corner of the image is assumed to contain background pixels. This provides an initial threshold, calculated as the mean of the gray levels contained in the corners, where the width and height of each corner are a tenth of the image's width and height, respectively.
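The iterative threshold described above can be sketched as follows. This is a sketch of the corner-initialised iterative scheme; the 0.5 convergence tolerance is an assumption made here.

```python
import numpy as np

def adaptive_threshold(gray):
    """Iterative threshold: start from the mean gray level of the four
    corners (each a tenth of the width/height, assumed background), then
    repeatedly split the pixels into background/object classes and reset
    the threshold to the midpoint of the two class means."""
    h, w = gray.shape
    ch, cw = max(1, h // 10), max(1, w // 10)
    corners = np.concatenate([
        gray[:ch, :cw].ravel(), gray[:ch, -cw:].ravel(),
        gray[-ch:, :cw].ravel(), gray[-ch:, -cw:].ravel()])
    t = corners.mean()
    while True:
        back = gray[gray <= t]          # pixels classed as background
        obj = gray[gray > t]            # pixels classed as objects
        new_t = 0.5 * (back.mean() + obj.mean())
        if abs(new_t - t) < 0.5:        # assumed convergence tolerance
            return new_t
        t = new_t
```

On a bimodal image the loop converges in very few iterations, which matches the "reduced number of iterations" claimed above.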
The result of the adjustment discussed above is shown in figure 2.14, alongside the input gray scale image.
Finally, the method computes the binary image shown in figure 2.15, obtained by applying the adaptive threshold from the above step.
In order to perform character recognition, we must be able to distinguish the digits from the background inside the plate. Equalization and quantization make it possible to obtain a gray scale image with improved contrast between digits and background, and thus better performance for the binarization process, which moreover uses an adaptive threshold. This is crucial for the character recognition process.
2.1.8 Hough Transform:
The Hough transform (HT) is named after Paul Hough, who originally patented the method in 1962. It is a powerful global method for detecting lines and edges: to detect a line or another boundary such as a curve or shape, the image is transformed from Cartesian space into a parameter space.
The Hough transform is a technique for recognising shapes, and for motion analysis, in images that are noisy, contain irrelevant data, or are missing some information. The Hough transform algorithm is based on parameter extraction: when detecting a shape, the relative positions and orientations of the essential points of the shape are captured by a set of parameters that distinguishes a particular instance of the shape. The Hough transform identifies specific values for these parameters, and the image points of the shape are then found by comparison with the predictions of the model instantiated at the identified parameter values.
Images are used in many different areas, such as object recognition, where it is important to reduce the amount of data in the image while maintaining the important characteristics; in this project the rectangular licence number plate is the important information, and the other data can be removed. Edge detection, discussed in the previous section, already removes a large amount of data. The Hough transform was mainly developed for recognising lines and was later generalised to cover arbitrary shapes.
The Hough transform is a global method for finding straight lines hidden in larger amounts of other data, and finding lines is an important technique in image processing. To detect lines in the image of the licence number plate, the image must first be binarised using some form of thresholding, as shown in an earlier section. In the licence plate images, the edges of the plate appear as lines; after binarisation and edge detection, the rectangle of the plate can be clearly observed. In the Hough space, each point (d, t) corresponds to the line at angle t and distance d from the origin of the data space, and the value of the Hough-space function gives the point density along that line in the original data space. For each point of the original data space, all the lines that pass through that point are considered at a discrete set of angles, chosen a priori. For each angle t, the distance d of the line through the point at that angle is calculated and discretised using the a priori chosen discretisation. Making a corresponding discretisation of the Hough space gives a set of boxes, called the Hough accumulator. For each line considered above, the accumulator cell at the point (d, t) is incremented, starting from zero. After all the lines through all the points have been considered, a high value in the Hough accumulator corresponds to a line through many points, and the angle of that line is obtained as well. Finally, rotating by this angle rectifies the licence number plate.
2.1.8.1 Line detection using Hough transforms:
The Hough transformation is a standard tool in image analysis that allows recognition of global patterns in the original image space through recognition of local patterns in a transformed parameter space. It is particularly useful when the patterns one is looking for are sparsely digitised, have “holes”, and the pictures are noisy, as is the case when detecting straight lines in the licence plate. The basic idea of the technique is to find curves, such as straight lines, that can be parameterised in a suitable parameter space. A naive approach to finding all the lines in an image with n points would form a line from every pair of points and then check which of the remaining points lie near each line.
Finding all the lines determined by every pair of points, and then finding all the subsets of points close to each particular line, requires forming n(n−1)/2 ≈ n² lines and performing n · n(n−1)/2 ≈ n³ comparisons of points against lines.
For the Hough transform, consider a point (xi, yi): many lines pass through this point, and all of them satisfy the slope-intercept equation for some values of a and b.
yi = a·xi + b        (2.15)
b = −xi·a + yi        (2.16)
Rewriting equation 2.15 gives equation 2.16: in the ab-plane (also called the parameter space), equation 2.16 defines a single line for a fixed pair (xi, yi). Thus every image point (xi, yi) has a corresponding line in the parameter space, as shown below.
The image lines can be identified by the parameter-space points where a large number of these parameter-space lines intersect. However, ‘a' represents the slope of the line, and it approaches infinity as the line approaches the vertical direction. This motivates the normal representation of a line,

x·cos θ + y·sin θ = ρ        (2.17)

where the parameter θ is the angle between the normal of the line and the x-axis, and the parameter ρ is the perpendicular distance between the line and the origin. A point in the image corresponds to a curve in the parameter space of the form shown in equation 2.17: each image point traces a sinusoidal curve in the ρθ-plane.
Figure 2.17 Example Image and corresponding Hough Transform
In figure 2.18, the range of θ is −90° to 90°, and for an image with a resolution of w×h the range of ρ is 0 to √(w² + h²). The figure shows two points (x1, y1) and (x2, y2); the point of intersection of their curves in the parameter space gives the parameters of the dashed line through the two points in the original image.
The main goal of the Hough transform is to identify the points in parameter space where a large number of curves intersect, since each such point corresponds to an equal number of collinear points in the original image. The problem is therefore to quantize the parameter space; the quantized cells are called accumulator cells, and each cell corresponds to a single line in the image, such as an edge of the rectangular licence plate region.
In the Hough transform algorithm, the accumulator array is first cleared to zero; then, for each point in the edge image, the algorithm iterates over all possible values of θ and computes ρ using equation 2.17; finally, for each computed (ρ, θ) value, the corresponding accumulator cell is incremented by one. A higher-resolution accumulator array yields more accurate results, but it requires more memory and increases the processing time. In practice the full accumulator is not used here, because only horizontal and vertical lines are considered, so the Hough transform is computed only in a small interval around either 0° or 90°. Since the algorithm iterates over all the points, each accumulator cell ends up containing the number of points on a particular line, so finding the lines is a matter of searching the accumulator array for local maxima.
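The accumulator algorithm just described can be sketched as follows (a straightforward sketch over the full θ range, rather than the small interval around 0° or 90° used in practice; names are illustrative):

```python
import numpy as np

def hough_lines(edge, theta_deg=np.arange(-90, 90)):
    """(rho, theta) Hough accumulator over a binary edge image, using
    rho = x*cos(theta) + y*sin(theta) as in equation 2.17."""
    h, w = edge.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(theta_deg)
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=int)
    ys, xs = np.nonzero(edge)
    for x, y in zip(xs, ys):
        # One rho per theta for this point; increment those cells.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(len(thetas))] += 1
    return acc, theta_deg
```

A local maximum in the accumulator marks a line: its row index gives ρ (offset by diag) and its column index gives θ.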
2.1.8.2 Line Segment Plate Extraction:
The Hough transform detects whole lines in the image, as opposed to line segments. The algorithm is now extended to extract line segments by checking which points in the image contribute to which lines.
Finding the line segments is a matter of searching the accumulator cells: all the points are stored along with the lines they belong to, and the points are then grouped into line segments. Since the order of the points along each line is known, the grouping is done by looping over all the points and checking that consecutive points on the line lie within some maximum distance; either the current line segment is extended to include the new point, or a new line segment is started. In the implementation, the maximum distance was chosen to give the best result for the licence number plate, and a minimum segment length was chosen to discard very short line segments, which also reduces computation.
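The grouping step can be sketched as follows. This sketch assumes the points of each line are already sorted along the line, as stated above; max_gap and min_length stand in for the report's maximum distance and minimum segment length, whose actual values are not given.

```python
import math

def group_into_segments(points, max_gap, min_length=2):
    """Group collinear points (sorted along the line) into segments.

    A new segment starts whenever the gap between consecutive points
    exceeds max_gap; segments shorter than min_length are dropped.
    Each point is an (x, y) tuple; each segment is (first, last) point.
    """
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) <= max_gap:
            current.append(cur)          # extend the current segment
        else:
            if len(current) >= min_length:
                segments.append((current[0], current[-1]))
            current = [cur]              # start a new segment
    if len(current) >= min_length:
        segments.append((current[0], current[-1]))
    return segments
```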
2.2 Character Segmentation:
In licence number plate recognition, once the licence plate has been extracted from the image, as described in the section above, all the characters in the licence plate must be found and recognised. Character segmentation is applied to the licence number plate to outline the individual characters, which gives greater recognition accuracy. If the contour of a character is inaccurate, this leads to errors in the recognition stage, and the character may fail to be recognised.
Most current research on licence number plates deals with segmentation, and this method is used to help in character segmentation. Character segmentation uses information such as the size of the plate, the size of each character, and the size of the interval between the characters in the licence plate.
“Image Segmentation is one of the most important steps leading to analysis of processed image”. The main goal of segmentation is to divide an image into parts that have a strong correlation with objects, for the further matching performed in the recognition stage. Complete segmentation aims at a set of disjoint regions corresponding uniquely to objects (here, characters) in the input image. In partial segmentation, by contrast, the regions do not correspond directly to image objects; the goal is to divide the image into separate regions that are homogeneous with respect to chosen properties such as brightness, colour, reflectivity, or texture. Segmentation methods can be divided into three groups according to the features they employ. The first uses global knowledge about the image and its parts, usually represented by a histogram of image features; the second group is edge-based segmentation; and the third group is region-based segmentation. Many different characteristics may be used in edge detection or region growing, for example brightness, texture, or a velocity field. Each region can be represented by its closed boundary, and each closed boundary describes a region.
Once the licence plate has been binarized at the minimum resolution, cleaned, and had its textual region located, it is ready for segmentation: the individual characters of interest can be extracted and subsequently recognised. The method here focuses on isolating the character images, rather than grouping isolated images of similar shape together so that each class has one (or a few) representatives. The characters are isolated using connected component labelling, after which the method moves directly to recognition of the isolated characters through image shape features, without attempting to group characters of similar shape. Before discussing connected component analysis, some basic definitions for connected components are given.
In a digital image, a pixel is spatially close to several other pixels. Since the pixels of a digital image are represented on a square grid, a pixel shares a common boundary with four pixels and shares a corner with four additional pixels. Two pixels are said to be 4-neighbors if they share a common boundary, and 8-neighbors if they share at least one corner. For example, the pixel at location [i,j] has the 4-neighbors [i+1,j], [i-1,j], [i,j+1], and [i,j-1]; its 8-neighbors include the 4-neighbors plus [i+1,j+1], [i+1,j-1], [i-1,j+1], and [i-1,j-1]. A pixel is said to be 4-connected to its 4-neighbours and 8-connected to its 8-neighbors, as shown in figures 2.19 and 2.20.
Figure 2.19 4-neighbors [i+1,j],[i-1,j],[i,j+1], and [i,j-1]
Figure 2.20 8-neighbors [i+1,j+1],[i+1,j-1],[i-1,j+1],[i-1,j-1] plus all of the 4-neighbours.
A path from the pixel at [i0,j0] to the pixel at [in,jn] is a sequence of pixel indices [i0,j0], [i1,j1], [i2,j2], ..., [in,jn] such that the pixel at [ik,jk] is a neighbour of the pixel at [ik+1,jk+1] for all k with 0 ≤ k < n.
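The neighbourhood definitions and the connected component labelling used to isolate the characters can be sketched as follows (a minimal flood-fill labelling sketch; function and variable names are illustrative):

```python
import numpy as np
from collections import deque

# 4-neighbour and 8-neighbour offsets, matching figures 2.19 and 2.20.
N4 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N8 = N4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def label_components(binary, neighbours=N8):
    """Flood-fill connected component labelling: every group of connected
    1-pixels (each character, ideally) receives its own integer label."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and labels[si, sj] == 0:
                next_label += 1
                labels[si, sj] = next_label
                queue = deque([(si, sj)])
                while queue:
                    i, j = queue.popleft()
                    for di, dj in neighbours:
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and binary[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = next_label
                            queue.append((ni, nj))
    return labels, next_label
```

Note that the choice of connectivity matters: two diagonally touching pixels form one component under 8-connectivity but two components under 4-connectivity.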