Image processing and analysis is an exciting part of modern cognitive and computer science. Interest grew significantly during the 1970s and 1980s, and this progress can be seen in the increasing number of software and hardware products for digital image processing on the market.
Digital image processing dates back to the 1920s, when pictures were first transmitted across the Atlantic Ocean. Scientists and engineers became involved in the transmission of picture information and devised various techniques to enhance the visual quality of the pictures. This marked the beginning of image processing.
An unmanned aerial vehicle (UAV) is widely used for reconnaissance and surveillance because of its low price and compact size and, most importantly, its ability to carry out dangerous missions without a human pilot aboard. The function of a UAV is to collect aerial video and capture images; commonly, a UAV surveys an area from the air by collecting video and image data.
In the 21st century, UAVs provide a platform for intelligent monitoring in application domains ranging from security and military operations to scientific information gathering. Since vision is the primary modality for object detection and recognition, the combination of computer vision and landing control has become the main focus of UAV research.
Objective of the project
The objectives of this project are:
To process the image data from the UAV and improve the image qualities.
To create a graphical user interface (GUI) by using the MATLAB software to make the system more user friendly.
To identify the objects in the image data.
Images sent from the UAV to the user suffer from certain problems, such as difficulty in identifying the objects they contain. The received images are often blurry; possible causes include light, wind and motion in the environment. The user must therefore enhance the image by reducing blur and noise so that the objects in it become identifiable.
Scope of Works
This project involves an understanding of digital image processing and a basic understanding of the UAV, such as how it functions. Within digital image processing, topics such as image file types and the types of filters used must be researched, along with other related topics. The project also requires the MATLAB software; studying the software and working through samples from its website helps to build the necessary knowledge. Extensive reading of journal papers, magazines and other literature is needed to understand the project and to provide references for the report.
Outline of Thesis
This thesis is divided into five chapters. Chapter One gives an overview and a brief introduction of the project. Chapter Two covers the literature review, with details on the unmanned aerial vehicle and on digital image processing. Chapter Three presents the methodology of the project, consisting of the flow chart, design and implementation. Chapter Four discusses and analyses the results obtained. Chapter Five presents the conclusions of the project and recommendations for future development.
2.1 Digital Image Processing
Image processing is any form of information processing for which the input is an image, such as a photograph or a frame of video. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
Digital image processing is the use of computer algorithms to perform image processing on digital images. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing: it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing. Typical operations include geometric transformations such as enlargement, reduction and rotation; colour corrections such as brightness and contrast adjustments, quantization, or conversion to a different colour space; registration (or alignment) of two or more images; and combination of two or more images.
2.2 Image storage format
Digital images are generally stored in bitmap format. A bitmap (bit-mapped image) describes the colour or intensity of each pixel of an image using binary bits, and this information is stored in the computer. Bitmaps can represent fine image detail and effectively reflect changes of brightness and darkness, complicated scenes and colour, producing vivid images. Their disadvantages are that the files are usually large, and that fidelity may be reduced and jagged (sawtooth) edges may appear when the image is zoomed in or out. (Meiqing Wang & Choi-Hong Lai, 200)
Vector graphics are different: they describe an image using lines, points and planes. A vector graph consists of many types of elements obtained from geometrical formulae, which is why vector drawings are usually small files. The advantage of vector graphics is that the image is not distorted when zoomed in or out or when rotated; the disadvantage is that it is difficult to show the rich colour levels of a live image. Images made of shapes, such as illustrations, line drawings, freely zoomable logos and text, are often suited to vector formats.
2.2.1 The Bitmap (BMP) Format
BMP is the abbreviation of bitmap. A BMP file stores an image in bitmap form with the suffix .bmp; the format was developed by Microsoft® and is the standard image format for Windows. It is normally supported by all image-processing software packages running on the Windows operating system. The bitmap array records the colour values in the RGB model at each pixel of the image; if the image is not true colour, a palette is used. (Meiqing Wang & Choi-Hong Lai, 200)
2.2.2 The Graphics Interchange Format (GIF)
GIF is the abbreviation for Graphics Interchange Format, with the file suffix .gif. The format includes some key features that make it common and valuable on the Internet, including a high compression ratio and the storage of multiple images within a single file. However, each pixel is stored in at most 8 bits, so only a maximum of 256 colours can be referenced in a single GIF image. GIF is therefore commonly used for graphics and images with few colours, such as black-and-white photos and buttons. (Meiqing Wang & Choi-Hong Lai, 200)
2.2.3 The RAW format
RAW is usually used to record the electronic levels produced when image sensors transform light signals into electric signals. A RAW file has the suffix .raw, and a typical RAW file contains unprocessed and uncompressed pixel data. Before being processed by common image-processing software, an image in RAW format must be converted to a common image format using the conversion software provided.
A RAW file records only the pixel information, without a header containing details such as the size of the image. RAW files are therefore easy to read into an array or some other data structure for processing.
2.2.4 The Joint Photographic Experts Group (JPEG) format
The most popular format used for image storage and display is the JPEG format, with the file suffix .jpeg. JPEG stands for Joint Photographic Experts Group. A JPEG image file uses the standard JPEG encoding. The compression algorithms of JPEG and BMP files differ: JPEG is a lossy compression algorithm that loses some information on decoding, whereas the BMP format uses run-length encoding, a compression algorithm with less loss. (Meiqing Wang & Choi-Hong Lai, 200)
In computing, a grayscale digital image is an image in which the value of each pixel is a single sample. Displayed images of this sort are typically composed of shades of gray, varying from black at the weakest intensity to white at the strongest, though in principle the samples could be displayed as shades of any colour, or even coded with different colours for different intensities. Grayscale images are distinct from black-and-white images, which in the context of computer imaging have only two colours, black and white; grayscale images have many shades of gray in between. In many contexts of digital imaging, however, the term "black and white" is used in place of "grayscale"; for example, photography in shades of gray is typically called "black-and-white photography". The term monochromatic is in some digital imaging contexts synonymous with grayscale, and in others synonymous with black-and-white.
Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. visible light). Grayscale images intended for visual display are typically stored with 8 bits per sampled pixel, which allows 256 intensities to be recorded, typically on a nonlinear scale. The accuracy provided by this format is barely sufficient to avoid visible banding artefacts, but very convenient for programming. Medical imaging and remote sensing applications often require more levels, to make full use of the sensor accuracy (typically 10 or 12 bits per sample) and to guard against round-off errors in computations. Sixteen bits per sample (65,536 levels) is a popular choice for such uses.
To convert a colour to its closest level of gray, the values of its red, green and blue (RGB) primaries must be obtained. Several formulas can be used for the conversion. One accurate model is the luminance model, which takes a weighted average of the three colour components. The weights are chosen according to the different relative sensitivity of the normal human eye to each primary colour: highest for green and lowest for blue. (Milan Sonka, Vaclav Hlavac & Roger Boyle, 200)
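The weighted conversion can be sketched as follows. This is an illustrative example in Python rather than MATLAB, with a hypothetical function name; the coefficients shown are the common ITU-R BT.601 luma weights, which are also the ones used by MATLAB's rgb2gray.

```python
def rgb_to_gray(r, g, b):
    """Weighted luminance conversion. The weights reflect the eye's
    sensitivity: green contributes most, blue least, and they sum to 1."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure white stays white, pure black stays black:
print(round(rgb_to_gray(255, 255, 255)))  # 255
print(rgb_to_gray(0, 0, 0))               # 0.0
# Green contributes more than blue:
print(rgb_to_gray(0, 255, 0) > rgb_to_gray(0, 0, 255))  # True
```

Because the three weights sum to 1, a neutral grey input maps to the same grey level in the output.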
2.4 Image Noise
Most captured images are affected by some degree of noise, which corrupts the true measurement of the signal in the output image. Image noise can be expressed as:

I(t) = S(t) + N(t)

where I(t) is the output, S(t) is the original data and N(t) is the noise introduced by the environment and other sources.
Figure 2.1 : Signal and Noise
There are many types of noise, and each type produces a different output when the input image is corrupted by it.
2.4.1 Gaussian Noise
The density of Gaussian noise follows a Gaussian (normal) distribution

G(x̄, σ)

which is defined by the mean x̄ and the standard deviation σ. Gaussian noise can also be deliberately added to a volume, which is useful when evaluating filtering or segmentation algorithms.
Figure 2.2: The effect of the standard deviation (σ) on Gaussian noise.
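Adding Gaussian noise to image data for test purposes can be sketched as follows. This is an illustrative Python example, not the project's MATLAB code (which would use imnoise); the function name and parameters are hypothetical.

```python
import random

def add_gaussian_noise(pixels, mean=0.0, sigma=10.0, seed=0):
    """Add Gaussian noise N(mean, sigma) to a list of pixel values,
    clipping each result to the valid 8-bit range [0, 255]."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.gauss(mean, sigma))) for p in pixels]

clean = [128] * 1000
noisy = add_gaussian_noise(clean, sigma=10.0)
# Zero-mean noise leaves the average intensity close to the clean value,
# while individual pixels scatter with spread roughly equal to sigma.
print(abs(sum(noisy) / len(noisy) - 128) < 2)  # True
```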
2.4.2 Salt and Pepper Noise
Salt-and-pepper noise is commonly seen in degraded images: white and black pixels occur at random. A median filter or a contra-harmonic mean filter is an effective way to remove this noise. Salt-and-pepper noise creeps into images in situations where quick transients occur. The figure below shows an example of an image with salt-and-pepper noise. (http://www.mathworks.com/products/image/)
(a) Original image (b) Salt-and-pepper noise
Figure 2.3: The original image and the image corrupted by salt-and-pepper noise.
2.5 Image Filtering

Filtering is an important operation in image processing; most image enhancement work in digital image processing is based on it. Image filtering is used for noise removal, contrast sharpening and contour highlighting, and filters can also be used to tune the contrast and brightness of an image. Some types of filter used in image processing are described below.
2.5.1 Wiener Filtering
The most important technique for removing blur caused by linear motion or unfocused optics is the Wiener filter. From a signal processing standpoint, blurring due to linear motion in a photograph is the result of poor sampling. Each pixel in a digital representation of the photograph should represent the intensity of a single stationary point in front of the camera. Unfortunately, if the shutter speed is too slow and the camera is in motion, a given pixel will be an amalgam of intensities from points along the line of the camera's motion. Wiener filters are by far the most common deblurring technique because they mathematically return the best results. Inverse filters are interesting as a textbook starting point because of their simplicity, but in practice Wiener filters are much more common. It should also be re-emphasized that Wiener filtering is in fact the underlying premise for the restoration of other kinds of blur; being a least-mean-squares technique, it has roots in a spectrum of other engineering applications. (Duane Hanselman, Bruce Littlefield, 2005)
2.5.2 Median Filtering

The median filter considers each pixel in the image in turn and looks at its nearby neighbours to decide whether or not it is representative of its surroundings. Instead of simply replacing the pixel value with the mean of the neighbouring pixel values, it replaces it with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighbourhood into numerical order and then replacing the pixel being considered with the middle value. This method is generally very good at preserving edges.
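The sort-and-take-the-middle procedure described above can be sketched as follows. This is an illustrative Python implementation, not the project's MATLAB code (which uses medfilt2); for brevity, border pixels are left unchanged.

```python
def median_filter_3x3(img):
    """3x3 median filter on a 2-D list of pixel values (borders unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out

# Corrupt a flat grey patch with one "salt" and one "pepper" pixel.
img = [[100] * 5 for _ in range(5)]
img[2][2] = 255   # salt
img[1][3] = 0     # pepper
filtered = median_filter_3x3(img)
print(filtered[2][2], filtered[1][3])  # 100 100 (both outliers removed)
```

The isolated outliers never reach the middle of the sorted window, which is why this filter removes salt-and-pepper noise so well while leaving edges intact.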
2.5.3 Constrained Least Squares Filtering
For Wiener filtering, the power spectra of the undegraded image and of the noise must be known, and a constant estimate of their ratio is not a good route to a good solution. The constrained least squares method requires knowledge of only the mean and variance of the noise, parameters that can usually be calculated from a given degraded image. A notable feature of the algorithm is that it yields an optimal result for each image to which it is applied. It is important to note that these optimality criteria, while satisfying from a theoretical point of view, are not related to the dynamics of visual perception. (http://www.mathworks.com/products/image/)
2.6 Image edge detection
Edge detection is the process of locating and identifying sharp discontinuities in an image, that is, abrupt changes in pixel intensity. A common approach convolves the image with a 2-D filter operator that is sensitive to large gradients and returns values close to zero in uniform regions. Edge detection is affected by factors including edge orientation, edge structure and the noise environment. There are many ways to perform edge detection; the most commonly used methods are: (http://www.aquaphoenix.com/lecture/matlab10/page3.html)
2.6.1 Gradient-based Edge Detection
Gradient-based edge detection locates edges by detecting the maxima and minima of the first derivative of the image. However, these first-derivative extrema are not produced by edges alone: acquisition or reconstruction noise, digitization and spurious local textures also induce undesirable discontinuities in the intensity function.
2.6.2 Laplacian-based Edge Detection
This method searches for zero crossings in the second derivative of the image to find edges. An edge has the one-dimensional shape of a ramp, so the second derivative of the image crosses zero at the edge's location.
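The zero-crossing idea is easiest to see in one dimension. The sketch below (illustrative Python with a hypothetical helper name) applies the discrete second derivative to a step edge and locates the sign change:

```python
def second_derivative(signal):
    """Discrete 1-D Laplacian: f(x-1) - 2 f(x) + f(x+1)."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

# A step edge: dark region followed by a bright region.
signal = [0, 0, 0, 100, 100, 100]
d2 = second_derivative(signal)
print(d2)  # [0, 100, -100, 0]: positive lobe before the edge, negative after

# The zero crossing (sign change) between the lobes locates the edge.
zero_cross = [i for i in range(len(d2) - 1) if d2[i] > 0 > d2[i + 1]]
print(zero_cross)  # [1]
```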
2.6.3 Edge Detection Techniques
There are various edge detection operators that can be used, and each operator has its own strengths. The operators are:
Sobel Operator

The Sobel operator consists of a pair of 3x3 convolution kernels, as shown below.
Figure 2.4: Masks used by Sobel operator
The kernels are designed to respond maximally to edges running horizontally or vertically relative to the pixel grid, and can be applied separately to the input image to produce separate measurements of the gradient components Gx and Gy. The gradient magnitude is given by |G| = sqrt(Gx^2 + Gy^2), often approximated as |G| = |Gx| + |Gy|.
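The Sobel computation at a single pixel can be sketched as follows (illustrative Python; the kernel values are the standard Sobel masks, but the helper function is hypothetical):

```python
import math

# Standard 3x3 Sobel kernels for the horizontal and vertical gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve_at(img, kernel, y, x):
    """Apply a 3x3 kernel centred on pixel (y, x)."""
    return sum(kernel[j][i] * img[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 100, 100]] * 4
gx = convolve_at(img, GX, 1, 1)
gy = convolve_at(img, GY, 1, 1)
magnitude = math.sqrt(gx ** 2 + gy ** 2)
print(gx, gy, magnitude)  # 400 0 400.0
```

As expected, a vertical edge produces a strong horizontal gradient component Gx and no vertical component Gy, so the magnitude comes entirely from Gx.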
Robert's Cross Operator
The Roberts Cross operator performs a simple, quick-to-compute 2-D spatial gradient measurement on the image. Each pixel value in the output represents the estimated absolute magnitude of the spatial gradient of the input image at that point. The operator consists of a pair of 2x2 convolution kernels, as shown below:
Figure 2.5: Mask used for Robert operator
2.6.4 Canny Edge Detection
Edge detection can also be described as the process of finding sharp contrasts in intensity in an image. It significantly decreases the amount of data in the image while preserving its most important structural features. The Canny edge detector is considered the ideal edge detection algorithm for images corrupted with white noise.
Canny also introduced the notion of non-maximum suppression: given the presmoothing filters, edge points are defined as points where the gradient magnitude assumes a local maximum in the gradient direction. The Canny edge detector is still a state-of-the-art edge detector; unless the preconditions are particularly suitable, it is hard to find an edge detector that performs significantly better. (http://www.mathworks.com/products/matlab/description1.html, 1994-2010)
2.7 The Unmanned Aerial Vehicle (UAV)
UAVs are highly capable aircraft flown without an on-board pilot. These robotic aircraft are often computerised and fully autonomous. UAVs have unmatched qualities that make them the only effective solution for specialised tasks where the risk to pilots is high, where endurance beyond normal human limits is required, or where a human presence is not necessary.
2.7.1 Classification by Performance Characteristics
UAVs can be classified by a broad number of performance characteristics. Aspects such as weight, endurance, range, speed and wing loading are important specifications that distinguish different types of UAV and give rise to useful classification systems. Classification by performance characteristics is useful for designers, manufacturers and potential customers because it enables these groups to match their needs with the performance aspects of UAVs. The most important performance characteristics are described below. (Arjomandi, 2006)
2.7.1.1 Weight of the UAV
Table 2.1: Weight classification
Classification by Weight
200 - 2000 kg
50 - 200 kg
5 - 50 kg
2.7.1.2 Maximum Altitude
The maximum operational altitude, or flight ceiling, is another performance measure by which UAVs can be classified. This is useful for designers and for customers choosing a UAV, who can select one that meets their altitude needs. Some UAVs in military situations require low visibility to avoid being detected and destroyed by the enemy, so a high altitude is an important requirement. For imaging and reconnaissance, a high altitude is also required to capture images of the maximum amount of terrain. (Arjomandi, 2006)
Table 2.2: Maximum altitude classification
Classification by Maximum Altitude
1000 - 10000 meter
2.7.2 Types of UAV
Many types of UAV are used for different types of mission. A few examples of current UAV models and their details are given below.
Draganflyer X4 Helicopter
Figure 2.6: Draganflyer X4 Helicopter
Building on a proven and successful design, the Draganflyer X4 UAV features a four-rotor configuration. Ideal for a small unmanned aircraft, the aerodynamics of this design give the best stability and flight performance possible. Recent innovations featured in the X4 include rugged carbon fibre and injected nylon parts, advanced stabilisation software, brushless motors and a lithium polymer power source.
A quad-rotor design provides extremely favourable flight characteristics because of its inherent stability. Because each pair of rotor blades spins in opposite directions, the torque caused by the spinning blades' momentum is cancelled out, keeping the helicopter flying straight and true. Besides making the helicopter easier to fly, the counter-rotating propellers increase efficiency and flight time, because no extra thrust is needed to compensate for unwanted rotation. (http://www.draganfly.com/uav-helicopter/draganflyer-x4/features/)
Draganflyer X6 Helicopter
Figure 2.7: Draganflyer X6 Helicopter
The Draganflyer X6 helicopter utilizes an innovative six-rotor design. The six rotors are arranged as three counter-rotating offset pairs mounted at the ends of the three arms, with matched sets of counter-rotating rotor blades. Differential thrust from these three equally spaced points makes the Draganflyer X6 helicopter able to manoeuvre quickly and precisely. The offset layout increases the thrust without increasing the size of the footprint, and naturally eliminates the loss of efficiency due to torque compensation.
The Draganflyer X6 is small enough to fly indoors where other full-size helicopters or airplanes can't, yet is large enough to fly outdoors. Not only does the unique design maximize thrust, it also minimizes sound output. Because the rotor blades are designed for maximum efficiency, they naturally produce less turbulence when spinning. And because the motors direct drive the rotors, no noisy gearing is required. At hover the Draganflyer X6 produces less than 65dB of sound at one meter, and less than 60dB at three meters. (http://www.draganfly.com/uav-helicopter/draganflyer-x6/features/)
In this chapter, the design and implementation of the project are explained. The explanation is provided with flow charts and other related figures. The software used and the functions applied are also included in this chapter. The main flowchart of the project is shown in Figure 3.1.
3.1 Project Overview
The project is divided into six stages: receiving the image, converting it to grayscale, filtering the image, detecting edges, identifying the object, and finally displaying the identified object in the image. Figure 3.1 shows the steps taken to identify the object in the image.
The steps begin with receiving the aerial image captured by the UAV through the communication link that has been set up. The image is then converted from true colour to grayscale, and filtered to remove the noise introduced during capture. After that, edge detection is applied. Finally, the object in the image is identified and the result is displayed.
Figure 3.1: Flow Chart
3.2 Software Implementation
This section discusses the software used throughout the project for image processing. The software used is MATLAB, which is well suited to this image processing project. MATLAB involves some coding and also graphical user interfaces, which are used to develop a program that can detect the object in an image.
3.2.1 MATLAB Software
MATLAB® is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. Using the MATLAB product, users can solve technical computing problems faster than with traditional programming languages, such as C, C++, and Fortran.
There are many applications of MATLAB, such as signal and image processing, communications, control design, test and measurement, financial modelling and analysis, and computational biology. Add-on toolboxes, collections of special-purpose MATLAB functions available separately, extend MATLAB to solve particular classes of problems in these application areas.
Work may also be easier because MATLAB provides a number of features for documenting and sharing. MATLAB code can be integrated with other languages and applications, and users can distribute their MATLAB algorithms and applications.
3.2.2 Image Processing Toolbox
Image Processing Toolbox software provides a complete set of reference-standard algorithms and graphical tools for image processing, analysis, visualization, and algorithm development. The toolbox can be used to restore noisy or degraded images, enhance images for improved intelligibility, extract features, and analyse shapes and textures. The toolbox functions are written in the open MATLAB® language, giving users the ability to inspect the algorithms, modify the source code, and create their own custom functions.
3.2.2.1 Displaying and Exploring Images
Image Processing Toolbox extends MATLAB graphics to provide image display capabilities that are highly customizable. User can create displays with multiple images in a single window, annotate displays with text and graphics, and create specialized displays such as histograms, profiles, and contour plots.
In addition to display functions, the toolbox provides a suite of interactive tools for exploring images and building GUIs. User can view image information, zoom and pan around the image, and closely examine a region of pixels. User can interactively place and manipulate ROIs, including points, lines, rectangles, polygons, ellipses, and freehand shapes. User can also interactively crop, adjust the contrast, and measure distances. The suite of tools is available within Image Tool or from individual functions that can be used to create customized GUIs.
To read an image, use the 'imread' command. The example below reads an image file, 'nut.jpg', and stores it in an array named I.
Table 3.1: Coding for read and view the image.
I = imread('nut.jpg');
Figure 3.2 : The view of the image
3.2.2.2 Converting RGB to Grayscale
Converting an RGB image or colormap to grayscale means that the truecolor image RGB is converted to the grayscale intensity image 'I'. The rgb2gray function converts RGB images to grayscale by eliminating the hue and saturation information while retaining the luminance.
Table 3.2: Coding for convert rgb to grayscale.
J = rgb2gray(I);
Figure 3.3 : The grayscale image.
3.2.2.3 Image Filtering
Filtering of images, using either correlation or convolution, can be performed with the toolbox function 'imfilter'. A common example filters an image with a 5-by-5 filter containing equal weights; such a filter is often called an averaging filter. The filters used in this section are described below.
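The equal-weight averaging can be sketched for a single pixel as follows (illustrative Python with a hypothetical helper name; in MATLAB the same weighting would be expressed as imfilter(I, ones(5,5)/25)):

```python
def average_filter(img, y, x, size=5):
    """Equal-weight (box) average over a size-by-size window centred at (y, x)."""
    half = size // 2
    window = [img[y + dy][x + dx]
              for dy in range(-half, half + 1)
              for dx in range(-half, half + 1)]
    return sum(window) / len(window)

# One bright outlier in a flat patch is strongly attenuated by averaging.
img = [[100] * 5 for _ in range(5)]
img[2][2] = 225
print(average_filter(img, 2, 2))  # (24*100 + 225) / 25 = 105.0
```

Note how the outlier is only attenuated, not removed: unlike the median filter, the average always mixes the outlier's value into the result.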
3.2.2.3.1 Median Filter
Median filtering is a nonlinear operation often used in image processing to reduce "salt and pepper" noise. When the goal is to simultaneously reduce noise and preserve edges, the median filter is more effective than convolution.
Table 3.3: Coding for median filter
I = imread('eight.tif');
J = imnoise(I,'salt & pepper',0.02);
K = medfilt2(J);
imshow(J), figure, imshow(K)
Figure 3.4: The image with added 'salt and pepper' noise
Figure 3.5: The image after filtering with the median filter
3.2.2.3.2 Wiener Filter
The wiener2 function applies a Wiener filter (a type of linear filter) to an image adaptively, tailoring itself to the local image variance. Where the variance is large, wiener2 performs little smoothing. Where the variance is small, wiener2 performs more smoothing.
This approach often produces better results than linear filtering. The adaptive filter is more selective than a comparable linear filter, preserving edges and other high-frequency parts of an image. In addition, there are no design tasks; the wiener2 function handles all preliminary computations and implements the filter for an input image.
wiener2, however, does require more computation time than linear filtering. wiener2 works best when the noise is constant-power ("white") additive noise, such as Gaussian noise. The example below applies wiener2. (http://www.mathworks.com/products/image/)
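The adaptive rule that wiener2 applies at each pixel can be sketched as follows. This is an illustrative Python example with hypothetical names; the formula b = mu + (sigma^2 - v^2)/sigma^2 * (a - mu), with local mean mu, local variance sigma^2 and noise variance v^2, is the one described in the Image Processing Toolbox documentation.

```python
def wiener_pixel(window, center, noise_var):
    """Adaptive (wiener2-style) estimate for one pixel:
    smooth heavily where the local variance is near the noise floor,
    and leave the pixel almost untouched where the variance is large."""
    n = len(window)
    mu = sum(window) / n
    var = sum((p - mu) ** 2 for p in window) / n
    if var <= noise_var:      # local variance at or below the noise floor:
        return mu             # return the local mean (full smoothing)
    return mu + (var - noise_var) / var * (center - mu)

flat = [100, 101, 99, 100, 100, 101, 99, 100, 100]  # low local variance
edge = [0, 0, 0, 0, 100, 200, 200, 200, 200]        # high local variance
print(round(wiener_pixel(flat, 101, noise_var=50), 1))  # 100.0: pulled to the mean
print(round(wiener_pixel(edge, 200, noise_var=50), 1))  # 199.4: edge preserved
```

This is why the filter is described as more selective than a comparable linear filter: the amount of smoothing varies with the local statistics.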
Table 3.4: Coding for Wiener Filter:
I = imread('ter.jpg');
J = rgb2gray(I);   % convert to grayscale; wiener2 expects a 2-D image
K = wiener2(J,[5 5]);
Figure 3.6: Original image.
Figure 3.7: The image after filtering with the Wiener filter
3.2.2.4 Edge Detection
In an image, an edge is a curve that follows a path of rapid change in image intensity. Edges are often associated with the boundaries of objects in a scene. Edge detection is used to identify the edges in an image.
To find edges, the user can use the edge function. This function looks for places in the image where the intensity changes rapidly, using one of two criteria: places where the first derivative of the intensity is larger in magnitude than some threshold, or places where the second derivative of the intensity has a zero crossing.
edge provides a number of derivative estimators, each of which implements one of the definitions above. For some of these estimators, the user can specify whether the operation should be sensitive to horizontal edges, vertical edges, or both.
The most powerful edge detection method that edge provides is the Canny method. The Canny method differs from the other edge detection methods in that it uses two different thresholds to detect strong and weak edges, and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be fooled by noise, and more effective at detecting true weak edges.
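The double-threshold (hysteresis) step can be sketched in one dimension as follows. This is an illustrative Python example with hypothetical names; the real Canny detector applies this in 2-D, after smoothing and non-maximum suppression.

```python
def hysteresis(mags, low, high):
    """Keep points whose gradient magnitude is above `high` (strong edges),
    plus points above `low` (weak edges) that are connected to a kept point."""
    keep = [m >= high for m in mags]      # strong edges are always kept
    changed = True
    while changed:                        # grow weak edges out from strong ones
        changed = False
        for i, m in enumerate(mags):
            if not keep[i] and m >= low:
                if (i > 0 and keep[i - 1]) or (i + 1 < len(mags) and keep[i + 1]):
                    keep[i] = True
                    changed = True
    return keep

mags = [2, 6, 9, 5, 1, 6, 2]
# The weak responses (6, 5) next to the strong one (9) are kept;
# the isolated weak response (6) at index 5 is rejected as noise.
print(hysteresis(mags, low=4, high=8))
```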
The following example illustrates the power of the Canny edge detector by showing the results of applying the Sobel and Canny edge detectors to the same image:
Table 3.5: Coding for Sobel and Canny filters
Figure 3.7: Sobel filter
Figure 3.8: Canny filter
The toolbox includes two functions the user can use to find the boundaries of objects in a binary image. The bwtraceboundary function returns the row and column coordinates of all the pixels on the border of one object in an image; the user must specify the location of a border pixel on the object as the starting point for the trace. The bwboundaries function returns the row and column coordinates of the border pixels of all the objects in an image. For both functions, the nonzero pixels in the binary image belong to objects, and pixels with the value 0 (zero) constitute the background. (http://www.mathworks.com/products/image/)
The following example uses bwtraceboundary to trace the border of one object in a binary image and then bwboundaries to trace the borders of all the objects in the image:
1. Read the image and display it.
2. Convert the image to a binary image, because bwtraceboundary and bwboundaries only work with binary images.
3. Determine the row and column coordinates of a pixel on the border of the object to be traced; bwtraceboundary uses this point as the starting location for the boundary tracing.
4. Call bwtraceboundary to trace the boundary from the specified point. As required arguments, specify the binary image, the row and column coordinates of the starting point, and the direction of the first step; the example specifies north ('N').
5. Display the original grayscale image and use the coordinates returned by bwtraceboundary to plot the border on the image.
6. To trace the boundaries of all the coins in the image, use the bwboundaries function. By default, bwboundaries finds the boundaries of all objects in an image, including objects inside other objects. In the binary image used in this example, some of the coins contain black areas that bwboundaries interprets as separate objects; to ensure that bwboundaries traces only the coins, use imfill to fill the area inside each coin.
7. Finally, plot the borders of all the coins on the original grayscale image using the coordinates returned by bwboundaries. (http://www.mathworks.com/products/image/)
3.2.3 Graphical User Interfaces (GUI)
A user interface is the point of contact, or method of interaction, between a person and a computer or computer program. It is the method used by the computer and the user to exchange information: the computer displays text and graphics on the screen, and the user communicates with the computer using input devices such as a keyboard and a mouse.
A GUI incorporates graphical objects such as windows, icons, buttons, menus and text. Selecting or activating these objects usually causes an action or change to occur. The most common activation method is to use a mouse to control the movement of a pointer on the screen and to press a mouse button to signal object selection or some other action. (Duane Hanselman, Bruce Littlefield, 2005)
However, GUIs are harder for the programmer because a GUI-based program must be prepared for mouse clicks or possibly keyboard input for any GUI element at any time. Such inputs are known as events, and a program that responds to events is said to be event driven.
Finally, there must be some way to perform an action when a user clicks a mouse button or types information on a keyboard. A mouse click or a key press is an event, and the MATLAB program must respond to each event if the program is to perform its function. For example, if a user clicks on a button, that event must cause the MATLAB code that implements the function of the button to be executed. The code executed in response to an event is known as a callback; there must be a callback to implement the function of each graphical component on the GUI. Figures 3.9 and 3.10 below show an example of a GUI in MATLAB.
Figure 3.9 : Step to create the GUI functional.
Figure 3.10: The application of the GUI.
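The event-driven behaviour described above can be illustrated with a minimal MATLAB sketch: a figure containing one push button whose callback runs only when the user clicks it. The function and message names here are illustrative assumptions, not part of the project code.

```matlab
% Minimal sketch of an event-driven MATLAB GUI (hypothetical names).
function simple_gui
    f = figure('Name', 'Simple GUI', 'NumberTitle', 'off');
    uicontrol('Parent', f, ...
              'Style', 'pushbutton', ...
              'String', 'Detect object', ...
              'Position', [20 20 120 30], ...
              'Callback', @onDetect);   % callback bound to the click event
end

function onDetect(src, event)
    % This code runs only in response to the button-press event.
    disp('Button clicked: the detection routine would run here.');
end
```

Until the button is pressed, the program simply waits; the callback mechanism is what makes the GUI event driven rather than sequential.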
For this project, a GUI built in the MATLAB software is used to implement the system. The main reason for using a GUI is that it makes the software easy to use: the user can simply click the mouse to apply any function needed for detecting the object in the image.
The basic idea of this project and the implementation approach are shown in the work flow in figure 3.11.
Figure 3.11 : Work Flow
Generally, this project develops an algorithm for a UAV application in image reception and object identification. The algorithm was successfully implemented using the MATLAB integrated development environment. As a result, the algorithm is able to identify the object in the image as long as the image is not severely blurred.
The input to this project is the image received from the camera on the UAV. The first step of the algorithm is to process the received image to obtain a clear view of the object in it. The received image is originally in red-green-blue (RGB) format; to ease computation, the RGB image is converted to grayscale. Each image then passes through a filtering process, an edge-detection process and, finally, identification of the object in the image. The final stage of the process determines the result of the object's identity: if the object in the image can be identified, the result is displayed; otherwise, no result is displayed.
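The pipeline described above can be sketched in MATLAB as follows. The input file name and the specific choices of filter (median) and edge detector (Sobel) are illustrative assumptions; the project's own algorithm may use different operators.

```matlab
% Sketch of the processing pipeline, assuming a received RGB frame.
rgbImage  = imread('uav_frame.jpg');     % hypothetical input file name
grayImage = rgb2gray(rgbImage);          % RGB -> grayscale to ease computation
filtered  = medfilt2(grayImage, [3 3]);  % filtering step: reduce noise
edges     = edge(filtered, 'sobel');     % edge detection: find object outlines

% Identification stage: here, simply report whether any candidate
% object region was found; the real project applies its own criteria.
stats = regionprops(bwlabel(edges), 'Area');
if ~isempty(stats)
    disp('Object candidate detected in the image.');
else
    disp('No object could be identified.');
end
```

Each stage feeds the next, mirroring the work flow in figure 3.11: a poor filtering stage degrades the edges, which in turn degrades identification.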
The main objective of this project is to develop an algorithm that is able to identify the object in the image, especially vehicles. Although the algorithm has a reasonable success rate, it has some limitations in achieving the desired result, since its performance depends on the quality of the image. Thus, the performance can be improved, and the present algorithm can be further developed for better reliability and effectiveness.
4.2 Recommended and Future Work
There are several ways to improve the algorithm of this project. Other image processing techniques or mechanisms can be incorporated to increase its performance.
4.2.1 Video Processing
Video processing could improve the performance of this project, since it is well suited to real-time applications that identify objects in the image, such as tracking a moving vehicle.
4.2.2 Tracking moving object
Object tracking can be incorporated into the algorithm to improve the detection process. Object tracking is able to detect a moving vehicle that is only partially visible in the camera's view range. The tracking algorithm can be implemented using motion segmentation, with a discrete feature-based approach.
1. Arjomandi, M. (2006). Classification of Unmanned Aerial Vehicles. Retrieved from http://www.airforce-technology.com/projects/x47/
2. Bailey, M. (2006). UAV History. History of UAV, 3.
3. Hanselman, D., & Littlefield, B. (2005). Mastering MATLAB 7 (p. 835). United States: Pearson Prentice Hall.
4. http://www.aquaphoenix.com/lecture/matlab10/page3.html. (n.d.). Retrieved October 2, 2010.
5. http://www.draganfly.com/uav-helicopter/draganflyer-x4/features/. (n.d.). Retrieved October 26, 2010.
6. http://www.draganfly.com/uav-helicopter/draganflyer-x6/features/. (n.d.). Retrieved October 26, 2010.
7. http://www.mathworks.com/products/image/. (n.d.).
8. http://www.mathworks.com/products/matlab/description1.html. (1994-2010). Retrieved October 14, 2010.
9. Wang, M., & Lai, C.-H. (200). A Concise Introduction to Image Processing Using C++. London: CRC Press, Taylor & Francis Group.
10. Sonka, M., Hlavac, V., & Boyle, R. (200). Image Processing, Analysis and Machine Vision. Chapman & Hall Computing.
11. Gonzalez, R. C., & Woods, R. E. (2008). Digital Image Processing (3rd ed.). New Jersey: Pearson Prentice Hall.