Chapter 1 : Introduction and project overview
The final year project (FYP) is a milestone at which an undergraduate proves that I am capable of developing a project based on the knowledge gained throughout the years of study at Multimedia University, through lecture classes and the completion of assignments.
Final Year Project Report Objective
- To present the work done in good quality: short but compact, with sufficient details of the project
- To produce a report with a logical flow and strong explanations, so that it is more easily understood by others
- To assist students in preparing a report in accordance with the standards set by a body
- To help students find jobs after graduation, as proof of the ability to develop a given subject independently or in cooperation with others.
Overview of Project Title
The flapping wing surveillance mini aerial vehicle is a type of embedded imaging robot that is capable of carrying a visual processing system while flying. The aim of this project is to develop a mini aerial vehicle with video processing ability for visual information gathering, while attracting minimal attention from the public eye. The applications are military reconnaissance, simultaneous localization and mapping outdoors, and a movable surveillance camera indoors. A PIC18F2620 is the main microcontroller; it controls the retrieval of visual images and processes them either for storage on an external storage medium or as a live image feed to a terminal. Images are captured with a small camera called the uCAM, a serial JPEG camera module, over the EUSART communication protocol. The image data is processed and stored on an external storage medium (SD card) over SPI. The mini aerial vehicle type chosen to mount the camera system is a flapping wing MAV called an ornithopter.
Chapter 2 : Literature Review and Theoretical Background
Mini Aerial Vehicle
Background on Unmanned Aerial Vehicle and Mini Aerial Vehicle
The mini aerial vehicle (MAV) is one category of unmanned aerial vehicles (UAVs). UAVs are commonly used in military applications for reconnaissance, environmental observation and maritime surveillance. They are also used for non-military applications such as rice paddy remote sensing and infrastructure maintenance. The term UAV covers all mechanically and electronically engineered flying objects that fly without any pilot on board while remaining controllable. Remotely controlled aerial vehicles are defined by the Unmanned Vehicle Systems International Association as mini, close, short and medium range UAVs depending on their size, endurance, range and flying altitude. The UVS community's definitions, into which each vehicle fits, are listed in the table below; all other kinds of aircraft outside these categories fall into the general 'High Altitude Long Endurance' group.
The development of UAVs was strongly motivated by military applications after World War II, when nations sought aerial vehicles capable of replacing the deployment of human beings in high-risk areas for surveillance, reconnaissance and penetration of hostile terrain. The development of insect-sized UAVs is reportedly expected in the near future. Although military use is one of the motivating factors for advancing UAV development, UAVs are also used commercially in scientific, police and mapping applications, over hazardous terrain and in places inaccessible by ground.
There are three types of MAVs under consideration: airplane-like fixed wing models, bird- or insect-like ornithopter (flapping wing) models, and helicopter-like rotary wing models. Each type has its own advantages and disadvantages depending on the scenario in which it is used. Fixed wing MAVs can achieve longer flight times and higher efficiency, but are generally hard to use indoors because they can neither hover nor turn the tight corners that indoor flight requires, so they are suited to tasks requiring extended loitering times. Rotary wings allow hovering and movement in any direction, at the cost of shorter flight time, as the rotor has to keep working to maintain the vehicle's altitude. Flapping wings offer the most potential in miniaturization and maneuverability, but lack the power to carry any load on board. The figure below shows the three types of MAVs.
Ornithopter (flapping wing) model
A common belief about how ornithopters or birds fly is that they produce lift by flapping their wings; in fact they produce lift the same way an airplane does, simply by their forward motion through the air. A bird moves through the air with its wings held in a fixed position when it is gliding. Held at a slight angle, the wings deflect the air gently downward, producing a reaction force opposite to the downward-pushed air. This force, called lift, is a consequence of Newton's Third Law: for any force applied to an object, a force of the same magnitude but opposite direction is exerted back on the same surface. Lift acts perpendicular to the wing surface and prevents the bird from falling. Figure 2.4 shows how the lift force is produced when the air is directed downward. The bird will eventually slow down in the presence of air resistance, or drag, on its body and wings, until it no longer has enough speed to continue flying. To prevent itself from falling, the bird can lean forward a little and go into a shallow dive, so that the lift force produced by the wings is angled slightly forward and helps the bird speed up. The bird has to sacrifice some height in exchange for an increase in speed. In general, the bird is constantly losing altitude relative to the surrounding air in exchange for maintaining the speed it needs to keep flying. Figure 2.5 shows how birds maintain speed by diving slightly.
The slight angle of the wings that allows them to deflect the air downward and produce lift is called the angle of attack. If the angle of attack is too great, the wing suffers a great deal of air resistance; if it is too small, the wing does not produce sufficient lift. The best angle depends on the shape of the wing, and what matters is the angle relative to the direction of travel. An ornithopter wing is usually made of a thin fabric membrane, which takes on a curved or cambered shape when it is pushed against the air. Bird wings have a more rounded leading edge, which helps reduce air resistance.
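The deflection argument can be turned into a rough order-of-magnitude estimate (a simplification added here for illustration, not a formula used elsewhere in this report): air of density ρ meeting a wing of area S at speed v carries a mass flow of roughly ρSv, and deflecting it downward through a small angle of attack α changes its vertical velocity by about v sin α, so by Newton's Third Law the lift is on the order of

```latex
L \;\approx\; \dot{m}\,\Delta v \;=\; (\rho S v)(v \sin\alpha) \;=\; \rho S v^{2} \sin\alpha .
```

This toy estimate only captures the scaling: lift grows with the square of speed and, for small angles, roughly linearly with the angle of attack. The true constant of proportionality depends on the wing shape, as noted above.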
Bird wings flap with an up-and-down motion while the whole body moves forward. There is very little up-and-down movement close to the bird's body, but toward the wingtips the vertical motion has much greater amplitude. As the bird flaps along, it needs to maintain the correct angle of attack all along its wingspan. Since the outer part of the wing moves more steeply than the inner part, the wing has to twist so that each part can maintain the correct angle of attack. The wings twist automatically if they are flexible enough, as shown in figure 2.8. As a wing moves downward and twists, the lift force at its outer part is angled forward, as if that part of the wing had gone into a steep dive. However, it is only the wing moving downward, not the whole bird. The bird can therefore generate a large amount of forward propulsive force, or thrust, without losing any altitude, as shown in figure 2.9. The air is deflected not only downward but also toward the rear of the bird; it is forced backward just as it would be by the propeller of an airplane. On the other hand, many people believe that the upstroke of the wings somehow cancels the lift produced during the downstroke, but this too is governed by the angle of attack, and birds do make the upstroke more efficient. Figure 2.10 shows that the outer part of the wing points straight along its direction of travel, so it passes through the air with the least possible air resistance; in other words, the angle of attack is reduced. In addition, the bird partially folds its wings to reduce the wingspan and eliminate the drag of the outer part of the wing. The inner part of the wing behaves differently from the outer part: there is little up-and-down movement there, so that part of the wing continues to provide lift simply as a result of its forward motion.
The bird's body rises and falls slightly as it flies, because the inner part of the wings produces lift during the upstroke, yet the upstroke as a whole provides less lift than the downstroke. So, like an airplane, the lift and thrust functions are separated: the inner part of the wings provides lift and the outer part provides thrust. Figure 2.11 shows that the inner part of the wing produces lift even during the upstroke.
Surveillance Camera System
An image sensor is a device that converts an optical image into electronic signals; in other words, it converts light into electrons. Early sensors were video camera tubes; nowadays image sensors fall into two categories, the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) active-pixel sensor, and most digital cameras use one or the other. An easy way to understand the technology used to convert the light from an imaged object is to imagine the sensor as a 2-D array of thousands or millions of tiny solar cells, each of which transforms the light reflected from one small portion of the image into electrons. Both CCD and CMOS devices perform this task, though with different technologies. The next step is to read the value (accumulated charge) of each cell in the image sensor. Several parameters can be used to evaluate the performance of an imaging sensor, including its dynamic range, its signal-to-noise ratio and its low-light sensitivity. An imaging sensor alone produces only gray-scale pictures and must be paired with a color separation mechanism to produce color images; the most common color image sensor today is the Bayer sensor. Image sensors come in a variety of sizes, with the smallest used in point-and-shoot cameras and the largest in professional SLRs. Consumer SLRs often use sensors the same size as a frame of Advanced Photo System (APS) film, while professional SLRs occasionally use sensors the same size as a frame of 35 mm film. Larger image sensors generally have larger photosites that capture more light with less noise. Some typical sensor sizes are shown in table 2.2 below.
Charge-coupled Device (CCD)
A charge-coupled device (CCD) uses a special manufacturing process to create the ability to transport accumulated charge within the photoactive region to a region where the charge can be processed. This is achieved by shifting the charge signals between stages within the device one at a time. CCDs are implemented as shift registers that move charge between capacitive bins in the device, each shift transferring charge between neighboring bins. When an image is projected through a lens onto the capacitor array (the photoactive region), each capacitor accumulates an electric charge proportional to the light intensity at that location. The fundamental light-sensing unit of the CCD is a metal oxide semiconductor (MOS) capacitor operated as a photodiode and storage buffer. A one- or two-dimensional array captures a one- or two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the light from an image, a control circuit causes each capacitor to transfer its contents to its neighbor, acting as a serial shift register. The last capacitor in the array shifts its charge into a charge amplifier, or metering register, which converts the charge into a corresponding voltage level. By repeating this process, the controlling circuit converts the entire contents of the array into a sequence of voltages. In a digital device, these voltages are sampled, digitized and stored in a memory block; in an analog device, they are processed into a continuous analog signal which is then sent out to other circuits for transmission, recording or other processing. The capture of a single frame with a full frame CCD camera system can be summarized as follows:
- Camera shutter is opened to allow accumulation of photoelectrons, with the gate electrodes biased appropriately for charge collection.
- At the end of the exposure period, the shutter is closed and the accumulated charge in the pixels is shifted row by row across the parallel register under the clock control signals. Rows of charge packets are transferred in sequence from one edge of the parallel register into the serial shift register.
- Charge contents of pixels in the serial register are transferred one pixel at a time into an output node to be read by a charge amplifier, which boosts the electron signal and converts it into an analog voltage signal.
- An analog-to-digital converter (ADC) assigns a digital value to each pixel according to its voltage amplitude, and the value is stored inside a memory buffer.
- The serial readout process is repeated until all pixel rows of the parallel register are emptied.
CCD image sensors can be manufactured in several different architectures; the most common are full frame, frame transfer and interline (see figure 2.15). These architectures are distinguished by their approach to the problem of shuttering. A full frame CCD features a high density pixel array capable of producing digital images with the highest resolution. In a full frame device (figure 2.12), all of the image area is photoactive and there is no electronic shutter, so the imaging surface, which is made up of the parallel shift register, must be protected from incident light during readout of the CCD. A mechanical shutter must therefore be added to the full frame image sensor, or the image smears as the device is clocked out. Charge accumulated while the shutter is open is transferred and read out after the shutter closes: the rows of image information are shifted in parallel, one row at a time, to the serial shift register, which then sequentially shifts each row to an output amplifier as a data stream. Because the two steps cannot occur simultaneously, the frame rate is limited by the mechanical shutter speed, the charge transfer rate and the readout steps.
Frame transfer CCDs can operate at faster frame rates than full frame devices because exposure and readout can occur simultaneously, with various degrees of overlap in timing. They are similar to full frame devices in the structure of the parallel register, but half of the silicon surface is covered by an opaque mask, typically made of aluminum, and is used as an image storage buffer for photoelectrons gathered by the unmasked photoactive region. The image can be quickly transferred from the photoactive area to the opaque storage region with a small, acceptable amount of smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating, or exposing, in the photoactive area. A camera shutter is not necessary because the time required to transfer charge from the image area to the storage area is only a fraction of the time needed for a typical exposure, as illustrated in figure 2.13. A common disadvantage of the frame transfer architecture is that it requires twice the silicon real estate of an equivalent full frame device; hence it costs almost twice as much.
The interline architecture is designed to compensate for many of the shortcomings of the frame transfer CCD. These devices are composed of a hybrid structure incorporating a separate photodiode and an associated parallel readout CCD storage region into each pixel element. The two regions are isolated by a metallic mask structure placed over the light-shielded parallel readout CCD area. In this design, columns of active imaging pixels and masked storage-transfer pixels alternate over the entire parallel register array. Because a charge transfer channel is located immediately adjacent to each photosensitive pixel column, stored charge must only be shifted one column into the transfer channel. This single transfer step can be performed within milliseconds, after which the storage array is read out by a series of parallel shifts into the serial shift register while the image area is being exposed for the next image. This architecture allows very short integration periods through electronic control of exposure intervals; without the need for a mechanical shutter, the array can be rendered effectively light-insensitive by discarding accumulated charge rather than shifting it to the transfer channel, and smear is essentially eliminated. These advantages come at a cost, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and reducing the effective quantum efficiency by an equivalent amount. Modern designs have addressed this shortcoming by covering the surface of the device with microlenses that divert light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more, depending on pixel size and the overall system's optical design.
Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night vision devices, and zero-lux (luminance) photography. For normal silicon-based sensors, the sensitivity is limited to wavelengths below 1.1 µm. As a consequence of this sensitivity, infrared emitted from remote controls or other infrared-emitting devices often appears on CCD-based digital cameras that do not have infrared filters placed above the imaging area to filter out infrared wavelengths and admit only visible light. Cooling can reduce the array's dark current and thermal noise, improving the sensitivity of the CCD at low light intensities, even for ultraviolet and visible wavelengths.
Although CCDs are not inherently color sensitive, three different approaches are commonly employed to produce color images with a CCD camera system in order to capture the visual appearance of an object. The acquisition of color images with a CCD camera requires that red, green and blue wavelengths be separated by color filters, acquired separately, and subsequently combined into a composite color image. Each approach to color discrimination has its strengths and weaknesses, and all impose constraints that limit speed, reduce dynamic range, lower temporal and spatial resolution, and increase noise in color cameras compared to gray-scale cameras. The most common approach is to mask the CCD pixel array with an alternating pattern of red, green and blue (RGB) microlens filters, usually the Bayer mosaic pattern. Alternatively, with a three-chip design, the image is divided by a beam-splitting prism and color filters into three (RGB) components, which are captured by separate CCDs and the outputs recombined into a color image. The third approach is a frame-sequential method that uses a single CCD to capture a separate image for each color sequentially, by switching color filters placed in the illumination path or above the photoactive area.
CMOS Image Sensors
The term CMOS image sensor refers to the process by which the sensor is manufactured, not to a specific light-sensing technology. CMOS sensors have a light-sensing mechanism similar to CCDs, taking advantage of the photoelectric effect, which occurs when light photons hit the crystallized silicon and energize electrons to escape from the valence band into the conduction band. CMOS sensors have low power consumption, a single master clock, and use a single-voltage power supply. When the specially doped silicon semiconductor material is exposed to a wide wavelength band of visible light, electrons are released in numbers proportional to the light intensity striking the surface of the photodiodes. The electrons are collected in a potential well until the illumination ends, then converted into voltages before being passed to an ADC to form a digital electronic representation of the imaged object. A CMOS image sensor has the ability to integrate a number of processing and control functions, beyond the primary task of photon collection, directly onto the sensor integrated circuit. These generally include timing logic, white balance, analog-to-digital conversion, shutter control, gain adjustment and image processing algorithms; because of its capability to perform all these operations, the CMOS circuit architecture resembles a random-access memory more than a simple photodiode array. The most popular CMOS designs are based on active pixel sensor (APS) technology, in which each pixel incorporates both a photodiode and a readout amplifier. The accumulated charge is converted into an amplified voltage inside each pixel and then transferred sequentially to the signal processing area: each pixel's charge is converted to a voltage and moved to a vertical column bus. The array is organized in a checkerboard manner of metallic readout busses that carry signal information at each intersection.
A clock-out timer is applied, and the information is read, decoded and processed by circuitry away from the photoactive region. This design allows the signal from each pixel to be read with a simple X, Y pinpoint addressing technique.
Two basic photosensitive element architectures exist in modern CMOS image sensors: photodiodes and photogates (Figure 2.16). In general, photodiodes are more sensitive to visible light, especially in the short-wavelength (blue) region. Photogate devices usually have larger pixel areas, achieve higher charge-to-voltage conversion gains, and can easily perform correlated double sampling to achieve frame differencing, but they have a lower fill factor and poorer blue light response than photodiodes. Charge accumulated from incident light is moved to a potential well controlled by an access transistor. During readout, the pixel processing circuitry performs a two-stage transfer of charge to the output bus. First, the accumulated charge is converted into a measurable voltage by a charge amplifier transistor. Next, when pulsed, the transfer gate moves the charge from the photoactive region to the output transistor, from which it is passed on to the column bus. This transfer method allows two signals to be sampled, which efficient designs exploit to reduce noise. The pixel output is sampled once after photodiode reset, and once again after integrating the signal charge. Correlated double sampling can be performed with the photogate active pixel architecture by subtracting the first sample from the second to remove low frequency reset noise.
A sequence of steps is followed in the operation of a CMOS image sensor. In most CMOS photodiode array designs utilizing black level compensation, the photoactive area is surrounded by a region of optically shielded pixels, arranged in 8 to 12 rows and columns. The Bayer (or CMY) filter array starts with the top-left pixel of the first unshielded row and column. As each integration period begins, the on-board clocking and control circuit resets all of the pixels row by row, traversing from the first to the last row address via the line address register (see figure 2.17). When the integration period has been completed, the same control circuit transfers the integrated value of each pixel to a correlated double sampling circuit and then to the horizontal shift register. After the shift register has been loaded, the pixel information is serially shifted out to the analog video amplifier, whose gain is controlled either by hardware or software (or, in some cases, a combination of both). Once the gain and offset values have been set in the video amplifier, the pixel information is passed to the ADC to be converted into a binary representation in a linear digital array. Before being framed and shifted to the digital data output, the digital data is further processed to remove pixel errors and to compensate black levels. The final steps are image recovery and the algorithms needed to encode the image for display: nearest neighbor interpolation is performed on the pixels, which are then filtered with anti-aliasing algorithms and scaled.
First Idea and Design
The objective of this final year project is to develop an ornithopter mini aerial vehicle equipped with a surveillance system. The surveillance system should be miniature and light enough to be mounted on a MAV, and has a storage medium to hold the visual data captured from the camera system. The whole system combines several main devices: a microcontroller, which acts as the main controller of the whole system, processing data and serving as an interface between the other operating devices; a CMOS image sensor with its processing algorithm; an SD card for storage; and infrared lights to provide a light source for an infrared-sensitive camera. Figure 3.1 shows the initial building blocks of the circuit on the MAV.
Early design iteration
In the surveillance camera system design, the camera communication method and the algorithm for viewing captured images are the most important things to consider when choosing the microcontroller. The cameras under consideration are USB cameras and TTL serial cameras. To communicate with a USB-powered camera, the microcontroller must be a USB host microcontroller, such as the Vinculum USB microcontroller or an ATmega series USB host microcontroller, which are comparably higher in price than normal microcontrollers. The other camera option is a TTL camera connected to the microcontroller's EUSART port (enhanced universal synchronous/asynchronous receiver transmitter). To display the images captured by the camera, one choice is to send a live video feed to a viewing terminal through wireless transmission such as Bluetooth, infrared or RF communication. The other option is to save the captured images in a memory area such as SRAM or a flash memory storage device. Both the wireless and storage media options need a communication port, such as a data port, Serial Peripheral Interface (SPI), or Universal Asynchronous Receiver/Transmitter (UART), to communicate.
The uCAM (microCAM) is a highly integrated serial camera module which can be attached to any host system that requires a video camera or a JPEG compressed still camera for imaging applications. The uCAM uses an OmniVision CMOS VGA color sensor along with a JPEG compression chip to provide a low cost, low powered camera system. The module has an on-board serial interface (TTL or RS232) suitable for direct connection to any host microcontroller UART or computer serial com port. User commands are sent using a simple serial protocol that can instruct the camera to send low resolution single-frame raw images or high resolution JPEG still images for storage or viewing. The uCAM has several features that are very useful for developing the camera system on board the MAV:
- Small size, low cost and low powered camera module for embedded imaging application.
- uCAM-TTL: 3.3V DC Supply
- uCAM-232: 5.0V DC Supply
- On-board EEPROM provides command based interface to external host via TTL or RS232 serial link.
- UART link that supports up to 115.2Kbps for transferring JPEG still pictures
- On board OmniVision OV7640/8 VGA color sensor and JPEG CODEC for different resolutions.
- Built-in down-sampling, clamping and windowing circuits for VGA, QVGA, 160x120 or 80x60 image resolutions.
- Built-in color conversion circuits for 2-bit gray, 4-bit gray, 12-bit RGB, 16-bit RGB or standard JPEG preview images.
- No external DRAM required.
The uCAM has a dedicated hardware UART through which a host can communicate with the module; this is the main interface used by the host to send commands and receive data. Its primary features are full-duplex 8-bit data transmission and reception through the TX and RX pins, 8 data bits with no parity and one start and one stop bit, and auto-detection of the baud rate of incoming commands from 14400 baud up to 115200 baud. The host should connect at 14400 bps, 56000 bps, 57600 bps or 115200 bps. The module keeps using the detected baud rate until the next power cycle. A single byte serial transmission consists of the start bit, 8 bits of data, followed by a stop bit. The start bit is always 0, while the stop bit is always 1. The Least Significant Bit (LSB, bit 0) is sent out first after the start bit. The figure shows a single byte timing diagram. A single command consists of 6 continuous single byte transmissions. The figure shows an example of the SYNC (AA0D00000000h) command.