BURRAQ UAV System Computer Science Essay


Pakistan, being a developing country, possesses neither the economy nor the state-of-the-art technologies needed to produce modern and expensive defense equipment, which requires highly expensive research and development facilities. A large percentage of defense products are therefore procured from developed nations such as the USA, France, Germany and the UK, and this procurement is subject to strict intellectual property and technical rights. Unmanned aerial vehicles are among the latest systems built on modern technologies and hence remain a trademark of the armies of developed countries. They are very expensive equipment, and the existing version of the BURRAQ UAV comes with inherent limitations.

The focus of the current research is to study the existing OEM-specific media player installed at the BURRAQ UAV ground station, remove the interlacing of cameras in the video caused by the proprietary implementation of the MPEG standard, generate flicker-free video at the display, save flicker-free videos of all or any selected cameras for later viewing and analysis, compress the generated video for distribution over existing communication media and standards, extract the GPS data encoded in the audio signal, and design and develop a customized, enhanced version of the system.

An unmanned aerial vehicle (UAV; also known as a remotely piloted vehicle, RPV, or unmanned aircraft system, UAS) is an aircraft that flies without a human crew on board. Its largest uses are in reconnaissance and surveillance missions, surgical strikes and covert destruction missions. To distinguish UAVs from missiles, a UAV is defined as a reusable, uncrewed vehicle capable of controlled, sustained, level flight, powered by a jet or reciprocating engine. Cruise missiles are therefore not considered UAVs because, like many other guided missiles, the vehicle itself is a weapon that is not reused, even though it is also unmanned and in some cases remotely guided. The current version of the BURRAQ UAV has some limitations. Hence, this project carries out detailed research and study of the existing system with the aim of overcoming these problems and providing a technically sound and up-to-date software application.

Scope and Objectives of the Project

This project has the following objectives:

Study the design and architecture of the existing system, with a view to understanding the working and organization of its different modules, using available software and system analysis techniques such as disassemblers and decompilers, and carry out static and dynamic analysis of program execution.

Design and develop an interface to extract the GPS data encoded in the audio stream, for further use with other mapping applications.

Design and develop an interface to view the received video flicker-free, separately for all or any cameras as selected by the viewer.

Design and develop an algorithm and interface for generating flicker-free video of all or any cameras, as selected by the user. This involves modifying the existing implementation of the MPEG standard used by the onboard video processor for generating the video signal.

Design and develop an interface to compress the generated video using existing MPEG standards so that it can be viewed later, without flicker, using any open-source media player.

Design and develop an interface to extract the image or images of any location in the video, as selected by the user, for further use in image-based maps.

1.3 Potential Difficulties and Problems

This work faces some difficulties, which are discussed below:

The existing system comes without the source code for its software modules; studying and analyzing a source-less application is quite difficult and requires a large number of software fields to be studied and explored.

Analysis of such modules is only possible through the generation of their respective assemblies, which requires understanding the structure, organization and architecture of multiple programming languages, including assembly, C, C# and various intermediate languages.

Proprietary encryption algorithms come with inherent security and secrecy features that make them almost unbreakable, rendering their analysis and study extremely difficult and technically demanding.

Intended Audience

This document is intended for

Developers: in order to be sure they are developing a project that fulfills the requirements provided in this document.

Users: in order to get familiar with the idea of the project and suggest other features that would make it even more functional.

Documentation writers: to know which features they have to explain and in what way, what security technologies are required, how the system responds to each user action, etc.

Advanced end users, end users/desktop and system administrators: in order to know exactly what to expect from the system, the correct inputs and outputs, and its responses in error situations.

Chapter 2

Related Work

2.1 Existing System

Pakistan currently has the BURRAQ UAV in its inventory, a lightweight, medium-range reconnaissance and surveillance UAV system. It is being used for reconnaissance and surveillance operations. The UAV transmits its data to a ground station, where it is viewed using customized hardware and software applications provided by the manufacturer. The BURRAQ UAV system uses a customized media player, which decodes GPS data from the audio stream using a proprietary format. If this GPS data can be retrieved, it can be of great help and can form the foundation of many other applications. This project therefore involves detailed study and analysis of the existing system used to decode and display the audio and video information transmitted by the BURRAQ UAV. The UAV sends data to its ground station as an AV signal containing the video streams captured by the cameras and an audio signal.

2.1.1 Video Data

The video signal is generated by six cameras that send their video data simultaneously over a wireless link at 30 frames per second. One camera is the main camera and the others are sub cameras. Of the 30 frames per second, 25 belong to the main camera and one frame each to every auxiliary camera. During video broadcast, smooth transmission is not possible due to the inherent structure of the MPEG standard used by the onboard video processor, and a lot of flickering as well as loss of data is experienced.

Frame details of the existing video format (a sequence of main-camera frames interleaved with single frames from Sub Cam 1 to Sub Cam 5)
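For illustration only, the interleaving described above can be expressed as a simple frame-index-to-camera mapping. The sketch below assumes the every-sixth-slot pattern (25 main frames and 5 sub frames per second); the real onboard interleaving is proprietary and may differ.

```csharp
using System;

// Illustrative only: maps a frame index within one second of video (30 frames)
// to its source camera, assuming slots 6, 12, 18, 24 and 30 carry Sub Cams 1-5
// and all other slots carry the main camera.
static class FrameLayout
{
    public static string CameraForFrame(int slotInSecond)   // 0..29
    {
        if (slotInSecond < 0 || slotInSecond > 29)
            throw new ArgumentOutOfRangeException(nameof(slotInSecond));

        return (slotInSecond + 1) % 6 == 0
            ? "Sub Cam " + ((slotInSecond + 1) / 6)   // every sixth slot is an aux frame
            : "Main Camera";
    }
}
```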

2.1.2 Audio Data

The audio signal is generated by an onboard microcomputer that encodes the computed GPS data into the audio stream using a proprietary algorithm. This audio signal is readable only by the BURRAQ media player.

2.2 Problems in Existing System

The existing BURRAQ system has the following limitations:

The audio signal containing the geographical data of the underlying video is readable only by the 'UAV Media Player' and remains hidden from any other module.

The GUI of the Ground Control System provides the geographical data in 'read only' mode, preventing any further automated use with other applications like digital maps, Google Maps, etc.

Existing GUI is virtually dumb providing no usable output for further use in area mapping and marking.

The MPEG standard and its implementation used by the existing system produce a flickering video, making it almost useless for any tactical analysis and planning.

The existing system provides an interface only for working with videos of a specified area. Aerial photographs, which could be used to generate image-based maps, therefore cannot be extracted from the video using the available functionality.

The existing system does not provide any means of generating separate compressed videos for all cameras that could be distributed to other HQs and viewed there, flicker-free, using any open-source media player.

2.3 Analysis of Existing System

To improve or modify the functionality of an existing executable application, a detailed study of its design and architecture is required. In a normal scenario the source code and supporting documentation would provide the required information about the system, but in international exchanges of this kind neither the source code nor the related documentation is supplied with the equipment, making it very difficult to understand the internal structure of a software system. Various tools and techniques have, however, been developed for studying such systems, in the form of applications that provide the machine-code equivalent of any executable. For multimedia-related applications of this kind, the following two major areas are the focus:

Audio Analysis

Video Analysis

2.3.1 Audio Analysis

Audio analysis refers to the extraction of information and meaning from audio signals for analysis, classification, storage, retrieval, synthesis, etc. Audio analysis is carried out to find the bit contents of a wave sound: the sound is converted into its constituent bits and the resulting pattern is observed and analyzed for any underlying information about the contents. In this project the audio generated by the UAV's onboard microcomputer consists of GPS data masked into audio through a proprietary algorithm, so the audio was analyzed in both the frequency and time domains. Digital signal analysis was also carried out and bit packets were generated to search for patterns of GPS data, but no favorable results could be found.

Audio analysis

2.3.2 Video Analysis

Video content analysis is the capability of analyzing video to detect and determine temporal events that are not based on a single image. It is used in a wide range of domains including entertainment, health care, retail, automotive, transport, safety and security. The algorithms can be implemented as software on general-purpose machines, or as hardware in specialized video processing units.

Video analysis is carried out to understand the placement and resemblance of frames in a particular video. A video is actually a series of pictures bound together and displayed in a pattern that depends on the frame rate. Hence in video analysis the video is broken up into individual frames to find the placement, pattern and rate of frames in that video.

Video Analysis

2.3.3 Software Analysis Tools Used in the Research

2.3.3.1 Disassembler

A disassembler is a computer program that translates machine language into assembly language, the inverse operation to that of an assembler. A disassembler differs from a decompiler, which targets a high-level language rather than an assembly language. Disassembly, the output of a disassembler, is often formatted for human readability rather than suitability for input to an assembler, making it principally a reverse-engineering tool.

Assembly language source code generally permits the use of constants and programmer comments. These are usually removed from the assembled machine code by the assembler. If so, a disassembler operating on the machine code would produce disassembly lacking these constants and comments; the disassembled output becomes more difficult for a human to interpret than the original annotated source code. Some disassemblers make use of the symbolic debugging information present in object files such as ELF. Interactive disassemblers allow the human user to make up mnemonic symbols for values or regions of code in an interactive session: human insight applied to the disassembly process often parallels human creativity in the code writing process.

Disassembly is not an exact science: On CISC platforms with variable-width instructions, or in the presence of self-modifying code, it is possible for a single program to have two or more reasonable disassemblies. Determining which instructions would actually be encountered during a run of the program reduces to the proven-unsolvable halting problem.

IDA Pro Disassembler

2.3.3.2 Decompilers

A decompiler is the name given to a computer program that performs the reverse operation to that of a compiler. That is, it translates a file containing information at a relatively low level of abstraction (usually designed to be computer readable rather than human readable) into a form having a higher level of abstraction (usually designed to be human readable).

The term decompiler is most commonly applied to a program which translates executable programs (the output from a compiler) into source code in a (relatively) high level language which, when compiled, will produce an executable whose behavior is the same as the original executable program. By comparison, a disassembler translates an executable program into assembly language (and an assembler could be used to assemble it back into an executable program).

Decompilation is the act of using a decompiler, although the term, when used as a noun, can also refer to the output of a decompiler. It can be used for the recovery of lost source code, and is also useful in some cases for computer security, interoperability and error correction. The success of decompilation depends on the amount of information present in the code being decompiled and the sophistication of the analysis performed on it. The bytecode formats used by many virtual machines (such as the Java Virtual Machine or the .NET Framework Common Language Runtime) often include extensive metadata and high-level features that make decompilation quite feasible. The presence of debug data can make it possible to reproduce the original variable and structure names and even the line numbers. Machine language without such metadata or debug data is much harder to decompile.

Some compilers and post-compilation tools produce obfuscated code (that is, they attempt to produce output that is very difficult to decompile). This is done to make it more difficult to reverse engineer the executable.

VB Decompiler

2.4 Details of the MPEG Standard Used

The Moving Picture Experts Group (MPEG) is a working group of experts formed by the ISO to set standards for audio and video compression and transmission. MPEG denotes a group of standards for encoding and compressing audiovisual information such as movies, video and music. MPEG is asymmetric in nature, as the compression process is time-consuming and processor-intensive, whereas the decompression process is rapid and involves relatively inexpensive equipment. MPEG compression is as high as 200:1 for low-motion video of VHS quality, and broadcast quality can be achieved at 6 Mbps. Audio is supported at rates from 32 kbps to 384 kbps for up to two stereo channels. MPEG specifies lossy compression in the form of the discrete cosine transform (DCT). MPEG is a joint technical committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

2.4.1 MPEG-4 Standard

MPEG-4 is a patented collection of methods defining compression of audio and visual (AV) digital data. It was introduced in late 1998 and designated a standard for a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) (ISO/IEC JTC1/SC29/WG11) under the formal standard ISO/IEC 14496 - Coding of audio-visual objects. Uses of MPEG-4 include compression of AV data for web (streaming media) and CD distribution, voice (telephone, videophone) and broadcast television applications.

MPEG-4 absorbs many of the features of MPEG-1 and MPEG-2 and other related standards, adding new features such as (extended) VRML support for 3D rendering, object-oriented composite files (including audio, video and VRML objects), support for externally-specified Digital Rights Management and various types of interactivity. AAC (Advanced Audio Coding) was standardized as an adjunct to MPEG-2 (as Part 7) before MPEG-4 was issued.

Most of the features included in MPEG-4 are left to individual developers to decide whether to implement them. This means that there are probably no complete implementations of the entire MPEG-4 set of standards. To deal with this, the standard includes the concept of "profiles" and "levels", allowing a specific set of capabilities to be defined in a manner appropriate for a subset of applications.

Initially, MPEG-4 was aimed primarily at low-bit-rate video communications; however, its scope as a multimedia coding standard was later expanded. MPEG-4 is efficient across a variety of bit-rates ranging from a few kilobits per second to tens of megabits per second. MPEG-4 provides the following functionalities:

Improved coding efficiency over MPEG-2.

Ability to encode mixed media data (video, audio, speech)

Error resilience to enable robust transmission

Ability to interact with the audio-visual scene generated at the receiver

In the existing BURRAQ UAV system, the implementation of this MPEG standard interleaves frames from different cameras into a single video. Placing a frame from a different camera at every sixth index reduces the efficiency of the MPEG encoding and introduces a flicker that makes the video jerky during viewing and analysis.

MPEG-4 System Layer

2.5 Techniques to hide data in audio

Data hiding in audio signals is especially challenging, because the human auditory system (HAS) operates over a wide dynamic range. The HAS perceives over a range of power greater than one billion to one and a range of frequencies greater than one thousand to one. Sensitivity to additive random noise is also acute. The perturbations in a sound file can be detected as low as one part in ten million (80 dB below ambient level).

Block diagram of data hiding and retrieval.

However, there are some "holes" available. While the HAS has a large dynamic range, it has a fairly small differential range. As a result, loud sounds tend to mask out quiet sounds. Additionally, the HAS is unable to perceive absolute phase, only relative phase. Finally, there are some environmental distortions so common as to be ignored by the listener in most cases. We exploit many of these traits in the methods we discuss next, while being careful to bear in mind the extreme sensitivities of the HAS. A large number of techniques are used to hide data in audio; some of them are explained below:

2.5.1 Low-bit coding

Low-bit coding is the simplest way to embed data into other data structures. By replacing the least significant bit of each sampling point by a coded binary string, we can encode a large amount of data in an audio signal.

Ideally, the channel capacity is 1 kb per second (kbps) per 1 kilohertz (kHz); e.g., in a noiseless channel, the bit rate will be 8 kbps in an 8 kHz sampled sequence and 44 kbps in a 44 kHz sampled sequence. In return for this large channel capacity, audible noise is introduced. The impact of this noise is a direct function of the content of the host signal; e.g., crowd noise during a live sports event would mask low-bit encoding noise that would be audible in a string quartet performance. Adaptive data attenuation has been used to compensate for this variation.
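A minimal sketch of generic low-bit (LSB) coding over 16-bit PCM samples is given below. It illustrates the technique in general and is not the proprietary scheme used by the UAV's onboard encoder; sample format and bit ordering are assumptions.

```csharp
using System;

// Minimal sketch of low-bit (LSB) coding: each message bit replaces the least
// significant bit of one 16-bit audio sample.
static class LowBitCoding
{
    public static void Embed(short[] samples, byte[] message)
    {
        if (message.Length * 8 > samples.Length)
            throw new ArgumentException("Cover audio is too short for this message.");

        for (int i = 0; i < message.Length * 8; i++)
        {
            int bit = (message[i / 8] >> (7 - (i % 8))) & 1;   // next message bit (MSB first)
            samples[i] = (short)((samples[i] & ~1) | bit);     // overwrite the sample's LSB
        }
    }

    public static byte[] Extract(short[] samples, int messageLength)
    {
        var message = new byte[messageLength];
        for (int i = 0; i < messageLength * 8; i++)
            message[i / 8] |= (byte)((samples[i] & 1) << (7 - (i % 8)));
        return message;
    }
}
```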

2.5.2 Parity coding

Parity coding is another audio data hiding technique. Instead of breaking a signal down into individual samples, the parity coding method breaks a signal down into separate regions of samples and encodes each bit from the secret message in a sample region's parity bit. If the parity bit of a selected region does not match the secret bit to be encoded, the process flips the LSB of one of the samples in the region. Thus, the sender has more of a choice in encoding the secret bit, and the signal can be changed in a more unobtrusive fashion.
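The following sketch illustrates the generic parity-coding idea over regions of 16-bit samples. The region length and the choice of which sample's LSB to flip are arbitrary here; in practice the flipped sample would be chosen to minimize audible impact.

```csharp
using System;

// Minimal sketch of parity coding: one secret bit per region of samples.
// If the region's parity (XOR of all sample LSBs) already matches the secret
// bit, nothing changes; otherwise one LSB in the region is flipped.
static class ParityCoding
{
    public static void EmbedBit(short[] samples, int regionStart, int regionLength, int secretBit)
    {
        int parity = 0;
        for (int i = regionStart; i < regionStart + regionLength; i++)
            parity ^= samples[i] & 1;            // parity of the region's LSBs

        if (parity != secretBit)
            samples[regionStart] ^= 1;           // flip one LSB to correct the parity
    }

    public static int ExtractBit(short[] samples, int regionStart, int regionLength)
    {
        int parity = 0;
        for (int i = regionStart; i < regionStart + regionLength; i++)
            parity ^= samples[i] & 1;
        return parity;
    }
}
```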

2.5.3 Phase Coding

The phase coding method works by substituting the phase of an initial audio segment with a reference phase that represents the data. The phase of subsequent segments is adjusted in order to preserve the relative phase between segments. Phase coding, when it can be used, is one of the most effective coding methods in terms of the signal-to-perceived-noise ratio. When the phase relation between each frequency component is dramatically changed, noticeable phase dispersion will occur.

Parity Coding Procedure

However, as long as the modification of the phase is sufficiently small (sufficiently small depends on the observer; professionals in broadcast radio can detect modifications that are unperceivable to an average observer), an inaudible coding can be achieved. Phase coding relies on the fact that the phase components of sound are not as perceptible to the human ear as noise is. Rather than introducing perturbations, the technique encodes the message bits as phase shifts in the phase spectrum of a digital signal, achieving an inaudible encoding in terms of signal-to-perceived noise ratio.

The signals before and after Phase coding procedure

2.5.4 Spread Spectrum

In a normal communication channel, it is often desirable to concentrate the information in as narrow a region of the frequency spectrum as possible in order to conserve available bandwidth and to reduce power. The basic spread spectrum technique, on the other hand, is designed to encode a stream of information by spreading the encoded data across as much of the frequency spectrum as possible. This allows signal reception even if there is interference on some frequencies. While there are many variations on spread spectrum communication, we concentrated on Direct Sequence Spread Spectrum (DSSS) encoding. The DSSS method spreads the signal by multiplying it by a chip, a maximal-length pseudorandom sequence modulated at a known rate. Since the host signals are in discrete-time format, we can use the sampling rate as the chip rate for coding. The result is that the most difficult problem in DSSS receiving, that of establishing the correct start and end of the chip quanta for phase locking purposes, is taken care of by the discrete nature of the signal. Consequently, a much higher chip rate, and therefore a higher associated data rate, is possible. Without this, a variety of signal-locking algorithms may be used, but these are computationally expensive.

Spread spectrum encoding
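A minimal sketch of generic DSSS embedding and correlation-based extraction is shown below. The gain, chip sequence and block length are illustrative assumptions, not values from the existing system; in practice the cover signal acts as noise against which the correlation must win.

```csharp
using System;

// Minimal DSSS sketch: each message bit (mapped to +1/-1) is spread over one
// block of samples by a known +1/-1 chip sequence and added at low amplitude.
// The receiver correlates each block with the same chips and takes the sign.
static class SpreadSpectrum
{
    public static void Embed(double[] cover, int[] bits, int[] chips, double gain)
    {
        for (int b = 0; b < bits.Length; b++)
        {
            int sign = bits[b] == 1 ? 1 : -1;
            for (int c = 0; c < chips.Length; c++)
                cover[b * chips.Length + c] += gain * sign * chips[c];
        }
    }

    public static int[] Extract(double[] received, int bitCount, int[] chips)
    {
        var bits = new int[bitCount];
        for (int b = 0; b < bitCount; b++)
        {
            double correlation = 0;
            for (int c = 0; c < chips.Length; c++)
                correlation += received[b * chips.Length + c] * chips[c];
            bits[b] = correlation >= 0 ? 1 : 0;     // sign of the correlation decides the bit
        }
        return bits;
    }
}
```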

2.5.5 Echo Hiding

In echo hiding, information is embedded in a sound file by introducing an echo into the discrete signal. Like the spread spectrum method, it too provides advantages in that it allows for a high data transmission rate and provides superior robustness when compared to the noise inducing methods. If only one echo was produced from the original signal, only one bit of information could be encoded. Therefore, the original signal is broken down into blocks before the encoding process begins. Once the encoding process is completed, the blocks are concatenated back together to create the final signal.

Echo hiding

A message can also be encoded using musical tones with a substitution scheme. For example, an F tone can represent a 0 and a C tone a 1. A normal musical piece can then be composed around the secret message, or an existing piece can be selected together with an encoding scheme that will represent the message.

Chapter 3

Requirement specifications

3.1 Functional Requirements of the Project

The system should implement the MPEG standard such that flicker is removed and video is displayed separately for each selected camera.

The system should be able to read the geographical data of ground locations (frames captured from onboard cameras) encoded in audio stream.

The system should be able to generate separate compressed videos for all selected cameras using the MPEG standard, so that they can be viewed later without flicker using any open-source media player.

The system should be able to tag the geographical info to its corresponding locations (Frames)

The system should be able to retrieve locations (frames) based on geographical coordinates. (optional)

The system should be able to provide compatible data for further use with applications like Google Maps etc.

3.2 Non Functional Requirements of the Project

Security

The system should be secure in the sense that the information should be received by the intended user only.

Reliability

The system should be reliable in the sense that it should provide the users with the required functionality round the clock.

Maintainability

The system will be made maintainable so that, in case of errors or user complaints, it can be changed to satisfy new needs or to correct the errors.

Reusability

The system will be made reusable by making the application open source.

Chapter 4

System Design

4.1 Design of Proposed System

Keeping in view the requirement specifications of the system, the following components are considered necessary to design and integrate:

4.1.1 Main Interface

The Main Interface would be the basic component of the system that receives the AV file containing the audio and video data sent from the UAV. It would carry out the necessary normalization of the file, if required, and generate the normalized video for further use by the underlying components.

4.1.2 Video Display

This is the component that provides a smooth video at the display by removing the flicker and routing the frames to the respective display areas for the different cameras. It would also provide the means of extracting any frame required by the user and saving it to a desired location for further use.

4.1.3 Video Generator

This component would generate separate videos for respective cameras, overcoming the existing MPEG implementation weaknesses. The generated videos of selected cameras and their respective frames from the incoming video would be saved as separate video files at desired locations.

4.1.4 Video Compressor

This component would provide the means to compress the generated videos using existing MPEG standards so that the videos can later be viewed using any open-source media player.

4.1.5 Audio Data Reader

This component reads the data hidden in the audio signal using the existing media player and stores it at a desired location for further use. This requires either decoding the audio information using the encoding scheme or reading the data directly from the memory of the existing media player.

4.1.6 Frame Tagger

The component that, when required by the user, provides the means to save a particular image of a location, tagged with its GPS coordinates, to a desired location.

4.2 Proposed System Architecture

4.3 Use Case Diagram for BURRAQ

4.3.1 Use case UC1: AV Signal

Primary Actor: UAV

Preconditions: Connection with the Ground Station

Success Guarantee (Post conditions): AV Signal is sent to the Ground Station

Main success scenario / Basic flow:

Ground Station gets the real time data in the form of AV Signal.

Ground Station stores the data in its archives.

Extensions/ Alternative flow: N/A

Special requirements: Video should be noise free and Camera should be calibrated.

Technology and Data Variations List: Read video from camera if required.

4.3.2 Use case UC2: MPEG Standard

Primary Actor: Ground Station

Preconditions: AV Signal is available. Video is displayed with an inherent flicker.

Success Guarantee (Post conditions): Outputs flicker-free display of video

Main success scenario / Basic flow:

Ground Station uses the customized MPEG standard.

Input to the MPEG Standard is the AV Signal obtained from the UAV.

MPEG Standard removes the flickers from the video.

MPEG Standard smoothes the displayed video.

If required, the MPEG Standard stores the flicker-free video in the database.

Extensions/ Alternative flow: N/A

Special requirements: No special requirement

Technology and Data Variations List: No technology and data variations.

4.3.3 Use case UC3: GUI for Geographical Tagging

Primary Actor: Ground Station

Preconditions: AV Signal is available.

Success Guarantee (Post conditions): Extracts the geographical data (longitude, latitude, etc.) from the audio in the AV Signal along the path traversed by BURRAQ.

Main success scenario / Basic flow:

Input to the interface is the AV signal.

The interface extracts the geographical data from the audio.

The system returns the output in the form of geographical data and stores the output to the database if needed.

Extensions/ Alternative flow: N/A

Special requirements: No special requirement

Technology and Data Variations List: No technology and data variations

4.3.4 Use case UC4: Extended Interface

Primary Actor: Ground Station

Preconditions: Smoothened video is available in the database. Geographical coordinates are available in database. Frames are available.

Success Guarantee (Post conditions): The interface tags the geographical coordinates onto the corresponding frames.

Main success scenario / Basic flow:

The ground station uses the extended GUI module.

Inputs to this module are maps and geographical coordinates which are obtained from the AV Signal by the Interface for geographical tagging.

Module then tags the map according to the coordinates.

Then it stores the output in the databases if needed.

Extensions/ Alternative flow: N/A

Special requirements: No special requirement

Technology and Data Variations List: No technology and data variations.

4.4 Sequence Diagrams

4.4.1 Sequence Diagram of UC1: AV Signal

4.4.2 Sequence Diagram of UC2: MPEG Standard

4.4.3 Sequence Diagram of UC3: GUI for Geographical tagging

4.4.4 Sequence Diagram of UC4: Extended Interface

Chapter 5

Implementation

The designed system is implemented using the following methodologies, discussed separately for each component:

5.1 Main Interface

As discussed in the design, the main interface is required to receive the AV file sent down from the UAV and normalize it if required. The implementation of this component is therefore divided into two parts:

5.1.1 Capture Video

The basic requirement of the main interface is to capture the video into our system so that individual frames can be accessed later on when required. This feature is implemented using Microsoft® DirectShow® Editing Services (DES). It is an application programming interface (API) that greatly simplifies the tasks involved in video editing. DES is built on top of the core DirectShow architecture. It abstracts much of the complexity of DirectShow, and provides a set of interfaces designed specifically for manipulating video editing projects. As an application developer, you get the benefits of DirectShow inside a framework much better suited for creating video editing applications.

Using the DirectShowLib public interface IMediaDet, we have created our own class that uses the functions of this API to capture the incoming video for further editing and use. Information from the incoming video file header is extracted to calculate the frame rate and length of the media. This information is then used to calculate the total number of frames in the video file, making the input video file accessible at each frame separately.

GetImage(image number) is the implemented function that provides an interface to access each frame of the video separately. Our implemented class is compiled as a DLL file, providing all functionalities through an external interface.

Hence after the implementation of this component any incoming video can be captured and accessed at individual frame level.
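The frame-indexing arithmetic described above can be sketched as follows. The frame rate and stream length are assumed to have already been read from the file header via the DirectShow IMediaDet interface; the actual interface calls are omitted here, and the class below is illustrative rather than the project's DLL.

```csharp
using System;

// Sketch of the frame-indexing logic: total frame count from the header
// values, and a mapping from a frame index to the timestamp at which that
// frame would be grabbed from the stream.
class VideoIndexer
{
    private readonly double frameRate;        // frames per second, from the header
    private readonly double lengthSeconds;    // stream length in seconds, from the header

    public VideoIndexer(double frameRate, double lengthSeconds)
    {
        this.frameRate = frameRate;
        this.lengthSeconds = lengthSeconds;
    }

    // Total number of addressable frames in the file.
    public int FrameCount => (int)Math.Floor(frameRate * lengthSeconds);

    // GetImage(imageNumber) ultimately resolves a frame index to a timestamp,
    // which is what a per-frame grab from the stream requires.
    public double TimeForFrame(int imageNumber)
    {
        if (imageNumber < 0 || imageNumber >= FrameCount)
            throw new ArgumentOutOfRangeException(nameof(imageNumber));
        return imageNumber / frameRate;
    }
}
```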

5.1.2 Normalize Video

As discussed earlier, the video input received at our main interface contains frames captured by a number of cameras, bound together as a single video stream by an onboard video processor. Since we know the placement of frames from the different cameras in the video, the video can be broken up every second to retrieve the frames of each camera separately. For this, however, we need a starting point, i.e. the first frame to start counting from, and for this purpose we have implemented the normalization function. It captures the first group of six frames, which as per the design of the onboard MPEG standard will always contain frames from both the main and an auxiliary camera. For each image it calculates the RGB color values of every pixel and saves them in a separate data structure. It then calculates the per-pixel difference in value for each pair of captured images.

This calculated per-pixel difference is then summed and the average difference over the complete image is computed, giving a numeric value for the difference between each pair of images. This difference is used to decide whether or not two images belong to the same camera; based on experimental results, a threshold value of frame difference was introduced to make this decision. From the first subset of six frames we can thus select the exact starting image of the sequence, discarding any incomplete group of fewer than six images at the start.
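A sketch of this comparison is given below, using System.Drawing; the threshold value is an illustrative placeholder, not the experimentally derived value used in the project.

```csharp
using System;
using System.Drawing;

// Sketch of the normalization check: average per-pixel RGB difference between
// two frames, compared against a threshold to decide whether they come from
// the same camera. Frames are assumed to have identical dimensions.
static class FrameComparer
{
    public static double AverageDifference(Bitmap a, Bitmap b)
    {
        double total = 0;
        for (int y = 0; y < a.Height; y++)
        {
            for (int x = 0; x < a.Width; x++)
            {
                Color ca = a.GetPixel(x, y);
                Color cb = b.GetPixel(x, y);
                total += Math.Abs(ca.R - cb.R) + Math.Abs(ca.G - cb.G) + Math.Abs(ca.B - cb.B);
            }
        }
        return total / (a.Width * a.Height);    // mean difference per pixel
    }

    // Placeholder threshold: frames below it are treated as same-camera frames.
    public static bool SameCamera(Bitmap a, Bitmap b, double threshold = 40.0)
        => AverageDifference(a, b) < threshold;
}
```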

5.2 Video Display

The video display is required to split the incoming video into its respective camera frames and display them separately; the cameras to be displayed are selected by the viewer. This requirement is implemented through the PLAYFRAMES() function, which uses the captured and normalized video and accesses it frame by frame. Keeping in view the known organization of the frames in the video, a display algorithm was developed that displays the frames on different display panels according to their respective cameras. Since, for every set of 30 frames per second, the main camera has 25 frames and the auxiliary cameras have one frame each, each auxiliary frame is displayed 25 times to keep the frame rate of the video constant.
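The routing can be sketched as follows. ShowFrame is a hypothetical stand-in for the actual display panels, and the slot pattern assumes the every-sixth-frame layout described earlier; it is not the project's PLAYFRAMES() source.

```csharp
using System;

// Sketch of per-second frame routing: main-camera frames go to the main panel,
// while each auxiliary frame is repeated on its own panel so that all panels
// advance at the same nominal rate.
static class FrameRouter
{
    public static void RouteSecond(Action<string, int> showFrame, int firstFrameIndex)
    {
        for (int slot = 0; slot < 30; slot++)
        {
            int frame = firstFrameIndex + slot;
            if ((slot + 1) % 6 == 0)
            {
                int subCam = (slot + 1) / 6;                  // Sub Cam 1..5
                for (int repeat = 0; repeat < 25; repeat++)   // hold the aux frame on its panel
                    showFrame("Sub Cam " + subCam, frame);
            }
            else
            {
                showFrame("Main Camera", frame);
            }
        }
    }
}
```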

5.3 Video Generator

This component is required to overcome the limitations of the existing MPEG standard, which generates the video sequence from frames captured by different cameras. It creates an instance of our video capture class and then accesses the video frame by frame. Depending on the user's selection, frames from the selected cameras are added separately into video streams. These video streams are created using the Microsoft Audio Video Interleave file support library, Avifil32.dll, a 32/64-bit dynamic link library of code components for graphics/UI-style applications. This library is used in our customized VideoStream class. The CreateStream() function of this class creates a separate stream for each camera's video. Depending on the selection algorithm, frames from each camera are added to their respective streams using the AddFrame() function of our video stream class, creating AVI files that each contain frames from a single camera. These AVI files are sequences of uncompressed images, each comprising the frames captured by one camera. Here again the mismatch in frame counts is covered by adding each auxiliary-camera frame 25 times. This component therefore generates multiple AVI streams of the same length, one per camera, with none of the flicker caused by the onboard MPEG standard.

5.4 Video Compressor

This component is required to compress the videos created by the video generator. It takes the generated AVI files as input and compresses them into compressed video streams. It is also implemented using the Windows API Avifil32.dll, which is used to create our new AVI file class implementing functions to create compressed streams.

Since the onboard MPEG standard generates a compressed stream of frames captured by different onboard cameras, it has an inherent overhead of keeping more key frames. For every 30 frames in a second, the first five frames of the main camera are compressed, then a frame is received from one of the auxiliary cameras and must be saved as a key frame, and the next packet of five main-camera frames again requires an additional key frame. The existing MPEG standard therefore carries an overhead of saving 11 additional key frames every second, as well as a disturbing flicker due to the placement of auxiliary frames between packets of main frames. Our algorithm overcomes both problems: it removes the overhead of the 11 additional key frames, since all frames in a generated stream come from the same camera and each second requires only one key frame, and it removes the flicker, since each generated compressed video stream comprises frames from a single selected camera.
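As a rough check on these figures, the following counting sketch (assuming the packet structure described above, i.e. packets of five main-camera frames separated by single auxiliary frames) reproduces the stated overhead.

```csharp
// Back-of-the-envelope key-frame count per second of interleaved video: each
// auxiliary frame is a key frame, and the main-camera packet that follows it
// restarts with another key frame, whereas a single-camera stream needs only
// one key frame per second.
static class KeyFrameCount
{
    public static int InterleavedKeyFramesPerSecond(int packetsPerSecond)
        => 2 * packetsPerSecond;   // one aux key frame + one restart key frame per packet

    // InterleavedKeyFramesPerSecond(6) == 12, i.e. 11 more key frames per second
    // than the single key frame needed once each camera has its own stream.
}
```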

5.5 Audio Data Reader

This component is implemented using the second approach discussed in the design, where the data hidden in the audio is read through the existing media player. The memory layout and location of the required data were found through detailed analysis and study of the existing system using disassemblers and decompilers. This process involved studying around fifteen thousand lines of assembly code generated by disassembling the source-less existing media player. The study was carried out using a number of tools for memory tracing and tracking and for generating assemblies from the machine code of an executing application. The memory layout and virtual addressing of the existing media player were studied, the locations and addresses of the desired data were found and traced, and from the results of this study an algorithm to retrieve the data was formulated.

The existing player is embedded into our application, and during its execution the respective memory locations are accessed to obtain the desired data. This memory access is carried out using a dynamic code injection technique, in which a predefined instruction set is injected into a running process for execution. Using a code snippet for reading process memory, the existing media player is executed and the desired information is retrieved and written to a desired location.
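The general mechanism, reading another process's memory from C# via the Win32 ReadProcessMemory API, can be sketched as follows. The process name, address and length are placeholders, not the real values located during the analysis, and appropriate privileges are required to open the target process.

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

// Sketch of reading a running process's memory via the Win32 API.
static class PlayerMemoryReader
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress,
        byte[] lpBuffer, int dwSize, out IntPtr lpNumberOfBytesRead);

    public static byte[] Read(string processName, IntPtr address, int length)
    {
        // Assumes the target player is already running; throws if it is not found.
        Process target = Process.GetProcessesByName(processName)[0];

        var buffer = new byte[length];
        if (!ReadProcessMemory(target.Handle, address, buffer, length, out _))
            throw new InvalidOperationException("ReadProcessMemory failed.");
        return buffer;
    }
}
```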

5.6 Frame Tagger

This component is required to save a selected frame, tagged with its geographical data, at a desired location. It is also implemented through our video capture functions; the image of any location in the video can be retrieved from the display and written to a file through this component.

Chapter 6

Results and Analysis

6.1 MPEG Standard

Existing Result

It provides a video with inherent flicker due to customized implementation of MPEG.

Achieved Result

The new implementation of video generating algorithm and MPEG standard removes the flicker and generates smooth video.

6.2 Video display

Existing Result

It has a single display panel that shows the video from all cameras in the same stream.

Achieved Result

Now we have a separate display panel for each camera (main and auxiliaries), showing a smooth and flicker-free video.

6.3 Audio data reader

Existing Result

It provides geographical data in read only format preventing any further use of the data.

Achieved Result

It reads the encoded data in a form that can not only be saved but also used further by automated applications such as Google Maps.

6.4 Compression

Existing Result

At present the compression algorithm comes with an inherent overhead of saving additional key frames for every second of video.

Achieved Result

Now we have overcome this additional overhead by generating separate videos of the frames captured by each camera.

6.5 Video Generator

Existing Result

It creates a single video stream comprising frames captured by different cameras, hence generating an interlaced video.

Achieved Result

Now it generates separate video streams comprising frames from separate cameras, thus removing the interlacing limitation.

6.6 Frame Extractor

Existing Result

It provides no means for extracting individual frames from the video thus making it a virtually dumb module.

Achieved Result

Now it can retrieve and save any frame from the video at display.

Chapter 7

Conclusions and Future Work


In this chapter we compare the existing system with the one we have built. The existing BURRAQ system has constraints that limit its working and its usage in other applications. We have tried to improve the existing system so that it works better and more efficiently and can also be used in future applications.

As already discussed, the existing system has some limitations which hindered its smooth and efficient working. The currently deployed system has the following limitations:

The audio signal containing the Geographical data of the underlying video is readable only to the 'UAV Media Player' and remains hidden for any other module to work with.

The GUI of the Ground Control System provides the geographical data only in 'read only mode' preventing any further use of the data.

Existing GUI is virtually dumb providing no usable output for further use in area mapping and marking.

The MPEG standard provides a flickering video making it almost useless for any tactical analysis and planning.

We have endeavored to enhance the existing system so that it works better and have tried to eliminate the limitations discussed above. We have also integrated new functionalities into the system, which make it superior to the existing version in many respects and allow it to be used in other applications as well. Our improved system has the following new functionalities:

A GUI is provided to the existing module with an aim of providing the readable data in usable form for further mapping applications.

A new implementation of the MPEG standard is developed to reduce or remove the flickering effects in the video stream, hence providing a smooth video stream for tactical and planning analysis.

The developed interface can be utilized to use the Geographical data to map the video over digital mapping applications like Google Maps.

The extracted data can be used to create maps of specified areas for further tactical/strategic use.

The above discussed implementations will improve the efficiency of the existing system in future applications.
