Handoff In Mobile Video Streaming Computer Science Essay


The world has witnessed a large-scale expansion in wireless technologies and multimedia services. There is now a wide range of wireless technologies, including cellular networks, wireless LANs (local area networks) and WiMAX. A user can access the internet on the move through many different access networks; the process of switching between different types of access network is called a "vertical handoff". Video streaming is among the multimedia services most widely used around the globe, but there are several problems in achieving seamless handoffs across heterogeneous access networks. The objective of this project is to investigate the streaming of video from a server to a mobile device operating across different networks. The task is to simulate a heterogeneous network consisting of Wi-Fi and WiMAX technologies, to overcome the problems involved in the handoff, and to find solutions that make video streaming as smooth as possible. The video coding used is the H.264/AVC standard. According to [2], the new IEEE 802.21 standard specifies link-layer intelligence and other related network information to upper layers in order to optimize handovers between networks of different types, such as WiMAX, Wi-Fi and 3GPP. The paper [2] also proposes a novel and very simple approach to determine the expected number of handovers in an ns-2 simulation, and evaluates the reliability and scalability of ns-2 when simulating 802.21 scenarios with multiple nodes. Simulation results will demonstrate the suitability of the chosen methods.

Keywords: WiMAX, H.264/AVC, Vertical handoff, Wireless LAN, Video streaming, IEEE 802.21 standards.

Table of Contents


Table of Contents

List of Figures

List of Tables

Acknowledgements

List of Abbreviations

1 Introduction

1.1 Problem Definition

1.2 Related works

1.3 Structure

1.4 Project Management

2 Background

2.1 Wireless Technologies

2.1.1 IEEE 802.11

2.1.2 WiMAX Technology

2.2 Gilbert-Elliott two-state Model

2.3 H.264/AVC Video Coding standard

2.3.1 Overview

2.3.2 NAL and VCL layers

2.4 Heterogeneous Handover in NS-2

2.4.1 IEEE 802.21 Support

2.4.2 Implementation of Nodes with Multiple Interfaces

2.4.3 Integration of WiMAX

2.4.4 Integration of Wi-Fi

2.4.5 Integration of Information Services, Command Services, and Event Services

2.4.6 Power Boundaries in Wi-Fi and WiMAX cells

3 Requirements and System design

3.1 Network Simulator 2

3.2 Cygwin

4 Implementation and Testing

4.1 EvalVid

4.1.1 Overview of EvalVid

4.1.2 Network Agents added by EvalVid

4.2 Simulation Scenario

5 Conclusion

6 References

7 Appendices

7.1 NS-2 Network Configuration

7.2 Screenshots of the implementation

List of Figures

Figure 1. Gantt Chart

Figure 2. Work Breakdown Structure

Figure 3. Competing Technologies

Figure 4. Architecture of WLAN

Figure 5. The WiMAX network reference model

Figure 6. Two-state Markov channel

Figure 7. Transition Matrix

Figure 8. MIH implementation in ns-2

Figure 9. MIH User class hierarchy

Figure 10. MultiFace Node

Figure 11. "Phantom" nodes

Figure 12. Power Boundaries

Figure 13. Screenshot of Cygwin

Figure 14. The framework for the EvalVid tool-set

Figure 15. Interface between NS-2 and EvalVid

Figure 16. Node Configuration

Appendix Figure 1. Screenshot of Cygwin

Appendix Figure 2. NAM Interface

Appendix Figure 3. Node configuration

Appendix Figure 4. Simulation starts

Appendix Figure 5. Video transmission starts

Appendix Figure 6. Handover takes place

List of Tables

Table 1: Supported and unsupported features of the WiMAX module in NS-2

Table 2: Supported MIH commands and events


Acknowledgements

This dissertation was performed as part of the MSc course in Telecommunications and Information Systems at the University of Essex. First, I would like to thank God, and then my parents, for their support and encouragement throughout the year. This would not have been possible without them.

I would like to express my gratitude to my supervisor, Dr. Martin Fleury, for his involved guidance and the time he dedicated to me and my work. I would also like to thank my assessor, Professor Mohammed Ghanbari, for his help and guidance.

Also, I would like to thank Mr. Salah Al-Majeed for his contribution throughout the course of the dissertation.

Shanmuga Moorthy Arun Rajah

University of Essex

August 2010.

List of Abbreviations

WLAN - Wireless Local Area Network

WiMAX - Worldwide Interoperability for Microwave Access

3GPP - 3rd Generation Partnership Project

IEEE - Institute of Electrical and Electronics Engineers

HO - Handover

UMTS - Universal Mobile Telecommunications System

CDMA - Code Division Multiple Access

WCDMA - Wideband Code Division Multiple Access

NIST - National Institute of Standards and Technology

UDP - User Datagram Protocol

TCP - Transmission Control Protocol

MIH - Media Independent Handover

MPEG - Moving Picture Experts Group

RTP/IP - Real-time Transport Protocol / Internet Protocol


Introduction

The Internet has changed the world over the past two decades with its ability to share anything and everything across the globe. Multimedia has influenced the majority of people who use a mobile phone: we can now watch any video available on the internet using a multimedia mobile phone or a laptop, anywhere. Video streaming has therefore become very popular. Streaming is done by encoding the entire video on a frame-by-frame basis; the video is decoded back at the user end. The use of wireless technologies has enabled users to access the internet anywhere in the world. Many technologies are in use, such as WLAN, known as IEEE 802.11, and Worldwide Interoperability for Microwave Access (WiMAX), that is, IEEE 802.16. The IEEE 802.21 standard [3] focuses on successful handover (HO) between different wireless networks such as WiMAX, WLAN and UMTS (Universal Mobile Telecommunications System).

Handovers are classified into two types: (i) soft handovers and (ii) hard handovers. A soft handover is one in which the mobile node has connections with two base stations simultaneously. This type usually uses a lot of network resources; it is used by the CDMA and WCDMA standards [1]. A hard handover is one in which there is only one connection at a time: all the old radio links to a base station are removed before the new links are established. Hard handovers are performed when a change in the carrier frequency is required.

Problem Definition

Nowadays wireless users demand more bandwidth and mobility in order to get seamless and better services. The only solution is to use whichever wireless networks are in range [2]. A handover occurs when a phone call in progress is transferred from its present cell to a new cell. This usually happens when the mobile device making the call finds that it is losing signal coverage while in movement, so it needs to "jump" to another antenna. Horizontal handovers are easier to implement because they typically take place within the same technology domain. Vertical handovers, on the other hand, are always executed between different technologies, and the signalling is more complex.

Related works

Numerous research studies have made use of the new IEEE 802.21 modules released by NIST. In [15], the performance of an adaptive channel scanning algorithm is evaluated using the previous version of the WiMAX module. In [16], the handover latencies obtained when UDP and TCP carry MIH (Media Independent Handover) signalling messages are compared, and the design tradeoffs are presented. In [17], the performance of the vertical handoff between 802.11 and 802.16 is evaluated with respect to signalling cost, handoff delay and QoS support. An extension to current network selection algorithms that takes into account security parameters and policies is proposed in [18], and the handover performance with and without the proposed extension is compared. Reference [19] evaluates a proposed cross-layer mechanism for making intelligent handover decisions in FMIPv6, in terms of handover latency, packet loss and handover signalling overhead. In [20], a new enhanced Media Independent Handover Framework (eMIHF) is evaluated, which allows efficient provisioning and activation of QoS resources in the target radio access technology during the handover preparation phase. New security extensions for IEEE 802.21 have also been implemented using ns-2 with the NIST module. Reference [21] compares different authentication techniques, such as reauthentication and preauthentication, which may be used to reduce the time and resources required to perform a vertical handover. In another study [22], the performance of the authentication process in media independent handovers is measured, and the impact of using IEEE 802.21 link triggers to achieve seamless mobility is considered.


Structure

The structure of the remaining report is as follows. Section 2 gives the background of the project, including wireless technologies, the Gilbert-Elliott two-state model [33] and heterogeneous handover using the IEEE 802.21 standard [2]. Section 3 presents the software tools, Cygwin and Network Simulator 2. Section 4 shows how the EvalVid tool-set is used.

Project Management

Figure 1. Gantt Chart

Figure 2. Work Breakdown Structure


Background

Wireless Technologies

The world's first wireless conversation took place in 1880, when Alexander Graham Bell and Charles Sumner Tainter invented the photophone, a telephone that conducted audio conversations wirelessly over modulated light beams [1]. The use of wireless technologies has increased exponentially over the past two decades.

Figure 3. Competing Technologies


IEEE 802.11

IEEE 802.11 is commonly known as Wi-Fi or WLAN. It is a set of standards for wireless local area network (WLAN) communication in the 2.4, 3.6 and 5 GHz frequency bands [1]. The medium access controller (MAC) uses contention access: all subscriber stations that want to transfer data through a wireless access point (AP) compete for the AP's attention on a random basis. As a result, subscriber stations distant from the AP can be repeatedly interrupted by closer stations, greatly reducing their throughput. This makes it difficult to maintain services such as Voice over IP (VoIP) or IPTV, which depend on an essentially constant Quality of Service (QoS) and data rate, for more than a few simultaneous users [5].

Figure 4. Architecture of WLAN


The architecture of a WLAN [1] consists of stations, access points (APs), the Basic Service Set (BSS), the Extended Service Set (ESS) and the Distribution System (DS). All components that connect to the wireless medium in a network are called stations. Access points are usually the base stations of the wireless network. A BSS is a set of stations that communicate with each other; there are two types, the independent BSS and the infrastructure BSS. The access points in an ESS are connected by the Distribution System (DS).

WiMAX Technology

WiMAX (IEEE 802.16) is a telecommunications protocol that provides fixed and fully mobile internet access [1]. The present WiMAX version provides up to 40 Mbit/s, and the IEEE 802.16m update is expected to offer up to 1 Gbit/s for fixed access. It is built on the Orthogonal Frequency Division Multiplexing (OFDM) transmission technique, which is known for its efficient usage of radio resources [4]. The WiMAX Forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL" [5]. There are two types [6]: fixed WiMAX and mobile WiMAX. Fixed WiMAX (IEEE 802.16d) uses a high-gain, low-portability, uni-directional antenna at the user's end and provides limited wireless broadband access. It supports the following sub-channel modes: single high-gain carrier, OFDM 256-FFT (Fast Fourier Transform) and OFDMA 1K-FFT [6]. Mobile WiMAX (IEEE 802.16e) takes wireless broadband access to a much larger coverage area through the use of low-gain, high-portability, omni-directional antennas at the user's end. Mobile WiMAX supports single-carrier, OFDM 256-FFT, OFDMA 1K-FFT, OFDMA 2K-FFT, 512-FFT and 128-FFT sub-channel modes [6].

Figure 5. The WiMAX network reference model


The WiMAX standard defines the PHY and MAC layers [4]. The WiMAX architecture is based on standardized IP protocols compatible with the IP Multimedia Subsystem (IMS). The reference model consists of three components: the MS (Mobile Station), the ASN (Access Service Network) and the CSN (Connectivity Service Network). These three components are interconnected by reference points R1 to R5 [4]. In this model, the Mobile Station (MS) is generic equipment that provides connectivity between the WiMAX BS and the subscriber device. The Access Service Network (ASN) provides the radio access connection to WiMAX subscribers. It can also be deployed as a Network Access Provider (NAP) by interconnecting several ASNs through the reference point R4. A NAP provides radio access infrastructure to several Network Service Providers (NSPs).

MAC Layer

The MAC uses a scheduling algorithm in which the subscriber station needs to compete only once, for initial entry into the network; it is then allocated an access slot by the base station. The time slot can enlarge and contract, but remains assigned to the subscriber station, which means that other subscribers cannot use it. The scheduling algorithm is stable under overload and over-subscription (unlike 802.11), and it can also be more bandwidth-efficient. The scheduling algorithm also allows the base station to control QoS parameters by balancing the time-slot assignments among the application needs of the subscriber stations [5].
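The kind of round-robin uplink allocation described above (and listed among the features of the NS-2 WiMAX module later) can be sketched as follows. This is an illustrative model, not the actual module code; the station names and the frame size of 10 slots are invented for the example.

```python
# Illustrative sketch (not the ns-2 module code): a base station granting
# uplink slots round-robin, bounded by each station's bandwidth request.

def round_robin_allocate(requests, frame_slots):
    """Allocate `frame_slots` uplink slots among subscriber stations.

    requests: dict mapping station id -> number of slots requested.
    Returns a dict mapping station id -> slots granted this frame.
    """
    grants = {ss: 0 for ss in requests}
    pending = {ss: n for ss, n in requests.items() if n > 0}
    remaining = frame_slots
    # Cycle through stations one slot at a time until the frame is full
    # or every request is satisfied.
    while remaining > 0 and pending:
        for ss in list(pending):
            if remaining == 0:
                break
            grants[ss] += 1
            pending[ss] -= 1
            remaining -= 1
            if pending[ss] == 0:
                del pending[ss]
    return grants

# Example: three stations competing for a 10-slot frame.
print(round_robin_allocate({"MS1": 6, "MS2": 2, "MS3": 5}, 10))
# -> {'MS1': 4, 'MS2': 2, 'MS3': 4}
```

Note how the station with the small request (MS2) is fully served while the larger requests share the remainder evenly, which is the fairness property round-robin provides.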


Physical layer

The original WiMAX standard (IEEE 802.16) specified WiMAX for the 10 to 66 GHz range. 802.16a, updated in 2004 to 802.16-2004 (also known as 802.16d), added specifications for the 2 to 11 GHz range [5]. IEEE 802.16d, known as "fixed WiMAX", was updated to 802.16e in 2005 (known as "mobile WiMAX"), which uses scalable orthogonal frequency-division multiplexing (OFDM), as compared to the fixed 256-sub-carrier OFDM used in fixed WiMAX. This version has many advantages in coverage, power consumption, self-installation, frequency re-use and bandwidth efficiency. The 802.16d and .16e standards are the more interesting, since the lower frequencies suffer less from inherent signal attenuation and therefore give improved range and in-building penetration [5].

Gilbert-Elliott two-state Model

The Gilbert-Elliott model is widely applied for channel modelling. It is a two-state Markov chain that simulates the channel error characteristics: it has two states, one good and one bad, and for each of them a bit error probability is defined. In the bad state, as one would expect, the bit error rate rises significantly. To use this model, the probabilities of transition between the two states must be defined.

Figure 6. Two state Markov channel

The figure above shows a two-state Markov channel with a Good state and a Bad state. For the channel model, the bit error rates and the 2x2 stochastic transition matrix must be specified, for example as the matrix below.

Figure 7. Transition Matrix

The transition matrix is completely determined by Pgg and Pbb, the probabilities that, given that the current state is good or bad, the next state will again be good or bad, respectively. The mean state sojourn times are given by Tg = 1/(1-Pgg) and Tb = 1/(1-Pbb).
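The model above can be simulated directly. The sketch below is a minimal Gilbert-Elliott channel; the transition probabilities and per-state bit error rates are illustrative values chosen for the example, not parameters taken from this report.

```python
import random

# Minimal Gilbert-Elliott channel sketch. Pgg, Pbb and the per-state bit
# error rates below are assumed example values.

def simulate_channel(n_bits, p_gg=0.99, p_bb=0.90,
                     ber_good=1e-4, ber_bad=1e-1, seed=1):
    random.seed(seed)
    state = "G"
    errors = 0
    for _ in range(n_bits):
        # Draw a bit error according to the current state's BER.
        ber = ber_good if state == "G" else ber_bad
        if random.random() < ber:
            errors += 1
        # State transition per the matrix [[Pgg, 1-Pgg], [1-Pbb, Pbb]].
        if state == "G":
            state = "G" if random.random() < p_gg else "B"
        else:
            state = "B" if random.random() < p_bb else "G"
    return errors / n_bits

# Mean sojourn times follow directly from the matrix:
Tg = 1 / (1 - 0.99)   # about 100 bits spent in the good state on average
Tb = 1 / (1 - 0.90)   # about 10 bits spent in the bad state on average
print(Tg, Tb, simulate_channel(100_000))
```

With these values the chain spends roughly 1/11 of its time in the bad state, so the long-run bit error rate sits between the two per-state rates, closer to the good one.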

H.264/AVC Video Coding standard

H.264/MPEG-4 AVC is a block-oriented, motion-compensation-based codec standard developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) [1]; the partnership is known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard (formally, ISO/IEC 14496-10, MPEG-4 Part 10, Advanced Video Coding) are jointly maintained so that they have identical technical content. H.264/AVC is the latest video encoding standard [7].


The main aim of H.264/AVC was to create a standard capable of providing good video quality at lower bit rates than the previously available standards, without a design so complex that it would be impractical to implement. It should also provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems: low and high resolution video, low and high bit rates, broadcast, RTP/IP packet networks, DVD storage, and ITU-T multimedia telephony systems [1]. Later extensions of the standard added five new profiles intended primarily for professional applications, extended-gamut colour space support, aspect ratio indicators, and two additional types of "supplemental enhancement information" (post-filter hint and tone mapping), and deprecated one of the prior FRExt profiles [1]. Two further features added to the standard were Scalable Video Coding (SVC) and Multiview Video Coding (MVC).

In Annex G of H.264/AVC [1], SVC allows the construction of bitstreams that contain sub-bitstreams that also conform to the standard, including one sub-bitstream known as the "base layer" that can be decoded by an H.264/AVC decoder that does not support SVC. For temporal bitstream scalability, i.e. the presence of a sub-bitstream with a smaller temporal sampling rate than the full bitstream, complete access units are removed from the bitstream when deriving the sub-bitstream [1]. In this case, the high-level syntax and inter-prediction reference pictures in the bitstream are constructed accordingly. For spatial and quality bitstream scalability, i.e. the presence of a sub-bitstream with lower spatial resolution or quality than the full bitstream, NAL (Network Abstraction Layer) units are removed from the bitstream when deriving the sub-bitstream [1]. In this case, inter-layer prediction, i.e. the prediction of the higher spatial resolution or quality signal from data of the lower spatial resolution or quality signal, is typically used for efficient coding. MVC enables the construction of bitstreams that represent more than one view of a video scene; the best example of this functionality is stereoscopic 3D video coding. Two profiles were developed in the MVC work: one supporting an arbitrary number of views and one designed specifically for two-view stereoscopic video. The Multiview Video Coding extensions were completed in November 2009 [1].

NAL and VCL layers

The H.264/AVC standard makes a distinction between the VCL and NAL layers. The VCL carries the output of the encoding process and, before transmission, it is mapped into NAL units. The NAL unit is the basic unit of organization in the NAL layer and contains an RBSP (Raw Byte Sequence Payload), "corresponding to video data or header information" [8]. This distinction was made to separate the coding features of a video from the transmission ones, providing good adaptability to every service. According to the standard's overview [9], the NAL layer "is designed in order to provide 'network friendliness' to enable simple and effective customization of the use of the VCL for a broad variety of systems." It specifies the header information and the format of the data "in a manner appropriate for conveyance by the transport layers or storage media" [10].

The NAL unit is the basic structural element of the NAL layer. Each NAL unit contains a packet consisting of an integer number of bytes. The first byte serves as the NAL header, and the remaining bytes contain the payload data of the type indicated by the header. Interleaving of data is done for error protection. In byte-stream systems specifically, each NAL unit is prefixed with three more bytes signalling the start and end of each unit.
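The single header byte mentioned above has a fixed layout in H.264/AVC: one forbidden_zero_bit, two nal_ref_idc bits and five nal_unit_type bits. A small sketch of decoding it:

```python
# Decoding the one-byte NAL unit header defined by H.264/AVC:
# 1 bit forbidden_zero_bit | 2 bits nal_ref_idc | 5 bits nal_unit_type.

def parse_nal_header(byte):
    return {
        "forbidden_zero_bit": (byte >> 7) & 0x1,  # must be 0 in a valid stream
        "nal_ref_idc": (byte >> 5) & 0x3,         # 0 means not used for reference
        "nal_unit_type": byte & 0x1F,             # e.g. 5 = IDR slice, 7 = SPS
    }

# 0x65 = 0b0110_0101: a reference picture (ref_idc 3) carrying an IDR slice (type 5).
print(parse_nal_header(0x65))
```

This is why a receiver can triage packets (e.g. prioritize parameter sets and IDR slices) without decoding the payload itself.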

The Video Coding Layer in H.264/AVC follows a block-based "hybrid of temporal and spatial prediction, in conjunction with transform coding" [9], like other video coding standards. The basic philosophy remains the same, despite many small improvements made in this layer. Each picture is represented and processed using block-shaped units called macroblocks. The macroblocks are associated with the luminance (luma) component Y and the two colour (chroma) components, Cb and Cr [11].

In H.264/AVC, the 4:2:0 sampling format is used for these components, because the human eye is more sensitive to the luma component than to the chroma ones. Hence the number of chroma samples is equal to one quarter of the number of luma samples. For each macroblock, according to Iain Richardson [9], "the source frame is represented by 256 luminance samples (arranged in four 8x8-sample blocks), 64 blue chrominance samples (one 8x8 block) and 64 red chrominance samples (8x8), giving a total of six 8x8 blocks."
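The arithmetic in the quote above follows directly from the 16x16 macroblock size and the 2x2 chroma subsampling:

```python
# The 4:2:0 sample counts for one 16x16 macroblock.
luma = 16 * 16                        # 256 luminance samples
chroma_each = (16 // 2) * (16 // 2)   # each chroma plane subsampled 2x2 -> 64
blocks_8x8 = luma // 64 + 2 * (chroma_each // 64)  # four luma + one Cb + one Cr
print(luma, chroma_each, blocks_8x8)  # -> 256 64 6
```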

A slice is a group of sequential macroblocks that can be processed and decoded independently of any similar group, given the necessary information from a parameter set. There are different types of slices in the H.264/AVC coding standard. Each type has its own characteristics and can be used in a different position in the encoded stream to provide special properties when needed. The attribute that distinguishes the slice types is whether intra-frame or inter-frame coding is used: in intra coding only the spatial correlation of the picture is exploited, while in inter coding temporal correlation is used, so former or even following pictures can be referenced. The H.264/AVC slice coding types are given by Maria Salvat Perarnau in [11]:

I or "Intra" slices: these are coded using intra prediction. They do not reference any previous slice of the video sequence and only contain references to themselves. The first frame of a sequence has to be intra coded.

P or "Predicted" slices are coded using inter prediction. This prediction creates a prediction model from one or more previously encoded video frames. The model is formed by shifting samples in the reference frame(s).

B or "Bi-predicted" slices are coded using inter prediction with two motion-compensated prediction signals per prediction block, combined using a weighted average.

SP or "Switching P" slices permit efficient switching between two different bit streams coded at different bitrates, without the large number of bits required by I slices. They are only supported by the extended profile.

SI or "Switching I" slices are encoded using only intra prediction and allow an exact match with SP slices for random access or error recovery. They are only supported by the extended profile.

A coded picture that consists entirely of I or SI slices is called an Instantaneous Decoder Refresh (IDR) picture. Its most important characteristic is that, after the decoder receives it, all the reference buffers are initialized, and the following pictures cannot use any temporal information from frames prior to the IDR in the decoding process. Another feature implemented in H.264/AVC allows higher flexibility in the way slices are constructed: Flexible Macroblock Ordering (FMO). It allows macroblocks that are not contiguous to form a slice, using the concepts of slice groups and macroblock-to-slice-group maps. The picture is divided into areas determined by a macroblock-to-slice-group mapping, and each area is called a slice group. A slice group can then be partitioned into slices by following the usual procedure. The data of the picture to be encoded thus ends up interleaved across the encoded slices, which increases the robustness and error resilience of the coded video.
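One common macroblock-to-slice-group map is a checkerboard ("dispersed") pattern with two groups, sketched below for illustration. Because neighbouring macroblocks belong to different groups, losing the slice of one group still leaves every lost macroblock surrounded by received neighbours from the other group, which helps concealment.

```python
# Illustrative FMO map: a two-group checkerboard over the macroblock grid.
# 0 and 1 are the slice-group ids of each macroblock.

def checkerboard_map(mb_cols, mb_rows):
    return [[(r + c) % 2 for c in range(mb_cols)] for r in range(mb_rows)]

for row in checkerboard_map(6, 4):
    print(row)
# First two rows printed:
# [0, 1, 0, 1, 0, 1]
# [1, 0, 1, 0, 1, 0]
```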

Heterogeneous Handover in NS-2

IEEE 802.21 is developing standards to enable handover and interoperability between heterogeneous network types, including both 802 and non-802 networks [23]. NS-2 has only very limited support for vertical handover, provided through the NIST add-on modules, for which NIST added and changed numerous files in NS-2. Firstly, a new IEEE 802.21 add-on module was developed, based on draft 3 of IEEE 802.21 [24]. A new IEEE 802.16 add-on module [25] was also developed, based on the IEEE 802.16-2004 standard [26] and the mobility extension 802.16e-2005 [27]; both were updated in 2007. A new Neighbour Discovery add-on module [28] for IPv6 was developed with limited functionality, and an update of the existing IEEE 802.11 MAC implementation was also introduced [29].

IEEE 802.21 Support

According to [2], the 802.21 add-on module contains an implementation of the Media Independent Handover Function (MIHF) based on draft 3 of the IEEE 802.21 specification.


Figure 8. MIH implementation in ns-2


The MIHF and MIH Users are implemented as Agents, a class defined in ns-2 and extended by NIST, which allows communication with both the MAC and MIH Users. The MIHF thus provides the mapping between the media-independent interface service access point and the media-dependent interfaces [2]. Through this mapping, the MIHF can send layer 3 packets to a remote MIHF, and an MIH User can register with the MIHF to receive events from local and remote interfaces. The MIHF is also responsible for obtaining the list and status of local interfaces and controlling their behaviour.

Figure 9. MIH User class hierarchy


MIH Users make use of the functionality provided by the MIHF in order to optimize the handover process: they send commands to the MIHF and receive events and messages. The Interface Management class (IFMNGMT) provides flow management functions that help find the flows that need to be redirected [2]. The IFMNGMT also receives events from the Neighbour Discovery (ND) agent when a new prefix is detected or when one expires [2]. The MIPV6 Agent adds the redirection capability to the MIH User: whenever a flow needs to be redirected, a message can be sent to the source node informing it of the new address. The Handover class computes a new address after a successful handover takes place.

Implementation of Nodes with Multiple interfaces

Nodes with multiple interfaces are not natively supported in ns-2, because the routing algorithms differ. NIST therefore created the concept of the multiFace node, a node that links to other nodes [2]. The multiFace node can be viewed as a "supernode", with the other nodes acting as its interfaces. The multiFace node receives the events triggered by the interface nodes, and the MIH Users on the multiFace node can register to receive these events. In order to detect layer 3 movement, each of the interface nodes additionally runs an instance of the Neighbour Discovery (ND) agent.

Figure 10. MultiFace Node


Integration of WiMAX

Table 1: Supported and unsupported features of WiMAX module in NS-2


Available features:

- Wireless MAN-OFDM physical layer with configurable modulation
- Time Division Duplexing (TDD)
- Management messages to execute network entry (without authentication)
- Default scheduler providing round-robin uplink allocation to registered Mobile Stations (MSs) according to the bandwidth requested
- IEEE 802.16e extensions to support scanning and handovers
- Fragmentation and reassembly of frames

Features not implemented:

- Wireless MAN-OFDMA
- Frequency Division Duplexing (FDD)
- Automatic Repeat Request (ARQ)
- Service flow and QoS scheduling
- Periodic ranging and power adjustments
- Error correction

A WiMAX scenario has to include a "phantom" node because, when using WiMAX cells, there must be at least one mobile node within the range of the cell at the beginning of the simulation for proper functionality.

Figure 11. "Phantom" nodes


Integration of Wi-Fi

For the Wi-Fi module, the following features were changed [29]:

- Modifications have been made to the backoff and defer times; backoff is triggered only if there is no beacon to be sent
- Beacon messages have been added to the implementation
- Association request and response frames have been added
- Support for multiple access points and scanning has been added

Integration of Information Services, Command Services, and Event Services

Table 2 : Supported MIH commands and events

MIH commands:

- Link event subscribe
- Link event unsubscribe
- Link configure threshold
- Link get parameters
- MIH get status
- MIH link scan

MIH events:

- Link up
- Link down
- Link going down
- Link detected
- Link event rollback
- Link parameters report
- Link handover imminent
- Link handover complete

Power Boundaries in Wi-Fi and WiMAX cells

Three variables define the power boundaries used in the simulation [2].

CSThresh: the minimum power level to sense a packet and switch the MAC from idle to busy.

RXThresh: the minimum power level to receive a packet without error.

Pr_limit: a value always set equal to or greater than 1. The higher the value, the sooner the link event will be triggered.

Figure 12. Power Boundaries
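How these three thresholds could drive link events can be sketched as follows. The variable names mirror the report, but the decision logic and the numeric values are assumptions made for illustration, not the NIST module's exact code.

```python
# Hedged sketch: classifying the link state from the received power and the
# three boundaries described above. Thresholds here are arbitrary units.

def link_state(rx_power, cs_thresh, rx_thresh, pr_limit=1.1):
    if rx_power < cs_thresh:
        return "LINK_DOWN"        # too weak even to sense the packet
    if rx_power < rx_thresh:
        return "IN_CELL_NO_RX"    # sensed, but cannot receive without error
    if rx_power < pr_limit * rx_thresh:
        return "LINK_GOING_DOWN"  # near the boundary: trigger handover early
    return "LINK_UP"

print(link_state(2.00, 0.5, 1.0))  # -> LINK_UP
print(link_state(1.05, 0.5, 1.0))  # -> LINK_GOING_DOWN (since 1.05 < 1.1)
```

Raising pr_limit widens the "going down" band, which matches the report's remark that a higher value triggers the event sooner.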


Requirements and System design

The exact scenario of the project cannot be realized as a physical testbed, as that would require a large investment of time and money and a complex design. By using simulators we can instead predict the behaviour of a network without the need for an actual network. The simulation is conducted in Network Simulator 2 on the Windows operating system using the Cygwin software, which provides a Unix-like environment and command-line interface for Microsoft Windows.

Network Simulator 2

NS-2 is a popular discrete event network simulator, used among other things for the simulation of routing and multicast protocols. It is built in C++ and has a simulation interface through OTcl, and it is free, open-source software. The simulator supports a class hierarchy in C++ and a similar class hierarchy within the OTcl interpreter; the two hierarchies are closely related, with a one-to-one correspondence between a class in the interpreted hierarchy and one in the compiled hierarchy. The class TclObject is the root of the hierarchy. Users create new simulator objects through the interpreter; these objects are instantiated within the interpreter and closely mirrored by a corresponding object in the compiled hierarchy. The interpreted class hierarchy is automatically established through methods defined in the class TclClass, and user-instantiated objects are mirrored through methods defined in the class TclObject. There are other hierarchies in the C++ code and OTcl scripts that are not mirrored in the manner of TclObject [13].

The Network Simulator NS-2 was chosen for simulating the entire network, including the application, transport, network, data link and physical layers. The main advantage of NS-2 over other network simulators such as OPNET is that it is open-source software, so the user can manipulate every single detail of each protocol and design. Even though OPNET provides equivalent functionality, it uses internal libraries that are not easily understood even by C programmers, because comprehensive knowledge of the simulator itself is needed. The main disadvantage of NS-2 in comparison with OPNET is that it does not provide a user-friendly environment, whereas OPNET provides a good graphical user interface with easy options to work with. When it comes to plotting, however, OPNET can only produce system-defined graphs, while in NS-2 the graphs can be defined by the user.


Cygwin is a set of powerful tools to assist developers in migrating applications from UNIX/Linux to the Microsoft Windows platform [30]. It provides a Unix-like environment and command-line interface for Microsoft Windows [1], delivering the open source Red Hat GNU gcc compiler and gdb debugger on Windows, along with a standard UNIX/Linux development environment including APIs and command shells. The Cygwin DLL implements a substantial subset of the UNIX SVR4, BSD, and POSIX APIs, easing the porting of UNIX/Linux applications to the Windows platform.

Figure 13. Screenshot of Cygwin

Implementation and Testing


EvalVid is a framework and toolkit for the unified assessment of video transmission quality [31]. Its modular structure makes it possible to exchange both the underlying transmission system and the codecs at the user's discretion [31]. It is applicable to any kind of coding scheme, and can be used in real experimental set-ups as well as in simulation experiments. The tools are implemented in pure ISO C for maximum portability [31]. EvalVid interacts with the network solely through two trace files, which makes its integration into almost any environment straightforward.

Overview of EvalVid

The main components of EvalVid are the source, the video encoder, Video Sender (VS), Evaluate Trace (ET), Fix Video (FV) and the video decoder. The source is a raw video file in YUV QCIF or YUV CIF format. The Video Sender reads the compressed video file produced by the video encoder, fragments each large video frame into smaller segments, and transmits these segments via UDP packets over the simulated network [32]. The ET component creates a frame/packet loss and frame/packet jitter report and generates a reconstructed video file corresponding to the (possibly corrupted) video found at the receiver side, as it would be reproduced to an end user. This reconstruction is based on the original encoded video file, the video trace file, the sender trace file, and the receiver trace file [32]. The task of Fix Video is to insert the last successfully decoded frame in place of each lost frame, a simple form of error concealment.
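The fragmentation step performed by the Video Sender can be illustrated with a short sketch. This is not EvalVid code; the function name and the 1000-byte maximum UDP payload are illustrative assumptions, not values taken from the tool-set.

```python
import math

def fragment_frame(frame_size, max_payload=1000):
    """Split an encoded video frame into UDP payload segments.

    frame_size  -- size of the encoded frame in bytes
    max_payload -- assumed maximum UDP payload per packet (bytes)

    Returns the list of segment sizes; EvalVid's Video Sender
    performs the equivalent fragmentation before transmission.
    """
    n_segments = math.ceil(frame_size / max_payload)
    sizes = [max_payload] * (n_segments - 1)
    # The final segment carries whatever bytes remain.
    sizes.append(frame_size - max_payload * (n_segments - 1))
    return sizes

# Example: a 2500-byte I-frame becomes three UDP segments.
print(fragment_frame(2500))  # [1000, 1000, 500]
```

The segment sizes always sum back to the original frame size, so the receiver-side trace can be matched segment by segment against the sender trace.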

Figure 14. The framework for the EvalVid tool-set


Network Agents added by EvalVid

Three simulation agents are added: MyTrafficTrace, MyUDP, and MyUDPSink. The MyTrafficTrace agent extracts the frame type and frame size of the video trace file from the output of the Video Sender; it also fragments the video frames into smaller segments and passes them to the lower UDP layer at the user-defined time. The MyUDP agent allows the user to specify the output file name of the sender trace file and records the timestamp, packet id, and payload size of each transmitted packet. MyUDPSink is the receiving agent for the fragmented video frame packets sent by MyUDP [31].

Figure 15. Interface between NS-2 and EvalVid


Simulation Scenario

To achieve our goals, the network topology consists of one WiMAX base station and one Wi-Fi base station. Both base stations are connected to router1, which is in turn connected to router0. Router0 acts as the video server.

Figure 16. Node Configuration


The IEEE 802.21 standard uses a significant amount of signalling to enable seamless handovers between heterogeneous networks. It also optimizes Layer 3 handover and provides continuous QoS across the networks. Implementation is straightforward, as only a thin software client is required on terminals, and both client-controlled and network-controlled handovers are supported. Using the EvalVid framework, the performance of the simulated network can be evaluated easily: delay, jitter and loss can all be calculated with this tool-set. EvalVid is continuously being extended to support further codecs as well as voice over IP (VoIP), and even synchronised audio-video streaming is being implemented as an extension.
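As a rough illustration of the kind of evaluation the ET component performs, the following sketch derives per-packet delay, average jitter and packet loss from simplified sender and receiver timestamps. The function name and the trace representation (packet id to timestamp mappings) are assumptions made for illustration, not EvalVid's actual trace file format.

```python
def delay_jitter_loss(send_times, recv_times):
    """Compute per-packet delay, average jitter and loss rate.

    send_times / recv_times map packet id -> timestamp in seconds.
    A packet present in send_times but absent from recv_times is
    counted as lost. This mirrors, in simplified form, what the ET
    component derives from the sender and receiver trace files.
    """
    # One-way delay for every packet that actually arrived.
    delays = {pid: recv_times[pid] - send_times[pid]
              for pid in send_times if pid in recv_times}
    ordered = [delays[pid] for pid in sorted(delays)]
    # Jitter taken as the mean absolute difference of consecutive delays.
    jitter = (sum(abs(b - a) for a, b in zip(ordered, ordered[1:]))
              / max(len(ordered) - 1, 1))
    loss = 1.0 - len(delays) / len(send_times)
    return delays, jitter, loss

# Example: packet 2 is lost; packets 1 and 3 arrive with 50 ms
# and 60 ms delay, giving 10 ms jitter and one-third loss.
send = {1: 0.00, 2: 0.02, 3: 0.04}
recv = {1: 0.05, 3: 0.10}
print(delay_jitter_loss(send, recv))
```

Plotting these per-packet delays against packet id is one way to visualise the disruption around a handover in the NS-2 trace output.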