An Overview of Streaming Multicast Techniques

Real-time video distribution is performed through unicast and multicast. Unicast video distribution uses multiple point-to-point connections, while multicast video distribution uses point-to-multipoint transmission. Unicast and multicast are important building blocks of many Internet multimedia applications, such as video conferencing, distance learning and multi-receiver video programs.

Multimedia applications are growing rapidly, and real-time video distribution is an important IP multicast application. However, there is no quality-of-service guarantee for real-time video in current best-effort networks because of dynamic network conditions.

Bandwidth adaptability is the main requirement of real-time video distribution, because network conditions vary dynamically. Flows that do not adapt to the available bandwidth suffer from:

  • Under-utilization or over-utilization of the available bandwidth
  • Unfair bandwidth allocation

The first type of deficiency degrades the video quality, while the second causes unfairness between adaptive and non-adaptive traffic.

1.1. Real-time video adaptation approaches

Video multicast approaches can be classified through two distinct properties (Jiangchuan Liu, Bo Li et al., 2003):

    1. Video rate: existing approaches generally fall into two categories
      • Single rate
      • Multi rate
    2. The place where adaptation is performed, either at
      • End system (end-to-end)
      • Active service (intermediate network nodes)

Existing end-system (end-to-end) approaches are classified according to these properties as follows (Jiangchuan Liu et al., 2003):

  • Single-rate adaptation (single-rate, end-to-end)
  • Simulcast (multi-rate, end-to-end)
  • Layered adaptation (multi-rate, end-to-end)

In single-rate multicast, the sender transmits a video stream to a multicast group and sets the transmission rate according to feedback information from the receivers (Jerome Vi et al., 2004). A feedback implosion problem exists in this approach, which can occur when a large number of receivers attempt to return feedback.

In replicated-stream video multicast, a sender transmits multiple streams of the same content at different sending rates and video qualities. Because the streams replicate each other, this approach introduces the redundant information problem.

In layered multicast, the video stream is divided into a number of layers. One is called the base layer and the others are called enhancement layers. The base layer contains the essential features of the video, and the enhancement layers contain additional information that can be used to refine the video quality.

There are two approaches to layered adaptation: prioritized transmission (Jiangchuan Liu et al., 2001) and receiver-driven adaptation (Steven McCanne and Van Jacobson et al., 1996). In prioritized transmission, the base layer has the highest priority and the enhancement layers have lower priorities.
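As a concrete illustration of cumulative layering, the sketch below computes the reception rate for a given number of subscribed layers. The layer bandwidths are hypothetical example values, not taken from any protocol discussed in this thesis.

```python
# Illustrative sketch of cumulative layered video rates.
# The layer bandwidths below are hypothetical example values,
# not parameters of any protocol discussed in this thesis.

BASE_LAYER_KBPS = 128                          # base layer: minimum usable quality
ENHANCEMENT_LAYERS_KBPS = [64, 64, 128, 256]   # optional refinement layers

def cumulative_rate(num_layers: int) -> int:
    """Total reception rate (Kbps) when the base layer plus
    (num_layers - 1) enhancement layers are subscribed."""
    if num_layers < 1:
        return 0
    extra = sum(ENHANCEMENT_LAYERS_KBPS[:num_layers - 1])
    return BASE_LAYER_KBPS + extra

if __name__ == "__main__":
    for n in range(1, len(ENHANCEMENT_LAYERS_KBPS) + 2):
        print(f"{n} layer(s) -> {cumulative_rate(n)} Kbps")
```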

1.2. Problem statement

The main focus of this research activity is to analyze different adaptive real-time video multicast techniques and to compare them in different network conditions.

According to the initial survey, the following issues have been identified in existing schemes for real-time video distribution:

  • Rate adaptation: receivers in the same multicast session may have different processing capabilities, because the Internet is a heterogeneous network.
  • Network Congestion
  • Fair Bandwidth Allocation

A lot of research has been carried out to evaluate the performance of PLM individually (Puangpronpitag et al., 2008), but there is no known study that evaluates these protocols in comparison with each other. Therefore, we evaluate these protocols comparatively, using evaluation criteria such as TCP-friendliness, responsiveness, throughput, packet loss, end-to-end delay and jitter.

1.3. Thesis Goals

The main goal of this research activity is to evaluate, theoretically and practically, adaptive real-time video multicast techniques. This research activity will identify the characteristics of real-time video multicast protocols and will identify the best adaptive technique for multicast streaming. This work will try to provide some guidelines to enhance existing protocols and may provide suggestions to overcome their limitations.

1.4. Research Methodology

In order to accomplish the goals stated in this thesis, a thorough literature survey of the existing work regarding the individual performance evaluation of the Adaptive Smooth Multicast Protocol (ASMP) and Packet-pair Receiver-Driven Cumulative Layered Multicast (PLM) has been carried out, and these mechanisms have been implemented in the NS2 simulator. The aim of the literature study is to give a deep understanding of the ASMP and PLM mechanisms and to discuss their limitations. This research has been limited to software-based simulations, performed on Network Simulator 2 (NS2). NS2 is used to generate the network traffic with different configurations.

1.5. Research contribution

The main focus of this research activity is to provide a performance evaluation of the Adaptive Smooth Multicast Protocol (ASMP) in comparison with the Packet-pair Receiver-Driven Cumulative Layered Multicast protocol (PLM). This research activity shall enable us to determine the best technique for adaptive real-time video streaming multicast under varying network conditions. This research activity has also identified the limitations and capabilities of both protocols.

1.6. Thesis Organization

The rest of the thesis is organized as follows: Chapter 2 provides a survey of adaptive real-time video multicast techniques, Chapter 3 discusses multicast congestion control protocols, Chapter 4 describes the simulation setup and results, and Chapter 5 provides the conclusion and future work.

Chapter 2

SURVEY OF ADAPTIVE REAL-TIME VIDEO MULTICAST TECHNIQUES

Over the Internet, some applications such as HTTP, FTP and SMTP are TCP based, while others such as video streaming, video conferencing and VoIP are non-TCP based. When congestion occurs during a TCP transmission, TCP reduces its transmission rate: since congestion causes packets to be dropped, TCP adapts its sending rate according to an additive-increase/multiplicative-decrease (AIMD) algorithm. Non-TCP traffic, in contrast, continues to be transmitted at its original rate because it runs over UDP. If TCP and non-TCP traffic is sent over the same link, this leads to an unfairness problem in case of congestion. The unfairness problem can be overcome by applying a mechanism to the non-TCP traffic that behaves like the TCP congestion control mechanism.
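A minimal sketch of the AIMD behaviour described above is given below. The additive step and the halving factor are conventional TCP-like choices used here only for illustration; real TCP operates on a congestion window rather than directly on a sending rate.

```python
# Minimal AIMD rate-controller sketch (TCP-like behaviour for
# illustration only; real TCP adapts a congestion window, not a rate).

class AIMDController:
    def __init__(self, rate_kbps: float, increase_kbps: float = 10.0,
                 decrease_factor: float = 0.5):
        self.rate = rate_kbps
        self.increase = increase_kbps           # additive increase per interval
        self.decrease_factor = decrease_factor  # multiplicative decrease

    def on_interval(self, congestion_detected: bool) -> float:
        if congestion_detected:
            self.rate *= self.decrease_factor   # back off sharply on congestion
        else:
            self.rate += self.increase          # otherwise probe for more bandwidth
        return self.rate

if __name__ == "__main__":
    ctrl = AIMDController(rate_kbps=100.0)
    for loss in [False, False, False, True, False, False]:
        print(round(ctrl.on_interval(loss), 1))
```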

2.1. Multicast Congestion Control

Multicast congestion control has two parts: congestion detection and fair adaptation of the transmission rate.

Congestion is detected when packets do not arrive at the receiver side by the required play-out time. Packet losses occur when the network is congested. Rate adaptation and rate control are used to determine the sending rate of the video traffic according to the estimated available bandwidth.

2.1.1. Rate control

Rate control is a mechanism that adjusts the transmission rate of a video stream according to the estimated network bandwidth. Rate control can be classified into three categories (Dapeng Wu et al., 2001):

  • Source based rate control
  • Receiver based rate control
  • Hybrid rate control

In source-based rate control, data is transmitted at a single rate to all heterogeneous receivers; the sender plays an active role and adapts the transmission rate according to feedback information from the receivers. In receiver-based rate control, the receiver is responsible for adapting the video reception rate by subscribing to and unsubscribing from various layers. In hybrid rate control, both sender and receiver regulate the rate of the video stream: the sender adapts the transmission rate on the basis of the receivers' feedback, while the receiver adapts its reception rate by adding or dropping layers. Fig. 1 illustrates the classification of rate control mechanisms.

Congestion control can be handled using two approaches: end-to-end congestion control and router-based congestion control.

2.1.2. End-to-End versus Router-Supported

Congestion control schemes that do not utilize additional router mechanisms are called end-to-end congestion control schemes. End-to-end congestion control helps to reduce packet loss and delay. These schemes are designed for best-effort IP networks and can be further divided into sender-based and receiver-based approaches.

Approaches that utilize additional router mechanisms are called router-supported congestion control mechanisms. The additional network functionalities are:

  • Modification of the router's queuing strategies
  • Feedback aggregation
  • Hierarchical round-trip time measurements
  • Management of groups of receivers

2.1.3. TCP-friendly congestion control

For non-TCP traffic, mechanisms are defined that enable it to perform the same rate adaptation as TCP traffic; these rules and mechanisms make the non-TCP traffic TCP-friendly. The congestion control mechanism is used to make sure that the bandwidth is fairly shared among the applications when the network is overloaded.

With multicast, data is transmitted from one sender to multiple receivers, which makes it an efficient way to transmit data. There are two categories of multicast congestion control:

  • Single rate
  • Multi-rate

2.1.3.1. Single-rate multicast congestion control protocols

In single-rate multicast congestion control protocols, the sender controls the transmission rate and all receivers receive the data at the same rate. The Loss Tolerant Rate Controller (LTRC) (T. Montgomery, 1997) and Scalable TCP-like Congestion Control for Reliable Multicast (MTCP) (Injong Rhee et al., 1999) are single-rate protocols.

2.1.3.2. Multi-rate multicast congestion control protocols

In multi-rate multicast congestion control protocols, the data stream is transmitted in the form of layers. Receivers join layers, organized as multicast groups, and receive data according to their estimated available bandwidth. Layered Video Multicast with Retransmission (LVMR) (X. Li et al., 1998) and the Multicast enhanced Loss-Delay based Adaptation algorithm (MLDA) (D. Sisalem and A. Wolisz, 2000) are multi-rate protocols.

Single-rate and multi-rate protocols can be further classified into two categories:

  • Rate based approaches
  • Window-based Approaches

In rate-based approaches, the sending rate is adapted dynamically when congestion occurs, and the rate is calculated using a TCP analytical model.

Rate control mechanisms adopt two approaches (Dapeng Wu et al., 2001): a probe-based approach and a model-based approach. In the probe-based approach, the sender probes for the available network bandwidth, so it is based on probing experiments. As defined in (Dapeng Wu et al., 2001), there are two ways to adjust the sending rate:

  • Additive increase and multiplicative decrease
  • Multiplicative increase and multiplicative decrease

In window-based congestion control schemes, congestion windows are maintained at both the sender side and the receiver side, working much like a TCP congestion window: if congestion occurs, the window size decreases; otherwise it increases. The Random Listening Algorithm (RLA) (Wang and Schwartz, 1998) and Multicast TCP (MTCP) (Injong Rhee et al., 1999) are window-based protocols.

2.2. Desirable Characteristics of Rate-based Multicast Congestion Control Protocols

Some desirable characteristics of a multi-rate congestion control protocol are mentioned below.

2.2.1. Network utilization

A good protocol should be able to quickly detect the available bandwidth and add more layers, and drop layers when bandwidth is no longer available.

2.2.2. Responsiveness

A good protocol should be responsive: when the congestion detection algorithm detects congestion, the protocol should reduce its rate very quickly to avoid further packet loss.

2.2.3. Packet loss

When the network becomes congested, packet losses occur; these degrade the video quality and waste bandwidth.

2.2.4. Fairness

There are two dimensions of fairness

  • Intra-protocol fairness
  • Inter-protocol fairness

2.2.4.1. Intra-protocol fairness

In intra-protocol fairness, all the receivers behave fairly with each otherwithin a multicast session.

2.2.4.2. Inter-protocol fairness

In inter-protocol fairness, bandwidth is distributed fairly between different sessions.

2.2.5. Scalability

Scalability problems arise mainly from the feedback control messages that are sent back from the receivers to the sender.

2.2.6. Fast convergence

A protocol with fast convergence is able to converge its transmission rate quickly and achieve the optimal rate under heterogeneous network conditions.

2.3. Single-rate Multicast Congestion Control Protocols

In single-rate multicast congestion control protocols, all receivers receive the data at the same rate and the source controls the transmission rate. All receivers estimate the bandwidth according to their algorithms and send their estimated rates back to the sender. The sender sets the transmission rate close to the lowest rate it has received from the receivers. In this scheme, the source generates the same data at the same rate for all receivers.

2.3.1. TCP-Friendly Multicast Congestion Control (TFMCC)

TCP-Friendly Multicast Congestion Control (TFMCC) (J. Widmer, M. Handley, 2006) is a source-based, single-rate multicast congestion control protocol. It is based on the unicast TCP-Friendly Rate Control (TFRC) mechanism (M. Handley, S. Floyd and J. Padhye, 2003). The congestion control algorithm runs at the receiver side, and each receiver continuously calculates its receiving rate. The rate is calculated through the analytical model of TCP, and selected receivers send feedback packets containing the desired rate back to the sender. The working of this protocol is outlined below.

Each receiver measures the following values

  • RTT (round trip time)
  • Loss event rate

  • The TCP-friendly sending rate according to the analytical model of TCP (J. Padhye et al., 1998):

X = s / ( R * sqrt(2p/3) + t_RTO * (3 * sqrt(3p/8)) * p * (1 + 32p^2) )        (Eq 2.1)

Where

X = transmit rate in bits/second

s = packet size in bytes

R = round-trip time in seconds

p = loss event rate

t_RTO = TCP retransmission timeout (commonly approximated as 4R)

  • Each receiver sends its calculated rate to the sender in a receiver feedback report. The protocol uses a distributed feedback suppression algorithm to avoid feedback implosion at the sender side; this algorithm selects only a subset of receivers that are allowed to send feedback reports.
  • From these receivers, the sender selects the one reporting the lowest rate; this receiver is called the Current Limiting Receiver (CLR).
  • Rate increases and decreases depend mainly on the CLR report: if the CLR's calculated rate is less than the current sending rate, the sender reduces its sending rate; otherwise it increases it.

This allows TFMCC to provide scalability and responsiveness under a wide range of network conditions.
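For illustration, the TCP-friendly rate of Eq. 2.1 can be computed as in the following sketch. The approximation t_RTO ≈ 4·RTT is an assumption borrowed from common TFRC-style practice, not something specified above.

```python
import math

def tcp_friendly_rate(s_bytes: float, rtt_s: float, loss_event_rate: float) -> float:
    """TCP-friendly sending rate in bits/second following the TCP
    throughput model of Eq. 2.1 (t_RTO approximated as 4*RTT, an
    assumption common in TFRC-style implementations)."""
    if loss_event_rate <= 0:
        return float("inf")      # no loss observed: the model places no bound
    p = loss_event_rate
    t_rto = 4.0 * rtt_s
    denom = rtt_s * math.sqrt(2.0 * p / 3.0) \
            + t_rto * 3.0 * math.sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p)
    return 8.0 * s_bytes / denom

if __name__ == "__main__":
    # 500-byte packets, 100 ms RTT, 1% loss event rate
    print(f"{tcp_friendly_rate(500, 0.1, 0.01) / 1000:.1f} kbit/s")
```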

2.3.2 Explicit Rate Multicast Congestion Control (ERMCC)

Explicit Rate Multicast Congestion Control (ERMCC) is a single-rate multicast congestion control scheme (Jiang Li et al., 2006). It is based on TRAC (Throughput Rate At Congestion). In this scheme, the source selects a Congestion Representative (CR), which is the slowest receiver. Only the feedback of the selected CR is considered at the sender side, and the transmission rate is set according to the capacity of this receiver.

When a receiver detects congestion, it calculates the average TRAC and updates it using the exponentially weighted moving average (EWMA) technique. Receivers whose average TRAC is lower than the CR's average TRAC send feedback, so ERMCC performs feedback suppression very efficiently. ERMCC addresses well-known problems such as TCP-friendliness, slowest-receiver tracking, drop-to-zero and feedback suppression.
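A minimal sketch of the EWMA update mentioned above is shown below; the smoothing weight ALPHA is a hypothetical example value, not one specified by the ERMCC authors.

```python
# EWMA update sketch for the average TRAC value kept by a receiver.
# The smoothing weight ALPHA is a hypothetical example value.

ALPHA = 0.125   # weight given to the newest sample

def ewma_update(avg_trac: float, new_trac_sample: float) -> float:
    """Blend the previous average with the latest TRAC measurement."""
    return (1.0 - ALPHA) * avg_trac + ALPHA * new_trac_sample

if __name__ == "__main__":
    avg = 400.0   # Kbps
    for sample in [380.0, 350.0, 360.0, 300.0]:
        avg = ewma_update(avg, sample)
        print(round(avg, 1))
```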

2.4. Multi-rate Multicast Congestion Control Protocols

Multi-rate, rate-based congestion control schemes are very flexible in adapting the rate to varying network conditions, because they allow receivers to receive the stream at different rates. Multi-rate multicast protocols estimate the available network bandwidth according to their algorithms. In these schemes, the source sends the data over multiple multicast streams at different rates. Each receiver takes its own rate adaptation decision, so the burden of congestion control lies at the receiving side.

2.4.1. Receiver Driven Layered Multicast (RLM)

RLM is a receiver-driven multicast protocol proposed by McCanne et al. (Steven McCanne, Van Jacobson et al., 1996). It uses layered coding and a receiver-driven approach: the sender does not play any active role. It is the first layer-based multicast protocol proposed for video transmission, and it tries to overcome the heterogeneity problem. RLM uses join experiments to adjust the receiving rate according to the network status. When congestion occurs, the receiver drops a layer and decreases its receiving rate; in case spare capacity is available, it adds a layer and increases its reception rate. The join experiment scheme introduces packet losses due to congestion and affects the video quality.

Join experiments also introduce intra-session unfairness. To overcome this problem, synchronized join experiments, coordinated by synchronized control messages, were proposed; however, these messages introduce a scalability problem. Another problem is IGMP (Internet Group Management Protocol) leave latency: to relieve congestion, RLM receivers unsubscribe from layers, and this process can take several seconds, which slows down bandwidth recovery.
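The add-on-spare-capacity / drop-on-loss idea behind RLM's join experiments can be sketched roughly as below. The timer back-off and shared-learning details of the real protocol are omitted, so this is only an illustration.

```python
# Rough sketch of RLM-style receiver behaviour: drop a layer on
# congestion, otherwise periodically attempt a "join experiment".
# Timer back-off and shared learning of the real protocol are omitted.

class RLMReceiverSketch:
    def __init__(self, max_layers: int):
        self.max_layers = max_layers
        self.subscribed = 1          # always keep the base layer

    def on_period(self, loss_observed: bool) -> int:
        if loss_observed and self.subscribed > 1:
            self.subscribed -= 1     # congestion: drop the top layer
        elif not loss_observed and self.subscribed < self.max_layers:
            self.subscribed += 1     # join experiment: try one more layer
        return self.subscribed

if __name__ == "__main__":
    rx = RLMReceiverSketch(max_layers=5)
    for loss in [False, False, True, False, False, False]:
        print(rx.on_period(loss))
```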

2.4.2 Receiver-driven Layered Congestion Control (RLC)

RLC was proposed (S. McCanne, 1996) to address RLM's problems. RLM cannot support reliable multicast applications such as file transfer; RLC solves this problem by introducing FEC encoding for reliable multicast applications. It can therefore be applied not only to multimedia applications but also to reliable multicast applications.

RLC's congestion control mechanism also adapts the receiving rate according to the network status, using:

  • Synchronized Join Experiment
  • Burst Test techniques.

There are a number of schemes for rate adjustment, but RLC uses a Double Increase Multiplicative Decrease (DIMD) scheme: adding a layer doubles the rate, and dropping a layer multiplicatively decreases it. Congestion is detected through packet losses. When congestion is detected, the receiver tries to relieve it by dropping the highest layer. Due to the IGMP leave latency problem, this can take several seconds; the leave latency slows down the unsubscription process and, as a result, congestion can persist.
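Assuming exponentially spaced cumulative layers, the DIMD adjustment can be sketched as follows; the base rate and the layer cap are hypothetical values used only for illustration.

```python
# Sketch of RLC-style exponentially spaced cumulative layers:
# adding one layer roughly doubles the reception rate, dropping
# one layer roughly halves it (Double Increase Multiplicative Decrease).
# The base rate and maximum layer count are hypothetical example values.

BASE_RATE_KBPS = 32.0

def cumulative_rate(layers: int) -> float:
    """Cumulative rate when `layers` layers are subscribed."""
    return BASE_RATE_KBPS * (2 ** (layers - 1)) if layers >= 1 else 0.0

def adjust(layers: int, congestion: bool, max_layers: int = 6) -> int:
    if congestion:
        return max(1, layers - 1)        # multiplicative decrease (halve the rate)
    return min(max_layers, layers + 1)   # double the rate by adding a layer

if __name__ == "__main__":
    layers = 1
    for congested in [False, False, False, True, False]:
        layers = adjust(layers, congested)
        print(layers, cumulative_rate(layers), "Kbps")
```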

2.4.3 Sender Adaptive and Receiver Driven Layered Multicast (SARLM)

Sender adaptive and receiver driven layered multicast (SARLM) for scalable video was proposed in (Qian Zh et al., 2005). It addresses drawbacks of the receiver-driven layered multicast protocols (RLM, RLC), which do not specify how to adapt the sending rate to varying network conditions. As its name suggests, this protocol combines a sender-adaptive and a receiver-driven approach. The key mechanisms of SARLM are Receiver Based Packet Pair Probes (RBPP) and scalable feedback based on a gamma-distributed random timer.

Instead of relying on a join-experiment technique like RLM, RLC and FLID-DL, SARLM uses the RBPP approach to infer the available bandwidth and avoid congestion. In this scheme, the sender probes the network bandwidth by sending three packet pairs; the packet-pair approach is used for bandwidth estimation. The sender periodically sends a feedback request to the receivers, and all receivers send feedback within a short period of time; a receiver's feedback report contains information about its network conditions. The sender adapts the sending rate after analyzing the feedback messages. In this way, SARLM provides efficient bandwidth utilization and congestion avoidance.

Join policy:

Add a layer if the minimum estimated bandwidth during the period C (e.g. C = 1 second) is greater than the cumulative bandwidth.

Leave policy:

Drop a layer each time the estimated available bandwidth BW is lower than the cumulative bandwidth BWn.

For probing, no control packets are used; only data packets are used. SARLM provides both leave and join synchronization.
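The join and leave policies above can be expressed compactly as in the sketch below. The cumulative layer rates and the way the period-C estimates are aggregated are illustrative assumptions, not values taken from the SARLM paper.

```python
# Sketch of a SARLM-style join/leave decision given a series of bandwidth
# estimates gathered over one period C. Layer rates are hypothetical.

CUMULATIVE_KBPS = [64, 128, 256, 512, 1024]   # cumulative rate per subscription level

def decide(level: int, estimates_kbps: list[float]) -> int:
    """Return the new subscription level.

    Join: add a layer only if the *minimum* estimate over the whole period
    stayed above the next cumulative rate.
    Leave: drop layers while the latest estimate is below the current
    cumulative rate.
    """
    bw_min = min(estimates_kbps)
    bw_now = estimates_kbps[-1]
    if level < len(CUMULATIVE_KBPS) and bw_min > CUMULATIVE_KBPS[level]:
        return level + 1
    while level > 1 and bw_now < CUMULATIVE_KBPS[level - 1]:
        level -= 1
    return level

if __name__ == "__main__":
    print(decide(2, [300, 320, 310]))   # enough headroom -> join (level 3)
    print(decide(3, [200, 180, 150]))   # below 256 Kbps -> leave (level 2)
```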

SARLM assumes the deployment of a fair queuing mechanism in routers and relies on a fair scheduler to ensure fairness, including intra-protocol and inter-protocol fairness. The advantages of SARLM are:

  • It has a faster convergence for rate adaptation
  • It does not require packet losses to estimate the available bandwidth
  • It provides fairness in both dimensions, intra-protocol and inter-protocol fairness
  • It avoids the feedback implosion problem

SARLM overcomes a weakness of PLM, which does not use a noise filtering mechanism and therefore does not accurately estimate the actual bandwidth. SARLM uses a noise filtering algorithm and hence obtains a more accurate bandwidth estimate.

Table 2.1 summarizes the characteristics of multi-rate multicast congestion control protocols.

Protocols | Network support | Sender/receiver driven     | Smoothness of rate | Feedback signaling
RLM       | End-to-end      | Receiver driven            | Layer-dependent    | No
RLC       | End-to-end      | Receiver driven            | Layer-dependent    | No
SARLM     | End-to-end      | Sender and receiver driven | Smooth             | Yes

Table 2.1: Characteristics of multi-rate multicast congestion control protocols

Chapter 3

MULTICAST CONGESTION CONTROL PROTOCOLS

This chapter discusses two multicast congestion control protocols for multimedia data transmission: the Adaptive Smooth Multicast Protocol (ASMP) and Packet-pair receiver-driven cumulative Layered Multicast (PLM).

3.1. Adaptive Smooth Multicast Protocol for Multimedia Data Transmission (ASMP)

ASMP was proposed by Christos Bouras, Apostolos Gkamas and Georgios Kioumourtzis (2008). It is a single-rate multicast transport protocol used for multimedia data transmission, and it runs on top of UDP/RTP/RTCP. In ASMP, sender and receivers share current information about network conditions through RTCP sender and receiver reports. In sender-driven congestion control protocols, the sender adjusts its transmission rate; ASMP is sender driven, so the ASMP sender adjusts its sending rate according to the receivers' feedback reports. These reports contain the receiving rate, which is calculated at each receiver according to the TCP analytical model (J. Padhye, J. Kurose, D. Towsley, R. Koodli, 1999).

T = P / ( t_RTT * sqrt(2l/3) + t * (3 * sqrt(3l/8)) * l * (1 + 32l^2) )        (Eq 3.1)

Where

T = TCP-friendly transmission rate in bytes/second

P = packet size in bytes

l = packet loss rate

t = retransmission timeout

t_RTT = round-trip time (RTT)

Before calculating the new TCP-friendly transmission rate, each receiver measures the packet loss rate, round-trip time, delay jitter and Congestion Indicators (CI), using the early congestion indication algorithm. After calculating the current transmission rate, each receiver sends it to the sender using RTP/RTCP extensions. The sender thus receives the newly calculated receiving rates through the receivers' feedback reports and then adjusts its sending rate, taking into consideration the slowest receivers in the session. The main features of this protocol are:

  • Smooth transmission rates
  • TCP-friendly behavior
  • High bandwidth utilization

An advantage of this protocol is that it does not require any additional support from routers or the underlying IP multicast protocols. A disadvantage is that it cannot react very responsively to varying network conditions, because the gap between two successive RTCP feedback reports is long.
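A rough sketch of the sender-side behaviour described above is given below: the sender collects the TCP-friendly rates reported in RTCP receiver reports and moves its sending rate towards the slowest receivers. Using the minimum reported rate and a fixed smoothing weight are simplifying assumptions, not ASMP's exact rule.

```python
# Simplified sketch of an ASMP-like sender rate update driven by RTCP
# receiver reports. Choosing the minimum reported rate and the smoothing
# weight below are simplifying assumptions, not ASMP's exact algorithm.

SMOOTHING = 0.3   # how strongly the sender moves towards the new target

def update_sending_rate(current_kbps: float, reported_rates_kbps: list[float]) -> float:
    """Move the sending rate towards the slowest receiver's TCP-friendly rate."""
    if not reported_rates_kbps:
        return current_kbps          # no reports in this RTCP interval
    target = min(reported_rates_kbps)
    return (1.0 - SMOOTHING) * current_kbps + SMOOTHING * target

if __name__ == "__main__":
    rate = 600.0
    for reports in ([450.0, 500.0, 380.0], [400.0, 350.0], [500.0, 520.0]):
        rate = update_sending_rate(rate, list(reports))
        print(round(rate, 1))
```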

3.2. Packet-pair receiver-driven cumulative Layered Multicast (PLM)

Packet-pair receiver-driven cumulative Layered Multicast (PLM) was proposed by (A. Legout and E.W. Biersack, 2000) to address some deficiencies of the Receiver-driven Layered Congestion Control (RLC) multicast protocol and improve on it. It is a multi-rate multicast congestion control protocol used for multimedia data transmission such as live audio/video, and it runs on top of UDP/RTP. In receiver-driven congestion control protocols, the receiver is responsible for adapting the video reception rate by subscribing to and unsubscribing from layers; the receiver therefore has the active role while the sender has the passive role. PLM is a receiver-based congestion control mechanism, so the congestion control algorithm is performed at the receiver side, while at the sender side data is transmitted via cumulative layers and the packets of each layer are sent out in pairs.

PLM defines two basic mechanisms:

  • Receiver-side Packet-pair Probe (PP)
  • Fair Queuing (FQ)

The Receiver-side Packet-pair Probe (PP) mechanism is used to estimate the currently available bandwidth, and the Fair Queuing (FQ) mechanism is used at each router. In the Packet-pair Probe (PP) mechanism, bandwidth is measured as:

Bw = Ps / Inter-arrival gap        (Eq 3.2)

Where

Ps = packet size

Inter-arrival gap = interval between the arrivals of the two packets of a pair

The PLM sender sends data in the form of packet pairs (back-to-back packets), and the receivers estimate the currently available bandwidth for the flow from the packet size and the interval between the two packets. Because PLM is a receiver-driven protocol, the receivers are responsible for adding or dropping layers according to the calculated bandwidth. Layer subscription and unsubscription are performed only at regular check-period intervals.
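Eq. 3.2 translates directly into code. The sketch below estimates the bottleneck bandwidth from the arrival times of one packet pair; the sample values in the example are hypothetical.

```python
# Packet-pair bandwidth estimate (Eq. 3.2): bottleneck bandwidth is
# approximately packet size divided by the inter-arrival gap of two
# back-to-back packets. Sample values below are hypothetical.

def packet_pair_bandwidth(packet_size_bytes: int,
                          arrival_first_s: float,
                          arrival_second_s: float) -> float:
    """Estimated bandwidth in bits/second for one packet pair."""
    gap = arrival_second_s - arrival_first_s
    if gap <= 0:
        raise ValueError("second packet must arrive after the first")
    return packet_size_bytes * 8 / gap

if __name__ == "__main__":
    # 500-byte packets arriving 6.7 ms apart -> roughly 600 kbit/s
    bw = packet_pair_bandwidth(500, 10.0000, 10.0067)
    print(f"{bw / 1000:.0f} kbit/s")
```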

Initially, a receiver joins the session and waits for a base-layer packet pair. If one packet of the pair is not received within the specified time period, it is considered that there is not sufficient bandwidth, so the receiver leaves the session. If the first packet of the pair is received at time t within the specified period, the check period is set as:

Tc = t + C        (Eq 3.3)

Where

C = 1 second (the check value)

t = time at which the first packet is received

When Tc expires, the receiver checks whether layers should be subscribed or unsubscribed. Layer subscription and unsubscription are performed through the congestion detection algorithm: layers are added when the estimated network bandwidth is greater than the bandwidth of the subscribed layers; otherwise layers are dropped until the bandwidth of the subscribed layers is less than or equal to the estimated network bandwidth.
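The check-period decision can be sketched as follows. The 20 Kbps per-layer granularity mirrors the simulation setup in Chapter 4, and treating the bandwidth estimate as exact is a simplification.

```python
# Sketch of the PLM check-period decision: when Tc expires, compare the
# bandwidth estimated from packet-pair probes with the cumulative
# bandwidth of the subscribed layers, then add or drop layers so that
# the subscription fits within the estimate.

LAYER_KBPS = 20.0        # cumulative granularity of each layer (as in Chapter 4)

def layers_after_check(estimated_kbps: float, max_layers: int = 30) -> int:
    """Highest number of layers whose cumulative rate fits the estimate
    (the base layer is always kept)."""
    affordable = int(estimated_kbps // LAYER_KBPS)
    return max(1, min(max_layers, affordable))

if __name__ == "__main__":
    print(layers_after_check(600.0))   # spare bandwidth -> 30 layers (cap)
    print(layers_after_check(180.0))   # congestion -> drop to 9 layers
```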

PLM assumes a fair-scheduler network and the deployment of a fair queuing mechanism at routers. It relies on the fair scheduler to ensure fairness, including intra-protocol fairness, inter-protocol fairness and TCP-friendliness. PLM has some advantages over RLM and RLC: RLM and RLC produce losses at join attempts, whereas PLM does not induce any loss to discover the available bandwidth, and it has fast convergence for rate adaptation.

PLM suffers from several problems, which are mentioned below.

The first problem is packet loss. There is no filter mechanism to discard noisy packet-pair measurements, so sometimes the obtained available bandwidth is higher than the real bandwidth. PLM then adds more layers, the bottleneck link may overflow, and packets are dropped.

The second is a network under-utilization problem. Due to the lack of a filtering mechanism, the estimated bandwidth is sometimes less than the actual bandwidth; PLM then subscribes to fewer layers than the available bandwidth allows, and the network remains under-utilized.

The third is the join-collision problem. If two receivers detect available bandwidth at the same time and both of them join, the actual bandwidth may not be enough for both; one or both of them may fail and have to leave the layer(s).

Chapter 4

SIMULATION SETUP AND RESULTS

This chapter describes the simulation setup used for the evaluation of the multicast congestion control protocols and their analysis based on the simulation results.

4.1 Simulation Setup

Table 4.1 lists the simulation parameters in detail.

Parameter              | Value
OS                     | Fedora Core 9 64-bit
CPU                    | Intel(R) Celeron(R) M 1.50 GHz
RAM                    | 1 GB
NS-2 version           | 2.33
PLM implementation     | NS-2 default
ASMP implementation    | asmp V1.1
NsWorkBench            | nsBench v1.0
Data packet size       | 500 bytes
Total simulation time  | 250 seconds

Table 4.1: Simulation setup

For the simulations, ns simulator version 2.33 was used, integrated with the available code of PLM (built-in) and ASMP (asmp V1.1) in NS2. The simulation topology was created in NsWorkBench (nsBench v1.0).

4.2 Simulation Parameters

For the simulation experiments, the two multicast protocols PLM and ASMP were used. TCP and UDP connections were also simulated. For the TCP connections, TCP Reno was used, and the applications on top of the TCP sources were infinite FTP sessions. The maximum size of the TCP congestion window was set to 4000 packets. The applications on top of the UDP sources were CBR sessions. The packet size of all flows (PLM, ASMP, TCP and CBR) was chosen to be 500 bytes.

For PLM, the following default parameters were set during the simulation, according to the values recommended in (A. Legout and E.W. Biersack, 2000). The queuing scheme used was fair queuing with a size of 20 packets for each flow, and the check period was set to 1 second; after each check period, the receiver checks whether to add or drop layers.

Parameter             | Default value
Queuing scheme        | FQ
PP burst size         | 2
PP estimation length  | 3
PLM debug flag        | 2
Blind period          | 0.5 seconds
Check period          | 1 second

Table 4.2: PLM default parameters

For ASMP, the following default parameters were set during the simulation, according to the values recommended in (Christos Bouras, Apostolos Gkamas, 2009). In this experiment, the queuing scheme used was Random Early Detection (RED), and the RTCP report interval was set to 1000 ms.

Parameter        | Default value
Queuing scheme   | RED
Report interval  | 1000 ms

Table 4.3: ASMP default parameters

4.3 Performance Metrics

End users require video quality with minimum cost and maximum availability, but the best-effort Internet does not provide any quality-of-service guarantees for real-time applications. Because of their real-time nature, video streaming applications require high throughput and low packet loss and delay. Both interactive real-time applications (e.g. audio/video conferencing) and non-interactive real-time applications (e.g. audio/video streaming) are sensitive to packet loss, end-to-end delay and jitter. The following metrics are used in this thesis to evaluate the performance of the ASMP and PLM protocols.

4.3.1 Throughput

The number of data packets successfully delivered per unit of time is called throughput. According to (Somnuk Puangpronpitag and Roger Boyle, 2003), the smoothness or oscillation of throughput over time shows the stability of a rate adaptation mechanism. A good protocol should therefore give high, smooth throughput.

4.3.2 Throughput - Jain's Fairness Index

Jain's Fairness Index (R. Jain et al., 1984) is used to quantify the fairness of a congestion control mechanism. Its value ranges from 1/n (worst case) to 1 (best case). The index is defined as follows:

F(x1, x2, ..., xn) = ( x1 + x2 + ... + xn )^2 / ( n * ( x1^2 + x2^2 + ... + xn^2 ) )        (Eq 4.1)

Where

n = number of flows

x1, x2, ..., xn = throughputs of the individual flows
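Eq. 4.1 is easy to verify numerically; the sketch below computes the index for a set of per-flow throughputs (the sample values are hypothetical).

```python
# Jain's fairness index (Eq. 4.1): (sum of throughputs)^2 divided by
# (n * sum of squared throughputs). Ranges from 1/n (worst) to 1 (best).

def jain_fairness_index(throughputs: list[float]) -> float:
    n = len(throughputs)
    total = sum(throughputs)
    total_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * total_sq) if total_sq > 0 else 0.0

if __name__ == "__main__":
    # Hypothetical per-flow throughputs in Kbps for one bottleneck link
    print(round(jain_fairness_index([200.0, 200.0, 200.0]), 2))  # perfectly fair: 1.0
    print(round(jain_fairness_index([500.0, 50.0, 50.0]), 2))    # much lower
```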

4.3.3 Packet Loss Ratio

The Packet Loss Ratio (PLR) is defined as the ratio of the data packets lost to the total number of packets transmitted. The following equation is used to calculate the PLR:

Packet Loss = ( Packet_drop / Packet_tx ) * 100        (Eq 4.2)

Where

Packet_drop = total number of packets dropped

Packet_tx = total number of packets transmitted

4.4 Simulation Results

In the following sections, the simulation results of all scenarios are presented. As mentioned, two protocols, ASMP and PLM, have been investigated. The performance evaluation criteria are:

  • Fairness of each protocol towards TCP traffic
  • Responsiveness of each protocol under changing network conditions
  • Quality of Service Parameters
  • Throughput
  • Packet Loss

The bottleneck link from R1 to R2 has a bandwidth of 600 Kbps and a delay of 8 ms. All exterior links have 10 Mbps bandwidth and a delay of 8 ms. Each simulation was run for 250 seconds. The data rate of the TCP/CBR and multicast flows is 500 Kbps. There is one multicast session (ASMP/PLM) and three TCP/CBR sessions.

4.4.1 Experiment I: TCP-friendliness Test

4.4.1.1 Objectives of Simulation Scenario

The main objective of this scenario was to compare the fairness of ASMP and PLM. In this scenario it was assumed that fair queuing exists at each router; the fairness of PLM was also checked when fair queuing does not exist at the router. The topology used is shown in Figure 4.1 and is shared between three TCP connections and one multicast (PLM or ASMP) session. At the beginning of the simulation, the multicast (PLM, ASMP) session starts. At 30 seconds TCP1 starts transmission and at 140 seconds it is terminated; at 60 seconds TCP2 starts and at 170 seconds it is terminated; at 90 seconds TCP3 starts and at 200 seconds it is terminated. The results show that when TCP1, TCP2 and TCP3 start their transmissions, the multicast flow (PLM, ASMP) decreases its throughput, and when the TCP connections stop their respective transmissions, the multicast flow (PLM, ASMP) increases its throughput. The following graphs show this in more detail.

Figure 4.2 shows the simulation results of PLM (with FQ). The results show that PLM always consumed less bandwidth than TCP; it performs well in terms of fast convergence, fairness with TCP flows and efficient bandwidth utilization. Figure 4.3 shows the simulation results of ASMP. The results show that ASMP also always consumed less bandwidth than TCP, behaving in a TCP-friendly, responsive and efficient manner when sharing bandwidth with the TCP sessions. PLM shows smooth fairness because it uses FQ queuing at the router, whereas ASMP also shows fast convergence and is comparatively fairer than PLM.

In a real environment, it is not possible to implement fair queuing at every router, so PLM may have to be used without fair queuing.

Figure 4.5 shows the results of PLM without fair queuing. The main objective of this graph is to study the TCP-friendliness of PLM when there is no fair queuing at the router. A drop-tail queuing scheme was used at the router; results were also taken with the RED (random early detection) queuing scheme, and both queuing schemes show the same results. The results show that PLM cannot maintain fairness towards TCP in the absence of FQ, because PLM relies on the pp (packet-pair) scheme to detect the available bandwidth and adapts its rate accordingly. PLM without FQ behaves in a less TCP-friendly manner, so PLM is TCP-friendly only when FQ is used, whereas ASMP is not sensitive to the absence of FQ.

4.4.1.2 Throughput - Jain's Fairness Index Measurements

Jain's Fairness Index of ASMP = 0.97

Jain's Fairness Index of PLM = 0.99

In this simulation, the Jain's Fairness Index values of the PLM and ASMP protocols were obtained from Eq. 4.1. The Jain's Fairness Index of PLM is higher than that of ASMP, which shows that the available bandwidth is better distributed between the PLM multicast flow and the TCP traffic.

4.4.2 Experiment II: Responsiveness to Network Conditions

4.4.2.1 Simulation Scenario and Objectives

The main objective of this scenario is to analyze and compare the multicast protocols (ASMP, PLM) with respect to responsiveness and packet loss ratio when the available bandwidth changes.

The network topology used in this scenario is the same as in Figure 4.1. A single multicast (ASMP or PLM) flow was used with three CBR flows (which join and leave at different times) across a bottleneck link (between router R1 and router R2) with 600 Kbps of bandwidth and 8 milliseconds of delay. Each exterior link is set to 10 Mbps of bandwidth and 8 milliseconds of delay.

From the graphs it is observed that PLM is more responsive than ASMP: the rate adaptation scheme of PLM takes only about 1 second to adjust its rate under varying network conditions, whereas ASMP takes approximately 15 seconds. This is because PLM's rate adaptation scheme allows multiple subscriptions and unsubscriptions at every check period, while ASMP does not react quickly to varying network conditions because of the long interval between two consecutive RTCP feedback reports; it takes several seconds to reach its maximum level. So, ASMP is "slower" than PLM in responding to network congestion.

Table 4.4 shows the results for packet loss ratio (PLR) and average throughput. A 3% packet loss in multimedia transmission represents roughly a 30% reduction in video quality (Christos Bouras, 2009). In this scenario, the packet loss ratio of PLM is 0%, which represents no reduction in video quality; this is because PLM receivers use the convergence algorithm to use bandwidth efficiently and avoid congestion, and the convergence algorithm adapts the subscription level. In ASMP the packet loss ratio is 3%, because of the longer feedback interval between RTCP reports. The average throughput of PLM is also greater than that of ASMP. On the basis of these results, it can be stated that the video quality delivered by PLM is better than that of ASMP in this scenario.

Scenario II

Metric           | ASMP     | PLM
PLR              | 3%       | 0%
Avg. throughput  | 205 Kbps | 260 Kbps

Table 4.4: Results of Experiment II

4.4.3 Experiment III: Heterogeneous Network

In this scenario, the topology used is the one shown in Figure 4.1, with heterogeneous multicast receivers. Different multicast receivers have different link capacities: multicast receiver 1 (MR1) at 2 Mbps, multicast receiver 2 (MR2) at 1 Mbps, multicast receiver 3 (MR3) at 512 Kbps, multicast receiver 4 (MR4) at 256 Kbps, multicast receiver 5 (MR5) at 128 Kbps and multicast receiver 6 (MR6) at 64 Kbps. The main purpose is to test the efficiency of the congestion control mechanisms.

This simulation was run for 350 seconds. At the beginning of the simulation, multicast (PLM, ASMP) receiver MR1 joins the session. At 40 seconds MR2 joins, at 80 seconds MR3, at 120 seconds MR4, at 160 seconds MR5 and at 200 seconds MR6 joins the session. TCP connections are run to generate background traffic with different joining times (TCP1 joins at 30 s, TCP2 at 60 s and TCP3 at 90 s) and different leaving times (TCP1 leaves at 140 s, TCP2 at 170 s and TCP3 at 200 s). There was no background traffic in the 200 s to 350 s interval.

In the PLM simulation, at time 0 s the PLM sender starts transmitting and MR1 joins. From 1 s to 29 s, MR1 receives data at its maximum rate, approximately 550-600 Kbps, because the R1-R2 link capacity is 600 Kbps. At 30 seconds, the TCP1 connection starts its transmission; at that time PLM reduces its throughput from about 600 Kbps to 300 Kbps, because PLM has the property of TCP-friendliness. At 40 seconds, MR2 joins the session and achieves maximum throughput (approximately 300 Kbps). After that, TCP2 starts its transmission at 60 seconds; at that time PLM receivers MR1 and MR2 reduce their throughput to 180-200 Kbps, because with three connections active at the same time, the bandwidth of the congested link between routers R1 and R2 must be shared, leaving about 200 Kbps for each connection. The same behaviour occurs between 80 and 139 seconds, when the remaining nodes MR3 and MR4 join the session and TCP3 starts its transmission. When TCP1 stops (140th second), the PLM receivers gradually reclaim the available bandwidth. From 141 to 200 seconds the same behaviour is repeated as TCP2 and TCP3 stop (170th and 200th second) their traffic. From 200 to 350 seconds there is no background traffic and only the PLM session is running.

It is obvious from Figure 4.7 that the PLM mechanism behaves in a "friendly" manner towards TCP traffic and behaves well during network congestion. When TCP traffic starts, the PLM receivers reduce their receiving rate, and when the TCP traffic stops, the PLM receivers reclaim the available bandwidth.

In the ASMP simulation, at time 0 s the ASMP sender starts transmitting and MR1 joins at 1 s. From 1 s to 29 s, MR1 receives data at its maximum rate, about 600 Kbps, because the R1-R2 link capacity is 600 Kbps. At 30 seconds, TCP1 starts its transmission; at that time ASMP reduces its throughput from 600 Kbps to 300 Kbps, because ASMP is TCP-friendly, the receivers prefer smaller transmission rates under congestion, and the ASMP sender releases bandwidth for the TCP traffic to use. At 40 seconds, MR2 joins the session and achieves a maximum throughput of approximately 270 Kbps; the ASMP receivers transmit RTCP receiver reports using the RTCP adaptive feedback mechanism, and the ASMP sender updates its sending rate according to these reports, running the sender rate update algorithm every 1000 ms. After that, TCP2 starts its transmission at 60 seconds; at that time the ASMP receivers MR1 and MR2 reduce their throughput from 270 Kbps to 60 Kbps. The same behaviour is observed between 80 and 139 seconds, when the remaining nodes MR3 and MR4 join the session and TCP3 starts its transmission. When TCP1 stops (140th second), the ASMP receivers gradually reclaim the available bandwidth. From 141 to 200 seconds the same behaviour is repeated as TCP2 and TCP3 stop (170th and 200th second) their traffic. From 200 to 350 seconds there is no background traffic and only the ASMP session is running; at that time the sending rate is set somewhat above the average value reported by the lowest-capacity receivers, MR5 and MR6. In this way the ASMP sender finds a transmission rate that satisfies most of the group of receivers.

4.4.3.1 Throughput

There are two different situations. In situation 1, there is no background traffic and only the multicast session (ASMP/PLM) is running; Figure 4.9 shows the average throughput of the multicast session (ASMP/PLM) from 200 s to 350 s. In situation 2, there is TCP background traffic and the different multicast receivers join at different times; Figure 4.10 shows the average throughput of the multicast sessions (ASMP/PLM) over the whole simulation time.

In both situations, the throughput of PLM is greater than that of ASMP. This is because PLM uses the pp (packet-pair) technique to measure the available bandwidth and FQ (fair queuing) to enforce fairness, while ASMP uses the analytical model of TCP to measure TCP-friendly bandwidth shares. Thus, PLM performs better than ASMP in terms of throughput.

4.4.3.2 Packet Loss

Figure 4.11 shows the packet loss of the multicast sessions (ASMP/PLM) from 200 s to 350 s. The graph shows that the ASMP receivers MR1 (2 Mbps), MR2 (1 Mbps), MR3 (512 Kbps) and MR4 (256 Kbps) have zero packet loss, whereas MR5 (128 Kbps) and MR6 (64 Kbps) have 0.07% and 0.20% packet loss respectively. ASMP is a single-rate multicast congestion control protocol that adapts a single rate on the basis of the RTCP receivers' feedback reports. Figure 4.8 shows that when ASMP multicast receivers with different link capacities join the session, the ASMP sender reduces its transmission rate towards the receivers with the minimum link capacity. The receivers prefer smaller transmission rates under congestion, so the sender reduces its transmission rate to approximately 135 Kbps and keeps this rate for the next 150 seconds. Because the transmission rate is approximately 135 Kbps from 200 s to 350 s, all receivers whose link capacity is higher than the transmission rate (MR1, MR2, MR3 and MR4) achieve the maximum throughput of 135 Kbps with no packet loss, whereas MR5 and MR6, whose capacity is lower than the transmission rate, encounter packet loss on their congested links.

Figure 4.11 also shows the PLM packet loss. PLM receivers MR1 (2 Mbps) and MR2 (1 Mbps) have the same packet loss, because the congested R1-R2 link capacity is 600 Kbps and the granularity of each layer is 20 Kbps. PLM is a multi-rate multicast congestion control protocol and the source sends data via cumulative layers. The MR1 and MR2 receivers subscribe to a maximum of 30 layers, MR3 to a maximum of 25 layers, MR4 to a maximum of 12 layers, MR5 to a maximum of 6 layers and MR6 to a maximum of 3 layers. PLM encounters packet loss because its estimated available bandwidth is sometimes higher than the real bandwidth, so PLM adds more layers than the network can handle, the congested link may overflow and packets are lost.

The results (Figure 4.11) show that ASMP has a lower packet loss ratio than PLM.

Chapter 5

CONCLUSION & FUTURE WORK

In this research activity, the performance of a single-rate multicast congestion control protocol (ASMP) and a multi-rate multicast congestion control protocol (PLM) was analyzed in terms of four performance metrics: TCP-friendliness, responsiveness, packet loss and throughput. ASMP uses the analytical model of TCP to measure the TCP-friendly bandwidth share and Congestion Indicators (CI) to detect congestion. PLM uses the packet pair (pp) technique to estimate the available bandwidth and a fair scheduler (FS) to enforce fairness.

ASMP claims TCP-friendly behaviour, high bandwidth utilization and smooth transmission rates, which are suitable for multimedia applications, while PLM claims TCP-friendliness, smoothness, fast convergence, high bandwidth utilization and responsiveness. We therefore conducted a simulation-based performance evaluation to compare them in terms of TCP-friendliness, responsiveness, bandwidth utilization and packet loss. The simulation results show that PLM outperforms ASMP in terms of TCP-friendliness, smoothness, responsiveness, fast convergence and efficiency of network utilization, whereas ASMP performs better in terms of packet loss; ASMP is also slower than PLM in responding to network congestion. This is because:

  • ASMP's rate adaptation mechanism is not as quick and efficient as PLM's, so PLM provides smoother throughput than ASMP.
  • Both protocols use congestion detection mechanisms: PLM uses the packet pair (pp) mechanism, while ASMP uses the Congestion Indicator (CI) mechanism. Both detect congestion before the network becomes severely congested. ASMP additionally uses a filter mechanism that filters the calculated rate dynamically, based on statistical data from jitter delay measurements.
  • PLM does not use a filter mechanism to discard noisy measurements; sometimes its calculated bandwidth is higher than the actual available bandwidth, which causes packet loss.
  • PLM provides fairness (i.e. TCP-friendliness) because it uses fair queuing (FQ) at the router. If fair queuing is not used at the router, PLM cannot maintain its TCP-friendliness and shows aggressive behaviour. ASMP also ensures adequate TCP-friendliness.
  • PLM ensures responsiveness; ASMP's mechanisms, in contrast, are not sufficiently responsive under varying network conditions.

In this work, only two techniques, one single-rate and one multi-rate multicast protocol, were compared. A more extensive performance evaluation of adaptive real-time video multicast techniques is required, not only in terms of video transmission types, but also in terms of network-centric metrics and multimedia-quality metrics. These areas are left for future work.