Chapter 1

INTRODUCTION

Background to Research

Due to innovative changes in telephony devices and related technologies worldwide, the time has come to analyze the quality of telephone devices and provide improved communication channels. Locally, the implementation of telephony services is increasing; many new organizations are setting up resources to make this system and its facilities available to users. Research in the telephone industry has been in progress for many years and has shown great improvement all over the world. Previously this telephony service used the PSTN [3], which uses 64 kbps channels; after improvements and changes in technology, telephony has shifted to the Internet Protocol, since the Internet is a widely used medium for receiving and transferring data. This new technology is known as Voice over IP.

The concept of VoIP (Voice over Internet Protocol) [4] originated in about 1994, when hobbyists began to recognize the potential of sending voice data packets over the Internet rather than communicating through standard telephone service. This allowed PC users to avoid long-distance charges, and it was in 1994 that the first Internet phone software appeared. While contemporary VoIP can use a standard telephone hooked up to an Internet connection, earlier efforts in the history of VoIP required both callers to have a computer equipped with the same software, as well as a sound card and microphone. These early applications of VoIP were marked by poor sound quality and connectivity, but they were a sign that VoIP technology was useful and promising. VoIP evolved over the next few years, gradually reaching the point where some small companies were able to offer PC-to-phone service in about 1998. Phone-to-phone service soon followed, although it was often necessary to use a computer to establish the connection. Like many Internet applications in the late 1990s, early VoIP services relied on advertising sponsorship to subsidize costs rather than charging customers for calls. The gradual introduction of broadband Ethernet service allowed for greater call clarity and reduced latency, although calls were still often marred by static or by difficulty making connections between the Internet and the PSTN (public switched telephone network). Nevertheless, start-up VoIP companies were able to offer free calling service to customers from special locations.

The breakthrough in VoIP history [9] came when hardware manufacturers such as Cisco Systems and Nortel started producing VoIP equipment that was capable of switching: functions that previously had to be handled by a computer's CPU, such as converting ("switching") a voice data packet into something that could be read by the PSTN and vice versa, could now be done by a dedicated device, thus making VoIP hardware less computer dependent. Once hardware started becoming more affordable, larger companies were able to implement VoIP on their internal IP networks, and long-distance providers even began routing some of the calls on their networks over the Internet. Usage of VoIP has expanded dramatically since the year 2000. Different technical standards exist for VoIP data packet transfer and switching, and each is supported by at least one major manufacturer; no clear "winner" has yet emerged to adopt the role of a universal standard. Whereas companies often switch to VoIP to save on both long-distance and infrastructure costs, VoIP service has also been extended to residential users. In the span of a few years, VoIP has gone from being a fringe development to a mainstream alternative to standard telephone service.

At present two standards are in use for VoIP switching and gateways: SIP and H.323. SIP [7] mainly relates to end-user IP telephony applications, while H.323 is an ITU standard for routing between the circuit-switched and packet-switched worlds, used for termination of an IP-originated call on the PSTN; the converse is also becoming common at a very fast rate. As technology advances, many improvements have been implemented to ensure that the quality of voice and data over the Internet is maintained. The main purpose of this thesis is to discuss the techniques used to maintain the quality of VoIP and the role of the H.323 and SIP protocols in VoIP.

Area of Research

The area of research focuses on the study and analysis of quality of service in VoIP and a discussion of the role of the H.323 and SIP [7] protocols. Many techniques and mathematical models have been developed and implemented. This thesis is not intended to provide any new model or strategy for improving quality of service in VoIP, but rather to form a picture based on the standard metrics for measuring the QoS of VoIP, such as MOS [10].

Analysis of Quality Services of VoIP

Emerging advancements in telecommunication are creating an All-IP integrated communication infrastructure capable of supporting applications and services with diverse needs and requirements. During the last few years a lot of attention has been given to delivering voice traffic over both the public Internet and corporate intranets. IP telephony, or VoIP, does not only provide more advanced services (e.g. personalized call forwarding, instant messaging) than the PSTN, but it also aims to achieve the same level of QoS and reliability [1], [2]. As opposed to the PSTN, VoIP utilizes one common network for signaling and voice transport and thus enjoys several advantages with respect to telephony services delivered over All-IP network infrastructures. The most important factors that influence the adoption of VoIP include improved network utilization, by using advanced voice CODECs that compress the voice samples below 64 kbps, and the possibility to offer value-added services (e.g. instant messaging, personalized call forwarding), to mention a few. Given the many quality impairments [34] introduced today by the Internet, it is important to provide mechanisms to measure the level of quality that is actually provided to interactive multimedia applications: that is, to measure how extensive the loss, delay and delay-jitter impairments are and how bad their impact on the perceived QoS [3] is. A large number of methods have been proposed, and some of them standardized, which monitor the distorted signal and provide a rating that correlates well with voice quality; a simple sketch of such a rating calculation follows the list below. The most important parameters that affect VoIP quality are the following:

  • CODECS
  • Network Packet Loss
  • Jitter
  • Latency
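
As an illustration of how such a rating can be derived from these parameters, the sketch below combines a simplified delay impairment, a loss-dependent equipment impairment and the ITU-T G.107 conversion from an R rating to MOS. The simplified Id formula and the codec values (Ie, Bpl) are illustrative assumptions, not measurements from this thesis.

    # A minimal sketch of an E-model style rating (simplified ITU-T G.107 form).
    # The delay term follows the widely cited simplified approximation; codec
    # impairment values below are illustrative only.

    def r_factor(one_way_delay_ms, loss_pct, codec_ie, codec_bpl):
        """Return a simplified R rating from delay, loss and codec impairment."""
        d = one_way_delay_ms
        # Delay impairment Id: grows sharply beyond roughly 177 ms.
        i_d = 0.024 * d + 0.11 * (d - 177.3) * (1 if d > 177.3 else 0)
        # Effective equipment impairment: codec distortion plus random packet loss.
        i_e_eff = codec_ie + (95 - codec_ie) * loss_pct / (loss_pct + codec_bpl)
        return 93.2 - i_d - i_e_eff

    def r_to_mos(r):
        """Map an R rating onto the 1..4.5 MOS scale (G.107 conversion)."""
        if r < 0:
            return 1.0
        if r > 100:
            return 4.5
        return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

    # Example: a G.729-like codec (Ie ~ 11, Bpl ~ 19), 150 ms delay, 1 % loss.
    print(round(r_to_mos(r_factor(150, 1.0, 11, 19)), 2))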

Demonstration Methodology: Simulation

OPNET, a very powerful network simulator, is used for the research described in this thesis [12]. Its main purposes are to optimize cost, performance and availability. The following tasks are considered:

  • Build and analyze models.
  • Configure the object palette with the needed models.
  • Set up application and profile configurations.
  • Model a LAN as a single node.
  • Specify background service utilization that changes over time on a link.
  • Simulate multiple scenarios simultaneously.
  • Apply filter to graphs of results and analyze the results.

Role and Analysis of H.323 & SIP Protocols

Based on the research work that has been done so far, this part of the thesis will discuss and elaborate the H.323 and SIP [7] protocols; a comparative analysis of these two protocols based on their specifications is discussed in detail in the following chapters.

Results and Conclusions

The final conclusions from the simulation results will be presented, together with a comparative analysis of the performance of different CODECs from the simulated results and a discussion of the role of the H.323 and SIP protocols.

Chapter 2

VoIP and Quality of Service

Introduction

Traditionally, telephone calls have been carried through the Public Switched Telephone Network (PSTN), which provides high-quality voice transmission between two or more parties, whereas data such as email and web browsing is carried over packet-based data networks like IP, ATM and Frame Relay. In the last few years, there has been a rapid shift towards using data networks to carry both telephone calls and data together. This so-called convergence of voice and data networks is very appealing for many reasons. VoIP systems digitize and transmit analog voice signals as a stream of packets over a digital data network.

VoIP technology ensures proper reconstruction of voice signals, compensating for echoes caused by the end-to-end delay, for jitter and for dropped packets, and provides the signaling required for making telephone calls. The IP network used to support IP telephony can be a standard LAN, a network of leased facilities or the Internet. VoIP calls can be made or received using standard analog, digital and IP phones. VoIP gateways serve as a bridge between the PSTN and the IP network [9]. A call can be placed over the local PSTN network to the nearest gateway server, which moves it onto the Internet for transport to a gateway at the receiving end. With the use of VoIP gateways, computer-to-telephone calls, telephone-to-computer calls and telephone-to-telephone calls can be made with ease.

Access to a local VoIP gateway for originating calls can also be supported in a variety of ways. For example, a corporate PBX (Private Branch Exchange) can be configured so that all international direct-dialed calls are transparently routed to the nearest gateway, so that high-cost calls are automatically carried over VoIP at the lowest cost. To ensure interoperability between different VoIP manufacturers, VoIP equipment must follow agreed-upon procedures for setting up and controlling the telephone calls. H.323 is one such family of standards that defines various options for voice (and video) compression and call control for VoIP. Other call setup and control protocols being utilized or standardized include SIP, MGCP [27] and Megaco. IP telephony goes beyond VoIP transport and defines several value-added business and consumer applications for converged voice and data networks; examples include Unified Messaging, Internet Call Centers, Presence Management and Location-Based Services.

During the last few years, voice over data network services have gained increased popularity. The quick growth of Internet Protocol (IP) based networks, especially the Internet, has directed a lot of interest towards Voice over IP (VoIP). VoIP technology has been used in some cases to replace traditional long-distance telephone technology, reducing costs for the end user. Naturally, to make VoIP infrastructure and services commercially viable, the Quality of Service (QoS) needs to be at least close to that provided by the Public Switched Telephone Network (PSTN). On the other hand, VoIP and its associated technology bring to the end user value-added services that are currently not available in the PSTN.

VoIP and QoS

In packet-switched networks, the traffic engineering term Quality of Service (QoS) [3], [4] refers to resource reservation control mechanisms rather than to the achieved service quality itself. QoS guarantees are important for networks of limited capacity, for example in cellular data communication, and especially for real-time streaming multimedia applications such as voice over IP and IP-TV [4]. A network or protocol that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, for example during a session establishment phase. During the session it monitors the achieved level of performance, for example the data rate and delay, and controls priorities in the network nodes accordingly; the reserved capacity may be released during a tear-down phase. A best-effort network service does not support Quality of Service. The ITU standard X.902 defines QoS as a set of quality requirements on the collective behavior of one or more objects.

In a broader sense, Quality of Service covers all aspects of a connection, such as the guaranteed time to provide service, voice quality [3], echo, loss, reliability and so on. In this sense the term Grade of Service, with its many alternative definitions, is often used, rather than referring to the ability to reserve resources.

The convergence of communications and computer networks has led to a rapid growth in real-time applications, such as Internet telephony or Voice over IP (VoIP). However, IP networks are not designed to support real-time applications, and factors such as network delay, jitter and packet loss lead to deterioration in the perceived voice quality. In this chapter, brief background information about VoIP networks relevant to the thesis is summarized. The VoIP network, protocol and system structure, along with a brief overview of the QoS of VoIP [4], are described. Voice coding technology and the main codecs discussed in the thesis (i.e. G.729, G.723.1) [8] are introduced. Network performance characteristics (e.g. packet loss and delay/delay variation) are also presented in the following sections.

Problem

When the Internet was first deployed, it lacked the ability to provide Quality of Service guarantees due to limits in router computing power; it therefore runs at the default QoS level, or "best effort". Technical factors that affect QoS include reliability, scalability, effectiveness, maintainability and Grade of Service. At the packet level, the principal impairments are:

  • Dropped packets
  • Delay
  • Jitter
  • Out-of-order delivery
  • Error

QoS Mechanism

Quality of Service (QoS) [8] can be provided by generously over-provisioning a network so that interior links are considerably faster than access links. This approach is relatively simple, and may be economically feasible for broadband networks with predictable and light traffic loads. The performance is reasonable for many applications, particularly those capable of tolerating high jitter, such as deeply-buffered video downloads.

Commercial VoIP services are often competitive with traditional telephone service in terms of call quality even though QoS mechanisms are usually not in use on the user's connection to their ISP or on the VoIP provider's connection to a different ISP. Under high load conditions, however, VoIP quality degrades to cell-phone quality or worse. The mathematics of packet traffic indicates that a network with QoS can handle four times as many calls with tight jitter requirements as one without QoS. The amount of over-provisioning in interior links required to replace QoS depends on the number of users and their traffic demands. As the Internet now services close to a billion users, there is little possibility that over-provisioning can eliminate the need for QoS when VoIP [8] becomes more commonplace. For narrowband networks more typical of enterprises and local governments, the costs of bandwidth can be substantial and over-provisioning is hard to justify. In these situations, two distinctly different philosophies were developed to engineer preferential treatment for packets which require it.

Early work used the "IntServ" philosophy of reserving network resources. In this model, applications used the Resource Reservation Protocol (RSVP) to request and reserve resources through a network. While IntServ mechanisms do work, it was realized that in a broadband network typical of a larger service provider, core routers would be required to accept, maintain, and tear down thousands or possibly tens of thousands of reservations. It was believed that this approach would not scale with the growth of the Internet, and in any event was antithetical to the notion of designing networks so that core routers do little more than simply switch packets at the highest possible rates.

The second and currently accepted approach is "DiffServ", or differentiated services. In the DiffServ model, packets are marked according to the type of service they need. In response to these markings, routers and switches use various queuing strategies to tailor performance to requirements. (At the IP layer, differentiated services code point (DSCP) markings use a 6-bit field in the IP packet header. At the MAC layer, VLAN IEEE 802.1Q and IEEE 802.1D tags can be used to carry essentially the same information.) Routers supporting DiffServ use multiple queues for packets awaiting transmission from bandwidth-constrained (e.g., wide area) interfaces. Router vendors provide different capabilities for configuring this behavior, including the number of queues supported, the relative priorities of queues, and the bandwidth reserved for each queue.
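
As a concrete illustration of DiffServ marking at the end host, the following sketch (assuming a Linux host and Python's standard socket module) writes the Expedited Forwarding DSCP, commonly used for voice, into the former TOS byte of outgoing UDP packets; the receiver address and port are hypothetical.

    # Minimal sketch of sender-side DSCP marking, assuming a Linux host:
    # the 6-bit DSCP value occupies the upper bits of the old IP TOS byte.
    import socket

    EF_DSCP = 46                    # Expedited Forwarding, typically used for voice
    tos_byte = EF_DSCP << 2         # DSCP sits in bits 7..2 of the TOS/DS field

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    sock.sendto(b"voice payload", ("192.0.2.10", 4000))   # hypothetical receiver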

VoIP Networks

VoIP Networks Connections

Common VoIP network connections normally include the connection from phone to phone, phone to PC (IP Terminal or H.323/SIP Terminal [25]) or PC to PC, as shown in Figure 2.1. The Switched Communication Network (SCN) can be a wired or wireless network, such as PSTN, ISDN or GSM.

Perceived QoS, or user-perceived QoS, is defined end-to-end, or mouth to ear, as the quality perceived by the end user. It depends on the quality of the gateway (G/W) or H.323/SIP terminal and on IP network performance; the latter is normally referred to as Network QoS, as illustrated in Figure 2.1. As the IP network is based on the "best effort" principle, which means that the network makes no guarantees about packet loss rates, delays and jitter, the perceived voice quality will suffer from these impairments (i.e. loss, jitter and delay).

There are currently two approaches to enhance QoS for VoIP applications. The first approach relies on application-level QoS mechanisms as discussed previously to improve perceived QoS without making changes to the network infrastructure. For example, different compensation strategies for packet loss (e.g. Forward Error Correction (FEC)) and jitter have been proposed to improve speech quality even under poor network conditions. The second approach relies on the network-level QoS mechanism and the emphasis is on how to guarantee IP Network performance in order to achieve the required Network QoS. For example, IETF is working on two QoS frameworks, namely DiffServ (the Differentiated Services) and IntServ (the Integrated Services) to support QoS in the Internet. IntServ uses the per-flow approach to provide guarantees to individual streams and is classified as a flow-based resource reservation mechanism where packets are classified and scheduled according to their flow affiliation. DiffServ provides aggregate assurances for a group of applications and is classified as a packet-oriented classification mechanism for different QoS classes. Each packet is classified individually based on its priority.
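
The following is a minimal sketch of the sender-side FEC idea mentioned above, assuming equal-sized packets and a single XOR parity packet per block; real FEC schemes used for VoIP are more elaborate.

    # One XOR parity packet protects a block of media packets, so any single
    # loss within the block can be rebuilt at the receiver.
    def xor_parity(packets):
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, byte in enumerate(pkt):
                parity[i] ^= byte
        return bytes(parity)

    def recover(received, parity):
        """Rebuild the single missing packet from the survivors and the parity."""
        return xor_parity(received + [parity])

    block = [b"frame-1x", b"frame-2y", b"frame-3z"]
    parity = xor_parity(block)
    print(recover([block[0], block[2]], parity))   # reconstructs b"frame-2y"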

VoIP Protocol Architecture

Voice over IP (VoIP) is the transmission of voice over a network using the Internet Protocol. Here, we briefly introduce the VoIP protocol architecture, which is illustrated in Figure 2.2. The protocols that provide basic transport (RTP [3]), call-setup signaling (H.323 [7], SIP [8]) and QoS feedback (RTCP [4]) are shown.

VoIP System Architecture

Figure 2.3 shows a basic VoIP system (the signaling part is not included), which consists of three parts: the sender, the IP network and the receiver [13]. At the sender, the voice stream from the voice source is first digitized and compressed by the encoder. Then, several coded speech frames are packetized to form the payload part of a packet (e.g. an RTP packet). The headers (e.g. IP/UDP/RTP) are added to the payload to form a packet which is sent to the IP network. The packet may suffer different network impairments (e.g. packet loss, delay and jitter) in the IP network. At the receiver, the packet headers are stripped off and the speech frames are extracted from the payload by the depacketizer. A playout buffer is used to compensate for network jitter, at the cost of further delay (buffer delay) and loss (late-arrival loss). The de-jittered speech frames are decoded to recover speech, with lost frames concealed (e.g. using interpolation) from previously received speech frames.
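
To make the packetization step concrete, the sketch below builds a minimal RTP packet (a fixed 12-byte RFC 3550 header with no CSRC entries) around two 10 ms G.711 frames; field values such as the SSRC are arbitrary examples.

    # Sketch of packetizing coded speech frames into an RTP payload.
    import struct

    def rtp_packet(payload, seq, timestamp, ssrc, payload_type=8):
        """payload_type 8 = PCMA (G.711 A-law); change to suit the codec in use."""
        v_p_x_cc = 2 << 6              # version 2, no padding/extension, CC = 0
        m_pt = payload_type & 0x7F     # marker bit cleared
        header = struct.pack("!BBHII", v_p_x_cc, m_pt, seq, timestamp, ssrc)
        return header + payload

    # Two 10 ms G.711 frames (80 samples each) form one 20 ms packet.
    frames = [b"\x00" * 80, b"\x00" * 80]
    pkt = rtp_packet(b"".join(frames), seq=1, timestamp=160, ssrc=0x1234ABCD)
    print(len(pkt))   # 12-byte header + 160-byte payload = 172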

Chapter 3

Analysis of QoS Parameters

Introduction

A number of QoS parameters [11] can be measured and monitored to determine whether a service level offered or received is being achieved. These parameters consist of the following:

  • Network availability
  • Bandwidth
  • Delay
  • Jitter
  • Loss

Network Availability

Network availability can have a significant effect on QoS. Simply put, if the network is unavailable, even during brief periods of time, the user or application may experience unpredictable or undesirable performance (QoS) [11]. Network availability is the summation of the availability of the many items that are used to create a network, including redundant network devices (e.g. redundant interfaces, processor cards or power supplies in routers and switches), resilient networking protocols, multiple physical connections (e.g. fiber or copper), backup power sources, and so on. Network operators can increase their networks' availability by implementing varying degrees of each item.

Bandwidth

Bandwidth is probably the second most significant parameter that affects QoS. Its allocation can be subdivided into two types:

  • Available bandwidth
  • Guaranteed bandwidth

Available bandwidth

Many network operators oversubscribe the bandwidth on their network to maximize the return on investment of their network infrastructure or leased bandwidth. Oversubscribing bandwidth means that the BW a user subscribes to is not always available to them. This allows users to compete for the available BW: they get more or less BW depending upon the amount of traffic from other users on the network at any given time. Available bandwidth is a technique commonly used on consumer ADSL networks; e.g., a customer signs up for a 384-kbps service that provides no QoS (BW) guarantee in the SLA. The SLA points out that 384 kbps is typical but does not make any guarantees. Under lightly loaded conditions the user may achieve 384 kbps, but as network loading increases this BW will not be achieved consistently. This is most noticeable during certain times of the day when more users access the network.

Guaranteed bandwidth

Network operators also offer a service that provides a minimum BW and a burst BW in the SLA. Because the BW is guaranteed, the service is priced higher than the available BW service. The network operator must ensure that those who subscribe to this guaranteed BW service get preferential treatment (a QoS BW guarantee) [24][25] over the available BW subscribers. In some cases, the network operator separates the subscribers by different physical or logical networks, e.g., VLANs, virtual circuits, etc. In other cases, the guaranteed BW service traffic may share the same network infrastructure with available BW service traffic. This is often the case at locations where network connections are expensive or the bandwidth is leased from another service provider. When subscribers share the same network infrastructure, the network operator must prioritize the guaranteed BW subscribers' traffic over the available BW subscribers' traffic so that in times of network congestion the guaranteed BW subscribers' SLAs are met. Burst BW can be specified in terms of the amount and duration of excess BW (burst) above the guaranteed minimum. QoS mechanisms may be activated to discard traffic that is consistently above the guaranteed minimum BW that the subscriber agreed to in the SLA.

Delay

Network delay is the transit time an application experiences from the ingress point to the egress point of the network. Delay can cause significant QoS issues with applications such as SNA and fax transmission that simply time out and fail under excessive delay conditions. Some applications can compensate for small amounts of delay, but once a certain amount is exceeded the QoS becomes compromised.

For example, some networking equipment can spoof an SNA session on a host by providing local acknowledgements when the network delay would cause the SNA session to time out. Similarly, VoIP gateways and phones provide some local buffering to compensate for network delay. Finally, delay can be both fixed and variable; a worked delay budget is sketched after the lists below. Examples of fixed delay are:

  • Application-based delay, e.g., voice codec processing time and IP packet creation time by the TCP/IP software stack [32] [38].
  • Data transmission (serialization) delay over the physical network media at each network hop.
  • Propagation delay across the network, based on transmission distance.

Examples of variable delays are:

  • Ingress queuing delay for traffic entering a network node
  • Contention with other traffic at each network node
  • Egress queuing delay for traffic exiting a network node
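
The following back-of-the-envelope sketch adds up typical values for these fixed and variable components into a one-way delay budget; all numbers are illustrative assumptions, not measurements.

    # Illustrative one-way delay budget built from the components listed above.
    codec_ms         = 25.0                          # e.g. frame collection + lookahead
    serialization_ms = (200 * 8) / 128000 * 1000     # 200-byte packet on a 128 kbps link
    propagation_ms   = 2000 / 200                    # ~2000 km at ~200 km per ms in fibre
    jitter_buffer_ms = 40.0
    queuing_ms_per_hop, hops = 2.0, 6                # assumed per-hop variable delay

    one_way = (codec_ms + serialization_ms + propagation_ms
               + jitter_buffer_ms + queuing_ms_per_hop * hops)
    print("one-way delay ~ %.1f ms (ITU-T G.114 recommends <= 150 ms)" % one_way)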

Jitter

Jitter is the measure of delay variation between consecutive packets for a given traffic flow. Jitter has a pronounced effect on real-time, delay-sensitive applications such as voice and video. These real-time applications expect to receive packets at a fairly constant rate with a fixed delay between consecutive packets. As jitter increases, it impacts the application's performance [22] [27]. A minimal amount of jitter may be acceptable, but as jitter increases the application may become unusable. Some applications, such as voice gateways and IP phones [35], can compensate for small amounts of jitter: since a voice application requires the audio to play out at a constant rate, if the next packet has not arrived by the next packet time, the application replays the previous voice packet until the next voice packet arrives. However, if the next packet is delayed too long, it is simply discarded when it arrives, resulting in a small amount of distorted audio. All networks introduce some jitter because of variability in the delay introduced by each network node as packets are queued. However, as long as the jitter is bounded, QoS can be maintained.
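
One common way to quantify jitter is the running interarrival-jitter estimator of RFC 3550, sketched below with illustrative transit times.

    # RFC 3550 interarrival jitter: J is smoothed with gain 1/16 over the
    # per-packet differences in transit time.
    def update_jitter(jitter, prev_transit, transit):
        d = abs(transit - prev_transit)
        return jitter + (d - jitter) / 16.0

    # transit = arrival_time - rtp_timestamp (both in the same time units)
    transits = [100, 102, 101, 108, 100]          # illustrative values, ms
    jitter = 0.0
    for prev, cur in zip(transits, transits[1:]):
        jitter = update_jitter(jitter, prev, cur)
    print(round(jitter, 2))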

Loss

Loss can occur due to errors introduced by the physical transmission medium. For example, most landline connections have very low loss as measured by the Bit Error Rate (BER). However, wireless connections such as satellite, mobile or fixed wireless networks have a high BER that varies due to environmental or geographical conditions such as fog, rain, RF interference, cell handoff during roaming and physical obstacles such as trees, buildings and mountains [2][4][25]. Wireless technologies often transmit redundant information since packets will inherently get dropped some of the time due to the nature of the transmission medium.

Loss can also occur when congested network nodes drop packets. Some networking protocols, such as TCP (Transmission Control Protocol), offer packet loss protection by retransmitting packets that have been dropped or corrupted by the network. When a network becomes increasingly congested, more packets are dropped and hence more TCP retransmissions occur. If congestion continues, the network performance will significantly decrease because much of the BW is being used to retransmit dropped packets. TCP will eventually reduce its transmission window size, resulting in smaller packets being transmitted; this eventually reduces congestion, resulting in fewer packets being dropped. Because congestion has a direct impact on packet loss, congestion avoidance mechanisms are often deployed. One such mechanism is called Random Early Discard (RED). RED algorithms randomly and intentionally drop packets once the traffic reaches one or more configured thresholds. RED takes advantage of the TCP protocol's window-size throttling feature and provides more efficient congestion management for TCP-based flows. Note that RED only provides effective congestion control for applications or protocols with TCP-like throttling mechanisms.
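
A compact sketch of the RED decision described above follows; the thresholds, maximum drop probability and averaging weight are illustrative defaults, not values taken from this thesis.

    # Classic RED: an EWMA of the queue depth and a linear drop probability
    # between the minimum and maximum thresholds.
    import random

    def red_should_drop(avg_queue, min_th=20, max_th=40, max_p=0.1):
        if avg_queue < min_th:
            return False                  # below min threshold: never drop
        if avg_queue >= max_th:
            return True                   # above max threshold: always drop
        p = max_p * (avg_queue - min_th) / (max_th - min_th)
        return random.random() < p

    def update_avg(avg_queue, instantaneous, weight=0.002):
        return (1 - weight) * avg_queue + weight * instantaneous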

Emission priorities

Emission priorities determine the order in which traffic is forwarded as it exits a network node. Traffic with a higher emission priority is forwarded ahead of traffic with a lower emission priority. Emission priorities also determine the amount of latency introduced to the traffic by the network node's queuing mechanism. For example, a delay-tolerant application such as email would be configured to have a lower emission priority than delay-sensitive real-time applications such as voice or video. These delay-tolerant applications may be buffered while the delay-sensitive applications are being transmitted.

In its simplest form, emission priorities use a simple transmit priority scheme whereby higher emission priority traffic is always forwarded ahead of lower emission priority traffic. This is typically accomplished using strict priority scheduling (queuing). The downside of this approach is that low emission priority queues may never get serviced (they are starved) if there is always higher emission priority traffic and no BW rate limiting.

A more elaborate scheme provides a weighted scheduling approach to the transmission of the traffic to improve fairness, i.e., even the lower emission priority traffic is eventually transmitted. Finally, some emission priority schemes provide a mixture of both priority and weighted schedulers.
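
The sketch below contrasts the two ideas: a strict-priority pick, which can starve the lower queues, and a simple weighted (round-robin style) pick that serves every queue in proportion to an assumed weight; the queue names and weights are arbitrary examples.

    from collections import deque

    queues  = {"voice": deque(), "video": deque(), "data": deque()}
    weights = {"voice": 4, "video": 2, "data": 1}

    def strict_priority_pick():
        for name in ("voice", "video", "data"):      # highest priority first
            if queues[name]:
                return queues[name].popleft()
        return None                                  # nothing to send

    def weighted_round_robin():
        """Serve up to `weight` packets from each queue per round, in order."""
        served = []
        for name, weight in weights.items():
            for _ in range(weight):
                if queues[name]:
                    served.append(queues[name].popleft())
        return served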

Discard priorities

Discard priorities are used to determine the order in which traffic gets discarded. The traffic may get dropped due to network node congestion or when the traffic is out of profile, i.e., the traffic exceeds its prescribed amount of BW for some period of time. Under congestion, traffic with a higher discard priority gets dropped before traffic with a lower discard priority. Traffic with similar QoS performance requirements can be subdivided using discard priorities. This allows the traffic to receive the same performance when the network node is not congested; however, when the network node is congested, the discard priority is used to drop the more eligible traffic first. Discard priorities also allow traffic with the same emission priority to be discarded when the traffic is out of profile. Without discard priorities, traffic would need to be separated into different queues in a network node to provide service differentiation. This can be expensive since only a limited number of hardware queues (typically eight or fewer) are available on networking devices. Some devices may have software-based queues, but as these are increasingly used, network node performance is typically reduced.

With discard priorities, traffic can be placed in the same queue, but in effect the queue is subdivided into virtual queues, each with a different discard priority. For example, if a product supports three discard priorities, then one hardware queue in effect provides three QoS levels.

Table 3.1 illustrates the QoS performance dimensions required by some common applications. Applications can have very different QoS requirements. As these are mixed over a common IP transport network, without applying QoS the network traffic will experience unpredictable behavior [22][25].

Categorizing Applications

Networked applications can be categorized based on end-user expectations or application requirements. Some applications are between people, while other applications are between a person and a networked device's application, e.g., a PC and a web server. Finally, some applications run between networking devices, e.g., router-to-router. Table 3.2 categorizes applications into four different traffic categories:

  • Interactive
  • Responsive
  • Timely
  • Network Control

Interactive applications

Some applications are interactive, whereby two or more people actively participate. The participants expect the networked application to respond in real time. In this context, real time means that there is minimal delay (latency) and delay variation (jitter) between the sender and receiver. Some interactive applications, such as a telephone call, have operated in real time over the telephone companies' circuit-switched networks for over 100 years. The QoS expectations for voice applications have therefore been set and must also be achieved for packetized voice such as VoIP.

Other interactive applications include video conferencing and interactive gaming. Since interactive applications operate in real time, packet loss must be minimized. Interactive applications are typically based on UDP (User Datagram Protocol) and hence cannot retransmit lost or dropped packets as TCP-based applications do. In any case, packet retransmission would not be beneficial because interactive applications are time based: for example, if a voice packet is lost, it doesn't make sense for the sender to retransmit it because the conversation has already progressed and the lost packet belongs to a part of the conversation that has already passed in time [4][5].

Responsive applications

Some applications run between a person and a networked device's application and are expected to be responsive: a request sent to the networked device requires a relatively quick response back to the sender. These applications are sometimes referred to as being near real time. They require relatively low packet delay, jitter and loss; however, the QoS requirements for responsive applications are not as stringent as those of real-time, interactive applications. This category includes streaming media and client/server web-based applications. Streaming media applications include Internet radio and audio/video broadcasts (news, training, education and motion pictures). Streaming applications require the network to be responsive when they are initiated so the user doesn't wait too long before the media begins playing [20]. These applications also require the network to be responsive for certain types of signaling; for example, with movies on demand, when one changes channels, forwards, rewinds or pauses the media, one expects the application to react similarly to the response time of a remote control. Client/server web applications typically involve the user selecting a hyperlink to jump to a new page or to submit a request; these applications also require the network to be responsive such that once the hyperlink is selected, a response arrives quickly. With broadband Internet connections, this is often achieved over a best-effort network, albeit inconsistently [34] [37]. These types of applications may include a financial transaction, e.g., placing a credit card order, which must quickly provide feedback to the user indicating that the transaction has completed; otherwise the user may be unsure and initiate a duplicate order, or may assume that the order was placed correctly when it was not. In either case the user will be dissatisfied with the network's or application's performance.

Responsive applications can use either UDP or TCP based transport. Streaming media applications typically use UDP. Web-based applications are based on the Hypertext Transfer Protocol (HTTP) and always use TCP; for web-based applications, packet loss is managed by TCP, which retransmits lost packets. Lost streaming media packets can only be recovered by retransmission if the media is sufficiently buffered; if not, the lost packets are discarded, resulting in some distortion in the media.

Timely applications

Some applications between a person and a networked device's application do not require near real-time performance but do require the information to be delivered in a timely manner. Examples include store-and-forward email applications and file transfer. The relative importance of these applications is based on their business priorities. These applications require that packets arrive within a bounded amount of delay. For example, if an email takes a few minutes to arrive at its destination, this is acceptable; however, in a business environment, if an email took 10 minutes to arrive at its destination, this would often be unacceptable. The same bounded delay applies to file transfer: once a file transfer is initiated, delay and jitter are inconsequential because file transfers often take minutes to complete. Note that timely applications use TCP-based transport, and therefore packet loss is managed by TCP, which retransmits any lost packets, resulting in no packet loss [4] [7].

In summary, timely applications expect the network QoS to provide packets with a bounded amount of delay. Jitter has a negligible effect on these types of applications. Loss is reduced to zero due to TCP's recovery mechanism.

Network control applications

Some applications are used to control the operation and administration of the network. Such applications include network routing protocols, billing applications, and QoS monitoring and measurement for SLAs [16]. These applications can be subdivided into those required for critical and those required for standard network operating conditions. To create high-availability networks, network control applications require priority over end-user applications, because if the network is not operating properly, end-user application performance will suffer.

QoS Management Architecture

The QoS management architecture of VoIP can be partitioned into two planes: the data plane and the control plane [4]. Mechanisms in the data plane include packet classification, shaping, policing, buffer management, scheduling, loss recovery and error concealment [25] [26]. They implement the actions the network needs to take on user packets in order to enforce the different classes of service. Mechanisms in the control plane consist of resource provisioning, traffic engineering, admission control, resource reservation, connection management, etc.

Data Plane

Packet Forwarding

Packet forwarding consists of a classifier, a meter, a marker and a shaper/dropper. When a packet is received, a packet classifier determines which flow or class it belongs to.

All packets belonging to the same flow/class obey a predefined rule and are processed in a similar manner. For VoIP applications, the basic criteria for classification could be IP address, TCP/UDP port, protocol, input port, IP precedence, DiffServ code points (DSCP) or Ethernet 802.1p class of service (CoS); Cisco supports several additional criteria such as access lists and traffic profiles. The meter decides whether the packet is within the traffic profile or not. The shaper/dropper delays or drops packets that exceed the limits of the traffic profile, to bring the flow into compliance with the current network load. A marker marks a certain field in the packet, such as the DS field, to label the packet type for differential treatment later. After the traffic conditioner, a buffer is used to store packets that wait for transmission.
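
As an illustration of the meter stage, the following sketch uses a single token bucket to decide whether a packet is in or out of the traffic profile; a shaper would instead delay the non-conforming packet and a dropper would discard it. The rate and burst values are arbitrary examples.

    import time

    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0          # refill rate in bytes per second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def conforms(self, packet_len):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.tokens:
                self.tokens -= packet_len
                return True                      # "in profile"
            return False                         # "out of profile"

    meter = TokenBucket(rate_bps=64000, burst_bytes=1500)
    mark = "in" if meter.conforms(200) else "out"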

Buffer Management and Scheduling

Active queue management (e.g. RED), which drops packets before a queue becomes full, can avoid the problem of unfair resource usage. Predictable queuing delay and bandwidth sharing can be achieved by putting the flows into different queues and treating them individually [6]. Schedulers of this type are not scalable, as the overhead increases with the number of ongoing flows. The solution is class-based schedulers, such as Static Priority and Constraint-Based WFQ, which schedule traffic on a per-class basis; but then it is difficult for an individual flow to get predictable delay and bandwidth sharing, so care should be taken when applying this to a voice application, which has strict delay requirements.

Loss Recovery

Loss recovery can be classified into active and passive recovery. In active recovery we have retransmission, and in passive recovery we have Forward Error Correction (adding redundancy). Retransmission increases the latency of packets and may not be suitable for VoIP.

Control Plane

Resource provisioning and Traffic Engineering

Resource provisioning refers to the configuration of resources for applications in the network. In industry, the main approach to resource provisioning is over-provisioning, i.e. abundantly providing resources [10]. Factors that make this attractive are that the cost of bandwidth in the backbone is decreasing and that network planning becomes simpler, so provisioning can be planned.

Traffic Engineering

It mainly focuses on minimizing over-utilization of a particular portion of the network while the capacity is available elsewhere in the network. Multi-Protocol Label Switching (MPLS) and Constraint Based Routing (CBR) provide powerful tools for traffic engineering. With these mechanisms, a certain amount of network resources can be reserved for the potential voice traffic along the paths which are determined by CBR or shortest path routing algorithms.

Admission Control

Admission control limits the resource usage of voice traffic to within the amount of the provisioned resources. Plain IP networks have no admission control and can offer only best-effort service. Parameter-based admission control provides delay-guaranteed service to applications whose traffic can be accurately described, such as VoIP. When traffic is bursty, it is difficult to describe the traffic characteristics, which forces this type of admission control to overbook network resources and hence lowers network utilization. It uses explicit traffic descriptors to limit the amount of traffic over any period (a typical example is the token bucket). Algorithms used in parameter-based admission control are [12] [14]:

  • Cisco's resource reservation based admission control (RSVP)
  • Utilization based (compares with a threshold; based on the utilization value at runtime it decides to admit or reject)
  • Per-flow end-to-end guaranteed delay service (computes bandwidth requirements and compares with available resources to make the decision)
  • Class-based admission control

Performance Evaluation in VoIP applications

End-To-End Delay

When the end-to-end delay exceeds a certain value, the interactivity becomes more like half-duplex communication [17]. Delay can be of two types: delay due to the processing and transmission of speech, and network delay (delay that is the result of processing in end systems, packet processing in network devices and propagation delay between network nodes on the transmission path).

Network delay = Fixed part + variable part

The fixed part depends on the performance of the network nodes on the transmission path, the capacity of the links between the nodes, transmission delay and propagation delay. The variable part is the time spent in queues, which depends on the current network load. Queuing delay can be reduced by the introduction of advanced scheduling mechanisms (Expedited Forwarding and priority queuing). IP packet delay can be reduced by sending shorter packets. A useful technique for voice delay reduction on a WAN is link fragmentation and interleaving (LFI): a longer packet is fragmented into smaller packets, and voice packets are interleaved between those fragments for transmission.
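
A short worked example of why fragmentation helps: the time a voice packet can wait behind one data packet equals that packet's serialization delay on the slow link (the values below are illustrative).

    def serialization_ms(num_bytes, link_kbps):
        # 1 kbps = 1 bit per ms, so bits / (bits per ms) gives milliseconds.
        return num_bytes * 8 / link_kbps

    link = 128                                   # kbps WAN link
    print(serialization_ms(1500, link))          # ~93.8 ms behind a full MTU packet
    print(serialization_ms(160,  link))          # ~10 ms behind a 160-byte fragment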

Delay Jitter

Delay variation, also called jitter, obstructs the proper reconstruction of voice packets in their original sequential and periodical pattern. It is defined as difference in total end-to-end delay of two consecutive packets in the flow. Removing the jitter requires collecting packets and storing them long enough to allow the slowest packets to arrive in order to be played in the correct sequence [20].

The solution is to employ a playout buffer at the receiver to absorb the jitter before outputting the audio stream. Packets are buffered until their scheduled playout time arrives. Scheduling a later deadline increases the possibility of playing out more packets and results in a lower loss rate, but at the cost of higher buffering delay.

Techniques for Jitter Absorption

  • Setting the same playout time for all packets, either for the entire session or for the duration of each talk spurt.
  • Adaptively adjusting the playout time during silence periods according to current network conditions.
  • Constantly adapting the playout time for each packet; this requires time-scaling of voice packets to maintain continuous playout.

Frame Erasure (FE)

Frame erasure basically happens when the IP packet carrying a speech frame does not arrive at the receiver in time. The loss may be a single frame or a block of frames. Techniques used to combat frame erasure are [24]:

  • Forward Error Correction (requires additional processing); its effectiveness depends on the rate and distribution of the losses.
  • Loss concealment (replaces lost frames by replaying the last successfully received frame); effective only at low loss rates and for single-frame losses.

High frame-erasure rates and delays can lead to longer periods of corrupted voice. The speech quality perceived by the listener depends on the frame-erasure levels that occur at the exit of the jitter buffer, after Forward Error Correction has been applied. To reduce levels of frame loss, the Assured Forwarding service helps to reduce network packet loss that occurs because of full queues in network nodes.

Out of Order Packet Delivery

Out-of-order delivery occurs in complex topologies where more than one path exists between the sender and the receiver. The receiving system must rearrange the received packets in the correct order to reconstruct the original speech signal [14].

Techniques for Out-of-Order Packet Delivery

Out-of-order delivery is also handled by the jitter buffer, whose functionality now becomes:

  • Re-ordering of out-of-order packets (based on sequence number)
  • Elimination of jitter

A sketch of such a buffer follows.
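
The minimal jitter-buffer sketch below covers both roles: it holds packets for an assumed fixed playout delay and releases them in sequence-number order. The delay value and frame contents are illustrative.

    import heapq

    class JitterBuffer:
        def __init__(self, playout_delay_ms=60):
            self.delay = playout_delay_ms
            self.heap = []                       # (sequence_number, arrival_ms, payload)

        def put(self, seq, arrival_ms, payload):
            heapq.heappush(self.heap, (seq, arrival_ms, payload))

        def get(self, now_ms):
            """Return the next in-order frame whose playout time has arrived."""
            if self.heap and self.heap[0][1] + self.delay <= now_ms:
                return heapq.heappop(self.heap)[2]
            return None

    buf = JitterBuffer()
    buf.put(2, arrival_ms=25, payload=b"frame-2")   # arrived out of order
    buf.put(1, arrival_ms=20, payload=b"frame-1")
    print(buf.get(now_ms=90))                        # b"frame-1" is played first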

Chapter 4

Demonstration Methodology and Simulation

Introduction of OPNET Software

OPNET is a very powerful network simulator. Its main purposes are to optimize cost, performance and availability. The goal of this part of the research is to learn the basics of how to use the Modeler interface, as well as some basic modeling theory. The following tasks are considered [38]:

  • Build and analyze models.
  • Configure the object palette with the needed models.
  • Set up application and profile configurations.
  • Model a LAN as a single node.
  • Specify background utilization that changes over time on a link.
  • Simulate multiple scenarios simultaneously.
  • Apply filter to graphs of results and analyze the results.

Features of OPNET Software

  • Project Editor
  • The Process Model Editor
  • The Link Model Editor
  • The Path Editor
  • The Packet Format Editor
  • The Probe Editor
  • The Simulation Sequence Editor
  • The Analysis Tool
  • The Project Editor Workspace

Objectives

The objective of this demonstration is to illustrate QoS-specific technologies for the Internet Protocol, together with a general understanding of the voice application and the statistics that can be collected [3].

IP QoS Specific techniques for Internet Protocol

Queuing schemes:

  • First In First Out (FIFO),
  • Priority Queuing (PQ),
  • Custom Queuing (CQ), and
  • Weighted Fair Queuing (WFQ).

Random Early Detection (RED) and Weighted RED (WRED) mechanisms

Committed Access Rate mechanism (CAR),

The above-mentioned techniques give the concept and the benefits of using them to prevent bottlenecks. Queuing schemes provide predictable network service by providing dedicated bandwidth, controlled jitter and latency, and improved packet loss characteristics. The basic idea is to pre-allocate resources (e.g., processor and buffer space) for sensitive data. Each of the following schemes requires customized configuration of the output interface queues.

Priority Queuing (PQ) assures that during congestion the highest priority data does not get delayed by lower priority traffic. However, lower priority traffic can experience significant delays. (PQ is designed for environments that focus on mission critical data, excluding or delaying less critical traffic during periods of congestion.)

Custom Queuing (CQ) assigns a certain percentage of the bandwidth to each queue to assure predictable throughput for other queues. It is designed for environments that need to guarantee a minimal level of service to all traffic.

Weighted Fair Queuing (WFQ) allocates a percentage of the output bandwidth equal to the relative weight of each traffic class during periods of congestion.
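
As a toy illustration of the WFQ principle, the sketch below assigns each arriving packet a virtual finish tag of the classic form start + length/weight and always transmits the packet with the smallest tag; the flows and weights are arbitrary examples, and the virtual-time update is simplified.

    import heapq

    class WFQ:
        def __init__(self, weights):
            self.weights = weights                      # e.g. {"voice": 4, "data": 1}
            self.last_finish = {name: 0.0 for name in weights}
            self.virtual_time = 0.0
            self.heap = []

        def enqueue(self, flow, length_bytes, packet):
            start = max(self.virtual_time, self.last_finish[flow])
            finish = start + length_bytes / self.weights[flow]
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, flow, packet))

        def dequeue(self):
            if not self.heap:
                return None
            finish, flow, packet = heapq.heappop(self.heap)
            self.virtual_time = finish                  # simplified virtual-time update
            return flow, packet

    wfq = WFQ({"voice": 4, "data": 1})
    wfq.enqueue("data", 1500, "d1")
    wfq.enqueue("voice", 200, "v1")
    print(wfq.dequeue())                                # the voice packet goes first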

RED is a dropping mechanism based upon the premise that adaptive flows such as TCP will back off and retransmit if they detect congestion. By monitoring the average queue depth in the router and by dropping packets probabilistically, RED aims to prevent too many TCP sources from ramping up at once and minimizes the effect of that congestion.

CAR is a traffic regulation mechanism which defines a traffic contract in routed networks. CAR can classify and set policies for handling traffic that exceeds a certain bandwidth allocation [14] [15]. CAR can also be used to set IP precedence based on application, incoming interface and TOS. It allows considerable flexibility for precedence assignment.

Configuration

To apply and simulate the results of the above-mentioned techniques, QoS configuration is performed on the following objects:

Table 4.1(a): Configurable Items Description


Router Configuration.

QoS specification parameters are available on a per interface basis on every router. The sub-attribute called "QoS info" in the "IP Address Information" attribute is used to specify this information.

An incoming and outgoing CAR profile can be assigned to this interface as well as a queuing mechanism (FIFO, WFQ, PQ, and CQ) with its queuing profile.

"Queuing Profiles" are special schemes defining different queue configuration options. These are defined on the "QoS Attribute Configuration" object. Check out that object for a list of the default profiles

Traffic Specification

Traffic specification involves setting the IP precedence in the TOS (Type of Service) or DSCP (Differentiated Services Code Point) field of IP datagrams. Traffic is prioritized on a per-session basis.

In clients the attribute "Application Configuration" defines for each application (Email, Ftp, Http ...) the type of service.

In servers the attribute "Supported Services" defines for each application the type of service as well.

QoS Configuration Objects

The QoS Attribute Configuration object defines profiles for the following technologies:

  • CAR
  • FIFO
  • WFQ
  • Custom Queuing
  • Priority Queuing

Each queuing-based profile (e.g., FIFO, WFQ, PQ, and CQ) contains a table in which each row represents one queue. Each queue has many parameters such as queue size, classification scheme, RED parameters, etc.

Note that the classification scheme can also be configured to contain many different criteria by increasing the number of rows.

Some examples of setting queue priorities are:

  • Weight for WFQ profile. Higher priority is assigned to the queue with a higher weight.
  • Byte count for Custom Queuing profile. More traffic is served from the queue with a higher byte count.
  • Priority label for Priority Queuing. Higher priority is assigned to the queue with a higher priority label.

The CAR profile defines a set of classes of service (CoS). Each row represents a CoS for which CAR policies have been defined.

Scenario Test

To elaborate the simulation using this software, the following scenarios are included. The role of each scenario is as follows [24] [26]:

  1. FIFO: This scenario illustrates FIFO queuing at the IP layer.
  2. Priority Queuing: This scenario illustrates Priority Queuing at the IP layer.
  3. Custom Queuing: This scenario illustrates Custom Queuing at the IP layer.
  4. Custom_Queuing_with_LLQ: This scenario illustrates the impact of using a low latency queue in Custom Queuing.

Results will be based on the Following Statistics

Global Statistics: These statistics get updated by all the nodes/links in the network.

  • Links throughput
  • Application response time
  • Application traffic sent and received

Node Statistics: Statistics collected by all the nodes in the network.

  • Buffer size (packets): size of a queue (annotated by "Q") on an interface (annotated by "IF").
  • Queuing delay (secs): delay experienced by a packet from the time it arrives in the queue until it leaves it.
  • Traffic received (bits/sec or packets/sec): amount of traffic received by a queue (annotated by "Q") on an interface (annotated by "IF").
  • Traffic sent (bits/sec or packets/sec): amount of traffic sent from a queue (annotated by "Q") on an interface (annotated by "IF").
  • Traffic dropped (bits/sec or packets/sec): amount of traffic dropped by a queue (annotated by "Q") on an interface (annotated by "IF"). Traffic can be dropped because of the queue size or RED/WRED.
  • RED average queue size (packets): average size of a queue (annotated by "Q") on an interface (annotated by "IF"). This size is computed for RED.
  • CAR incoming traffic dropped (bits/sec or packets/sec): amount of incoming traffic dropped by CAR policies on an interface (annotated by "IF").
  • CAR outgoing traffic dropped (bits/sec or packets/sec): amount of outgoing traffic dropped by CAR policies on an interface (annotated by "IF").

Scenario 1: Network configuration (No QoS)

The network is composed of four pairs of video clients. Each pair uses a distinct TOS (Type of Service) for data transfer. The link between the two routers is a "potential" bottleneck.

Results: Traffic is queued in "router A" because of the bottleneck. Since "router A" has unlimited buffer capacity no packets get dropped. The application response time keeps on increasing as the packets get queued indefinitely without ever getting dropped.

Scenario 2: Network configuration (FIFO)

The network is composed of four pairs of video clients. Each pair uses a distinct TOS (Type of Service) for data transfer. The link between the two routers is a "potential" bottleneck. FIFO queuing can be enabled on each interface in "advanced" routers. Queuing profile and queuing processing mechanism are set in attribute "QoS info" in "IP Address Information" compound attribute. Queuing profile defines the number of queues and the classification scheme. Queuing profiles are defined in the QoS configuration object.

Results: Traffic is queued in "router A" because of the bottleneck. Since "router A" has limited buffer capacity, some packets are dropped when the buffer usage reaches its full capacity. The application response time can be seen to reach a threshold because packets that arrive on a full queue always get dropped. Note that the maximum delay that an arriving packet observes is the delay encountered as a result of servicing all the packets ahead of it in an almost full queue.

Scenario 3: Network configuration (Priority Queuing)

The network is composed of four pairs of video clients. Each pair uses a distinct TOS (Type of Service) for data transfer. The link between the two routers is a "potential" bottleneck. Routers support multiple queues for each type of service. Queue 4 receives TOS 4 traffic; queue 3 receives TOS 3 traffic... Queues are serviced using "Priority Queuing" mechanism. Priority queuing can be enabled on each interface in "advanced" routers. Queuing profile and queuing processing mechanism are set in attribute "QoS info" in "IP Address Information" compound attribute. Queuing profile defines the number of queues and the classification scheme. Queuing profiles are defined in the QoS configuration object.

Result: Traffic is queued in "router A" because of the bottleneck. The priority queuing mechanism differentiates between queues according to their priority. In this example, priority is based on type of service (TOS).

  • Queue 4 sends packets as long as it is not empty.
  • Queue 3 sends packets when queue 4 is empty.
  • Queue 2 sends packets when queue 3 and 4 are empty.
  • Queue 1 sends packets when all the other queues are empty.

As a result of this classification traffic with higher TOS gets better delay.

Scenario 4: Network configuration (Custom Queuing)

The network is composed of four pairs of video clients. Each pair uses a distinct TOS (Type of Service) for data transfer. The link between the two routers is a "potential" bottleneck. Routers support multiple queues for each type of service. Queue 4 receives TOS 4 traffic; queue 3 receives TOS 3 traffic... Queues are serviced using "Custom Queuing" mechanism.

Custom queuing can be enabled on each interface in "advanced" routers. Queuing profile and queuing processing mechanism are set in attribute "QoS info" in "IP Address Information" compound attribute. Queuing profile defines the number of queues and the classification scheme. Queuing profiles are defined in the QoS configuration object. This object is found in "utilities" palette.

Results: Traffic is queued in "router A" because of the bottleneck. In this example, Custom Queuing mechanism differentiates traffic between queues based on the type of service (TOS). Traffic is sent from each queue in a round-robin fashion.

Queues send traffic proportionally to their byte count. In this example, queues with high index have higher byte count.

As a result of this classification, traffic with a higher TOS gets better delay. Queues 3 and 4 get their share but leave the other queues starved of bandwidth.

Scenario 5: Network configuration (Custom Queuing with Low Latency Queuing)

The network configuration is similar to that of the previous scenario (Custom Queuing). The only difference is in the Custom Queuing profile settings, where Queue 1 is configured to be a Low Latency Queue (LLQ). The LLQ is a strict priority queue functioning within the regular Custom Queuing scheduling environment. It receives absolute precedence over the other queues, which means that no other queue in the system can be serviced unless the LLQ is empty. The "Byte Count" attribute is not used for the LLQ and its value is ignored by the scheduler. If the LLQ is empty, the other queues are serviced according to the regular Custom Queuing mechanism based on their "Byte Count" attribute settings.

Results: Traffic is queued in "router A" because of the bottleneck. Queue 1, which is configured to be a LLQ, gets the highest priority and thus the highest share of the bandwidth and the lowest end-to-end delay. Other queues get starved due to the presence of the LLQ (compare the results with the previous scenario).
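
The LLQ behaviour can be sketched as a small change to the scheduling pass of the previous scenario (again purely illustrative, assuming queue 1 is the LLQ and using hypothetical byte counts): the scheduler drains queue 1 completely before any byte-count servicing takes place.

    from collections import deque

    def llq_service_round(queues, byte_counts, llq_index=1):
        """One scheduling pass: drain the LLQ first, then byte-count round-robin."""
        sent = {i: 0 for i in queues}
        # the LLQ ignores its byte count and gets absolute precedence
        while queues[llq_index]:
            sent[llq_index] += queues[llq_index].popleft()
        # the remaining queues follow the regular Custom Queuing rule
        for i in sorted(queues):
            if i == llq_index:
                continue
            while queues[i] and sent[i] < byte_counts[i]:
                sent[i] += queues[i].popleft()
        return sent

    # usage sketch: 20 packets of 500 bytes waiting in every queue
    queues = {i: deque([500] * 20) for i in range(1, 5)}
    print(llq_service_round(queues, {1: 0, 2: 2000, 3: 4000, 4: 8000}))
    # queue 1 is emptied completely (10000 bytes) before queues 2-4 get their shares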

General understanding of Voice Application

The role of each scenario is as follows:

multi_caller_node: Demonstrates the effect of various encoding schemes, with the Speech Activity Factor enabled and disabled, on the load and throughput for a specific caller.

Voice_with_WFQ: This scenario illustrates two nodes contending to send traffic through a common link. The HI_Priority node has the Type of Service (ToS) field set to Interactive Voice and the LOW_Priotity node has the ToS field set to Excellent Effort. Router1 and Router2 prioritize the traffic using the WFQ settings configured in the "QoS Attribute Config" node.

Speech Activity Detection: This scenario examines the link capacity with Speech Activity Detection enabled and disabled in the voice application.

SIP: This scenario uses SIP (Session Initiation Protocol) to manage the Call Setup and Call Disconnect signaling.

Configuration

To apply and simulate the above-mentioned features, configuration is performed on the following objects.

Configurable Items

  • Encoder Name
  • Frame Size
  • Look ahead Size
  • DSP Processing Ratio
  • Coding Rate
  • Speech Activity Detection

A call initiated by a node uses all the parameters defined in the "Outgoing XXX" attribute. It also sends the "Incoming XXX" attributes to the node that is being called.

Workstation Configuration

Call Generation Parameters

  • Silence Length (incoming and outgoing connection)
  • Talk Spurt Length (incoming and outgoing connection)
  • Destination Address
  • Encoder Scheme (incoming and outgoing connection)
  • Voice Frames per Packet
  • Type of Service


Node Capabilities and Limitations: Workstations or LAN nodes can be used to generate voice traffic.

A workstation (wkstn) or LAN node running the voice application can connect to other wkstn / LAN nodes; there are no server nodes that service the voice application.

Application-Config Node: This node can be found in the "utilities" object palette.

It should be placed in the network but not connected to any other node.

This node maintains Application Layer parameters that can be used by all the nodes in the network, which avoids duplicating the parameters in multiple nodes.

Results are based on the following statistics.

Global Statistics: Statistics collected by all the nodes in the network.

Traffic sent (packets/sec and bytes/sec): traffic sent by the voice application to the transport layer by all the nodes in the network.

Traffic received (packets/sec and bytes/sec): traffic received by the voice application from the transport layer in all the nodes in the network.

Packet End-to-End Delay: the delay incurred by voice application packets while travelling from the calling party to the called party and vice versa.

Packet Delay Variation: the variation in the delay incurred by voice application packets while travelling from the calling party to the called party and vice versa.

Node Statistics:

Traffic sent (packets/sec and bytes/sec): traffic sent by the voice application to the transport layer.

Traffic received (packets/sec and bytes/sec): traffic received by the voice application from the transport layer.

Packet End-to-End Delay: the delay incurred by voice application packets while travelling from the calling party to the called party and vice versa.

Packet Delay Variation: the variation in the delay incurred by voice application packets while travelling from the calling party to the called party and vice versa.

Scenario 1: Voice over IP

Voice traffic is very sensitive to delay and delay variation in IP networks. IP traffic is treated as "best effort", meaning it is handled on a first-come, first-served basis. Because data packets are of variable size, large file transfers can take advantage of large packet sizes. These and many other factors contribute to large delays and large delay variations in packet delivery.

Weighted Fair Queuing (WFQ) or priority queuing allows the network to place different traffic types into specific queues and treat them separately. This mechanism can be used to prioritize the transmission of voice traffic over data traffic and hence reduce potential queuing delays.

This scenario illustrates two nodes contending to send traffic through a common link. The statistics were obtained in two simulation runs. In the first run, both the HI_Priority_src node and the LOW_Priotity_src node have the Type of Service (ToS) field set to Best_Effort. In the second run, the HI_Priority_src node has the ToS field set to Interactive Voice and the LOW_Priotity_src node has the ToS field set to Excellent Effort. Router1 and Router2 prioritize the traffic using the WFQ settings configured in the "QoS Attribute Config" node.

Results Analysis: The graphs clearly show the reduction in end-to-end delay achieved by prioritizing traffic.

Scenario 2: Voice over IP - Using CODECS

The network in this scenario examines the link capacity with Speech Activity Detection enabled and disabled in the voice application. The statistics were obtained in two simulation runs. In the first run, the Voice_src1 and Voice_dest1 nodes used G.711 as the encoder, while Voice_src2 and Voice_dest2 used G.729. In the second run, the same encoders were used, but with Speech Activity Detection disabled. The statistics shown are the cumulative distribution function of the utilization of the common link. It can easily be inferred from the plot that more calls can be accommodated when silence suppression is used.
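
A back-of-the-envelope sketch of why silence suppression frees link capacity is shown below; the 0.35 speech activity factor and the payload-only view (no packet header overhead) are assumptions made for illustration, not measured values from the simulation.

    # Rough effect of silence suppression on per-call payload bandwidth.
    def average_rate_bps(coding_rate_bps, activity_factor=1.0):
        """Average bit rate when frames are generated only during talk spurts."""
        return coding_rate_bps * activity_factor

    for name, rate in (("G.711", 64000), ("G.729", 8000)):
        always_on = average_rate_bps(rate)          # Speech Activity Detection disabled
        with_sad = average_rate_bps(rate, 0.35)     # SAD enabled (assumed 35% activity)
        print(name, always_on, with_sad)

With the assumed activity factor, each call consumes roughly a third of the payload bandwidth it would otherwise use, which is why the common link can carry more calls when silence suppression is enabled.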

Scenario 3: Voice over IP - SIP

This scenario is presented to illustrate a SIP-based call network.

Following attributes were configured to use SIP:

Node: Appl-Config

Attribute: Application Definitions->Descriptions->Voice->Signaling

Node: San_Francisco.Caller

Attributes: SIP UAC Parameters->UAC Service

SIP UAC Parameters->Proxy Server Specification

Node: Pittsburgh.Callee

Attributes: SIP UAC Parameters->UAC Service

Node: Dallas.Proxy Server

Attributes: SIP UAC Parameters->UAS Service

Sample statistics for SIP are shown. These indicate the call setup time and duration as measured on the caller node and the active call count as recorded on the proxy server.

Sample Calculation for VoIP Traffic

Assume that G.711 is used as the encoder scheme. Its parameters are:

Frame Size : 4 msec

Look ahead Size: 0 msec

DSP Ratio : 1.0

Coding Rate : 64000 bits/sec

Number of Frames per Packet : 1

dsp_time = DSP Ratio * Frame Size = 1.0 * 4 msec = 4 msec

Steady-state packet inter-arrival time = dsp_time = 4 msec

Number of bytes per packet = (number_of_frames_per_packet * coding rate * Frame Size) / 8
                           = (1 * 64000 bits/sec * 4 msec) / 8
                           = 256 bits / 8 = 32 bytes/pkt

Average Traffic Sent (packets/sec) = 1 / 4 msec = 250 packets/sec

Average Traffic Sent (bytes/sec) = 32 * 250 = 8000 bytes/sec
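
The same arithmetic can be checked with a short Python sketch; it covers the voice payload only (no RTP/UDP/IP header overhead), and the function and argument names are chosen just for this illustration.

    def voip_traffic(frame_size_s, coding_rate_bps, frames_per_packet=1, dsp_ratio=1.0):
        """Steady-state VoIP payload traffic for a given encoder configuration."""
        dsp_time = dsp_ratio * frame_size_s                    # encoding time per frame
        inter_arrival = frames_per_packet * dsp_time           # packet spacing in seconds
        bytes_per_packet = frames_per_packet * coding_rate_bps * frame_size_s / 8
        packets_per_sec = 1.0 / inter_arrival
        return bytes_per_packet, packets_per_sec, bytes_per_packet * packets_per_sec

    # G.711 with 4 msec frames and one frame per packet, as in the example above
    print(voip_traffic(0.004, 64000))   # (32.0, 250.0, 8000.0)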

Chapter 5

Results

Quality of Service measures the performance of a transmission system, reflecting its transmission quality and service availability. In the previous chapter, different techniques were applied to observe the behaviour of networks carrying data and voice services. Based on those experiments, the following summarizes the queuing and coding techniques that were used.

First In, First Out (FIFO) Queuing

Packets are forwarded in the same order in which they arrive at the interface.

Priority Queuing (PQ)

PQ ensures that important traffic gets the fastest handling at each point where it is used; it was designed to give strict priority to important traffic. Priority queuing can be used to manage limited resources such as bandwidth on a transmission line from a network router. When outgoing traffic queues up because of insufficient bandwidth, all other queues can be halted in order to send traffic from the highest-priority queue as soon as it arrives. This ensures that prioritized traffic (such as real-time traffic, e.g. the RTP stream of a VoIP connection) is forwarded with the least delay and the least likelihood of being rejected because a queue has reached its maximum capacity. All other traffic is handled when the highest-priority queue is empty. Another approach is to send disproportionately more traffic from the higher-priority queues. Usually a limit is placed on the bandwidth that traffic from the highest-priority queue can consume, to prevent high-priority packets from choking off all other traffic. In practice this limit is rarely reached because of higher-level control elements such as the Cisco CallManager, which can be programmed to reject calls that would exceed the configured bandwidth limit.

Weighted Fair Queuing (WFQ)

WFQ is a flow-based queuing algorithm that does two things simultaneously: it schedules interactive traffic to the front of the queue to reduce response time, and it fairly shares the remaining bandwidth among high-bandwidth flows. WFQ is a generalization of Fair Queuing (FQ). In both WFQ and FQ, each data flow has a separate FIFO queue. In FQ, with a link data rate of R, the N active data flows (those with non-empty queues) are serviced simultaneously at any given time, each at an average data rate of R / N. Since each data flow has its own queue, an ill-behaved flow (one that has sent larger packets or more packets per second than the others since it became active) only punishes itself and not the other sessions.
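
A toy illustration of this per-flow fairness follows; it is a sketch under simplifying assumptions (the weights and packet sizes are chosen for the example, and there is no system-wide virtual-time tracking as real WFQ implementations perform), not the router's actual algorithm.

    import heapq
    from itertools import count

    class WeightedFairQueue:
        """Toy WFQ: packets are served in order of per-flow virtual finish times."""

        def __init__(self):
            self.heap = []            # entries: (finish_time, tie_breaker, flow, packet)
            self.flow_finish = {}     # last finish time assigned to each flow
            self.tie = count()

        def enqueue(self, flow, packet, size_bytes, weight):
            # finish time grows with packet size and shrinks with flow weight, so
            # small or high-weight packets (e.g. voice) are scheduled sooner
            start = self.flow_finish.get(flow, 0.0)
            finish = start + size_bytes / weight
            self.flow_finish[flow] = finish
            heapq.heappush(self.heap, (finish, next(self.tie), flow, packet))

        def dequeue(self):
            if not self.heap:
                return None
            _, _, flow, packet = heapq.heappop(self.heap)
            return flow, packet

    # usage sketch: a high-weight voice flow competing with a low-weight data flow
    wfq = WeightedFairQueue()
    wfq.enqueue("voice", "v1", size_bytes=160, weight=4.0)
    wfq.enqueue("data", "d1", size_bytes=1500, weight=1.0)
    wfq.enqueue("voice", "v2", size_bytes=160, weight=4.0)
    while (item := wfq.dequeue()):
        print(item)   # both voice packets come out ahead of the large data packet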

Random Early Detection

RED addresses the problems that arise when a full queue simply drops every arriving packet (tail drop). It monitors the average queue size and drops (or, when used in conjunction with ECN, marks) packets based on statistical probabilities. If the buffer is almost empty, all incoming packets are accepted. As the queue grows, the probability of dropping an incoming packet grows too; when the buffer is full, the probability reaches 1 and all incoming packets are dropped.
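
The drop behaviour described above can be sketched as follows; the thresholds and the maximum drop probability are illustrative assumptions, and real RED additionally maintains an exponentially weighted moving average of the queue size.

    import random

    def red_drop(avg_queue, min_th=20, max_th=80, max_p=0.1):
        """Return True if an arriving packet should be dropped (or ECN-marked)."""
        if avg_queue < min_th:
            return False              # queue nearly empty: accept everything
        if avg_queue >= max_th:
            return True               # queue full: drop every arrival
        # in between, the drop probability rises linearly from 0 to max_p
        p = max_p * (avg_queue - min_th) / (max_th - min_th)
        return random.random() < p

    # usage sketch: the observed drop fraction grows with the average queue size
    for q in (10, 30, 60, 90):
        drops = sum(red_drop(q) for _ in range(10000))
        print(q, drops / 10000)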

CODECS

CODECs play a vital role in maintaining QoS in VoIP networks. To improve the flow of VoIP data packets, it is necessary to determine which CODEC works best and delivers the required quality over the network.

Chapter 6

Conclusion

The study provides a comprehensive analysis of Voice over IP technology. The previous chapters elaborated on the factors that affect Quality of Service, and several queuing models were examined as ways to improve it; overall, many techniques have been deployed to improve the quality of service of VoIP. As available bandwidth increases, new CODECs have been introduced and latencies have been reduced through different communication algorithms. Overall, the Quality of Service of VoIP has greatly improved and is moving toward the new horizon of next-generation networks.

References

Periodical Articles / Research Papers

  1. Raphael M. Bahati and Michael A. Bauer, Department of Computer Science, The University of Western Ontario, London, Ontario N5A 4B7, Canada, {rbahati, bauer}@csd.uwo.ca, "Quality of Service Provisioning for VoIP Applications with Policy-Enabled Differentiated Services"; Renaud Cuny and Ari Lakaniemi, "VoIP in 3G Networks: An End-to-End Quality of Service Analysis".
  2. Frenzel, Louis E "Codec and SLIC Merged Into Single-Chip VoIP Solution", The VoIP Solution, March 23, 1992, Sect. 1, p. 1, col. 4
  3. Andrew Loius "VoIP codecs and QoS", The Journal, September 12, 2004, The VoIP Technology.
  4. Venkatesh Mahadevan, Senior Research Associate, University of Technology Sydney, mahadevn@eng.uts.edu.au, "A Case Study for Performance Analysis of VoIP Systems".
  5. Zizhi Qiao, Lingfen Sun, Nicolai Heilemann and Emmanuel Ifeachor, Centre for Signal Processing & Multimedia Communications, School of Computing, Communications and Electronics, University of Plymouth, Plymouth PL4 8AA, UK, "A New Method for VoIP Quality of Service Control Using Combined Adaptive Sender Rate and Priority Marking".
  6. International Telecommunication Union. Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear-prediction. Recommendation G.729, Telecommunication Standardization Sector of ITU, Geneva, Switzerland, March 1995.
  7. International Telecommunication Union. Pulse code modulation (PCM) of Voice frequencies. Recommendation G.711, Telecommunication Standardization Sector of ITU, Geneva, Switzerland, November 1998.
  8. Anton Kos, Borut Klepec and S. Tomažič, "Techniques for Performance Improvement of VoIP Applications", IEEE, 2002.
  9. C. Systems. Cisco IP telephony QoS design guide.
  10. S. Wenyu Jiang, Kazummi Koguchi, IEEE, 2003. QoS evaluation of VoIP end-points.
  11. Nortel Networks i2040 Software Phone. http://www142.nortelnetworks.com/bvdoc/i2040/p0508404 2.00.pdf
  12. http://www.voip-info.org/tiki-index.php
  13. http://www.experts-exchange.com/Networking/VoIP_Voice_over_IP/
  14. IETF Session Initiation Protocol (SIP) Working Group, http://www.ietf.org/html.charters/sip-charter.html
  15. IETF Differentiated Services (DiffServ) Working Group, http://www.ietf.org/html.charters/diffservcharter.html
  16. Renaud Cuny and Ari Lakaniemi, "VoIP in 3G Networks: An End-to-End Quality of Service Analysis", 2004; Raphael M. Bahati and Michael A. Bauer, Department of Computer Science, The University of Western Ontario, London, Ontario N5A 4B7, Canada, {rbahati, bauer}@csd.uwo.ca, "Quality of Service Provisioning for VoIP Applications with Policy-Enabled Differentiated Services".
  17. Venkatesh Mahadevan, Senior Research Associate, University of Technology Sydney, mahadevn@eng.uts.edu.au, 2003, "A Case Study for Performance Analysis of VoIP Systems".
  18. Zizhi Qiao, Lingfen Sun, Nicolai Heilemann and Emmanuel Ifeachor, Centre for Signal Processing & Multimedia Communications, School of Computing, Communications and Electronics, University of Plymouth, Plymouth PL4 8AA, UK, March 2001, "A New Method for VoIP Quality of Service Control Using Combined Adaptive Sender Rate and Priority Marking".
  19. A.H.Muhamad Amin IT/IS Department, Universiti Teknologi PETRONAS, 2004, The Second International Conference on Innovations in Information Technology (IIT'04)
  20. Kos, A., Klepec, B., and Tomažic, S. May 2002 "Techniques for Performance Improvement of VoIP Applications". IEEE MELECON 2002. Cairo, Egypt.
  21. Amir, Y., Danilov, C., Hedqvist, D., and Terzis, A., August 2004 "1-800 OVERLAYS: Using Overlay Networks to Improve VoIP Quality", Technical Report CNDS-2004-2.
  22. Miloucheva, I., Nassri, A., and Anzaloni, A., IPS 2004 D41 - 2nd Inter-Domain Performance and Simulation Workshop, "Automated Analysis of Network QoS Parameters for Voice over IP Applications".
  23. M. Palola, M. Jurvansuu, J. Korva; Singapore, 15-19 November 2004. "Breaking Down the Mobile Service Response Time", IEEE International Conference on Networks, icon2004.
  24. V. Paxson, Ph.D. dissertation, University of California, Berkeley, April 1997 "Measurements and Analysis of End-to-End Internet Dynamics".
  25. Periodical Articles / Research Papers

  26. Ted Wallingford, Switching to VoIP, 2nd Edition, 2003.
  27. S. T. Anton Kos, Borut Klepec, IEEE, 2002 ,Techniques for performance improvement of VoIP applications.
  28. C. Systems. Cisco IP telephony QoS design guide.
  29. QoS evaluation of VoIP end-points. IEEE, 2003. by S. Wenyu Jiang, Kazummi Koguchi.
  30. A. Tanenbaum, "Computer Networks", 4th Edition, Prentice-Hall, 2003.
  31. Website References

  32. URL:http://www.opnet.com/itu_guru/
  33. URL:http://www.cisco.com.
  34. URL: http://www.ethereal.com
  35. URL: http://www.tcpdump.org
  36. URL: http://www.tpub.com/neets/book12/49l.htm
  37. URL: http://windump.polito.it
  38. URL: http://analyzer.polito.it
  39. URL: http://www.networkchemistry.com/products/packetyzer/index.html
  40. URL: http://www142.nortelnetworks.com/bvdoc/i2040/p0508404 2.00.pdf
  41. URL:http://www.opnet.com/itu_guru/