Study On Scheduling Algorithms In Wimax Computer Science Essay


The IEEE 802.16 (WiMAX) standard assumes a communication model between the Base Station (BS), which acts as the server in conventional networks, and one or more Subscriber Stations (SSs), which act as the clients. The BS is the head of the network: it controls and manages the entire system and makes most of the system decisions at both the physical (PHY) and Medium Access Control (MAC) levels. It is responsible for establishing connections between itself and the SSs, and for allocating the available resources efficiently in order to increase overall system utilization and to fulfill the system and application requirements.

In WiMAX there are two operational modes: point-to-multipoint (PMP) and mesh. In PMP mode the SSs can communicate only through the BS, whereas in mesh mode they can communicate with each other as well as with the BS. In PMP mode the provider has more control over the QoS requirements, which is a hot research topic, especially for real-time systems.


The IEEE 802.16e standard has defined five scheduling classes for the different types of traffic with different QoS requirements, ranging from highly strict QoS requirements to traffic that needs only relaxed QoS requirements, so that WiMAX can support different applications such as voice, video, data, and FTP. Each class has its own requirements and constraints for bandwidth, packet loss, delay, and jitter. These classes are: Unsolicited Grant Service (UGS), real-time Polling Service (rtPS), extended real-time Polling Service (ertPS), non-real-time Polling Service (nrtPS), and Best-Effort (BE).

Unsolicited Grant Service (UGS): It is designed for services which require Constant Bit Rate (CBR) e.g. voice applications and T1/E1.

QoS requirements for this class:

Minimum reserved rate

Jitter tolerance

Maximum sustained rate

Traffic priority

Maximum latency tolerance

Real-Time Polling Service (rtPS): It is designed for services which generate variable-size data packets and whose delay requirements must be met, e.g. video streaming and e-learning.

QoS requirements for this class:

Minimum reserved rate

Maximum sustained rate

Traffic priority

Maximum latency tolerance

Extended Real-Time Polling Service (ertPS): it is a scheduling service that combines the efficiency of both the UGS and rtPS services. The ertPS scheduling service is designed to support real-time applications that generate variable-size data packets on a periodic basis, such as VoIP with silence suppression.

QoS requirements for this class:

Minimum reserved rate

Maximum sustained rate

Traffic priority

Maximum latency tolerance

Jitter tolerance

Non-Real-Time Polling Service (nrtPS): It is designed for services which require good average data rate performance but can tolerate delay e.g. FTP, document sharing.

QoS requirements for this class:

Minimum reserved rate

Maximum sustained rate

Traffic priority

Best Effort (BE) service: It is designed for services which don't require any specific QoS guarantee e.g. HTTP and Web Browsing.

QoS requirements for this class:

Maximum sustained rate

Traffic priority

Although the IEEE 802.16 standard defines different classes for different QoS requirements, it does not define the scheduling algorithms to be used for each class, leaving this as a hot research topic. Manufacturers can implement their own scheduling algorithms depending on their particular requirements. Many scheduling algorithms have been proposed in the literature.

Scheduling algorithms

Scheduling is the operation of allocating the bandwidth among the users (SSs) and determining their transmission order so as to achieve the best system performance in terms of QoS requirements such as packet loss, delay, and jitter. The scheduling algorithm for a particular network should be selected based on the type of users in the network and their QoS requirements, which vary depending on the type of application.

Scheduling algorithms are implemented at both the BS and the SSs, and different algorithms can be implemented at each side at the same time.

In this paper we focus on scheduling algorithms executed at the BS for the uplink traffic in WiMAX, i.e., the traffic from the SSs to the BS. This type of scheduling algorithm faces challenges not faced by an algorithm for the downlink traffic: an uplink scheduling algorithm does not have all the information about the SSs, such as their queue sizes, and it has to coordinate its decisions with all the SSs, whereas a downlink algorithm makes and applies its decisions locally at the BS.


To design or choose a scheduling algorithm, we have to consider the network behavior and the QoS requirements of that network. There are several factors that are usually used to evaluate and compare algorithms; some of them include:

Flexibility: a scheduling algorithm should be capable of providing different QoS requirements and of meeting the minimum requirements of users. A good design should be flexible enough that it requires little or no change to be used in a different network or over a different technology.

Simplicity: both theoretical and technical. Theoretical simplicity eases the analysis of the algorithm, so that distributions or worst-case bounds for parameters such as delay and throughput can be derived. Technical simplicity allows efficient implementation of the algorithm on a large scale.

Protection: once admitted into the network, users enter into a service level agreement (SLA) that they are expected to adhere to (e.g., a user will specify peak and mean traffic rates). Sometimes a user will not act in accordance with the SLA, causing unpredicted traffic fluctuations in the network. A scheduling algorithm needs to ensure that such fluctuations do not affect well-behaved users, and to protect users from sources of variability such as best-effort traffic, misbehaving users, and fluctuations in the network load.

Fairness: fairness measures the difference between users with respect to the resources allocated to them. Besides the QoS requirements, a scheduling algorithm needs to ensure that a reasonable level of fairness is maintained between the users. That is, it must not starve some users in order to satisfy others with heavy traffic loads. In a wireless network, due to variations in channel quality, users experiencing poor channel quality might be denied service by the scheduling algorithm, because bandwidth allocated to them would essentially be wasted as the data would be lost or corrupted before reaching the destination. A scheduling algorithm therefore needs a mechanism to compensate users that have lost service and to maintain fairness among all users.

Utilization: link utilization is a critical property for providers, as it is directly linked to the revenue generated. A scheduling algorithm needs to assign bandwidth such that maximum link utilization is achieved, and to guarantee that resources are not allocated to users that do not have enough data to transmit, which would result in wasted resources.

Packet loss/Delay: packet loss occurs when one or more data packets traveling across the network fail to reach their destination. For some systems it is the main factor to consider when designing or choosing a scheduling algorithm. Packet loss can be measured as the number of delivered data packets as a function of the traffic load submitted to the network, while the end-to-end delay measures the total amount of time a packet takes to reach its destination from its generation.

Overview of some scheduling algorithms

There are many well-known and proposed scheduling algorithms for WiMAX networks. Some of them are also applied in other technologies and areas, such as routing protocols, processor scheduling, and many other applications. In the following section we discuss some of them briefly, regardless of the categories that some researchers have used to group these algorithms.

Round Robin (RR) is the simplest and one of the best-known scheduling algorithms. It considers all the SSs with equal priority and distributes equal channel resources to all of them. Each SS receives the same quantum of time and resources as the others in turn; after its quantum, it waits for its next quantum in the next round, after all other SSs have been served, no matter what type of traffic the SS is holding. The RR scheduler is simple and easy to implement; however, it is not suitable for systems with different priority levels or with strongly varying traffic sizes.
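
As a minimal illustration, the round-robin allocation can be sketched as follows (the frame/slot representation and function names are our own assumptions, not part of the standard):

```python
from collections import deque

def round_robin_schedule(ss_queues, slots_per_frame):
    """Allocate uplink slots to each SS in turn, one packet per turn.

    ss_queues: dict mapping SS id -> deque of pending packet sizes (in slots).
    Returns a list of (ss_id, packet_size) grants for the current frame.
    """
    grants = []
    order = deque(ss_queues.keys())   # fixed visiting order, equal priority
    free = slots_per_frame
    while free > 0 and any(ss_queues[s] for s in order):
        ss = order[0]
        order.rotate(-1)              # move this SS to the back for next turn
        if ss_queues[ss]:
            pkt = ss_queues[ss][0]
            if pkt > free:
                break                 # head packet does not fit in this frame
            ss_queues[ss].popleft()
            grants.append((ss, pkt))
            free -= pkt
    return grants
```

Note that the grant sequence alternates between SSs regardless of traffic type, which is exactly why RR cannot express different priority levels.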

Weighted Round Robin (WRR): this algorithm is executed at the beginning of every frame at the Base Station (BS). At the start of a frame, the WRR algorithm allocates the bandwidth among the SSs based on their weights, which are assigned to reflect the relative priority and QoS requirements of the SSs.


The key issue in the WRR scheme is to assign the weights to the SSs in a fair and suitable way, since the weights determine the relative priority and QoS treatment each SS receives.

It is a work-conserving algorithm, in that it continues allocating bandwidth to the SSs as long as they have backlogged packets. However, since the bandwidth is assigned according to the weights only, the algorithm does not perform well in the presence of variable-size packets. WRR also does not provide good fairness over timescales shorter than a round-trip time [21, 22].

At shorter timescales, if some connection has a small weight or the number of connections is very large, long periods of unfairness can occur, e.g. when a much higher weight is assigned to SSs of the rtPS class than to SSs of the nrtPS and BE classes.
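
The weight-based allocation can be sketched as follows (a simplified model in which a weight is interpreted as the number of packets an SS may send per round; this interpretation is our assumption):

```python
from collections import deque

def wrr_schedule(ss_queues, weights, slots_per_frame):
    """Weighted Round Robin sketch: each SS is visited in turn and may
    send up to weights[ss] packets per round.

    ss_queues: dict mapping SS id -> deque of pending packet sizes.
    """
    grants, free = [], slots_per_frame
    progress = True
    while free > 0 and progress:
        progress = False
        for ss, q in ss_queues.items():
            for _ in range(weights[ss]):      # weight = packets per round
                if q and q[0] <= free:
                    pkt = q.popleft()
                    grants.append((ss, pkt))
                    free -= pkt
                    progress = True
    return grants
```

With weights {1: 2, 2: 1}, SS 1 gets twice as many transmission opportunities per round as SS 2, illustrating how relative priority is encoded.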

Deficit Round Robin (DRR): to overcome the problems of WRR, the DRR scheduler has been proposed. It is also a simple and easy-to-implement scheme that provides good fairness characteristics, and it can handle variable-size packets without knowing the average packet size. Similar to the RR algorithm, it assigns a quantum of service to each SS; the difference between the two algorithms is that when an SS is not able to send a packet, the remaining quantum is stored in a deficit counter, and the value of the deficit counter is added to the quantum in the following round. The algorithm is flexible, as it allows quanta of different sizes depending on the QoS requirements of the SSs. DRR is best suited for datagram networks where packet sizes vary. Since the DRR algorithm requires accurate knowledge of packet sizes, it is not suitable for uplink traffic; usually WRR is evaluated for the uplink traffic while DRR is evaluated for the downlink traffic.
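
The deficit-counter mechanism can be sketched as a single DRR round (slot-sized packets and quantum values are illustrative assumptions):

```python
from collections import deque

def drr_round(ss_queues, quantum, deficit, free):
    """One DRR round: each backlogged SS earns its quantum of credit and may
    send packets as long as the deficit counter covers the packet size.
    Unspent credit (the deficit) carries over to the next round."""
    grants = []
    for ss, q in ss_queues.items():
        if not q:
            deficit[ss] = 0          # idle SSs do not accumulate credit
            continue
        deficit[ss] += quantum[ss]
        while q and q[0] <= deficit[ss] and q[0] <= free:
            pkt = q.popleft()
            deficit[ss] -= pkt
            free -= pkt
            grants.append((ss, pkt))
    return grants, free
```

An SS whose head packet is larger than its quantum simply waits, accumulating credit until the packet fits, which is how DRR copes with variable packet sizes.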

Weighted Fair Queuing (WFQ): it is a flow-based queuing algorithm that assigns packets to different queues and a weight to each queue; the queues are then served based on their weights. WFQ ensures that queues are not starved for bandwidth and that traffic gets predictable service. Low-volume traffic streams, which comprise the majority of traffic, receive preferential service, while high-volume traffic streams share the remaining capacity proportionally. The main advantages of WFQ are that it never wastes bandwidth (if there is no higher-priority traffic, lower-priority traffic is forwarded) and that the weight (cost) of a queue can be modified dynamically.

Both WFQ and WRR share the idea of assigning weights to the SSs. WFQ differs from WRR in that it also considers the packet size and the channel capacity when allocating bandwidth to the SSs. An arriving packet is tagged with a finish time that is calculated based on the weight of the SS, the packet size, and the uplink channel capacity. In WFQ, the weight of an SS is calculated in the same way as in WRR; once the weight is assigned, arriving packets of the SS are stamped with a virtual finish time.

The disadvantage of WFQ is its complexity, which is high for two main reasons: the selection of the next queue to serve and the computation of the virtual time.
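
The virtual finish-time tagging can be sketched as follows (assuming, for simplicity, that all packets arrive at time zero, which lets us omit the full virtual-time bookkeeping):

```python
def wfq_order(packets, weights, capacity=1.0):
    """Serve packets in increasing order of virtual finish time.

    packets: list of (ss_id, size) in arrival order.
    Each packet's finish time advances the SS's clock by
    size / (weight * capacity), so heavier-weighted SSs accumulate
    finish time more slowly and are served more often."""
    prev_finish, tagged = {}, []
    for seq, (ss, size) in enumerate(packets):
        f = prev_finish.get(ss, 0.0) + size / (weights[ss] * capacity)
        prev_finish[ss] = f
        tagged.append((f, seq, ss, size))   # seq breaks ties by arrival order
    tagged.sort()                           # earliest virtual finish first
    return [(ss, size) for _, _, ss, size in tagged]
```

The sort over finish times is precisely the "selection of the next queue to serve" cost mentioned above; real implementations use a priority queue for this step.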


Random Early Detection (RED), also known as random early discard or random early drop, is an active queue management and congestion avoidance algorithm. RED behaves as follows: once a link starts filling up as TCP/IP sessions start, RED begins dropping packets with some probability, which signals to TCP/IP that the link is congested and it should slow down; once the link is completely saturated, RED behaves like ordinary tail drop. RED monitors the average queue size and drops packets based on statistical probabilities: if the buffer is almost empty, all incoming packets are accepted; as the queue grows, the probability of dropping an incoming packet grows too; and when the buffer is full, the probability reaches 1 and all incoming packets are dropped.
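
The drop-probability behavior described above can be sketched as follows (the threshold parameters are illustrative):

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """RED drop probability as a function of the average queue size:
    0 below min_th, rising linearly to max_p between the thresholds,
    and 1 (tail-drop behavior) at or above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

A packet is then dropped with this probability, e.g. `random.random() < red_drop_probability(avg, 5, 15, 0.1)`.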

In the traditional tail drop algorithm, a network component buffers as many packets as it can and simply drops the ones it cannot buffer. If the buffers are constantly full, the network is congested, and tail drop distributes buffer space unfairly among traffic flows.

RED by itself does not allow QoS differentiation; Weighted RED provides early detection with some QoS considerations.

Weighted Random Early Detection (WRED) is an extension of RED in which different queues may have different buffer occupation thresholds before random dropping starts, as well as different dropping probabilities, and packets are classified into these queues according to priority information. In this way QoS differentiation becomes possible, since packets in queues with higher buffer occupation thresholds or lower dropping probabilities are effectively prioritized.
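
Extending the RED rule with per-priority profiles gives a minimal WRED sketch (the profile values are our own illustrative assumptions):

```python
def wred_drop_probability(avg_queue, priority, profiles):
    """profiles maps a priority class to (min_th, max_th, max_p).
    Higher-priority classes get higher thresholds and/or a lower max_p,
    so their packets start being dropped later and less often."""
    min_th, max_th, max_p = profiles[priority]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

At the same average queue size, a "high" priority packet here sees a far smaller drop probability than a "low" priority one, which is the prioritization described above.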

Earliest Deadline First (EDF) scheduler: the algorithm assigns a deadline (due date) to each packet and allocates bandwidth to the SS that has the packet with the earliest deadline. It is a dynamic scheme, in which the priority of scheduling a packet increases with the amount of time it spends in the queue. This algorithm was originally proposed for real-time systems.

Deadlines can be assigned to packets based on the SS's maximum delay and QoS requirements. This algorithm is best suited to the traffic of the UGS and rtPS classes, because SSs in these classes have strict delay requirements. On the other hand, for the nrtPS and BE services, which do not have strict delay requirements, the EDF algorithm will schedule packets only when there are no packets from SSs of the UGS or rtPS classes, which can starve the nrtPS and BE stations when there is a large number of SSs from the UGS or rtPS classes.
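
A minimal sketch of this behavior (packet and frame representations are our own assumptions):

```python
import heapq

def edf_schedule(packets, now, slots_per_frame):
    """Grant bandwidth in order of earliest deadline; packets whose deadline
    has already passed are dropped.

    packets: list of (deadline, ss_id, size) tuples."""
    heap = list(packets)
    heapq.heapify(heap)                  # min-heap keyed on deadline
    grants, dropped, free = [], [], slots_per_frame
    while heap and free > 0:
        deadline, ss, size = heapq.heappop(heap)
        if deadline < now:
            dropped.append((ss, size))   # already missed its deadline
        elif size <= free:
            grants.append((ss, size))
            free -= size
        else:
            break                        # packet does not fit in this frame
    return grants, dropped
```

Because the heap is ordered purely by deadline, packets from delay-tolerant classes (with far deadlines) are only reached once all urgent traffic is served, which is the starvation effect noted above.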

There are also hybrid algorithms, which combine two or more scheduling algorithms to overcome some of the disadvantages or limitations of the individual algorithms, such as (EDF+WFQ+FIFO) and (EDF+WFQ). Their names alone give a picture of their concepts once each component algorithm is understood.

In our study, we consider three main algorithms: EDF, WRR, and WFQ. We chose these algorithms to cover three different techniques: round robin, fair queuing, and deadline (time) based scheduling. Besides that, we simulate and analyze only two service classes, rtPS and ertPS; that is, we concentrate only on real-time traffic.

Performance Metrics

We have specified several metrics to assess the performance of the scheduling algorithms. The following metrics have been defined:

Complexity Analysis

Frame utilization

Average Throughput per SS

Average Delay per SS

Packet Loss per SS

Fairness: measured between users of the same traffic class (intra-class fairness)

Complexity Analysis

Several studies have analyzed the complexity of different scheduling algorithms. Here we rely on their results without going into the details of their analyses, and we give the reference for each result so that interested readers can refer to it.

The WRR algorithm is known to have constant complexity with respect to the number of SSs, i.e. O(1) [33]. On the other hand, [34] showed that the complexity of the WFQ algorithm is O(N), where N is the number of SSs. Finally, the complexity of the EDF algorithm is also O(N) [35], i.e. it depends on the number of SSs.

WRR is the simplest of the three algorithms we chose, since according to the literature its complexity is independent of the number of SSs, while the complexity of the other two, WFQ and EDF, grows with the number of SSs.

Frame utilization

This parameter studies the performance of the algorithms with respect to frame utilization as the number of SSs increases. ertPS and rtPS show the same frame utilization for each algorithm when a large amount of traffic is generated.

[Figure: frame utilization vs. number of SSs]

From the output, EDF and WFQ perform close to each other for a small number of SSs (here 4), where the utilization is almost 65% of the channel, and again for large numbers (beyond 20 SSs), where it is almost 40%; in between (between 4 and 20 SSs) WFQ shows noticeably better utilization. WRR is clearly far behind the other two: there is a big drop in performance at 10 SSs, after which the utilization continues to decrease until it reaches 5% at 20 SSs.

Average Throughput per SS

The average throughput decreases with an increasing number of SSs, as a result of the decreasing load per SS and the increasing bandwidth wasted on uplink overhead. For fewer SSs, and in both traffic classes, the algorithms show a higher average throughput per SS. All algorithms show the highest average throughput for the ertPS class.

[Figures: average throughput per SS vs. number of SSs, for rtPS and ertPS]

As seen in the figure, the WFQ algorithm shows almost identical average throughput to EDF for SSs of the rtPS class; this behavior is due to the high MRTR of SSs of the rtPS class, which allows the WFQ algorithm to allocate a large amount of bandwidth to them. With a large number of SSs (greater than 24), however, the WRR algorithm shows lower average throughput; WRR results in very low average throughput because of the low MRTR of SSs of the class and the large packet size.

Average Delay per SS

As expected, the average delay increases with an increasing number of SSs, due to the increasing uplink overhead.

[Figures: average delay per SS vs. number of SSs, for rtPS and ertPS]

The WRR algorithm shows the highest average delay with a large number of SSs (greater than 18). This is due to the nature of the algorithm, which partitions the bandwidth according to the MRTR and the number of SSs: when a large number of SSs is selected every frame, the amount of bandwidth allocated to SSs of the ertPS and rtPS classes is smaller.

The minimum average delay that can be seen is 10 ms, as in the EDF algorithm; this is the allocation start time value. The minimum amount of time a packet spends in the queue is the time until the start of the uplink allocation, which is the value of the allocation start time.

Packet Loss per SS

As with the average delay, the packet loss increases with an increasing number of SSs due to the increasing uplink overhead.

The figure shows how much better EDF performs in terms of packet loss: there is a big difference once the number of SSs reaches 18, where WFQ and WRR start to lose packets at a sharply increasing rate. EDF only starts to be affected after 22 SSs, and its loss stays below 20% with 36 SSs, compared to more than 60% for WFQ and WRR at the same number of SSs.

[Figures: packet loss per SS vs. number of SSs, for rtPS and ertPS]

Fairness

The fairness among SSs of the ertPS class decreases with an increasing number of SSs. The WFQ algorithm shows the lowest fairness of the three algorithms for ertPS SSs, due to the bursty nature of ertPS traffic such as voice. WRR, being based on Round Robin as its basic building block, provides good fairness. EDF also performs well, since it serves all the SSs on time: every SS whose packets have a strict deadline will eventually get the channel.
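
One common way to quantify intra-class fairness is Jain's fairness index (an assumed choice on our part; the text does not fix a formula): it equals 1 when all SSs receive equal throughput and approaches 1/n as the allocation becomes maximally unfair.

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2)
    over the per-SS throughputs of one traffic class."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```

For example, four SSs with equal throughput score 1.0, while one SS taking everything scores 0.25.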

[Figures: fairness vs. number of SSs, for rtPS and ertPS]

Conclusion

                     EDF         WRR           WFQ
Complexity           O(log N)    O(1)          O(N)
Frame utilization    High        Low-Medium    High
Avg. Throughput      High        Low           Medium
Avg. Delay           Low         High          Medium
Packet loss          Low         Medium        High
Fairness             High        Medium        High

From the table, we can clearly see how well EDF performs among the three algorithms for real-time systems: it shows low delay, low packet loss, and high average throughput and fairness, although the EDF algorithm has the disadvantages of implementation complexity and high overhead, as we show in the next sections.


Contribution

More about EDF

As described earlier, this algorithm is based on priorities, which makes it suitable and efficient for real-time systems. In this algorithm, the real-time traffic of each SS is given a specific priority based on the remaining time before each packet expires: the packet whose deadline is closest to expiring is given the highest priority and is served first.

Even though EDF is very efficient in guaranteeing the QoS requirements of real-time traffic, several studies have identified disadvantages of the EDF scheduler, which can be summarized as:

Traffic overloading in a dynamic environment makes the EDF scheduler inefficient in serving real-time data packets, resulting in a high miss ratio [Comparing FCFS & EDF].

High overhead in some cases due to the frequent context switches among the SSs.

The starvation problem: best-effort traffic will not be served as long as there is any real-time traffic.

EDF gives a fixed miss ratio for all data flows without considering the specifications of the individual real-time packets, so the QoS guarantees will not be met for some real-time flows.

Some of these disadvantages are out of the scope of our work, since we concentrate on real-time traffic and ignore the BE, UGS, and nrtPS classes.

In order to enhance, improve, or modify the EDF algorithm, we need to know how the algorithm works by looking closely at each step. The algorithm can be written as the following pseudo-code:

First, the scheduler is in the idle state, waiting until packets arrive.

When packets arrive, the scheduler checks their types:

Non-real-time traffic: it serves these packets without any conditions, serving the first-arrived packet first (FCFS scheduling).

Real-time traffic: it checks which of these packets is closest to expiring and serves it first.

If there are both real-time and non-real-time packets, it deals with the real-time packets only.

If the current frame is full and there are still packets with dead time = 0, it drops them.

If there are free slots in the current frame and there are no packets with dead time = 0, it switches to the SS that has packets for the next time frame (packets with dead time = 1); if there are no packets with dead time = 1, it moves on to dead time = 2, and so on.

As noted in the last step above, this switching behavior is the cause of the EDF disadvantage of high overhead due to frequent context switches among the SSs: switching from one SS to another causes overhead in allocating the channel back and forth between the SSs, and checking which connection has traffic with the next dead time is a further overhead. Our contribution considers these two issues and tries to reduce their effect on the overall overhead of the system without affecting any other parameters, except for some acceptable packet loss, as we will see in the next section.
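
The real-time branch of the pseudo-code above can be rendered as a runnable sketch (the dead-time representation is our own assumption; the non-real-time branch is omitted):

```python
def edf_frame(rt_packets, slots_per_frame):
    """rt_packets maps each SS id to a list of (dead_time, size);
    dead_time 0 means the packet is due in the current frame.
    Serves the most urgent packets first (dead time 0, then 1, 2, ...),
    dropping due packets that no longer fit in the full frame."""
    grants, dropped, free = [], [], slots_per_frame
    todo = sorted((d, ss, size) for ss, q in rt_packets.items()
                  for d, size in q)
    for d, ss, size in todo:
        if size <= free:
            grants.append((ss, size))    # may switch SS between grants
            free -= size
        elif d == 0:
            dropped.append((ss, size))   # frame full and packet is due now
        # packets with d > 0 that do not fit simply wait for the next frame
    return grants, dropped
```

The sorted scan makes the problem visible: consecutive grants can alternate between different SSs, which is exactly the context-switch overhead discussed above.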

Contribution:

The EDF scheme chooses the packets to be scheduled in the current frame according to their dead time (urgency), starting with the most urgent packets and adding packets with later deadlines while there are still free slots in the current frame. This scheme aims at minimizing the number of lost packets, since the most urgent requests are always taken first. Indeed, as shown in [4], EDF is delay-optimal; yet a more careful examination reveals that, in the long run, this approach might not yield optimal utilization of the physical resources.

EDF also suffers in system performance from the frequent context switches among the SSs. In the last stage of the pseudo-code, the algorithm schedules packets from other SSs that have packets for future time slots; in addition, the scheduler has to compare the queued data in order to choose which SS to serve.

In our contribution, we try to reduce the overhead of both the comparison and the context switching.

The modified algorithm works as follows:

Drop packets of ertPS and rtPS SSs that have missed their deadlines.

Assign a deadline to each packet upon arrival.

Assign bandwidth to the SS with the packet with the earliest deadline.

If the frame is full and there are still packets that need to be scheduled in the current slot, drop them.

If all packets for the current time are served and there are still free time slots, consider the next-earliest packets from the current (last) SS.

If there are no more packets for this SS, consider the next-earliest packets from any random SS.

The reason for the last two steps is that, since the remaining packets still have time before their deadlines, staying with the current SS serves them without harming any SS, while avoiding the overhead of an unnecessary context switch.
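
Under our reading of the steps above (a hypothetical rendering, not the final implementation), the modified scheduler stays on the current SS when filling leftover slots:

```python
def modified_edf_frame(ss_packets, slots_per_frame):
    """ss_packets maps ss_id -> list of (deadline, size), sorted by deadline.
    Start from the SS holding the globally earliest deadline, then keep
    serving that SS's queue before switching, to reduce context switches."""
    grants, free = [], slots_per_frame
    current = min(ss_packets,
                  key=lambda s: ss_packets[s][0][0] if ss_packets[s]
                  else float("inf"))
    while free > 0:
        if ss_packets.get(current):
            deadline, size = ss_packets[current][0]
            if size > free:
                break                    # next packet does not fit
            ss_packets[current].pop(0)
            grants.append((current, size))
            free -= size
        else:
            # current SS drained: switch once, to any SS that still has packets
            remaining = [s for s, q in ss_packets.items() if q]
            if not remaining:
                break
            current = remaining[0]
    return grants
```

Note the trade-off this sketch exposes: SS 1's later-deadline packet is served before SS 2's earlier one, trading a small deadline risk for fewer context switches, which matches the acceptable-packet-loss argument above.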

In the following sections we show the weaknesses of EDF, explain in detail how the EDF algorithm works, present our contribution and modifications, and give simulation results comparing EDF with the modified algorithm.