Multiple Virtual Servers And Node Prioritization Computer Science Essay

Abstract - The continuing growth of the World-Wide Web is placing increasing demands on popular Web servers, increasing the load on those servers. We propose a dynamic load balancing (DLB) method using Multiple Virtual Servers and Node Prioritization to balance the server load. The DLB method is based on the concept of virtualization, by which multiple virtual machines can be run on a single physical machine, sharing the resources of that single computer across multiple environments.

Keywords - virtualization, virtual servers, migration, target node.

I. Introduction

Rapid growth in the use of computers has increased the number of resource-sharing applications and, ultimately, the amount of load across the Internet. The problem can be addressed by increasing the capacity of servers or by distributing the applications effectively across different servers. This distribution is termed load balancing. Load balancing is based on the idea of migrating excess load from heavily loaded nodes to lightly loaded ones. The problem starts with determining when to migrate a process or task; this decision is typically based on the local load situation [14].


A distributed system is a system managed by several machines which store data shared by users. The first problem is how to dispatch the workload to these various machines, which have varying processing capacities. Beyond this, it is necessary to make an "equitable" distribution of tasks across the various servers (or nodes) for better system performance, i.e. to prevent one node from becoming overloaded while others sit idle. Load balancing consists in distributing the tasks of the system between the various servers so that each has about the same quantity of data to process [13].

In our approach we use multiple virtual servers to balance the load of the physical server.

II. RELATED WORK

A distributed system may consist of heterogeneous machines connected with heterogeneous networks, and the networks may be shared. Therefore, to efficiently utilize the computing resources provided by distributed systems, the underlying dynamic load balancing (DLB) scheme must take into consideration the heterogeneous and dynamic features of distributed systems [12].

An adaptive policy for selecting the nearest server in a server cluster for a client request is proposed in [4]. Proximity is assessed by the round-trip delay between the cluster and the client, which is measured by different clusters sending probes to the client. The load balancing scheme employs proximity information as well as information about server loads; proximity information is used to provide a better response time. However, proximity is estimated only when a client has several requests for the website.

A load balancing technique based on heuristics is proposed in [2]. This algorithm requires a heuristic in the form of a hash table, which provides the average number of CPU clock cycles required by a task of known time complexity. Through this hash table, the server can determine the expected time required by any task to execute on any node.

The node prioritization algorithm identifies the nodes having higher performance and efficiency factors. The dynamic nature of the computation of efficiency factors makes the prioritization mechanism highly dynamic, thus providing the server with accurate priorities of nodes for task assignment.

III. OUR APPROACH

We propose a dynamic load balancing (DLB) method using Multiple Virtual Servers and Node Prioritization for balancing the server load. The DLB method is based on the concept of virtualization, by which multiple virtual machines can be run on a single physical machine, sharing the resources of that single computer across multiple environments.

The target node for establishing a virtual server in the selected network is determined using the Node Prioritization technique. The target node thus selected is the least loaded node in the network, able to perform better than any other node in its network. Mobile IP is used for communication between the physical server and the virtual servers.

When the physical server becomes overloaded, either an optimal virtual server is found among those already established or, if no virtual server is available, virtualization is carried out to establish one; upcoming requests to the physical server are then redirected to that virtual server. Using these concepts, we propose to address the load balancing issue comprehensively and provide a solution.


Figure 1: Flow Chart

Network selection:

Whenever the physical server meets the overload condition, load should be balanced by redirecting some of the requests to the virtual servers. A network is considered optimal based on its proximity to the physical server and its efficiency, measured in terms of traffic. The optimal network is selected in order to find the site where the target node is to be fixed to perform migration. The source network from which the physical server receives the most requests is identified. To find the optimal network, we determine the shortest path between the physical server and the source network. Each network on the shortest path is analysed for proximity and efficiency, and the resulting network is used to perform target node selection.
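The selection above can be sketched as follows, assuming the networks on the shortest path are already known. The combined score (hop distance from the physical server plus a per-network traffic level) is an illustrative assumption; the paper only states that proximity and traffic are both considered.

```python
# Sketch: pick the optimal network along the shortest path by combining
# proximity (hop distance) with an assumed per-network traffic level.

def select_optimal_network(path, traffic):
    """path: networks on the shortest path, ordered from the physical server
    towards the source network. traffic: dict network -> traffic level
    (lower is better). Returns the best-scoring network."""
    best, best_score = None, float("inf")
    for hops, net in enumerate(path):
        score = hops + traffic.get(net, 0)   # closer and less congested wins
        if score < best_score:
            best, best_score = net, score
    return best
```

For example, with path `["netA", "netB", "netC"]` and traffic levels `{"netA": 5, "netB": 1, "netC": 4}`, `netB` is chosen even though it is not the nearest network, because its low traffic outweighs the extra hop.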

Network head selection:

The selected network is assigned a Network Head, which is selected with the help of the Token Based Election Algorithm. The Network Head collects load status information from all individual nodes present in its network and maintains it. This helps in the determination of the optimal node in the network, which becomes the target node. When the Network Head of a network crashes, a new Network Head is elected.

Token Based Election Algorithm:

Step 1: Each node is assigned a unique identifier, which can be used to identify the node in the network.

Step 2: Each network should have only one head node at any instant of time. The heads are chosen based on their identifier value, the greater the identifier, the greater its priority to become a head.

Step 3: The networks are scanned at periodic intervals and for each network the following operations are performed.

Step 4: Scan each network and check for the presence of exactly one head node. If there is no head currently, go to step 5.

Step 5: This is the case when the head node has crashed. Now the election algorithm is initiated and next node with higher identifier value is chosen and is assigned as the head.

Step 6: Finally, the node which is currently the head holds the token, puts its identifier value in the token, and circulates it in the network.

Step 7: All the nodes in the network are now updated with the new head of the network, and each sends its statistical information to the head.
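The steps above can be sketched as follows, assuming each node is represented only by its unique identifier and an "alive" flag, and modelling the token as a plain dict carrying the head's identifier.

```python
# Minimal sketch of the token-based election (steps 1-7).

def elect_head(nodes, current_head=None):
    """nodes: dict identifier -> alive?  Returns the head after one scan
    (steps 3-5): keep the current head if it is alive, otherwise elect the
    live node with the highest identifier."""
    alive = [nid for nid, up in nodes.items() if up]
    if current_head in alive:
        return current_head            # exactly one live head already exists
    return max(alive) if alive else None

def circulate_token(nodes, head):
    """Steps 6-7: the head puts its identifier in the token and circulates it;
    every live node records the new head."""
    token = {"head": head}
    return {nid: token["head"] for nid, up in nodes.items() if up}
```

For nodes `{1: True, 5: True, 9: False}` where the old head 9 has crashed, node 5 (the highest surviving identifier) is elected, and circulating the token updates every live node's view of the head to 5.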

TARGET NODE SELECTION:

The nodes where virtual servers are to be established are identified here. The physical server contacts the Network Head of the selected network, which in turn notifies the physical server of the least loaded node in its network.

The target node is selected within the selected network. The Network Head periodically gathers system load information from all nodes in its network and determines the optimal node amongst them with the help of the Node Prioritization technique.

Here we define Node Prioritization Factors, whose main constituents are as follows:

1) CPU Idleness factor (It): The fraction of CPU processing power that remains unutilized within a given time slot for a given node. For example, It = 0.9 means the CPU remains free 90% of the time in the given time slot.

2) Processing power (p): Number of CPU cycles that can be performed by a processor per second (units: MHz, GHz).

3) Free memory availability factor (mf): The fraction of memory (RAM) that remains unutilized within a given time slot for a given node. For example, mf = 0.54 means that 54% of the memory remains free in the given time slot.

Of these three factors, It and mf are dynamic, as their values change with continuous monitoring of the nodes, while p is static, as the processing power of a node remains constant. Based on the dynamic factors, an Efficiency Factor (EF) is calculated as

EF = It * mf

ALGORITHM FOR NODE PRIORITIZATION:

The following notation is used:

p(i): Processor power of node 'i'

EF(i): Efficiency factor of node 'i'

pf(i,j): Fraction by which the processing power of ith node is greater than jth node = (p(i)-p(j))/p(j)


EFf(i,j): Fraction by which the efficiency factor of ith node is greater than jth node = (EF(i)-EF(j))/EF(j)

flag(i): counter accumulating the priority score of node 'i'

Nodes are prioritized according to the decreasing order of their flag value. The algorithm for Node Prioritization is given as follows:

For every node i, initialize flag(i) = 0. Then, for every pair of nodes (i, j), perform the following comparisons:

a) If (p(i) >= p(j) and EF(i) >= EF(j)) then increment flag(i) by 1.

b) If (p(i) >= p(j) and EF(i) < EF(j)):

Compute pf(i,j) and EFf(j,i).

If (pf(i,j) >= EFf(j,i)) then increment flag(i) by 1.

c) If (p(i) < p(j) and EF(i) > EF(j)):

Compute pf(j,i) and EFf(i,j).

If (pf(j,i) >= EFf(i,j)) then increment flag(j) by 1.

The above algorithm identifies the nodes having higher performance and efficiency factors and assigns them flag values accordingly. Based on these flag values, the nodes are given priorities with the node having maximum value of flag as the one having maximum priority.
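The prioritization can be sketched as follows, with each node described by (p, EF) where EF = It * mf. Cases (a)-(c) follow the paper; the branches taken when the fractional comparison fails, and the case where node j dominates node i outright, are assumed completions, since the paper leaves them implicit.

```python
# Sketch of the pairwise node-prioritization algorithm.

def prioritize(nodes):
    """nodes: dict name -> (p, EF). Returns names in decreasing priority."""
    flag = {n: 0 for n in nodes}
    names = list(nodes)
    for x in range(len(names)):
        for y in range(x + 1, len(names)):
            i, j = names[x], names[y]
            p_i, ef_i = nodes[i]
            p_j, ef_j = nodes[j]
            if p_i >= p_j and ef_i >= ef_j:    # case (a): i dominates j
                flag[i] += 1
            elif p_i >= p_j:                   # case (b): i faster, j more efficient
                pf = (p_i - p_j) / p_j         # pf(i,j)
                eff = (ef_j - ef_i) / ef_i     # EFf(j,i)
                flag[i if pf >= eff else j] += 1
            elif ef_i > ef_j:                  # case (c): j faster, i more efficient
                pf = (p_j - p_i) / p_i         # pf(j,i)
                eff = (ef_i - ef_j) / ef_j     # EFf(i,j)
                flag[j if pf >= eff else i] += 1
            else:                              # j dominates i (assumed completion)
                flag[j] += 1
    # Nodes are prioritized in decreasing order of flag value.
    return sorted(nodes, key=lambda n: flag[n], reverse=True)
```

For example, with nodes `{"A": (2.0, 0.8), "B": (3.0, 0.5), "C": (1.0, 0.9)}`, node A wins both of its pairwise comparisons (its processing advantage over C outweighs C's efficiency advantage, and B's processing advantage over A does not outweigh A's efficiency advantage), so the resulting priority order is A, B, C.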

Thus the Network Head maintains the optimal node in its network. When about to perform virtualization, the physical server contacts the Network Head of the selected network, which in turn notifies the physical server of the least loaded, i.e. optimal, node in its network.

LOAD CALCULATION

The load status of the physical server is monitored periodically for the overload condition. When the physical server gets overloaded, an optimal virtual server is found and the upcoming requests to the physical server are redirected to it. If no virtual server is available, a new virtualization process is carried out and a virtual server is established in the selected network on the selected node. The upcoming requests to the physical server are then redirected to the newly established virtual server.

PARAMETERS FOR FINDING THE LOAD OF PHYSICAL SERVER:

1) CPU load average

2) Number of requests per second

CPU LOAD AVERAGE

It is defined as the average number of processes that are either in a runnable or uninterruptible state.

A process in a runnable state is either using the CPU or waiting to use the CPU. A process in an uninterruptible state is waiting for some I/O access, for example waiting for disk.

NUMBER OF REQUESTS PER SECOND

It denotes the number of requests received by the server per second, obtained from the web server's access log file. For each received request, an entry is made in the log file. The total number of requests in the log file is counted, then counted again after one second; the number of requests per second is the difference between these values.
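A minimal sketch of this measurement: count the access-log entries, wait one second, count again, and take the difference. The log path below is a typical Apache default and is an assumption, not taken from the paper.

```python
# Sketch: requests/second derived from access-log line counts.

import time

ACCESS_LOG = "/var/log/apache2/access.log"   # assumed location

def count_requests(log_path):
    """Each received request produces one line in the access log."""
    with open(log_path) as f:
        return sum(1 for _ in f)

def requests_per_second(log_path=ACCESS_LOG):
    before = count_requests(log_path)
    time.sleep(1)                            # wait one second
    return count_requests(log_path) - before
```

In practice the count could also be maintained incrementally (tailing the log) rather than re-reading the whole file each second.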

OVERLOAD CONDITION:

When the calculated values of CPU load average and number of requests per second exceed their threshold values, the server is considered overloaded.

The load balancing of the physical server is then done by redirecting its upcoming requests to either an optimal virtual server or a newly established virtual server.
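The overload check can be sketched as below: the server is treated as overloaded only when both metrics exceed their thresholds. The threshold values here are illustrative assumptions, not figures from the paper.

```python
# Sketch: overload condition from CPU load average and requests/second.

import os

LOAD_THRESHOLD = 4.0    # assumed CPU load average threshold (e.g. 4 cores)
RPS_THRESHOLD = 500     # assumed requests-per-second threshold

def is_overloaded(requests_per_second, load_average=None):
    """Both CPU load average and requests/second must exceed their thresholds."""
    if load_average is None:
        load_average = os.getloadavg()[0]    # 1-minute load average (Unix only)
    return load_average > LOAD_THRESHOLD and requests_per_second > RPS_THRESHOLD
```

Requiring both conditions avoids triggering migration on a brief CPU spike alone or on a burst of cheap requests alone.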

ESTABLISHMENT OF VIRTUAL SERVERS:

The physical server establishes a virtual server on the selected target node.

Migration is the transferal of a running virtual domain from one physical host to another. Live migration allows a server administrator to move a running virtual machine or application between different physical machines without disconnecting the client or application. The actual migration is carried out using Xen. The server's OS and resources are migrated to the target node.

In the case of popular websites, the web server receives a great many requests, so both the physical server and a virtual server may become overloaded. To balance the load and reduce the response time, multiple virtual servers are established as needed.

REQUEST PROCESSING:

The physical server processes a request when its load is less than the threshold value. Otherwise, the physical server gathers the load status of the virtual servers and finds the optimal virtual server amongst them. If such a virtual server is found, the request is redirected to it and the virtual server replies to the client directly, thus balancing the load of the physical server.

If no optimal virtual server is found, another virtualization process is carried out.

SELECTION OF OPTIMAL VIRTUAL SERVER

The load status of the physical server is monitored periodically for the overload condition. When the physical server gets overloaded, an optimal virtual server is found amongst the available virtual servers and the upcoming requests to the physical server are redirected to it.

The physical server requests load status information from all established virtual servers. The virtual servers in turn reply with their load status information, namely CPU load average and number of requests per second. The physical server calculates the Round Trip Time as the difference between the time the message is sent and the time the reply is received. The physical server then determines the optimal virtual server by comparing the load status information sent by all virtual servers and redirects the upcoming requests to it.

The threshold value for CPU load average is fixed. When the CPU load average of the optimal virtual server, as calculated by the algorithm stated below, exceeds this threshold, another virtual server is selected as the optimal server by the algorithm.

ALGORITHM FOR FINDING OPTIMAL VIRTUAL SERVER:

NOTATIONS:

t - threshold value for load average

load - load of processing virtual server

cload - load of calculated optimal virtual server

rtt - round trip time of processing virtual server with respect to physical server

crtt - round trip time of calculated optimal virtual server with respect to physical server

ALGORITHM:

STEP 1:

if ((load < cload) and (load < t))
    if (crtt < rtt)
        Optimal VS = calculated VS
    else
        Optimal VS = processing VS

STEP 2:

if ((load > cload) and (load < t))
    if (crtt > rtt)
        Optimal VS = processing VS
    else
        Optimal VS = calculated VS

STEP 3:

if (load == cload)
    if (crtt < rtt)
        Optimal VS = calculated VS
    else
        Optimal VS = processing VS
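The scan over all virtual servers can be sketched as follows. Each virtual server reports a (load, rtt) pair; the running best ("calculated") is compared against each candidate ("processing") using steps 1-3. The fall-through that rejects a candidate whose load is at or above the threshold is an assumed completion, since the paper does not state that case explicitly.

```python
# Sketch: selecting the optimal virtual server per steps 1-3.

def better(processing, calculated, t):
    """Return the preferred of two (load, rtt) tuples per steps 1-3."""
    load, rtt = processing
    cload, crtt = calculated
    if load < cload and load < t:                      # step 1
        return calculated if crtt < rtt else processing
    if load > cload and load < t:                      # step 2
        return processing if crtt > rtt else calculated
    if load == cload:                                  # step 3: tie -> lower RTT
        return calculated if crtt < rtt else processing
    return calculated                                  # candidate over threshold

def optimal_vs(servers, t):
    """servers: dict name -> (load, rtt). Returns the optimal server's name."""
    best = None
    for name, stats in servers.items():
        if best is None or better(stats, servers[best], t) is stats:
            best = name
    return best
```

Note that the paper's rules weight RTT heavily: in step 2 a higher-load server still wins when its round trip time to the physical server is lower.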

When a virtual server fails, the physical server redirects the upcoming requests for that virtual server to the optimal virtual server found by the algorithm stated above. When the status of a virtual server is found to be failed, the IP address entry maintained by the physical server for that virtual server is set to null.

IV. RESULTS

Two different analyses were done to determine the feasibility of the load balancing mechanism described in this paper. One was to find out whether the migration policy is feasible; the other was to determine whether the load balancing method produced tangible results in terms of a decrease in server response time.

TIME OF TRANSFER

A study was done on the time taken to transfer files of different sizes during migration.

Figure 6(a)

Figure 6(a) shows that even as the file size increases drastically, the transfer time varies only by a small amount. This shows that migration of the OS, which is equivalent to a large file transfer, does not take too much time and is hence feasible.

RESPONSE TIME

Another analysis was done regarding the change in response time of the server before and after the load balancing technique was applied.

As shown in Figure 6(b), the response time of an overloaded server, before applying the load balancing technique, keeps increasing rapidly. In contrast, the response time after balancing the load becomes very low. This demonstrates the efficiency of the load balancing mechanism.

Figure 6(b)

V. CONCLUSION

An effective load balancing method based on server load is proposed, and the basic features needed for a load balancer are implemented. On receiving incoming requests, the real server checks its load status and, based on that status, the requests are processed. Depending on the particular network from which the most requests arrive, an image of the server and its resources is migrated to the optimal network, which resides between the source network and the network where the real server exists.

A priority is assigned to each of the virtual servers, depending on the load on the respective virtual server. This priority value is consulted while redirecting requests from the real server. The real server provides a management system that processes incoming requests, analyses the traffic from each network, keeps track of the dynamic network topology, and takes migration decisions. The management system receives an update from each virtual server in the event of a significant change in its load.

The main drawback observed from the results is that the limit value (the number of requests after which the current server has to stop execution and a new virtual server must be found) has to be optimal. If the limit value is too low, the server will change too frequently, which is an overhead. If, on the other hand, the limit value is too high, most of the requests may go to one server, leaving the other servers idle for long periods and decreasing the effectiveness of load balancing. The limit therefore has to be optimal, ensuring that load is distributed evenly among the servers while, at the same time, the current server does not change too frequently.

VI. FUTURE WORK

Parameters other than the number of requests at each server, such as the processing speed of the CPU and the estimated processing time of each request, can be introduced. Priorities can also be introduced for processing the requests in each server's queue. This will enhance the load distribution, taking into consideration the heterogeneity of requests.