Load Management in Distributed Systems


1- Introduction

Before going into the details of load management in distributed systems, client/server communication will be discussed first. A client is a single-user workstation that provides presentation services and the appropriate computing, connectivity, and database services and interfaces relevant to the business need. A server is one or more multi-user processors with shared memory providing computing, connectivity, and database services and interfaces relevant to the business need. Client/server computing is an environment that satisfies the business need by appropriately allocating the application processing between the client and the server processors. The client requests services from the server; the server processes the request and returns the result to the client. The communications mechanism is a message-passing interprocess communication (IPC) that enables (but does not require) distributed placement of the client and server processes. Client/server is a software model of computing, not a hardware definition.

Because the client/server environment is typically heterogeneous, the hardware platform and operating system of the client and server are not usually the same. In such cases, the communications mechanism may be further extended through a well-defined set of standard application program interfaces (APIs) and remote procedure calls (RPCs).

Effective client/server computing is fundamentally platform-independent. The user of an application wants the business functionality it provides; the computing platform provides access to this business functionality. There is no benefit, yet considerable risk, in exposing this platform to its user. Changes in platform and underlying technology should be transparent to the user. Training costs, business processing delays and errors, staff frustration, and staff turnover all result from the confusion generated by changes in environments where the user is sensitive to the technology platform.

It is easily demonstrated that systems built with transparency to the technology, for all users, offer the highest probability of solid ongoing return for the technology investment. It is equally demonstrable that if developers become aware of the target platform, development will be bound to that platform. Developers will use special features, tricks, and syntax found only in the specific development platform.

Most LANs are based on switched IEEE 802.3 (Ethernet) technology, running at 10, 100 or 1000 Mbps; a network connecting two or more LANs is referred to as a WAN. Each node on a LAN has its own computing power, but it can also access other resources on the LAN subject to the permissions it has been granted. These could be data, processing power, and the ability to communicate or chat with other users on the network. Intermediate nodes, i.e. repeaters, bridges and switches, allow LANs to be connected together to form larger LANs. A LAN may also be connected to another LAN or to WANs using a router. IP (Internet Protocol) enables communication between computers on the Internet by routing data from a source computer to a destination computer. However, computer-to-computer communication only solves half of the network communication problem. For an application program such as a mail client to communicate with another application such as a mail server, there needs to be a way to send data to a specific program within a computer. Ports are used to enable communication between programs. A port is an address within a computer. Port addresses are 16-bit addresses that are usually associated with a particular application protocol. An application server, such as a Web server, listens on a particular port for service requests, performs whatever service is requested of it, and returns information to the port used by the application program requesting the service.

In client/server communication the main point is to allow two machines to connect and talk to each other. Once the two machines have found each other they can have a nice, two-way conversation. But how do they find each other? One machine has to stay in one place and listen while the other machine seeks. The machine that stays in one place is called the server; it does not actively create connections but instead passively listens for a client connect request and then provides its services. The machine that seeks is called the client. This distinction is important only while the client is trying to connect to the server. Once they have connected, communication becomes a two-way process and it no longer matters that one happened to take the role of server and the other the role of client. The most common client/server applications on the Internet are e-mail, the Web, FTP, telnet, etc.

A client program performs a service for the user by connecting to a server, forwarding service requests based on user inputs, and presenting the service results back to the user. In most cases, the client must initiate the connection and send a service request to the server based on user input. The server receives the service request, performs the service, and returns the results to the client. The client receives the service results and displays them to the user.

A server program listens for incoming connections on a well-known port associated with its service protocol. When a client initiates a connection, the server accepts it and typically spawns a separate thread to service that client. The client sends service requests over the connection; the server performs the service and then returns the results to the client.

The client/server model is based on the idea that one computer, specializing in information presentation, displays data stored and processed on a remote machine. A multi-user application is a slight variation on the typical client/server application; the only difference is that information passes from one client through the server to other clients, whereas in a typical client/server application information flows only from the client to the server and then back. In an ideal environment, the server side of the application handles all common processing and the client side handles user-specific processing. The figure shows the simple two-tier client/server model.

A simple system can be broken into two layers: a server (where data and common processing reside) and a client (where user-specific processing occurs). This kind of architecture is more commonly known as a two-tier architecture; a simple time server, for example, is two-tier. Business applications, and increasingly Internet applications, are generally much more complex than simple two-tier applications. These more complex applications can involve relational databases and advanced server-side processing. Because client machines are becoming increasingly powerful, client/server development has enabled applications to move processing off the server and onto the client to facilitate the use of cheaper servers. This trend has led to a problem known as the fat client, i.e. a client that has absorbed an inordinate amount of the system's processing needs. The solution is a three-tier client/server architecture that creates another layer of processing across the network. The three-tier design divides the application into the following three tasks:

  1. User Interface
  2. Data Processing or Business Rules (the processing of data going to and from clients in a way that is common to all clients)
  3. Data Storage

One of the primary advantages of three-tier architecture is that the way data is stored can be changed as storage needs grow, without affecting clients. The middle layer of the system, commonly referred to as the application server, can thus concentrate on centralizing business-rule processing. The figure shows the three-tier model:

Clients and servers establish connections and communicate via sockets. Sockets are the endpoints of Internet communication. Clients create client sockets and connect them to server sockets. A socket is associated with a host address and a port address. The host address is the IP address of the host where the client or server program is located; the port address is the communication port used by the client or server program. The socket server is just a set-up intermediary: it accepts new socket connections and then creates objects that interact with the real application and pass the results back to the socket-based client. The socket-based server itself never interacts with the application server. Next, the server creates a socket-based client object which implements the application's client interface. The socket server also tells the new client object where to find the application object. Finally, the socket-based client object interacts with the application, passing information over the socket connection to the user on the other end.
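As a minimal sketch of these endpoints (assuming only the standard java.net API; the class and method names are hypothetical), a server socket listens passively on a port while a client socket actively connects to that host/port pair, after which both ends hold a connected two-way channel:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketEndpoints {
    // A socket endpoint = host address + port. The server listens, the
    // client seeks, and after accept() the two ends form a connected pair.
    public static Socket[] connectPair() throws IOException {
        ServerSocket listener = new ServerSocket(0);  // port 0: OS picks a free port
        Socket client = new Socket("127.0.0.1", listener.getLocalPort());
        Socket serverSide = listener.accept();        // passive end completes the pair
        listener.close();                             // stop listening; the pair stays open
        return new Socket[] { client, serverSide };
    }
}
```

Note that after `accept()` returns, the distinction between who listened and who connected no longer matters: both sockets support two-way communication.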

2- Load Management and Balancing Techniques

The goal of load balancing in a distributed system is to transparently transfer the work submitted by users to a lightly loaded member server instead of a heavily loaded one. In this paper some techniques have been implemented to achieve this goal. The system is capable of using different load balancing algorithms.

2.1- Round Robin Fashion Technique:

Assigning jobs in a Round Robin fashion to each of the member servers is suitable for homogeneous servers with the same processing capabilities, where the sizes of jobs (data to be transferred) from each client are almost the same. In such situations, there is no need to introduce processing overhead in selecting the appropriate server for redirecting a data transfer request.

IP addresses of the member servers are stored in a dynamic array (vector) and addresses are fetched on turn basis from zero index position to the last index of the vector. This is a cyclic process.

Initialize the pointer to zero: Index = 0

Loop for each incoming request

    Get the IP address from the location pointed to by Index

    Increment Index by 1

    If Index = size of the vector, re-initialize Index to zero

Loop end

Round Robin Fashion Algorithm

Assume there are four member servers connected to the cluster manager and that the IP addresses of the servers are stored in a vector at the cluster manager. The first request is routed to the server with the IP address at index = 0 (192.168.0.1) and the index is incremented. The second request is routed to the server at index = 1 (192.168.0.2) and the index is incremented. The third request is routed to the server at index = 2 (192.168.0.3) and the index is incremented. The fourth request is routed to the server at index = 3 (192.168.0.4) and the index is re-initialized to zero because it has reached the end of the vector. This is illustrated in Table - 1 below:

Table - 1

IP Address      Index number
192.168.0.1     0
192.168.0.2     1
192.168.0.3     2
192.168.0.4     3

The above cycle is repeated for the next four requests, and so on. Meanwhile, the size of the vector can change as more member servers get connected or any of the connected servers gets disconnected.

2.2- Weighted Round Robin Fashion Technique:

This scheme is suitable for heterogeneous servers (cluster members) whose processing capabilities are already known and where the sizes of the jobs submitted by the clients are the same, because load is assigned to each member according to its capability.

The weight factors of all the member servers are calculated and stored in a vector. For each request, the IP address of the server with the highest weight factor is selected and that weight factor is decremented by 1; the same process is repeated for subsequent requests. When the weight factors of all the member servers become zero, the calculation of weight factors is repeated. This happens after every 10 requests, because meanwhile the number of connected servers may have changed.

Weighted Round Robin Fashion Technique algorithm

IP addresses and strengths of the member servers are stored in two different vectors, such that corresponding positions in both vectors hold the information about the same server. A third vector is used to store the calculated weight factors of the member servers. The weight factor determines the number of requests, out of every 10, to be routed to a member server. The weight factor of a given member server is calculated using this formula: weight factor = strength of the member server / sum of strengths of all servers * 10 (the weight factor is rounded off to a whole number after calculation).

Assume there are four member servers connected to the cluster manager and that the IP addresses and strengths of the servers are stored in two vectors. The IP addresses of the connected servers are stored in a vector IPVect as shown in Table - 2.

Table - 2

IP Address      Index number
192.168.0.1     0
192.168.0.2     1
192.168.0.3     2
192.168.0.4     3

Strengths of the connected servers are stored in a vector StrnVect as shown below in Table - 3.

Table - 3

Strength    Index number
8           0
10          1
4           2
2           3

The calculated weight factors of the connected servers are stored in a vector WFVect and recalculated after every 10 requests. The results are shown in Table - 4.

Table - 4

Weight Factor        Index number
8 / 24 * 10 = 3      0
10 / 24 * 10 = 4     1
4 / 24 * 10 = 2      2
2 / 24 * 10 = 1      3

According to these weight factors, three out of every 10 requests are to be routed to member server 192.168.0.1, four to 192.168.0.2, two to 192.168.0.3 and one to 192.168.0.4.

For the first request, the IP address of the server with the maximum weight factor (index = 1, IP = 192.168.0.2) is chosen and its weight factor is decremented. The contents of the weight factor vector after the first request are shown below in Table - 5:

Table - 5

Weight Factor    Index number
3                0
3                1
2                2
1                3

For the second request, the IP address of the server with the maximum weight factor (index = 0, IP = 192.168.0.1) is chosen and its weight factor is decremented. WFVect is now as shown in Table - 6:

Table - 6

Weight Factor    Index number
2                0
3                1
2                2
1                3

For the third request, the IP address of the server with the maximum weight factor (index = 1, IP = 192.168.0.2) is chosen and its weight factor is decremented. The contents of the weight factor vector after the third request are shown below in Table - 7:

Table - 7

Weight Factor    Index number
2                0
2                1
2                2
1                3

For the fourth request, the IP address of the server with the maximum weight factor (index = 0, IP = 192.168.0.1) is chosen and its weight factor is decremented. WFVect is now as shown in Table - 8:

Table - 8

Weight Factor    Index number
1                0
2                1
2                2
1                3

For the fifth request, the IP address of the server with the maximum weight factor (index = 1, IP = 192.168.0.2) is chosen and its weight factor is decremented. The contents of the weight factor vector after the fifth request are shown below in Table - 9:

Table - 9

Weight Factor    Index number
1                0
1                1
2                2
1                3

For the sixth request, the IP address of the server with the maximum weight factor (index = 2, IP = 192.168.0.3) is chosen and its weight factor is decremented. The contents of the weight factor vector after the sixth request are shown below in Table - 10:

Table - 10

Weight Factor    Index number
1                0
1                1
1                2
1                3

For the next four requests, the IP addresses at indexes 0, 1, 2 and 3 are chosen respectively; the contents of the weight factor vector after the tenth request are illustrated in Table - 11:

Table - 11

Weight Factor    Index number
0                0
0                1
0                2
0                3

Now weight factors are recalculated and the same cycle described above is repeated.
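The walkthrough above can be sketched in Java as follows. This is a minimal illustration under one assumption not stated explicitly in the text: when several servers share the maximum weight factor, the lowest index wins (this tie-break reproduces Tables 5 through 11 exactly). The class name is hypothetical.

```java
import java.util.List;

public class WeightedRoundRobin {
    private final List<String> ips;   // IPVect: member server IP addresses
    private final int[] strengths;    // StrnVect: known server strengths
    private final int[] weights;      // WFVect: remaining weight factors

    public WeightedRoundRobin(List<String> ips, int[] strengths) {
        this.ips = ips;
        this.strengths = strengths;
        this.weights = new int[strengths.length];
        recalc();
    }

    // Weight factor = strength / sum of all strengths * 10, rounded.
    private void recalc() {
        int sum = 0;
        for (int s : strengths) sum += s;
        for (int i = 0; i < strengths.length; i++)
            weights[i] = (int) Math.round(strengths[i] * 10.0 / sum);
    }

    // Pick the server with the highest remaining weight factor (ties go to
    // the lowest index) and decrement it; recalculate when all reach zero.
    public synchronized String next() {
        int best = 0;
        for (int i = 1; i < weights.length; i++)
            if (weights[i] > weights[best]) best = i;
        if (weights[best] == 0) { recalc(); return next(); }
        weights[best]--;
        return ips.get(best);
    }
}
```

With the strengths of Table - 3, the first six picks are 192.168.0.2, .1, .2, .1, .2, .3, matching Tables 5 through 10.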

2.3- Load Assignment using EquiLoad:

Load balancing with EquiLoad ensures that each member server takes an equal share of the load from the LoadBalancer. The size of each incoming request determines the identity of the back-end server that will eventually satisfy it. The front-end dispatcher does not directly consider any information regarding the load of the individual back-end servers. Instead, an integral part of EquiLoad is a distributed workload-characterization activity performed in the background by each back-end server. By continuously monitoring the incoming workload, these servers can periodically renegotiate their agreement about the sizes of requests that will be allocated to them. Assuming that the number of back-end servers is N, the EquiLoad policy partitions the possible request sizes into N intervals, [s0, s1), [s1, s2), ..., [s(N-1), sN), so that server i is responsible for satisfying requests of size between s(i-1) and si. In practice, the size of an incoming request might not be available to the front-end dispatcher, but this problem can be solved using a two-stage allocation policy. First, the dispatcher quickly assigns each incoming request to one of the N back-end servers using a simple policy such as Round Robin, which is easy to implement. When server i receives a request from the dispatcher, it looks up its size s; if s(i-1) <= s < si it puts the request in its queue, otherwise it reallocates it to the server j satisfying s(j-1) <= s < sj (any request a server receives from another server is en-queued immediately, since it is guaranteed to be in the correct size range). Letting the back-end servers reallocate requests among themselves is very sensible, since the size information is certainly available to them.

Assume there are four member servers connected to the cluster manager and that the IP addresses of the servers are stored in a vector. This is explained by using the Round Robin fashion of load assignment and is illustrated in Table - 12:

Table - 12

IP Address      Index number
192.168.0.1     0
192.168.0.2     1
192.168.0.3     2
192.168.0.4     3

Assume ten requests arrive from clients. The main server uses the EquiLoad policy and assigns an equal number of requests to each member server. The first request is routed to the server with the IP address at index = 0 (192.168.0.1) and the IP address is returned to the client. The second request is routed to the server at index = 1 (192.168.0.2) and the IP address is returned to the client. The third request is routed to the server at index = 2 (192.168.0.3). This process continues, and each member server receives an equal share of request sizes from the main server.
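The second stage of the two-stage allocation can be sketched as follows (a simplified illustration; the class name and the size intervals in the usage example are assumptions added here, not values from the text). Each server keeps a request only if its size falls within its own interval; otherwise it forwards the request to the server that owns that size range:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class EquiLoadServer {
    private final double lo, hi;                        // this server's size interval [lo, hi)
    private final Queue<Double> queue = new ArrayDeque<>();
    private EquiLoadServer[] cluster;                   // all back-end servers

    public EquiLoadServer(double lo, double hi) {
        this.lo = lo;
        this.hi = hi;
    }

    public void setCluster(EquiLoadServer[] cluster) {
        this.cluster = cluster;
    }

    // Stage 2: on receipt from the dispatcher, look up the request size and
    // either enqueue it locally or reallocate it to the server whose
    // interval contains that size (the receiver enqueues it immediately).
    public void receiveFromDispatcher(double size) {
        if (size >= lo && size < hi) {
            queue.add(size);
        } else {
            for (EquiLoadServer s : cluster)
                if (size >= s.lo && size < s.hi) { s.queue.add(size); break; }
        }
    }

    public int queued() {
        return queue.size();
    }
}
```

For example, if server 0 owns [0, 10) and server 1 owns [10, 100), a request of size 50 dispatched to server 0 by Round Robin ends up in server 1's queue after reallocation.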

2.4- SITA-E Technique:

SITA-E stands for Size Interval Task Assignment with Equal Load. The SITA-E policy fits job-size ranges (intervals) to bounded-Pareto distributions and then equalizes the expected work: given n back-end servers, n size ranges are determined off-line such that each range contains approximately the same amount of work.

The SITA-E algorithm is based on the following observation: if task-size variability were very small (squared coefficient of variation C^2 < 1), FCFS would outperform PS for a single queue. Therefore, SITA-E's goal is to reduce the variability of the tasks arriving at each host. It achieves this by partitioning tasks among hosts according to their sizes. Surprisingly, this method is even able to compensate for the high variability of a heavy-tailed distribution. SITA-E has the additional advantage of being a static policy, and therefore has a simple implementation.

In this policy, when a request arrives its size is determined and a specific member server is assigned to the client. SITA-E relies on the assumption that the distribution of the sizes of incoming requests is known, and further that this distribution has mean M. In SITA-E, each host only accepts tasks whose sizes fall within a specified interval, where the size ranges are chosen such that each host receives equal work in expectation. Specifically, let F(x) = P{X <= x} denote the cumulative distribution function of request sizes and f(x) the corresponding density function. Let k denote the smallest possible request size, p the largest possible request size, and h the number of hosts.
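Using these definitions, the size cutoffs in the SITA-E literature are typically chosen as follows (a sketch using the symbols above, with cutoffs x_0 = k < x_1 < ... < x_h = p):

```latex
\int_{x_{i-1}}^{x_i} x \, f(x)\, dx
  \;=\; \frac{1}{h}\int_{k}^{p} x \, f(x)\, dx
  \;=\; \frac{M}{h},
  \qquad i = 1, \dots, h,
```

so host i accepts only tasks whose size falls in [x_{i-1}, x_i), and each host receives the same expected amount of work M / h.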

Assume there are four member servers connected to the cluster manager and the IP addresses of the servers are stored in a vector. This is explained by using Round Robin fashion of load assignment, illustrated in Table - 13.

Table - 13

IP Address      Index number
192.168.0.1     0
192.168.0.2     1
192.168.0.3     2
192.168.0.4     3

Strengths of the connected servers are stored in a vector named StrnVect shown in Table - 14.

Table - 14

Strength    Index number
8           0
10          1
4           2
2           3

The calculated weight factors of the connected servers are stored in a vector WFVect and recalculated after every 10 requests. This is illustrated in Table - 15:

Table - 15

Weight Factor        Index number
8 / 24 * 10 = 3      0
10 / 24 * 10 = 4     1
4 / 24 * 10 = 2      2
2 / 24 * 10 = 1      3

In this policy each member server is assigned a strength, so if the job size is less than 1024 KB it is routed to the server at index = 3 (IP = 192.168.0.4). If a larger job of more than 2 or 3 MB arrives, a higher-strength server is assigned to the client request, i.e. index = 1 (IP = 192.168.0.2). Each server can thus apply its power to the larger processing requests.
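This size-to-server mapping can be sketched as a simple dispatch table. Only the 1024 KB cut-off and the two named servers come from the text above; the intermediate thresholds and the class name are assumptions added here for illustration:

```java
public class SizeBasedDispatcher {
    // Route a job to a member server by its size, weaker servers taking
    // smaller jobs. Thresholds other than 1024 KB are illustrative only.
    public static String route(long sizeKb) {
        if (sizeKb < 1024) return "192.168.0.4";  // strength 2, index 3
        if (sizeKb < 2048) return "192.168.0.3";  // strength 4, index 2
        if (sizeKb < 3072) return "192.168.0.1";  // strength 8, index 0
        return "192.168.0.2";                     // strength 10, index 1
    }
}
```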

3- Methodology

The Islamia University of Bahawalpur has five thousand users who utilize networking and Internet facilities for their research and academic purposes. As more users switch over to the Internet and networking day by day, congestion and slow user-request processing occur in the network due to heavy user and traffic loads. So there is indeed a need for techniques that will make server processing faster and easier for the users. The load management in the client/server system is capable of transferring the request load from the main server to the member servers or clients.

A cluster approach is used: the cluster server system consists of four independent servers that work together. The servers have been designed with multithreading capabilities to process data transfer requests from multiple clients simultaneously. Socket communication has been used for the communication between client and server. The Load Management System is equipped with algorithms that can be used according to requirements. A technique of SMS alerts and notifications for system monitoring has also been implemented: whenever a member server is disconnected or its connection with the main server is dropped, an SMS is sent to the network administrator's cell number informing him that the member server is disconnected.

The implemented architecture with multithreading capabilities of the server is shown in the figure below:

[Figure: a Server Socket on the server accepting connections and opening a dedicated socket per client]

Initially, the server opens a Server Socket and dedicates a thread to listen for requests. The client then initiates a connection request to the server, and consequently the server opens a dedicated socket for the client. A Communication Handler object starts a separate dedicated thread at the server side for the client, and the client starts its own thread to communicate with the thread at the server side.
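This accept-then-spawn flow can be sketched in Java as follows (a minimal echo-style illustration; the class name and the "served:" reply format are hypothetical, not the system's actual protocol):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ClusterServer {
    private final ServerSocket serverSocket;

    public ClusterServer(int port) throws IOException {
        serverSocket = new ServerSocket(port);  // port 0 lets the OS pick one
    }

    public int getPort() {
        return serverSocket.getLocalPort();
    }

    // Dedicated listener thread: accepts connections and spawns one
    // handler thread per connected client.
    public void start() {
        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    Socket client = serverSocket.accept();
                    new Thread(() -> handle(client)).start();
                }
            } catch (IOException e) {
                // socket closed: shut down the listener
            }
        });
        listener.setDaemon(true);
        listener.start();
    }

    // Per-client handler thread: serves each request line over the
    // dedicated socket and returns the result to the client.
    private void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) out.println("served: " + line);
        } catch (IOException ignored) {
        }
    }
}
```

Because each client gets its own thread, multiple clients can be served simultaneously while the listener thread keeps accepting new connections.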

The figure below shows the geographical deployment of the servers connected with the main server room using the Internet and other networking facilities:

A three-tier client/server architecture is used, and this further creates a cluster. Clients are connected to the servers in each block, i.e. the faculties of science, arts and education, the administration, the hostels and the staff colony, for utilizing the Internet and networking. The system is designed keeping in view advantages like easier implementation and easier modification. The system consists of three software components:

  • Cluster Manager Module: runs on a machine with powerful hardware and is responsible for managing the activities of the member servers in the cluster, such as redirecting load to the appropriate member server.
  • Member Server Module: a software component whose instances run on multiple machines; this component actually serves the data transfer request after it is redirected from the cluster manager.
  • Client Module: the software component which requests the data transfer.

The classes for the overall system are described below:

Server Info:

This is basically a data-holder class. Its object is responsible for holding different parameters of a single cluster member. The Cluster Manager keeps as many objects of this class as there are connected cluster members. Each object represents the following parameters of a cluster member server:

  • IP address
  • Associated socket object for communication
  • TCP Port assigned to the service

Server:

This class represents the functionality of the cluster manager. The object of this class is responsible for communicating with the cluster member servers and with the clients. This object listens for connection requests from the member servers and from the clients. On each request, it opens a dedicated socket and runs a dedicated thread for each connected member server with the help of an object of the ServerThread class.

Server Thread:

This class provides multithreading capability to the object of the Server class. The object of this class is responsible for communicating with the member servers, providing a dedicated thread to each connected server.

Server Helper:

This class is responsible for the actual data transfer from the client to the server side. The object of this class initiates contact with the client and serves the data transfer request.

Server Helper Thread:

The object of ServerHelperThread provides the multithreading capability to each of the member server. This object facilitates the simultaneous data transfer processes to the multiple clients connected to a single member server. For each client, one object of this class is created which serves the data transfer request of that particular client.

Load shifter:

This class provides the algorithms for shifting load among the member servers in the cluster. Its object has the capability to provide different load balancing schemes according to the requirements of the application.

Client:

The object of this class resides at the client side and is used within the application running there. This object communicates with the Server object for request redirection and with the ServerHelper object for the data transfer.

Helper CPU Info Class:

This class is used to retrieve the CPU and RAM information of a member server; the reason is that the strength of a member server is assigned automatically on the basis of this information. It is worth mentioning that these values are stored in the Windows registry, so to run this class, permission to read the registry must be available on the system. It is also necessary to mention that this class can only be used in a Windows environment, because there is no registry system for storing hardware and other information in Linux environments. Strength is assigned to a member server on the basis of its CPU and RAM information; the criteria are set according to the requirements of the network administrator, who decides which type of system has better equipment and should take more work. This scheduling process may be changed according to the needs of the organization.

Member GUI class:

This class provides the GUI for starting a member server. The basic elements displayed on the interface are: a text field to enter the main server's IP address in order to connect to it; another text field to enter the port number on which the member server will serve clients; and Start and Close buttons to start and stop the member server. The Start button calls the MemberServer constructor, which starts the member server class and connects to the main server using the IP address entered in the text field. Once a member server has started, a text area shows all the information about what is happening: whenever a client connects, an exception occurs, or a file is stored on disk, all information from the member server class is written to the MemberGUI text area rather than printed with System.out.print, which would show results only on the console.

Server GUI:

This class provides the GUI for starting the main server; with a GUI interface it is easy to manage everything when starting the main server class. The basic elements displayed on the interface are: a choice control indicating which type of environment is in use, homogeneous or heterogeneous; a list providing the options for selecting the policy used by the LoadBalancer class; and a Start button that calls the MainServer constructor, which starts the server. Finally, a text area displays all results from the start of the server to its end; every kind of information is displayed here rather than on the console.

The following sequence diagram shows an interaction between ServerHelper and Server objects:

4- Result and Discussion

One approach to the problem of congestion and slow user-request processing is to use a single large powerful server, but this solution soon fails because of the enormous network traffic. A second solution is replicating the server information over many geographically separated independent servers, called a "mirrored-server" architecture. This solves the problem of congestion, but with a number of disadvantages, including huge loss of network and computer resources and lack of control over request distribution by the server system. A promising and efficient approach is the development of a distributed architecture where user requests can be routed among several server nodes. This solution of distributed servers managed as a single system provides improved throughput. Thus a server system with ease of manageability and greater availability and scalability of the servers is attained.

This increases user satisfaction because users get faster, more consistent response times; traffic is directed to the least loaded and most responsive servers, which also prevents servers from becoming overloaded. Preferred users and mission-critical application traffic can be given higher priority by the LoadBalancer. Servers and network resources can be allocated to high-priority users and applications with the bandwidth management feature. Mission-critical applications, and the users accessing them, get consistently good performance.

The Round Robin technique is simple, cheap and very predictable. This approach uses a cyclic process and is suitable for a homogeneous environment where all the member servers have the same processing capabilities. The weakness of this approach is that there is some chance of convoying, i.e. when one server is significantly slower than the others. It also has no knowledge of the load on the back-end servers.

The EquiLoad approach is by its nature well suited to load balancing. It assigns equal load to each member server, so that all the member servers receive jobs of equal size. It is best for a homogeneous environment.

The Weighted Round Robin technique is easy to implement and is aware of the different capabilities of the servers. It is well suited to a heterogeneous environment. The drawbacks of this technique are that the weights are manually assigned by the administrator, and the ungraceful degradation in case of overload.

The SITA-E technique is easy to implement. It has the additional advantage of being a static policy, and therefore has a simple implementation. In this policy, when a request arrives its size is determined and a specific member server is assigned to the client. It is well suited to a heterogeneous environment. Its drawbacks are that the weights are manually assigned by the administrator, and the ungraceful degradation in case of overload.

5- Conclusion

Each technique has advantages, but each also has limitations. All of the techniques were deployed and found to be equally good at tackling congestion and overloading of the main server; both homogeneous and heterogeneous environments were used in this scenario. For the homogeneous environment, EquiLoad provides the best results and is therefore the best fit. In the heterogeneous environment, the best technique is Weighted Round Robin, where the processing capabilities of the machines are already known and the jobs submitted by the clients are of the same size, because load is assigned to each member server according to its capability.

