Redundancy in Client/Server Networks

Introduction

A server is a computer program that offers services to other network hosts. The term is also used for the physical machines that run server programs, whether in data centers or, in the case of a personal server, elsewhere. A server can be dedicated to handling client requests in a client/server architecture or serve multiple purposes. Conversely, a client is a workstation capable of sending requests to a server and receiving information or applications in response. A client can be a simple application or an entire system that consumes the server's responses. Servers are classified by purpose; for instance, web servers deliver HTML pages or other data in response to client requests.

A client/server network provides a communication model in which multiple clients share services offered by a common server program (Adler, 1995). A web server is a familiar example: it is the computing device to which every web browser acts as a client. Although the client/server concept can be applied within a single computer, it is most often used in networked environments. In a networking scenario, the client initiates communication with a server on the network, and the connection is usually terminated once the request has been fulfilled.
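
The request/response cycle described above can be illustrated with a minimal sketch using Python's standard socket module. The address, port, and message below are hypothetical placeholders; the point is only that the client opens the connection, the server answers, and the connection is closed once the request has been served.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # hypothetical address and port for the example

def serve_once():
    """Toy server: accept one client, answer its request, then close the connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)                 # read the client's request
            conn.sendall(b"response to: " + request)  # reply; socket closes on exit

def client_request(message: bytes) -> bytes:
    """Toy client: connect, send one request, read the response, disconnect."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message)
        return cli.recv(1024)

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.5)                                       # give the server time to start listening
print(client_request(b"GET /status"))
```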

In the current business world, network connectivity is central to accessing corporate resources, because most business operations are expected to run around the clock, including off hours. Maintaining business continuity therefore requires a solid redundancy strategy, and a company must ensure that its servers remain available. The network architecture should be designed for a high level of availability, with remedies in place for breakages and unforeseen events. It is therefore necessary to plan and build a network configuration that keeps both the workforce and the customers in touch with one another and with the system. In addition, servers can be clustered, and the load on applications and web properties can be balanced.

Description

The main aim of a redundant network is to provide continuity of service delivery when the primary network is down. There are different types of redundancy, depending on the sources of failure in the network. Redundancy in a client/server network includes power, geographical, hardware, and pathway redundancy.

Power redundancy is a backup for the power supply, used when the primary power sources go offline; the redundant supply may include generators, an auxiliary power line, and battery backups. Geographical redundancy is achieved by installing redundant systems in different locations so that a natural disaster at one site does not bring the whole network down. Hardware redundancy is achieved by installing separate servers to mitigate the effect of physical damage to the primary servers (Cristian, 1991). Lastly, pathway redundancy is achieved by providing multiple network paths; a network may have both wired and wireless connections, so that if one medium is damaged, the other takes over.

Two protocols central to client/server service delivery are the Domain Name System (DNS) and the Dynamic Host Configuration Protocol (DHCP). The former maps domain names to Internet Protocol (IP) addresses when network access is initialized, while DHCP is mostly used to allocate IP addresses to end users. DNS databases are used both internally and externally: internal DNS servers often rely on the availability and operation of DNS servers located in the demilitarized zone (DMZ), while an external DNS facilitates communication between the organization and the outside world. DHCP is typically used internally to allocate IP addresses to internal hosts.

The figure below presents a hardware redundancy scheme for a DNS server network. Here the organization's DNS system is composed of primary and secondary servers: the primary server has a direct link to the zone data, while the secondary servers act as slaves to the primary master (Gadir et al., 2005). The secondary servers are configured with different IP addresses to increase the reliability of the network.

Figure 1: Configuration of primary and secondary DNS servers.
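
A client can take advantage of this arrangement by querying the primary server first and falling back to the secondaries on a timeout. The sketch below is a minimal illustration of that idea, assuming the third-party dnspython package is installed; the server addresses and the domain name are hypothetical placeholders.

```python
import dns.resolver  # third-party dnspython package (assumed available)

# Hypothetical addresses: one primary and two secondary DNS servers.
DNS_SERVERS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

def resolve_with_fallback(name: str) -> list[str]:
    """Try each DNS server in order; return the A records from the first one that answers."""
    last_error = None
    for server in DNS_SERVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]                         # query only this server
        try:
            answer = resolver.resolve(name, "A", lifetime=2.0)  # 2 s total timeout
            return [rr.to_text() for rr in answer]
        except Exception as exc:                                # timeout, refusal, etc.: try the next server
            last_error = exc
    raise RuntimeError(f"all DNS servers failed for {name}") from last_error

print(resolve_with_fallback("intranet.example.com"))
```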

For high DNS availability, a virtual IP address (VIPA) can be shared across multiple DNS servers by a router redundancy protocol (Bell & Britton, 1999). With this arrangement, when one node fails, another node continues providing service. Besides providing redundancy, the network can be expanded easily without delaying service delivery. The disadvantage of the arrangement is that if the shared switch fails, the whole network fails with it. The operation of the network is summarized in the figure below.

Figure 2: A network providing high DNS availability.

Another technique that maintains DNS availability is DNS Anycast, in which multiple, geographically diverse DNS servers advertise the same IP address to clients. A client's query is delivered to the nearest DNS server, and if that server fails, the query is routed to the next nearest one.

A DHCP network is composed of DHCP clients, servers, and relay agents. The DHCP client is a host device that requests an IP address and other resources from the DHCP server. Client and server may reside on the same network or on different ones; in the latter case, the DHCP relay agent ensures that messages are delivered between them. For consistent delivery of DHCP service to all hosts on a network, the DHCP system itself should be made redundant: a network should have multiple DHCP servers and relay agents so that service delivery is not disrupted when the primary devices fail.

Identification and Discussion

While a connection is essential for communication between server and client, connection failures are a frequent occurrence, and they can lead to server failure. Other sources of server failure include power outages and loss of Internet connectivity. Server redundancy is the remedy: backup servers are designed in so that client/server applications continue operating when the primary server is unavailable. For example, VTScada is software that shares information between workstations in a client/server architecture and acts as a server. It is designed to designate other backup servers if the primary server fails, and it allows the system administrator to assign services, such as alarms, logging, or modems, to primary and backup servers.

In a client/server networking environment, redundancy can apply to several kinds of computing resources. For instance, nodes, operating systems, servers, and routers can all be installed to back up the primary resources. Such backups are vital when a primary resource goes down due to mechanical failure or to overloading during peak periods of operation.

DNS and DHCP server failures may originate in the disconnection of the physical media linking routers and servers, or in electrical interference. A DNS server may also fail because of unbalanced load, especially when configured with a round-robin algorithm: inbound traffic keeps being directed to all servers in the rotation even if one of them is offline, so clients of the load-balanced resources experience intermittent connection problems.
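
One common mitigation is to health-check the servers and drop unreachable ones from the rotation before directing traffic to them. The sketch below illustrates the idea in Python, using a TCP connection attempt to port 53 as a crude liveness probe; the server addresses are hypothetical, and a real deployment would rely on a proper load balancer or monitoring system rather than application code.

```python
import socket

SERVERS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]   # hypothetical load-balanced DNS servers

def is_alive(host: str, port: int = 53, timeout: float = 1.0) -> bool:
    """Crude liveness probe: can we open a TCP connection to the DNS port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_round_robin(servers):
    """Yield servers in round-robin order, skipping any that fail the liveness probe."""
    while True:
        alive = [s for s in servers if is_alive(s)]
        if not alive:
            raise RuntimeError("no DNS server in the pool is reachable")
        yield from alive

picker = healthy_round_robin(SERVERS)
for _ in range(5):                                   # direct the next five client requests
    print("directing client to", next(picker))
```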

Another source of failure is a defect in the server's forwarders, a case experienced by a DNS server that uses forwarders to reach the DNS servers of its Internet Service Provider (ISP) (Zhao et al., 2001).

If the ISP's DNS server goes offline, the organization's DNS server can no longer reach it, and Internet name resolution stops working once the entries in the resolver cache expire.

When an organization uses DHCP servers to configure client workstations with TCP/IP automatically, a server failure can disrupt service delivery: workstations are not assigned IP addresses, so accessing information becomes impossible. DHCP and DNS failures may also cause delayed or intermittent access to services; an application may run for hours and then fail only when an IP lease cannot be renewed or a DNS lookup times out.

Failures of these systems result in financial loss both to the organization and to outsiders transacting with it. To define the risks posed by DNS and DHCP server failure, a priority list is made of the services most critical to the organization's daily business operation. The cost that customers and outsiders would incur if the servers stopped operating is one determining factor of the risk; the cost the organization itself would incur if its servers could not reach the public Internet should also be assessed.

The consequences of DNS and DHCP server failures in a client/server network can be mitigated by deploying a backup system, and the redundant system should include both hardware and software resources. When installing redundant hardware, the best practice is to ensure the devices do not share the same points of failure (Benini & De Micheli, 2002). For instance, if three servers share the same Ethernet switch or the same power supply plug, the overall reliability is lower because their failures are correlated through a common cause.
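
A rough back-of-the-envelope calculation shows why shared dependencies hurt. The availability figures below are made up for illustration; the point is only that a shared switch caps the availability of the whole group, whereas independent paths multiply the failure probabilities instead.

```python
# Hypothetical availabilities (fraction of time each component is up).
SERVER_AVAILABILITY = 0.99
SWITCH_AVAILABILITY = 0.995

# Three servers behind ONE shared switch: the service is up only while the
# switch is up AND at least one server is up.
p_all_servers_down = (1 - SERVER_AVAILABILITY) ** 3
shared_switch = SWITCH_AVAILABILITY * (1 - p_all_servers_down)

# Three servers, each behind its OWN switch: the service is down only if every
# independent server-plus-switch path is down at the same time.
p_path_down = 1 - SERVER_AVAILABILITY * SWITCH_AVAILABILITY
independent_paths = 1 - p_path_down ** 3

print(f"shared switch:     {shared_switch:.6f}")      # ~0.994999, capped by the switch
print(f"independent paths: {independent_paths:.6f}")  # ~0.999997
```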

Installing servers in different locations with separate routers and leased lines, removing network dependencies, and eliminating any common physical connection medium are among the best practices for building a redundant network. To guard against DHCP server failure, multiple DHCP servers can each be given a separate range of private IP addresses within the same subnet; this provides a backup at the cost of consuming more address space.

A standardized DHCP fail-over protocol has been designed by the Internet Engineering Task Force (IETF) so that sharing of the address pool is coordinated. If the primary DHCP server leases a new IP address and then crashes before the lease is replicated, the secondary server takes over, and it may try to assign the address already allocated by the primary server to another host.

If the first client that received the address happens to be offline, the secondary server may reallocate the address to a different client even though an ICMP echo check is performed, and even while the first client is still within its allocated lease period. The fail-over protocol therefore allows secondary servers to reallocate addresses from their own disjoint address pool to new host devices.

The fail-over protocol controls this through the maximum client lead time (MCLT) and the lease duration. For instance, consider a primary server that extends a client's lease and then crashes before updating the secondary server; the secondary server assumes that the lease it last knew about has expired and may reallocate the IP address to another client, and the MCLT bounds how long that disagreement can last.
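
The sketch below illustrates the MCLT idea in Python. It is a simplification of the behavior described above, not an implementation of the IETF protocol: the primary never promises a lease that ends more than MCLT seconds beyond the expiry time it has acknowledged to its partner, so after a crash the secondary only has to wait out that bounded window before it can safely reuse the address.

```python
from dataclasses import dataclass

MCLT = 3600  # maximum client lead time in seconds (hypothetical value)

@dataclass
class Lease:
    ip: str
    acked_expiry: float   # expiry time last acknowledged by the failover partner
    actual_expiry: float  # expiry time actually promised to the client

def extend_lease(lease: Lease, now: float, requested: float) -> float:
    """Primary-side rule: never promise the client more than acked_expiry + MCLT."""
    granted = min(now + requested, lease.acked_expiry + MCLT)
    lease.actual_expiry = granted
    return granted

def safe_to_reallocate(lease: Lease, now: float) -> bool:
    """Secondary-side rule after a primary crash: the address is certainly free
    once acked_expiry + MCLT has passed, whatever the primary promised."""
    return now > lease.acked_expiry + MCLT

lease = Lease(ip="10.0.1.25", acked_expiry=1_000.0, actual_expiry=1_000.0)
extend_lease(lease, now=900.0, requested=86_400.0)    # client asks for a full day
print(lease.actual_expiry)                            # capped at 1000 + 3600 = 4600
print(safe_to_reallocate(lease, now=5_000.0))         # True: past the MCLT window
```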

Software dependency can make the network inaccessible if the software the system depends on crashes or malfunctions. Redundancy in the software installed on a client/server network should therefore be implemented to remove single software dependencies. An organization may run its servers on different operating systems to avoid the risk of all servers failing in the same way, and different DNS server implementations may be deployed to back each other up.

A redundant client/server design for the DNS service should, depending on the size of the organization, include at least two internal DNS servers, with further servers acting as external DNS servers. This is the split DNS design for a redundant network: the internal servers contain DNS entries for the organization's internal structures, while the external servers contain only the entries exposed to the outside environment. Besides providing redundancy, this design also protects the network, since it discourages hackers from penetrating the internal network.
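
The split can be pictured as two separate databases consulted according to where the query comes from. The sketch below is a toy illustration in Python with hypothetical zone entries and addresses; a real deployment would use separate DNS server instances or views rather than application code.

```python
# Toy split-DNS lookup: internal zone data stays on the internal servers, and
# everything else is forwarded to the external resolvers in the DMZ.
INTERNAL_ZONE = {
    "intranet.example.com": "10.0.5.20",   # hypothetical internal entries
    "files.example.com": "10.0.5.21",
}
EXTERNAL_RESOLVERS = ["203.0.113.10", "203.0.113.11"]  # hypothetical DMZ DNS servers

def lookup(name: str, client_is_internal: bool) -> str:
    """Answer internal names from the internal database; send the rest outward."""
    if name in INTERNAL_ZONE:
        if client_is_internal:
            return INTERNAL_ZONE[name]            # answered from the internal database
        raise LookupError("internal name is not exposed to external clients")
    return f"forward {name} to {EXTERNAL_RESOLVERS[0]}"  # delegate to the DMZ servers

print(lookup("intranet.example.com", client_is_internal=True))
print(lookup("www.python.org", client_is_internal=True))
```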

Apart from the split design, a redundant DNS can use a design with one primary server and multiple slave servers. The secondary servers hold DNS databases identical to the primary's, and information is disseminated from the primary to the secondaries. DNS servers can still fail when the load splits unevenly across them; load-balancing systems solve this by distributing the load evenly between the DNS servers, improving the redundancy of the whole arrangement.

The design and implementation of a redundant DHCP network, by contrast, often takes the split-scope approach: the majority of the IP address allocation, typically about 80%, is handled by the first DHCP server, and the remaining roughly 20% of the addresses is managed by other servers. If the first DHCP server fails, the remaining servers continue allocating addresses to network devices from their portion of the scope.
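
A minimal sketch of the 80/20 split-scope idea is shown below. The subnet and the split ratio are hypothetical; the code simply partitions a scope between two servers and shows that each server hands out leases only from its own, non-overlapping portion.

```python
import ipaddress

def split_scope(network: str, primary_share: float = 0.8):
    """Partition the usable addresses of a subnet between a primary and a secondary DHCP server."""
    hosts = list(ipaddress.ip_network(network).hosts())
    cut = int(len(hosts) * primary_share)
    return hosts[:cut], hosts[cut:]           # primary pool, secondary pool

class ToyDhcpServer:
    """Toy lease allocator: each server owns a disjoint slice of the scope."""
    def __init__(self, name, pool):
        self.name, self.free = name, list(pool)
        self.leases = {}                       # MAC address -> IP address

    def allocate(self, mac):
        ip = self.free.pop(0)
        self.leases[mac] = ip
        return ip

primary_pool, secondary_pool = split_scope("192.168.10.0/24")   # hypothetical subnet
primary = ToyDhcpServer("primary", primary_pool)
secondary = ToyDhcpServer("secondary", secondary_pool)

print(len(primary_pool), "addresses on the primary,", len(secondary_pool), "on the secondary")
print("lease from primary:  ", primary.allocate("aa:bb:cc:00:00:01"))
# If the primary goes down, clients are served from the secondary's disjoint pool.
print("lease from secondary:", secondary.allocate("aa:bb:cc:00:00:02"))
```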

Conclusion

Redundancy in networking is vital for handling unexpected risks caused by the failure of primary network devices. DNS, DHCP, and domain services are as important as any other component of a network, so the design of any network should include provisions for redundant DNS, domain, and DHCP client/server services. Without them, a DHCP or DNS failure can put the organization out of business while the failed server is being restored.

Both hardware and software redundancy should be considered in the design and implementation. Different software should be integrated into the DNS and DHCP architecture so that operation can shift easily if the primary software fails. Hardware redundancy should cover the cables, the servers, and the relay agents. Where possible, geographical redundancy should also be implemented to limit the operational cost when all servers at a single location fail because of a natural disaster.

References

  • Adler, R. M. (1995). Distributed coordination models for client/server computing. Computer, 28(4), 14-22.
  • Cristian, F. (1991). Understanding fault-tolerant distributed systems. Communications of the ACM, 34(2), 56-78.
  • Gadir, O. M., Subbanna, K., Vayyala, A. R., Shanmugam, H., Bodas, A. P., Tripathy, T. K., … & Rao, K. H. (2005). U.S. Patent No. 6,944,785. Washington, DC: U.S. Patent and Trademark Office.
  • Bell, J. A., & Britton, E. G. (1999). U.S. Patent No. 5,917,997. Washington, DC: U.S. Patent and Trademark Office.
  • Zhao, B. Y., Kubiatowicz, J., & Joseph, A. D. (2001). Tapestry: An infrastructure for fault-tolerant wide-area location and routing.
  • Benini, L., & De Micheli, G. (2002). Networks on chips: A new SoC paradigm. Computer, 35(1), 70-78.

 
