Client Server Architecture Computer Science Essay

The client/server architecture that I propose for the company is a three-tier client/server model. This model has three levels, or tiers: a presentation tier (the client), an application tier (the server that runs the business logic), and a data tier (the database). It is a very common model in many different companies. The three-tier model excels in flexibility, growth, user independence, and availability, though it is weaker in security, manageability, and control. Three-tier applications are fairly easy to write and very easy to update; usually an update only involves changing the code on the web server. Because users are already familiar with web browsers, they will naturally find the application easy to use and will be comfortable with it.

Enterprise applications are slowly migrating to the three-tier model, mainly because users are already familiar with web applications and because such applications are easy to update. The n-tier structure also allows three-tier applications to excel in reliability. There are still many two-tier applications being written today, but nearly all new applications use the client/server model. Client/server has become the architecture of choice for most modern applications.

The Web and Web browsers are good examples of client/server computing. Client/server applications come in a variety of flavors, including screen-scraping applications, Visual Basic Win32 programs, and web applications. Web applications are a particularly good example of the client/server environment because they map cleanly onto the three-tier architecture: the web browser interprets HTML into the user interface, the web server does all of the business-logic processing, and in most cases a separate database server provides the backend data storage for the web application. Web applications are client/server computing at its best. They provide users with a familiar interface: nearly all corporate users are comfortable working on the Internet, and most people today interact with some sort of web application daily, whether to check their email or to order a book from Amazon. Because users are already familiar with the web and web browsers, enterprises that deploy their applications via the web do not have to worry about training their users. Additionally, application design can be implemented consistently across all the company's applications using HTML standards, which further helps users develop familiarity with those applications.
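The three-tier flow just described can be sketched in a few lines. This is a minimal illustration, not a real web stack: the dictionary stands in for a separate database server, and all names and figures (the order data, the 10% tax rule) are hypothetical.

```python
# Data tier: a stand-in for the backend database server.
ORDERS = {1001: {"customer": "Alice", "total": 42.50}}

def fetch_order(order_id):
    """Data tier: retrieve a record from the backend store."""
    return ORDERS.get(order_id)

def order_summary(order_id):
    """Logic tier: business rules run on the web server, never on the client."""
    order = fetch_order(order_id)
    if order is None:
        return None
    return {"customer": order["customer"],
            "total_with_tax": round(order["total"] * 1.1, 2)}

def render_order(order_id):
    """Presentation tier: the browser only ever receives HTML."""
    summary = order_summary(order_id)
    if summary is None:
        return "<p>Order not found</p>"
    return f"<p>{summary['customer']} owes {summary['total_with_tax']}</p>"

print(render_order(1001))
```

The point of the separation is that the tax rule in the logic tier, or the storage in the data tier, can change without the browser-facing HTML layer knowing anything about it.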

Web applications also offer enterprises a large variety of languages and platforms for development. This matters because corporations are not tied to a specific operating system, as they are with Visual Basic Win32 programs. Realistically, Windows is going to be the corporate desktop environment for at least the next ten years, but it is still an advantage for corporations to house applications on servers rather than on users' desktops. Web applications are also indifferent to the database backend they use, so corporations can choose their databases, servers, and desktops based on security, reliability, and price; the company's applications are not tied to any particular platform or database. Hosting applications on web servers is a huge advantage over the standard deployment of applications to users' desktops: the source only needs to be updated at the server level, not on each user's computer. This makes upgrading applications fast and easy, which in turn can mean more frequent updates and a better experience for the end user.

Web applications are the most flexible, available, and user friendly of the client/server applications. Web applications provide huge advantages for the corporate environment and will only become increasingly better as they become more sophisticated.

A server cluster is a group of independent computer systems, known as nodes, working together as a single system to ensure that mission-critical applications and resources remain available to clients. There are three primary benefits to using server clusters. Server clusters are highly available, scalable, and easy to manage.

Highly available: Server clusters provide a highly available system for deploying applications. You can use server clusters to protect against failures of hardware, operating systems, device drivers, or applications. If one of the nodes in a cluster is unavailable as a result of failure or maintenance, another node immediately begins providing service (a process known as failover). Server clusters also allow you to upgrade the operating system and application software with minimal downtime.
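The failover behavior described above can be sketched as a simple election rule. The node names and the health map are hypothetical; a real cluster service uses heartbeats and quorum votes rather than a plain dictionary, but the decision is the same: the first healthy node in preference order serves the clients.

```python
def elect_active(nodes, healthy):
    """Return the node that should serve clients: the first node in
    preference order that is reported healthy, else None (total outage)."""
    for node in nodes:
        if healthy.get(node, False):
            return node
    return None

nodes = ["node-a", "node-b"]
print(elect_active(nodes, {"node-a": True, "node-b": True}))   # node-a serves
print(elect_active(nodes, {"node-a": False, "node-b": True}))  # failover to node-b
```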

The level of availability required by companies varies, but it is not uncommon to require 99.99% uptime, which equates to roughly 53 minutes, or about one hour, of unplanned downtime per year.
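The downtime budget implied by an availability target is simple arithmetic, assuming a 365-day year:

```python
def yearly_downtime_minutes(availability):
    """Minutes of downtime per year allowed by an availability fraction."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability) * minutes_per_year

print(round(yearly_downtime_minutes(0.9999)))  # about 53 minutes per year
```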

Scalable: Server clusters provide scalability for important resources and applications. When the overall load of a cluster exceeds its capabilities, you can incrementally add additional nodes to the cluster.

Using an active/passive cluster design, user levels of 3,000 to 5,000 are achievable. Mailbox size will depend on the size of the storage resource; a 200 MB mailbox limit is typical.

Easy to manage: In a server cluster, you can quickly inspect the status of all cluster resources and move workloads around onto different nodes. Because you can move processing to alternate nodes, you can perform rolling upgrades on the servers. In a rolling upgrade, a server cluster continues to provide service while software is being upgraded on each node until all nodes have been upgraded.
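The rolling-upgrade sequence above can be sketched as a loop over the nodes. Node names and version strings are illustrative; the key safeguard, shown as an assertion, is that a node is only taken offline while at least one other node remains online to provide the service.

```python
def rolling_upgrade(cluster, new_version):
    """Upgrade one node at a time so the cluster never stops serving."""
    upgraded = []
    for node in cluster:
        others = [n for n in cluster if n is not node and n["online"]]
        assert others, "refusing: no other node would remain online"
        node["online"] = False        # drain workloads off this node
        node["version"] = new_version # apply the software upgrade
        node["online"] = True         # rejoin the cluster
        upgraded.append(node["name"])
    return upgraded

cluster = [{"name": "node-a", "version": "1.0", "online": True},
           {"name": "node-b", "version": "1.0", "online": True}]
print(rolling_upgrade(cluster, "2.0"))  # upgrades node-a, then node-b
```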

Server consolidation is another goal for many companies. In a Microsoft environment, clustering is used for file and print servers and for cluster-aware applications such as SQL Server and Exchange Server. Clusters directly help reduce single points of failure in a server design, and they can also reduce planned downtime: if a server must be taken down to apply service packs or BIOS upgrades, one node can be worked on while the others continue to provide the clustered service to users.

Each node has one or more physical disks, to which it alone has access, used for storing the operating system, the swap file, non-shared applications, and so on. At the same time, every node is attached to one or more shared cluster storage devices (such as a Storage Area Network) that contain the cluster quorum drive and shared Exchange resources (such as log files, public stores, and message stores). Clustering allows users and administrators to access and manage the nodes as a single system rather than as separate computers.

In an active/passive cluster configuration, only one node of the cluster is active at any given time. All resource groups reside on that node. If that computer fails or is taken offline, the other node will gracefully take over all resource groups. The problem with this configuration is that one node is always idle.

In an active/active cluster configuration, each node of the cluster owns at least one resource group, so all nodes are working all the time and none sit idle.
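The difference between the two configurations comes down to how resource groups are assigned to nodes, which can be sketched as follows. Node and group names are illustrative; the round-robin spread is one simple placement strategy among many.

```python
def assign_groups(nodes, groups, active_active):
    """Map each resource group to an owning node."""
    if active_active:
        # Spread groups round-robin so every node does useful work.
        return {g: nodes[i % len(nodes)] for i, g in enumerate(groups)}
    # Active/passive: the first node owns everything; the rest stand by.
    return {g: nodes[0] for g in groups}

nodes = ["node-a", "node-b"]
groups = ["exchange-sg1", "exchange-sg2"]
print(assign_groups(nodes, groups, active_active=False))  # node-a owns both
print(assign_groups(nodes, groups, active_active=True))   # one group per node
```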

Clustering can be extremely valuable for larger companies, or indeed for any company where file servers or applications like Exchange are considered mission-critical; in other words, where there would be a severe impact to the business if the service became unavailable for any length of time. A well-designed clustered solution will provide high server availability and, of course, low downtime.

In-depth security, or defense in depth, is the principle of using a layered approach to network security to protect your computer or network far better than any single measure can.

No matter how good any single network security product is, someone out there, smarter than the people who designed it and with more time on his hands than scruples, will eventually get past it. For this reason, common security practice calls for multiple lines of defense, or in-depth security.

In-depth security uses layers of different types of protection from different vendors to provide substantially better protection. A hacker may develop an exploit that bypasses or circumvents one type of defense, or may learn the intricacies of a particular vendor's product, effectively rendering that type of defense useless; the remaining layers still stand in the way.

By establishing layered security, you will keep out all but the cleverest and most dedicated hackers. As a baseline, I suggest implementing the following computer and network security products:

Firewall: Basically, a firewall is a protective barrier between your computer, or internal network, and the outside world. Traffic through the firewall is blocked or restricted as you choose. By blocking all unnecessary traffic and restricting the rest to the protocols or individuals that need it, you can greatly improve the security of your internal network.
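The default-deny policy just described can be sketched as a rule lookup. The rule set here is purely illustrative, not a recommendation for any real network: traffic is dropped unless an explicit rule allows it, and a rule can optionally restrict the allowed source addresses.

```python
# Hypothetical rule table: (protocol, port) -> allowed source IPs,
# where None means "allowed from anywhere".
ALLOWED = {
    ("tcp", 443): None,          # HTTPS from anywhere
    ("tcp", 22): {"10.0.0.5"},   # SSH only from the admin host
}

def permit(proto, port, src_ip):
    """Default deny: permit only traffic matching an explicit rule."""
    if (proto, port) not in ALLOWED:
        return False  # no rule -> blocked
    allowed_sources = ALLOWED[(proto, port)]
    return allowed_sources is None or src_ip in allowed_sources

print(permit("tcp", 443, "203.0.113.9"))  # True  - HTTPS is open
print(permit("tcp", 22, "203.0.113.9"))   # False - SSH is restricted
```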

Antivirus: Antivirus software is a type of application you install to protect your system from viruses, worms, and other malicious code. Most antivirus programs will monitor traffic while you surf the Web, scan incoming email and file attachments, and periodically check all local files for any known malicious code.

Intrusion Detection System (IDS): An IDS is a device or application that inspects network traffic and alerts the user or administrator when there have been unauthorized access attempts. The two primary detection methods are signature-based and anomaly-based. Depending on the device or application used, the IDS can simply alert the administrator, or it can be set up to block specific traffic or respond automatically in some other way.
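The two detection methods named above can be sketched side by side. The signatures and the traffic threshold here are made up for illustration; real IDS products such as signature engines use far richer rule languages and learned statistical baselines.

```python
# Hypothetical known-bad byte patterns and traffic baseline.
SIGNATURES = [b"/etc/passwd", b"' OR 1=1"]
BASELINE_BYTES_PER_SEC = 1000

def inspect(payload, bytes_per_sec):
    """Return the alerts raised for one observed flow."""
    alerts = []
    for sig in SIGNATURES:  # signature-based: match known-bad patterns
        if sig in payload:
            alerts.append(f"signature match: {sig!r}")
    # Anomaly-based: flag traffic far outside the learned baseline.
    if bytes_per_sec > 10 * BASELINE_BYTES_PER_SEC:
        alerts.append("anomaly: traffic far above baseline")
    return alerts

print(inspect(b"GET /etc/passwd", 500))    # fires the signature check
print(inspect(b"GET /index.html", 50000))  # fires the anomaly check
```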