The term client-server refers to a popular model for computer networking that utilizes client and server devices each designed for specific purposes. The client-server model can be used on the Internet as well as local area networks (LANs). Examples of client-server systems on the Internet include Web browsers and Web servers, FTP clients and servers, and DNS.
This article describes a client-server chat program that allows clients to exchange data with a server. The program is built in Delphi over the TCP/IP protocol and runs across a wireless connection on a local area network, so that people can hold private conversations locally.
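The essay's program is written in Delphi, which is not shown here; as a language-neutral sketch, the following hypothetical Python code illustrates one detail any TCP chat program must handle: TCP delivers a byte stream with no message boundaries, so chat messages are typically framed, for example with a length prefix. The function names are illustrative, not taken from the essay's program.

```python
import struct

def frame(message: str) -> bytes:
    """Prefix the UTF-8 payload with a 4-byte big-endian length header."""
    payload = message.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> str:
    """Inverse of frame(): read the length header, then decode the payload."""
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length].decode("utf-8")
```

With framing like this, the receiving side of the chat knows exactly how many bytes belong to each message, even when several messages arrive in one TCP read.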
Where did client/server come from? To answer this question, we need to go back to the beginnings of computer networks. Even the earliest computers were network based, in that the user sat at a terminal in the corner of the room, and the computer filled the other four floors of the building. In effect, the machine was controlled from a terminal: a remote keyboard, card reader, printer, or screen.
2.1.1 Dumb Terminal Networks
If this sounds a little far-fetched, consider the modern PC. The DOS interpreter still understands the kind of commands that were used then. The advent of graphical interfaces has blurred this distinction somewhat, but in effect you're still sitting at a machine where the keyboard and screen form a 'dumb terminal', which talks to the rest of the system behind the scenes. This is a similar scenario to the first kinds of distributed computing. All you needed to do was allow the central processing unit to support several sets of screens and keyboards or terminals and scatter them around the building.
In the traditional central-processing model, 'dumb terminals' carried no processing power of their own beyond what was required to collect keystrokes, send them to the main processing unit, and display the information coming back from it.
2.1.2 The Advantages of Dumb Terminal Networks
The traditional dumb terminal network is the administrator's dream come true. The entire configuration and (most important of all) the power of the system are contained inside that air-conditioned room. As long as the physical network connections are intact, and the simple terminals aren't belching smoke, it all works. Central control means that the entire network can be managed, monitored, and maintained from one place. It also means that network traffic is minimized. All that has to travel the wires are the instructions coming from the terminals, and the results being sent back.
2.1.3 Internet as a Dumb Terminal Network
The concept of a dumb terminal network almost exactly matches the way in which we use the 'Net. Although the machine on our desk has huge reserves of processing power, and (in theory anyway) plenty of local storage space, all we are doing with a browser is acting as a dumb terminal.
We send a request off to the Web server, and it sends back the processed information as a static page that the browser just has to display. Up until the advent of client-side technologies like Java, ActiveX and scripting languages, the browser was literally a dumb terminal. The physical structure of the Internet also matches this model very well. Bandwidth is at a premium, so the minimization of network traffic is a major bonus. And the remote geographical nature of the terminals makes visits by the network technician impossible.
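The "dumb terminal" role of the browser can be made concrete: all it sends up the wire is a small textual request, and the server does the work. As a sketch (not part of the essay's program), this hypothetical Python function builds a minimal HTTP/1.1 GET request of the kind a browser sends:

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    # A minimal HTTP/1.1 GET request. The client side sends only these
    # few lines; all processing of the page happens at the server end.
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",        # required header in HTTP/1.1
        "Connection: close",    # ask the server to close after replying
        "",
        "",                     # blank line terminates the header block
    ]
    return "\r\n".join(lines).encode("ascii")
```

What comes back is the finished page; the browser of that era merely displayed it, exactly as a dumb terminal displayed output from the mainframe.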
2.1.4 PCs on the Network
Of course, with the arrival of the personal computer, users wanted more than just a dumb terminal on their desk. Seeing what was possible with their own 'real' computer meant that static information coming from a server, over which they had little or no control, was obviously severely limiting when the technology beckoned with ever-increasing capabilities. And soon, PCs were strung together to form local area networks. Users could share files and resources, like printers, between the machines. Finally, the PC is a rather more complex beast than a dumb terminal. Configuration and maintenance now involve the technician rushing around the building, installing and upgrading each machine separately.
Now, the junior accountant keeps the customer database on his hard disk, and remembers to back it up daily. However, the constant accesses from all the other users are going to limit the responsiveness of his machine. It could well slow to a crawl when the sales desk is busy. The solution is to dedicate one machine on the network as a central file server, provide it with oodles of disk space, and put all the files there. It becomes a lot easier to do proper backing up, and duplication of the data is prevented.
While this network model solves the file-duplication problem and simplifies network management, it does little to solve the concerns of configuring and maintaining the rest of the machines on the network. It also, unfortunately, adds another problem. Every file has to travel across the network from the server to the end user, then back again to be saved. If the junior accountant needs to update the customer database, the complete file has to be fetched from the server, processed, and saved back there again. Network bandwidth requirements go through the roof.
2.1.5 Both Ends of the Network
In recent years, technologies have been developed which were aimed solely at solving the mixture of problems we've seen so far in the various networking models. An example of this is Microsoft Access, which can work either as a stand-alone application, or in a kind of client/server mode.
When we create a new database on our hard disk, Access works as a single-user local processing application. All the data storage and manipulation is done on our machine. However, we can use Access as a 'front end' to a set of database tables, by linking them to it. These tables can then be placed on another part of the network, say the central file server. Now, everyone can have an Access front-end (and not necessarily all the same one), while working with a single set of data.
But this alone isn't client/server computing, and it does little to limit bandwidth requirements. What completes the picture is that the central server can carry its own copy of the database engine, minus the 'front end'. Now, instead of the client machines fetching a whole table of data across the network each time, they can issue an instruction to the central database engine, which extracts the results they need from the tables and sends just that back across the network.
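The difference between the two modes can be sketched with a query example. The snippet below uses an in-memory SQLite database purely as a stand-in for the central database engine (the table name and data are invented for illustration): the file-server style pulls the whole table across the network, while the client/server style ships only the rows the engine extracts.

```python
import sqlite3

# An in-memory SQLite database stands in for the central database engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "north"), (2, "south"), (3, "north"), (4, "east")])

# File-server style: the client fetches the entire table across the wire.
whole_table = conn.execute("SELECT * FROM customers").fetchall()

# Client/server style: the engine filters server-side and sends only matches.
north_only = conn.execute(
    "SELECT * FROM customers WHERE region = ?", ("north",)).fetchall()

print(len(whole_table), "rows vs", len(north_only), "rows")  # 4 rows vs 2 rows
```

Only the two matching rows need to cross the network in the second case, which is exactly the bandwidth saving the client/server model promises.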
So client/server, at least in theory, gives us the best of all worlds. We get minimized network traffic, central data storage, and easier systems management, because the 'important' processing can be done at the server end if required. The only real downside, and the one that is currently the biggest cause for concern in the corporate world, is the continued difficulty of individual client machine maintenance, upgrades, and configuration.
3. What is Ad-Hoc Mode in Wireless Networking?
On wireless computer networks, ad-hoc mode is a method for wireless devices to directly communicate with each other. Operating in ad-hoc mode allows all wireless devices within range of each other to discover and communicate in peer-to-peer fashion without involving central access points (including those built in to broadband wireless routers).
To set up an ad-hoc wireless network, each wireless adapter must be configured for ad-hoc mode versus the alternative infrastructure mode. In addition, all wireless adapters on the ad-hoc network must use the same SSID and the same channel number.
An ad-hoc network tends to feature a small group of devices all in very close proximity to each other. Performance suffers as the number of devices grows, and a large ad-hoc network quickly becomes difficult to manage. Ad-hoc networks cannot bridge to wired LANs or to the Internet without installing a special-purpose gateway.
Ad-hoc networks make sense when you need to build a small, all-wireless LAN quickly and spend the minimum amount of money on equipment. Ad-hoc networks also work well as a temporary fallback mechanism if normally available infrastructure-mode gear (access points or routers) stops functioning.
5. Suggested Conclusion
You can write network servers or client applications that read from and write to other systems. A server or client application is usually dedicated to a single service such as Hypertext Transfer Protocol (HTTP) or File Transfer Protocol (FTP). Using server sockets, an application that provides one of these services can link to client applications that want to use that service. Client sockets allow an application that uses one of these services to link to server applications that provide the service.
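The server-socket and client-socket pairing described above can be sketched end to end. The essay's program uses Delphi; the following hypothetical Python example shows the same idea with a one-shot echo service on the loopback address (the port number is an arbitrary choice for illustration):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # loopback address; port chosen arbitrarily
ready = threading.Event()

def run_server() -> None:
    """Accept one client on a server socket and echo one message back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()  # signal that the server socket is listening
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # the echo "service"

# The server socket waits for clients in a background thread.
server = threading.Thread(target=run_server, daemon=True)
server.start()
ready.wait()

# The client socket links to the service, sends a request, reads the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello, server")
    reply = cli.recv(1024)

server.join()
print(reply.decode())  # hello, server
```

A real service such as HTTP or FTP follows the same pattern, but replaces the echo step with the protocol's own request and response handling.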