Development of Peer-to-Peer Network System
Procedures followed to complete this project.
Task 01 - Familiarize with the equipment and prepare an action plan.
Task 02 - Prepare the work area.
Task 03 - Fix the hardware components and assemble three PCs.
Task 04 - Install a NIC in every PC.
Task 05 - Cable the three computers and configure the peer-to-peer network using a hub or switch.
Task 06 - Install the Windows operating system on every PC.
Task 07 - Install and configure the printer on one of the PCs.
Task 08 - Share printer with other PCs in the LAN.
Task 09 - Establish one shared folder.
Task 10 - Create a test document on one of the PCs and copy the file to each of the other PCs in the network.
Task 11 - Test the printer by printing the test document from each of the networked PCs.
Time allocation for the tasks.
Task No. Time allocation
Task 01 1 hour
Task 02 30 minutes
Task 03 1½ hours
Task 04 1½ hours
Task 05 1½ hours
Task 06 3 hours
Task 07 15 minutes
Task 08 15 minutes
Task 09 15 minutes
Task 10 10 minutes
Task 11 5 minutes
Total time allocation - 10 hours
Physical structure of the proposed peer-to-peer network system.
In a peer-to-peer network there are no dedicated servers and no hierarchy among the computers. Each user decides who may access the resources shared from his or her machine.
In 1945, John von Neumann published the idea of the first computer with a processing unit capable of performing different tasks. The computer was called the EDVAC and was finished in 1949. These first primitive computers, such as the EDVAC and the Harvard Mark I, were incredibly bulky and large: thousands of vacuum tubes and electromechanical relays were built into each machine to perform its tasks.
Starting in the 1950s, the transistor was introduced into the CPU. This was a vital improvement because transistors removed much of the bulky material and wiring and allowed for more intricate and reliable CPUs. The late 1960s and 1970s brought the advent of microprocessors. These were very small, with feature sizes eventually measured in nanometres, and were much more powerful. Microprocessors helped this technology become much more available to the public due to their size and affordability. Eventually, companies like Intel and IBM helped alter microprocessor technology into what we see today. The computer processor has evolved from a big bulky contraption to a minuscule chip.
Computer processors are responsible for four basic operations. Their first job is to fetch the instruction from a memory source. Subsequently, the CPU decodes the instruction to determine what operation is required. The third step is execution, in which the CPU acts upon the decoded instruction. The fourth and final step is the write back, in which the CPU stores the result of the operation back in a register or in memory.
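The four steps above can be sketched as a toy simulation. The two opcodes here (LOAD and ADD) are invented purely for illustration and do not correspond to any real instruction set:

```python
# Toy illustration of the four-step CPU cycle:
# fetch, decode, execute, write back.

def run(program):
    registers = {"A": 0, "B": 0}
    pc = 0  # program counter
    while pc < len(program):
        # 1. Fetch: read the next instruction from "memory".
        instruction = program[pc]
        pc += 1
        # 2. Decode: split it into an opcode and operands.
        opcode, *operands = instruction
        # 3. Execute: act on the decoded instruction.
        if opcode == "LOAD":        # LOAD reg, value
            reg, value = operands
            result = value
        elif opcode == "ADD":       # ADD reg, value
            reg, value = operands
            result = registers[reg] + value
        else:
            raise ValueError(f"unknown opcode {opcode}")
        # 4. Write back: store the result in the register file.
        registers[reg] = result
    return registers

print(run([("LOAD", "A", 5), ("ADD", "A", 3)]))  # {'A': 8, 'B': 0}
```

Real processors perform these four stages in hardware, often overlapping them in a pipeline, but the logical sequence is the same.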
Two companies are responsible for a vast majority of CPUs sold all around the world. Intel Corporation is the largest CPU manufacturer in the world and is the maker of a majority of the CPUs found in personal computers. Advanced Micro Devices, Inc., known as AMD, has in recent years been the main competitor for Intel in the CPU industry.
The CPU has greatly helped the world progress into the digital age. It has allowed a number of computers and other machines to be produced that are very important and essential to our global society. For example, many of the medical advances made today are a direct result of the ability of computer processors. As CPUs improve, the devices they are used in will also improve and their significance will become even greater.
The term Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987, but through its widespread adoption has also come to mean either an analogue computer display standard, the 15-pin D-sub miniature VGA connector or the 640×480 resolution itself. While this resolution has been superseded in the personal computer market, it is becoming a popular resolution on mobile devices.
Video Graphics Array (VGA) was the last graphical standard introduced by IBM that the majority of PC clone manufacturers conformed to, making it today (as of 2009) the lowest common denominator that all PC graphics hardware supports, before a device-specific driver is loaded into the computer. For example, the MS-Windows splash screen appears while the machine is still operating in VGA mode, which is the reason that this screen always appears in reduced resolution and colour depth.
VGA was officially superseded by IBM's XGA standard, but in reality it was superseded by numerous slightly different extensions to VGA made by clone manufacturers that came to be known collectively as "Super VGA".
VGA is referred to as an "array" instead of an "adapter" because it was implemented from the start as a single chip (an ASIC), replacing the Motorola 6845 and dozens of discrete logic chips that covered the full-length ISA boards of the MDA, CGA, and EGA. Its single-chip implementation also allowed the VGA to be placed directly on a PC's motherboard with a minimum of difficulty (it only required video memory, timing crystals and an external RAMDAC), and the first IBM PS/2 models were equipped with VGA on the motherboard.
Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.
By contrast, storage devices such as tapes, magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies based on the physical location of the next item. The word RAM is often associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory offer random access, too, including most types of ROM and a type of flash memory called NOR flash.
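The contrast between random and sequential access can be sketched in a few lines of Python; the list and the "tape" below are purely illustrative:

```python
# Random access: a Python list models RAM-style addressing, where any
# index is reached the same way regardless of position.
ram = [10, 20, 30, 40]
print(ram[3])  # 40: direct, position-independent access

# Sequential access: a tape must be wound past every earlier item,
# so retrieval time grows with the item's position.
def tape_read(tape, position):
    steps = 0
    for i, value in enumerate(tape):
        steps += 1
        if i == position:
            return value, steps

print(tape_read(ram, 3))  # (40, 4): four steps to reach item 3
```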
An early type of widespread writable random-access memory was the magnetic core memory, developed from 1949 to 1952, and subsequently used in most computers up until the development of the static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay line memory, or various kinds of vacuum tube arrangements to implement "main" memory functions (i.e., hundreds or thousands of bits); some of which were random access, some not. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers and random-access register banks. Modern types of writable RAM generally store a bit of data in either the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults called memory errors in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable/disable selected transistors, instead of storing a charge in them.
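The parity-bit scheme mentioned above can be sketched in a few lines of Python (even parity; the bit patterns are purely illustrative):

```python
# Even-parity illustration: the parity bit makes the total number of
# 1 bits even, so any single flipped bit is detectable on read-back.

def parity_bit(bits):
    return sum(bits) % 2          # 1 if the count of 1s is odd

def store(bits):
    return bits + [parity_bit(bits)]   # append the parity bit

def check(word):
    return sum(word) % 2 == 0     # True if no single-bit error

word = store([1, 0, 1, 1])        # data plus parity bit
print(check(word))                # True: word is consistent
word[1] ^= 1                      # simulate a memory error
print(check(word))                # False: error detected
```

Parity can only detect a single-bit error, not correct it; error correction codes such as Hamming codes add enough redundancy to locate and repair the flipped bit.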
As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as persistent storage in traditional computers. Many newer products instead rely on flash memory to maintain data when not in use, such as PDAs or small music players. Certain personal computers, such as many rugged computers and netbooks, have also replaced magnetic disks with flash drives. With flash memory, only the NOR type is capable of true random access, allowing direct code execution, and is therefore often used instead of ROM; the lower-cost NAND type is commonly used for bulk storage in memory cards and solid-state drives.
Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.
Types of RAM
[Figure: Top L-R, DDR2 with heat-spreader, DDR2 without heat-spreader, laptop DDR2, DDR, laptop DDR]
[Figure: 1-megabit chip, one of the last models developed by VEB Carl Zeiss Jena in 1989]
Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that access to rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system. (Generally, the memory hierarchy is ordered by access time, with the fast CPU registers at the top and the slow hard drive at the bottom.)
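The spread of access times across the hierarchy can be illustrated with rough, order-of-magnitude figures; the exact numbers below are assumptions for illustration only and vary widely between systems:

```python
# Illustrative access times (orders of magnitude only) for the
# hierarchy levels described above; real figures vary by system.
hierarchy_ns = {
    "CPU register":          0.5,
    "on-die SRAM cache":     5,
    "DRAM":                  100,
    "hard-drive swap space": 10_000_000,  # ~10 ms of seek + rotation
}

for level, ns in hierarchy_ns.items():
    print(f"{level}: {ns} ns")
```

The roughly seven orders of magnitude between a register and a disk access is why even a small cache near the CPU can dominate average performance.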
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or too small for current purposes. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system.
A hard disk drive (often shortened as hard disk, hard drive, or HDD) is a non-volatile storage device that stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to the motorized mechanical aspect that is distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.
HDDs (introduced in 1956 as data storage for an IBM accounting computer) were originally developed for use with general purpose computers. During the 1990s, the need for large-scale, reliable storage, independent of a particular device, led to the introduction of embedded systems such as RAIDs, network attached storage (NAS) systems, and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. In the 21st century, HDD usage expanded into consumer applications such as camcorders, cell phones (e.g. the Nokia N91), digital audio players, digital video players, digital video recorders, personal digital assistants and video game consoles.
HDDs record data by magnetizing ferromagnetic material directionally, to represent either a 0 or a 1 binary digit. They read the data back by detecting the magnetization of the material. A typical HDD design consists of a spindle that holds one or more flat circular disks called platters, onto which the data are recorded. The platters are made from a non-magnetic material, usually aluminium alloy or glass, and are coated with a thin layer of magnetic material, typically 10-20 nm in thickness with an outer layer of carbon for protection. Older disks used iron (III) oxide as the magnetic material, but current disks use a cobalt-based alloy.
The platters are spun at very high speeds. Information is written to a platter as it rotates past devices called read-and-write heads that operate very close (tens of nanometres in new drives) over the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or in some older designs a stepper motor.
The magnetic surface of each platter is conceptually divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. Initially the regions were oriented horizontally, but beginning about 2005, the orientation was changed to perpendicular. Due to the polycrystalline nature of the magnetic material each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each form a single magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a highly localized magnetic field nearby. A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal in Gap (MIG) heads and thin film heads. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.
Hard-disk heads are kept from contacting the platter surface by the extremely thin layer of air next to the platter, which moves at, or close to, the platter speed. The read and write elements are mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact; this is a type of air bearing.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a 3-atom-thick layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects and allow greater recording densities is perpendicular recording, first shipped in 2005; as of 2007 the technology was used in many HDDs.
The grain boundaries turn out to be very important in HDD design. Because the grains are very small and close together, the coupling between adjacent grains is very strong; when one grain is magnetized, the adjacent grains tend to be aligned parallel to it or demagnetized, which degrades both the stability of the data and the signal-to-noise ratio. A clear grain boundary weakens the coupling between grains and so increases the signal-to-noise ratio. In longitudinal recording, the single-domain grains have uniaxial anisotropy with easy axes lying in the film plane. The consequence of this arrangement is that adjacent magnets repel each other, so the magnetostatic energy is large and it is difficult to increase areal density. Perpendicular recording media, on the other hand, have the easy axis of the grains oriented perpendicular to the disk plane; adjacent magnets attract each other and the magnetostatic energy is much lower, so much higher areal density can be achieved. Another unique feature of perpendicular recording is that a soft magnetic underlayer is incorporated into the recording disk. This underlayer conducts the writing magnetic flux so that writing is more efficient, which allows a higher-anisotropy medium film, such as L10-FePt or a rare-earth magnet, to be used.
Opened hard drive with top magnet removed, showing copper head actuator coil (top right).
A hard disk drive with the platters and motor hub removed showing the copper colored stator coils surrounding a bearing at the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable. The spindle bearing is in the center.
A typical hard drive has two electric motors, one to spin the disks and one to position the read/write head assembly. The disk motor has an external rotor attached to the platters; the stator windings are fixed in place. The actuator has a read-write head under the tip of its very end (near center); a thin printed-circuit cable connects the read-write head to the hub of the actuator. A flexible, somewhat 'U'-shaped, ribbon cable, seen edge-on below and to the left of the actuator arm in the first image and more clearly in the second, continues the connection from the head to the controller board on the opposite side.
The head support arm is very light, but also rigid; in modern drives, acceleration at the head reaches 250 g.
The silver-colored structure at the upper left of the first image is the top plate of the permanent-magnet and moving coil motor that swings the heads to the desired position (it is shown removed in the second image). The plate supports a thin neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives only have one magnet).
The voice coil itself is shaped rather like an arrowhead and made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force (the force on a current-carrying conductor in a magnetic field). If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore the surface of the magnet is half N pole and half S pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of cancelling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
A floppy disk is a data storage medium that is composed of a disk of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with "fixed disk drive," which is another term for a (non removable) type of hard disk drive. Invented by IBM, floppy disks in 8-inch (200mm), 5¼-inch (133.35mm), and 3½-inch (90mm) formats enjoyed many years as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have now been largely superseded by USB flash drives, external hard drives, CDs, DVDs, and memory cards (such as Secure Digital).
The 5¼-inch disk had a large circular hole in the center for the spindle of the drive and a small oval aperture in both sides of the plastic to allow the heads of the drive to read and write the data. The magnetic medium could be spun by rotating it from the middle hole. A small notch on the right-hand side of the disk identified whether the disk was read-only or writable, detected by a mechanical switch or phototransistor above it. Another LED/phototransistor pair located near the center of the disk could detect a small hole once per rotation, called the index hole, in the magnetic disk. It was used to detect the start of each track, and whether or not the disk rotated at the correct speed; some operating systems, such as Apple DOS, did not use index sync, and often the drives designed for such systems lacked the index hole sensor. Disks of this type were said to be soft-sector disks. Very early 8-inch and 5¼-inch disks also had physical holes for each sector, and were termed hard-sector disks. Inside the disk were two layers of fabric designed to reduce friction between the medium and the outer casing, with the medium sandwiched in the middle. The outer casing was usually a one-part sheet, folded double with flaps glued or spot-welded together. A catch was lowered into position in front of the drive to prevent the disk from emerging, as well as to raise or lower the spindle (and, in two-sided drives, the upper read/write head).
The 8-inch disk was very similar in structure to the 5¼-inch disk, with the exception that the read-only logic was in reverse: the slot on the side had to be taped over to allow writing.
The 3½-inch disk is made of two pieces of rigid plastic, with the fabric-medium-fabric sandwich in the middle to remove dust and dirt. The front has only a label and a small aperture for reading and writing data, protected by a spring-loaded metal or plastic cover, which is pushed back on entry into the drive.
Newer 5¼-inch drives and all 3½-inch drives automatically engage when the user inserts a disk, and disengage and eject with the press of the eject button. On Apple Macintosh computers with built-in floppy drives, the disk is ejected by a motor (similar to a VCR) instead of manually; there is no eject button. The disk's desktop icon is dragged onto the Trash icon to eject a disk.
The reverse has a similar covered aperture, as well as a hole to allow the spindle to connect to a metal plate glued to the medium. Two holes, bottom left and right, indicate the write-protect status and high-density disk respectively: a hole means protected or high density, and a covered gap means write-enabled or low density. A notch top right ensures that the disk is inserted correctly, and an arrow top left indicates the direction of insertion. The drive usually has a button that, when pressed, will spring the disk out at varying degrees of force. Some would barely make it out of the disk drive; others would shoot out at a fairly high speed. In a majority of drives, the ejection force is provided by the spring that holds the cover shut, and therefore the ejection speed is dependent on this spring. In PC-type machines, a floppy disk can be inserted or ejected manually at any time (evoking an error message or even lost data in some cases), as the drive is not continuously monitored for status and so programs can make assumptions that do not match actual status.
With Apple Macintosh computers, disk drives are continuously monitored by the OS; an inserted disk is automatically searched for content, and a disk is ejected only when the software agrees it should be. This kind of disk drive (starting with the slim "Twiggy" drives of the late Apple Lisa) does not have an eject button, but uses a motorized mechanism to eject disks; this action is triggered by the OS software (e.g., when the user drags the drive icon to the trash-can icon). Should this not work (as in the case of a power failure or drive malfunction), one can insert a straightened paper clip into a small hole at the drive's front, thereby forcing the disk to eject (similar to the mechanism found on CD/DVD drives). Some other computer designs (such as the Commodore Amiga) monitor for a new disk continuously but still have push-button eject mechanisms.
The 3-inch disk, widely used on Amstrad CPC machines, bears much similarity to the 3½-inch type, with some unique and somewhat curious features. One example is the rectangular-shaped plastic casing, almost taller than a 3½-inch disk, but narrower, and more than twice as thick, almost the size of a standard compact audio cassette. This made the disk look more like a greatly oversized present day memory card or a standard PC card notebook expansion card rather than a floppy disk. Despite the size, the actual 3-inch magnetic-coated disk occupied less than 50% of the space inside the casing, the rest being used by the complex protection and sealing mechanisms implemented on the disks. Such mechanisms were largely responsible for the thickness, length and high costs of the 3-inch disks. On the Amstrad machines the disks were typically flipped over to use both sides, as opposed to being truly double-sided. Double-sided mechanisms were available but rare.
Universal Serial Bus connectors on the back. These USB connectors let you attach everything from mice to printers to your computer quickly and easily. The operating system supports USB as well, so the installation of the device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer, USB devices are incredibly simple. We will look at USB ports from both a user and a technical standpoint, and you will learn why the USB system is so flexible and how it is able to support so many devices so easily. Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve: in the past, connecting devices to computers has been a real headache!
- Printers connected to parallel printer ports, and most computers only came with one. Things like Zip drives, which need a high-speed connection into the computer, would use the parallel port as well, often with limited success and not much speed.
- Modems used the serial port, but so did some printers and a variety of odd things like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they are very slow in most cases.
- Devices that needed faster connections came with their own cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of card slots is limited and you needed a Ph.D. to install the software for some of the cards.
The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer. Just about every peripheral made now comes in a USB version. A sample list of USB devices that you can buy today includes:
- Flight yokes
- Digital cameras
- Scientific data acquisition devices
- Video phones
- Storage devices such as Zip drives
- Network connections
In the next section, we'll look at the USB cables and connectors that allow your computer to communicate with these devices.
A parallel port is a type of interface found on computers (personal and otherwise) for connecting various peripherals. It is also known as a printer port or Centronics port. The IEEE 1284 standard defines the bi-directional version of the port.
Before the advent of USB, the parallel interface was adapted to access a number of peripheral devices other than printers. Probably among the earliest devices to use the parallel port were dongles, used as a hardware-key form of software copy protection. Zip drives and scanners were early implementations, followed by external modems, sound cards, webcams, gamepads, joysticks, external hard disk drives and CD-ROM drives. Adapters were available to run SCSI devices via the parallel port. Other devices such as EPROM programmers and hardware controllers could also be connected via the parallel port.
At the consumer level, the USB interface (and in some cases Ethernet) has effectively replaced the parallel printer port. Many manufacturers of personal computers and laptops consider parallel to be a legacy port and no longer include the parallel interface. USB-to-parallel adapters are available to use parallel-only printers with USB-only systems. However, due to the simplicity of its implementation, the parallel port is often used for interfacing with custom-made peripherals. In versions of Windows that did not use the Windows NT kernel (as well as DOS and some other operating systems), programs could access the parallel port hardware directly.
Keyboard, in computer science, a keypad device with buttons or keys that a user presses to enter data characters and commands into a computer. They are one of the fundamental pieces of personal computer (PC) hardware, along with the central processing unit (CPU), the monitor or screen, and the mouse or other cursor device.
The most common English-language key pattern for typewriters and keyboards is called QWERTY, after the layout of the first six letters in the top row of its keys (from left to right). In the late 1860s, American inventor and printer Christopher Sholes invented the modern form of the typewriter. Sholes created the QWERTY keyboard layout by separating commonly used letters so that typists would type more slowly and not jam their mechanical typewriters. Subsequent generations of typists have learned to type using QWERTY keyboards, prompting manufacturers to maintain this key orientation on typewriters.
Computer keyboards copied the QWERTY key layout and have followed the precedent set by typewriter manufacturers of keeping this convention. Modern keyboards connect with the computer CPU by cable or by infrared transmitter. When a key on the keyboard is pressed, a numeric code is sent to the keyboard's driver software and to the computer's operating system software. The driver translates this data into a specialized command that the computer's CPU and application programs understand. In this way, users may enter text, commands, numbers, or other data. The term character is generally reserved for letters, numbers, and punctuation, but may also include control codes, graphical symbols, mathematical symbols, and graphic images.
Almost all standard English-language keyboards have keys for each character of the American Standard Code for Information Interchange (ASCII) character set, as well as various function keys. Most computers and applications today use seven or eight data bits for each character. For example, ASCII code 65 is equal to the letter A. The function keys generate short, fixed sequences of character codes that instruct application programs running on the computer to perform certain actions. Often, keyboards also have directional buttons for moving the screen cursor, separate numeric pads for entering numeric and arithmetic data, and a switch for turning the computer on and off. Some keyboards, including most for laptop computers, also incorporate a trackball, mouse pad, or other cursor-directing device. No standard exists for positioning the function, numeric, and other buttons on a keyboard relative to the QWERTY and other typewriting keys. Thus layouts vary on keyboards.
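The ASCII mapping described above can be checked directly; in Python, ord() and chr() convert between characters and their codes:

```python
# ASCII code 65 corresponds to the letter 'A'.
print(ord("A"))   # 65: character to code
print(chr(65))    # A: code back to character
print(ord("a"))   # 97: lower-case letters have separate codes
```

Since ASCII needs only seven bits, it fits comfortably in the seven or eight data bits per character that most systems use.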
In the 1930s, American educators August Dvorak and William Dealey designed an alternative key set, the Dvorak layout, so that the letters that make up most words in the English language are in the middle row of keys and are easily reachable by a typist's fingers. Common letter combinations are also positioned so that they can be typed quickly. Most keyboards are arranged in rectangles, left to right around the QWERTY layout. Newer, innovative keyboard designs are more ergonomic in shape. These keyboards have separated banks of keys and are less likely to cause carpal tunnel syndrome, a disorder often caused by excessive typing on less ergonomic keyboards.
Most computer monitors use a cathode-ray tube (CRT) as the display device. A CRT is a glass tube that is narrow at one end and opens to a flat screen at the other end. The CRTs used for monitors have rectangular screens, but other types of CRTs may have circular or square screens. The narrow end of the CRT contains a single electron gun for a monochrome, or single-colour monitor, and three electron guns for a colour monitor—one electron gun for each of the three primary colours: red, green, and blue. The display screen is covered with tiny phosphor dots that emit light when struck by electrons from an electron gun.
Monochrome monitors have only one type of phosphor dot while colour monitors have three types of phosphor dots, each emitting red, green, or blue light. One red, one green, and one blue phosphor dot are grouped together into a single unit called a picture element, or pixel. A pixel is the smallest unit that can be displayed on the screen. Pixels are arranged together in rows and columns and are small enough that they appear connected and continuous to the eye.
Electronic circuitry within the monitor controls an electromagnet that scans and focuses electron beams onto the display screen, illuminating the pixels. Image intensity is controlled by the number of electrons that hit a particular pixel. The more electrons that hit a pixel, the more light the pixel emits. The pixels, illuminated by each pass of the beams, create images on the screen. The variety of colour and shading in an image is produced by carefully controlling the intensity of the electron beams hitting each of the dots that make up the pixels. The speed at which the electron beams repeat a single scan over the pixels is known as the refresh rate. Refresh rates are usually about 60 times per second.
Monochrome monitors display one colour for text and pictures, such as white, green, or amber, against a dark colour, such as black, for the background. Gray-scale monitors are a type of monochrome monitor that can display between 16 and 256 different shades of grey.
Manufacturers describe the quality of a monitor's display by dot pitch, which is the amount of space between the centres of adjacent pixels. Smaller dot pitches mean the pixels are more closely spaced and the monitor will yield sharper images. Most monitors have dot pitches that range from 0.22 mm (0.008 in) to 0.39 mm (0.015 in).
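Because dot pitch is a distance between pixel centres, it translates directly into a pixel density: divide 25.4 mm (one inch) by the pitch. A quick calculation using the pitch range quoted above:

```python
# Convert a monitor's dot pitch (mm between pixel centres) into an
# approximate pixel density in pixels per inch (1 inch = 25.4 mm).
def pixels_per_inch(dot_pitch_mm: float) -> float:
    return 25.4 / dot_pitch_mm

for pitch in (0.22, 0.39):
    print(f"{pitch} mm pitch -> about {pixels_per_inch(pitch):.0f} pixels per inch")
```

A smaller pitch yields a higher density and therefore a sharper image.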
The screen size of monitors is measured by the distance from one corner of the display to the diagonally opposite corner. A typical size is 38 cm (15 in), with most monitors ranging in size from 22.9 cm (9 in) to 53 cm (21 in). Standard monitors are wider than they are tall and are called landscape monitors. Monitors that have greater height than width are called portrait monitors.
The amount of detail, or resolution, that a monitor can display depends on the size of the screen, the dot pitch, and on the type of display adapter used. The display adapter is a circuit board that receives formatted information from the computer and then draws an image on the monitor, displaying the information to the user. Display adapters follow various standards governing the amount of resolution they can obtain. Most colour monitors are compatible with Video Graphics Array (VGA) standards, which are 640 by 480 pixels (640 pixels on each of 480 rows), or about 300,000 pixels. VGA yields 16 colours, but most modern monitors display far more colours and are considered high resolution in comparison. Super VGA (SVGA) monitors have 1024 by 768 pixels (about 800,000) and are capable of displaying more than 60,000 different colours. Some SVGA monitors can display more than 16 million different colours.
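The pixel counts quoted for VGA and SVGA follow directly from multiplying the horizontal and vertical resolutions:

```python
# Total pixels for the display standards mentioned above.
modes = {"VGA": (640, 480), "SVGA": (1024, 768)}
for name, (width, height) in modes.items():
    print(f"{name}: {width} x {height} = {width * height:,} pixels")
```

This gives 307,200 pixels for VGA (about 300,000) and 786,432 for SVGA (about 800,000).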
A monitor is one type of computer display, defined by its CRT screen. Other types of displays include flat, laptop computer screens that often use liquid-crystal displays (LCDs). Other thin, flat-screen monitors that do not employ CRTs are currently being developed.
Printer, a computer peripheral that puts text or a computer-generated image on paper or on another medium, such as a transparency. Printers can be categorized in any of several ways. The most common distinction is impact vs. non-impact. Impact printers physically strike the paper and are exemplified by pin dot-matrix printers and daisy-wheel printers; non-impact printers include every other type of print mechanism, including laser, ink-jet, and thermal printers. Other possible methods of categorizing printers include (but are not limited to) the following:
Print technology: Chief among these, with microcomputers, are pin dot-matrix, ink-jet, laser, thermal, and (although somewhat outdated) daisy-wheel or thimble printers. Pin dot-matrix printers can be further classified by the number of pins in the print head: 9, 18, 24, and so on.
Character formation: Fully formed characters made of continuous lines (for example, those produced by a daisy-wheel printer) vs. dot-matrix characters composed of patterns of dots (such as those produced by standard dot-matrix, ink-jet, and thermal printers). Laser printers, while technically dot-matrix, are generally considered to produce fully formed characters because their output is very clear and the dots are extremely small and closely spaced.
Method of transmission: parallel (byte-by-byte transmission) vs. serial (bit-by-bit transmission). These categories refer to the means by which output is sent to the printer rather than to any mechanical distinctions. Many printers are available in either serial or parallel versions, and still other printers offer both choices, yielding greater flexibility in installation options.
Method of printing: Character by character, line by line, or page by page. Character printers include standard dot-matrix, ink-jet, thermal, and daisy-wheel printers. Line printers include the band, chain, and drum printers that are commonly associated with large computer installations or networks. Page printers include the electro photographic printers, such as laser printers.
Print capability: Text-only vs. text-and-graphics. Text-only printers, including most daisy-wheel and thimble printers and some dot-matrix and laser printers, can reproduce only characters for which they have matching patterns, such as embossed type, or internal character maps. Text-and-graphics printers—dot-matrix, ink-jet, laser, and others—can reproduce all manner of images by "drawing" each as a pattern of dots.
Mouse, a common pointing device, popularized by its inclusion as standard equipment with the Apple Macintosh. With the rise in popularity of graphical user interfaces (GUIs) in MS-DOS, UNIX, and OS/2, use of mice is growing throughout the personal computer and workstation worlds. The basic features of a mouse are a casing with a flat bottom, designed to be gripped by one hand; one or more buttons on the top; a multidirectional detection device (usually a ball) on the bottom; and a cable connecting the mouse to the computer. See the illustration. By moving the mouse on a surface (such as a desk), the user typically controls an on-screen cursor. A mouse is a relative pointing device because there are no defined limits to the mouse's movement and because its placement on a surface does not map directly to a specific screen location. To select items or choose commands on the screen, the user presses one of the mouse's buttons, producing a "mouse click."
Types of mouse - bus mouse; mechanical mouse; optical mouse; optomechanical mouse; serial mouse; trackball.
Network Interface Card (NIC) with RJ-45 Ports
Network interface Card (NIC)
A Network Interface Card (NIC) is a hardware device that handles an interface to a computer network and allows a network-capable device to access that network. The NIC has a ROM chip that contains a unique number, the media access control (MAC) address, that is permanent. The MAC address identifies the device uniquely on the LAN. The NIC exists on both the 'Physical Layer' (Layer 1) and the 'Data Link Layer' (Layer 2) of the OSI model.
Sometimes the words 'controller' and 'card' are used interchangeably when talking about networking because the most common NIC is the network interface card. Although 'card' is more commonly used, it is less encompassing. The 'controller' may take the form of a network card that is installed inside a computer, or it may refer to an embedded component as part of a computer motherboard, a router, expansion card, printer interface or a USB device.
A MAC address is a 48-bit network hardware identifier that is permanently set on a ROM chip on the NIC to identify that device on the network. The first 24-bit field is called the Organizationally Unique Identifier (OUI) and is largely manufacturer-specific. Each OUI allows for 16,777,216 unique NIC addresses. Smaller manufacturers that do not need more than 4,096 unique NIC addresses may opt to purchase an Individual Address Block (IAB) instead. An IAB consists of the 24-bit OUI plus a 12-bit extension (taken from the 'potential' NIC portion of the MAC address).
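The OUI/device split described above can be illustrated with a short sketch (the address used here is an arbitrary example, not a real device):

```python
# Split a colon-separated MAC address into its 24-bit OUI (manufacturer)
# part and its 24-bit device-specific part.
def split_mac(mac: str):
    octets = mac.split(":")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:1A:2B:3C:4D:5E")  # hypothetical address
print("OUI:", oui, "Device:", device)

print("Addresses per OUI:", 2 ** 24)  # 16,777,216
print("Addresses per IAB:", 2 ** 12)  # 4,096
```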
Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. Normally it is safe to assume that no two network cards will share the same address, because card vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each card at the time of manufacture.
Whereas network cards used to be expansion cards that plug into a computer bus, the low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset or implemented via a low cost dedicated Ethernet chip, connected through the PCI (or the newest PCI Express) bus. A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built-in.
The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.
The 8P8C (8 Position 8 Contact, also backronymed as 8 position 8 conductor; often incorrectly called RJ45) is a modular connector commonly used to terminate twisted pair and multi conductor flat cable. These connectors are commonly used for Ethernet over twisted pair, Registered jacks and other telephone applications, RS-232 serial using the EIA/TIA 561 and Yost standards, and other applications involving unshielded twisted pair, shielded twisted pair, and multi conductor flat cable.
An 8P8C modular connector has two paired components: the male plug and the female jack, each with eight equally-spaced conducting channels. On the plug, these conductors are flat contacts positioned parallel with the connector body. Inside the jack, the conductors are suspended diagonally toward the insertion interface. When an 8P8C plug is mated with an 8P8C jack, the conductors meet and create an electrical connection. Spring tension in the jack's conductors ensures a good interface with the plug and allows for slight travel during insertion and removal.
Although commonly referred to as an RJ45 in the context of Ethernet and category 5 cables, it is technically incorrect to refer to a generic 8P8C connector as an RJ45. The registered jack (RJ) standard specifies a different mechanical interface and wiring scheme for a true RJ45 than the TIA/EIA-568-B scheme often used with modular connectors in Ethernet and telephone applications. 8P8C modular plugs and jacks look very similar to the plugs and jacks used for FCC's registered jack RJ45 variants, although the true and extremely uncommon RJ45 is not compatible with 8P8C modular connectors. It neither uses all eight conductors (only two of them for a pair of wires, plus two for a programming resistor), nor does it fit into an 8P8C jack, because the true RJ45 plug is "keyed".
Originally, there was only the true telephone RJ45. It is one of the many registered jacks, like RJ11, a standard from which it gets the "RJ" in its name. As a registered jack, true telephone RJ45 specifies both the physical connector and wiring pattern. The true telephone RJ45 uses a special, keyed 8P2C modular connector, with Pins 4 and 5 wired for tip and ring of a single telephone line and Pins 7 and 8 connected to a programming resistor. It was meant to be used with a high-speed modem, and is obsolete today.
Telephone installers who installed true telephone RJ45 jacks in the past were familiar with the inner workings which made it RJ45, but their clients saw only a hole in the wall of a particular shape, and came to understand RJ45 as the name for a hole of that shape. When they found similar-looking connectors being used in entirely non-telephone applications, usually connecting computers, they called these "RJ45", too. This was therefore the so-called computer "RJ45".
Compounding the problem was the fact that the physical connectors indicated by true telephone RJ45 are not even compatible with computer "RJ45" connectors. True telephone RJ45 connectors are a special variant in which only the middle two positions carry the telephone pair, while pins 7 and 8 connect a programming resistor. Computer "RJ45" is 8P8C - all eight conductors are always present. Furthermore, true telephone RJ45 involves a "keyed" variety of the 8P body, which means it may have an extra tab that a computer "RJ45" connector is unable to mate with.
Because true telephone RJ45 never saw wide usage and computer "RJ45" has become well known today, computer "RJ45" is almost always what a person is referring to when they say "RJ45". Electronics catalogs not specialized to the telephone industry advertise 8P8C modular connectors as "RJ45". Virtually all electronic equipment that uses an 8P8C connector (or possibly any 8P connector at all) will document it as an "RJ45" connector.
Rounding out the confusion in "RJ45" naming is the fact that some people intend for the term to encompass not just the connector shape and size, but the wiring standard for it described by TIA/EIA-568-B as well. So one might find "Here is the pin out of an RJ45 jack."
8P8C connectors are commonly used in computer networking and telephone applications, where the plug on each end is an 8P8C modular plug wired according to a TIA/EIA standard. Most network communications today are carried over Category 5e or Category 6 cable with an 8P8C modular plug crimped on each end. The 8P8C modular connector is also used for RS-232 serial interfaces according to the EIA/TIA-561 standard. This application is commonly used as a console interface on network equipment such as switches and routers. Other applications include other networking services such as ISDN and T1. In flood-wired environments the center (blue) pair is often used to carry telephony signals. Where so wired, the physical layout of the 8P8C modular jack allows for the insertion of an RJ11 plug in the center of the jack, provided the RJ11 plug is wired in true compliance with the U.S. telephony standards (RJ11) using the center pair. The formal approach to connecting telephony equipment is the insertion of a type-approved converter.
The remaining (brown) pair is increasingly used for Power over Ethernet. Legacy equipment may use just this pair; this conflicts with other equipment as manufacturers used to short circuit unused pairs to reduce signal crosstalk. Some routers/bridges/switches can be powered by the unused 4 lines — blues (+) and browns (-) — to carry current to the unit. There is now a standardized scheme for Power over Ethernet.
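The pair assignments mentioned above (blue pair in the center, brown pair for Power over Ethernet) follow the TIA/EIA-568-B layout, which can be sketched as a simple lookup table:

```python
# T568B pin-to-conductor assignment for an 8P8C connector.
T568B = {
    1: "white/orange", 2: "orange",
    3: "white/green",  6: "green",
    4: "blue",         5: "white/blue",
    7: "white/brown",  8: "brown",
}

blue_pair = sorted(p for p, colour in T568B.items() if "blue" in colour)
brown_pair = sorted(p for p, colour in T568B.items() if "brown" in colour)
print("Blue (center) pair:", blue_pair)  # pins 4 and 5
print("Brown pair:", brown_pair)         # pins 7 and 8
```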
Different manufacturers of 8P8C modular jacks arrange for the pins of the 8P8C modular connector jack to be linked to wire connectors (often IDC type terminals) that are in a different physical arrangement from that of other manufacturers: Thus, for example, if a technician is in the habit of connecting the white/orange wire to the "bottom right hand" IDC terminal, which links it to 8P8C modular connector pin 1, in jacks made by other manufacturers this terminal may instead connect to 8P8C modular connector pin 2 (or any other pin).
Ethernet hub or switch
A network hub or repeater hub is a device for connecting multiple twisted pair or fiber optic Ethernet devices together and thus making them act as a single network segment. Hubs work at the physical layer (layer 1) of the OSI model. The device is thus a form of multiport repeater. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if it detects a collision.
Hubs also often come with a BNC and/or AUI connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. The availability of low-priced network switches has largely rendered hubs obsolete but they are still seen in older installations and more specialized applications.
A network hub is a fairly unsophisticated broadcast device. Hubs do not manage any of the traffic that comes through them, and any packet entering any port is broadcast out on all other ports. Since every packet is being sent out through all other ports, packet collisions result—which greatly impedes the smooth flow of traffic.
The need for hosts to be able to detect collisions limits the number of hubs and the total size of the network. For 10 Mbit/s networks, up to 5 segments (4 hubs) are allowed between any two end stations. For 100 Mbit/s networks, the limit is reduced to 3 segments (2 hubs) between any two end stations, and even that is only allowed if the hubs are of the low delay variety. Some hubs have special (and generally manufacturer specific) stack ports allowing them to be combined in a way that allows more hubs than simple chaining through Ethernet cables, but even so, a large Fast Ethernet network is likely to require switches to avoid the chaining limits of hubs.
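The repeater limits described above can be captured in a small helper function (a deliberate simplification: it assumes low-delay Class II hubs for the 100 Mbit/s case and ignores finer timing rules):

```python
# Maximum number of repeater hubs allowed between any two end stations,
# per the segment limits described above (simplified model).
def max_hubs_between_stations(speed_mbps: int) -> int:
    # 10 Mbit/s: 5 segments / 4 hubs; 100 Mbit/s: 3 segments / 2 hubs.
    return 4 if speed_mbps <= 10 else 2

print(max_hubs_between_stations(10))   # 4 hubs (5 segments)
print(max_hubs_between_stations(100))  # 2 hubs (3 segments)
```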
Most hubs (intelligent hubs) detect typical problems, such as excessive collisions on individual ports, and partition the port, disconnecting it from the shared medium. Thus, hub-based Ethernet is generally more robust than coaxial cable-based Ethernet, where a misbehaving device can disable the entire collision domain. Even if not partitioned automatically, an intelligent hub makes troubleshooting easier because status lights can indicate the possible problem source or, as a last resort, devices can be disconnected from a hub one at a time much more easily than a coaxial cable. They also remove the need to troubleshoot faults on a huge cable with multiple taps.
Hubs classify as Layer 1 devices in the OSI model. At the physical layer, hubs can support little in the way of sophisticated networking. Hubs do not read any of the data passing through them and are not aware of their source or destination. Essentially, a hub simply receives incoming packets, possibly amplifies the electrical signal, and broadcasts these packets out to all devices on the network - including the one that originally sent the packet.
Technically speaking, three different types of hubs exist:
- Passive (a hub that does not need an external power source because it does not regenerate the signal, and therefore counts as part of the cable with respect to maximum cable lengths)
- Active (A hub which regenerates the signal and therefore needs an external power supply)
- Intelligent (A hub which provides error detection (e.g. excessive collisions) and also does what an active hub does)
Passive hubs do not amplify the electrical signal of incoming packets before broadcasting them out to the network. Active hubs, on the other hand, do perform this amplification, as does a different type of dedicated network device called a repeater. The less common term concentrator refers to a passive hub, while multiport repeater refers to an active hub.
Intelligent hubs add extra features to an active hub that are of particular importance to businesses. An intelligent hub typically is stackable (built in such a way that multiple units can be placed one on top of the other to conserve space). It also typically includes remote management capabilities via Simple Network Management Protocol (SNMP) and virtual LAN (VLAN) support.
Historically, the main reason for purchasing hubs rather than switches was their price. This has largely been eliminated by reductions in the price of switches, but hubs can still be useful in special circumstances:
- For inserting a protocol analyzer into a network connection, a hub is an alternative to a network tap or port mirroring.
- Some computer clusters require each member computer to receive all of the traffic going to the cluster. A hub will do this naturally; using a switch requires special configuration.
- When a switch is accessible for end users to make connections, for example, in a conference room, an inexperienced or careless user (or saboteur) can bring down the network by connecting two ports together, causing a loop. This can be prevented by using a hub, where a loop will only disrupt the other users on that hub, not the rest of the network. (It can also be prevented by buying switches that can detect and deal with loops, for example by implementing the Spanning Tree Protocol.)
- A hub with a 10BASE2 port can be used to connect devices that only support 10BASE2 to a modern network. The same goes for linking in an old thicknet network segment using an AUI port on a hub (individual devices that were intended for thicknet can be linked to modern Ethernet by using an AUI-10BASE-T transceiver).
How to Install a Network Interface Card
- Step 1
Turn off the computer and unplug it from the wall outlet. Place the computer on a worktable. Open the computer case by unscrewing the two or four screws located on the back of the computer. The exact number of screws will depend on what kind of case the computer has.
- Step 2
Examine the motherboard to find out which types of card slots are open for the installation of the network card. Typically, there are three types of interface slots available on motherboards: ISA, PCI, and AGP.
- Step 3
Check your network card for which type of slot it requires. Write down some of the information on the network card before installing. Write down the MAC address for this card. Unscrew the screw holding the plate over the unused slot on the motherboard and press the edge connector down into the empty socket on the motherboard.
- Step 4
Press firmly to make sure that the card seats well. Replace the screw to hold the card in place and close the computer case in the reverse order of how you opened it.
- Step 5
Plug the computer back into the wall outlet and restart the computer. Wait for Windows to boot and to find the new hardware. If this occurs, then respond to its inquiries for a driver by placing the driver disk in the drive and clicking "Yes."
- Step 6
Open the Control Panel and locate the Network Connections icon. Find the network adapter, right click and choose "Properties." Look for the MAC address of the card and compare it with the number taken from the card before installation. They should be the same.
- Step 7
Check the services running on this network adapter. You should have the Client for Microsoft Networks and TCP/IP protocol checked and installed. If these are not present, install them at this time.
- Step 8
Take the LAN cable and connect one end in the back of the computer. Use the cable socket that is on the back of the network card just installed. Have your IT department configure your workstation for the workgroup your computer is a member of. If you are not joining a workgroup but are accessing the Internet via broadband, follow the instructions from your lecturer.
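To double-check the MAC address noted down in Step 3 against what the operating system sees, a short Python sketch can read and format the machine's MAC (uuid.getnode() returns the 48-bit address as an integer, though it may fall back to a random value if no NIC is found):

```python
import uuid

# Read the host's 48-bit MAC address and format it as six hex octets.
node = uuid.getnode()
mac = ":".join(f"{(node >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))
print("MAC address:", mac)
```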
Software installation for the NIC
Then open the Control Panel and double-click the icon for adding new hardware to add the card. If asked, enter the input/output (I/O) port that you chose before. Reboot Windows, and then check the parameters once again, because it may not have accepted them. Then reboot again. Once you have finally restarted Windows, check the Control Panel. If the card appears with a yellow exclamation point, this means that there is a conflict, and you will have to change the IRQ.
How to install Windows XP
- Insert the WindowsXP CD into the CD-ROM drive.
- Restart the computer.
- When the message to Press any key to boot from CD... is displayed, quickly press any key. Setup begins.
- After Setup starts, several messages will flash across the bottom of the screen. These messages are only important under special circumstances, such as installing a particular hardware abstraction layer (HAL) or loading a SCSI (Small Computer System Interface) driver.
- Next, a screen appears that offers the following three options: Set up WindowsXP, Repair a WindowsXP installation, or Quit Setup. Press ENTER to select the first option.
- License Agreement appears next. Read the license agreement and follow the instructions to accept or reject the agreement. If your Windows CD is an upgrade CD, after accepting the agreement, you will be prompted to insert the CD of your previous operating system to verify that the previous version qualifies for upgrade to WindowsXP.
- If a screen appears showing an existing installation of WindowsXP, press ESC to continue installing a fresh copy of WindowsXP.
- At the next screen, you have the option of repartitioning your drive. It's a good idea to repartition if you want to merge several smaller partitions into one large one, or if you want to create several smaller partitions so that you can set up a multi boot configuration. If you want to repartition, follow the instructions to delete existing partitions, if needed, and then select unpartitioned space and press ENTER to proceed.
- Select the formatting method you would like to use, and then press ENTER. NTFS offers both enhanced formatting capabilities and security technologies.
- Setup will format the drive, copy initial Setup files, and restart the computer.
- After the computer restarts, you will again receive the message Press any key to boot from CD, but you should ignore it so that you do not interrupt the current installation process.
- After another restart, the next part of Setup will begin.
- On the Regional and Language Options page, follow the instructions to add language support or change language settings, if desired.
- On the Personalize Your Software page, type your name and the name of your company or organization (if applicable).
- On the Your Product Key page, type the 25-character product key that came with your copy of WindowsXP.
- On the Computer Name and Administrator Password page, make up a computer name (if your network administrator gave you a name to use, type that). Then make up a password for the Administrator account on your computer. Type it once, and then confirm it by typing it again.
- On the Date and Time Settings page, make any changes that are necessary.
- On the Networking Settings page, if it appears, select typical settings (unless you plan to manually configure your networking components).
- On the Workgroup or Computer Domain page, click Next. If you want to add your computer to a domain, select the second option and fill in the domain name. (If you do this, you will be prompted for a user name and password.)
- Next, while Setup copies files to your computer and completes a few other tasks, you'll see a series of screens that tell you about new features in WindowsXP.
- Finally, your computer will restart. Again, ignore the message to press any key. After Setup completes, eject the CD from the CD-ROM drive.
How to connect three computers by using switch
- Power up the switch. Connect all the PCs to the switch with standard network cables.
- Put all the computers in the same workgroup:
- On each PC, right-click "My Computer"
- Click "Properties"
- Go to the "Computer Name" tab
- Click "Change" at the bottom
- Under "Member of" put a tick next to "Workgroup"
- Type in a name for the workgroup (use the same name on each PC)
- Assign each PC a static IP address and subnet mask, for example:
- IP: 192.168.0.10
- Subnet Mask: 255.255.255.0
- IP: 192.168.0.11
- Subnet Mask: 255.255.255.0
- IP: 192.168.0.12
- Subnet Mask: 255.255.255.0
To do this, on each computer:
- Open Start > Control Panel > Network Connections
- Right-click "Local Area Connection"
- Under "This connection uses the following items" select "Internet Protocol (TCP/IP)" and click the "Properties" button
- Put a tick next to "Use the following IP Address" and type in the IP and subnet mask from above.
- There is no default gateway because your workgroup is not connected to a router. You also do not have a DNS server, so you will have to browse the workgroup by IP addresses and not by computer names.
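The three addresses above can be checked programmatically to confirm they all fall in the same /24 network implied by the 255.255.255.0 mask, which is what allows the PCs to see each other:

```python
import ipaddress

# All three workgroup PCs must sit in the same subnet to communicate.
network = ipaddress.ip_network("192.168.0.0/24")  # mask 255.255.255.0
hosts = ["192.168.0.10", "192.168.0.11", "192.168.0.12"]

for host in hosts:
    in_subnet = ipaddress.ip_address(host) in network
    print(host, "in", network, "->", in_subnet)
```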
Use the correct cables and make sure they are fitted properly.
You should see a green link light when each cable is connected to the switch.
Assign each PC an IP address, with 192.168.1.1 as the default gateway, for example:
- PC 1 - 192.168.1.2
- PC 2 - 192.168.1.3
- Subnet mask - 255.255.255.0
Then try to ping each computer from the others, and also ping the gateway.
If this does not work, check the switch; it might be faulty.
The advice would be to check that all the cables are the correct type (straight-through cables to connect the PCs to the switch) and to make sure that the switch's lights turn green after a minute or so. If the lights on the switch are a different colour, consult the manual that came with the switch to determine the fault.
If all lights indicate no problems, make sure that all the IP addresses on the PCs are in the same network and have the same subnet mask, for example:
COM 01 - 192.168.1.1 subnet mask - 255.255.255.0
COM 02 - 192.168.1.2 subnet mask - 255.255.255.0
COM 03 - 192.168.1.3 subnet mask - 255.255.255.0
If you are using a Windows operating system, right-click "My Computer" > Properties > "Computer Name" tab > click the "Change" button, and make sure the workgroup names are exactly the same on each PC.
How to Share Printer
- Go to Start and click Control Panel. Control Panel Window will appear, and then double click Printers and faxes.
- The Printers and Faxes window will appear. Right-click the printer you want to share, select "Sharing...", and choose "Share this printer".