A central processing unit (CPU), also known as a central processor unit, is the hardware within a computer system that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output (I/O) operations of the system. The term has been in use in the computer industry since at least the early 1960s. The form, design, and implementation of CPUs have changed over the course of their history, but their fundamental operation remains much the same. On large machines, CPUs require one or more printed circuit boards. In personal computers and small workstations, the CPU is housed in a single silicon chip called a microprocessor. Since the 1970s, the microprocessor class of CPUs has almost completely displaced all other CPU implementations. Modern CPUs are large-scale integrated circuits in packages typically smaller than four centimeters square, with hundreds of connecting pins. Two typical components of a CPU are the arithmetic logic unit (ALU), which performs arithmetic and logical operations, and the control unit (CU), which extracts instructions from memory, then decodes and executes them, calling on the ALU when necessary. Not all computational systems rely on a central processing unit. An array processor or vector processor has many parallel computing elements, with no single unit considered the “center”. In the distributed computing model, problems are solved by a distributed interconnected set of processors. (Himes, A. 2012)
Answer for question 1
Computers such as the ENIAC (Electronic Numerical Integrator And Computer) had to be physically rewired to carry out different operations, which caused these machines to be called “fixed-program computers.” Since the term “CPU” is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly’s ENIAC, but was initially omitted so that the machine could be completed sooner. On June 30, 1945, before ENIAC was even finished, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC (Electronic Discrete Variable Automatic Computer). It outlined the design of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types, and these instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC: the considerable time and effort required to reconfigure the computer to carry out a new task. With von Neumann’s design, the program, or software, that EDVAC ran could be changed simply by modifying the contents of the memory. (Himes, A. 2012)
Every computer design of the early 1950s was unique. There were no upward-compatible machines or computer architectures with multiple, differing implementations. Programs written for one machine would not run on any other kind, even on other models from the same company. This was not a major drawback at the time, because no large body of software yet existed to run on computers, so starting programming from scratch was not seen as a serious barrier. The design freedom of the era was very important, for designers were severely constrained by the cost of electronics and were only just beginning to explore how a computer could best be organized. Some of the basic features introduced during this period included index registers (on the Ferranti Mark 1), a return-address storing instruction (UNIVAC I), immediate operands (IBM 704), and the detection of invalid operations (IBM 650). (http://www.inetdaemon.com/tutorials/computers/hardware/cpu/ 2012)
By the end of the 1950s, commercial builders had developed factory-constructed, truck-deliverable computers. The most widely installed computer was the IBM 650, which used drum memory onto which programs were loaded using either paper tape or punched cards. Some very high-end machines also included core memory, which provided higher speeds. Hard disks were also starting to become more widely used. (http://www.webopedia.com/TERM/C/CPU.html 1970)
A computer is an automatic abacus, and the type of number system it uses affects the way it operates. In the early 1950s most computers were built for specific numerical processing tasks, and many machines used decimal numbers as their basic number system; that is, the mathematical functions of these machines worked in base-10 rather than base-2 as is common today. These were not merely binary coded decimal: most machines actually had ten vacuum tubes per digit in each register. (Himes, A. 2012)
As late as 1970, major computer languages were unable to standardize their numeric behavior because decimal computers had groups of users too large to alienate. Even when designers used the binary system, they still had many unusual ideas. Some used sign-magnitude arithmetic (−1 = 10001) or ones’ complement (−1 = 11110), rather than modern two’s complement arithmetic (−1 = 11111). Most computers used six-bit character sets, because these adequately encoded Hollerith cards. It was an important realization for designers of this period that the data word should be a multiple of the character size, so they began to build computers with 12-, 24- and 36-bit data words. (RMI Media Productions. 1979)
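The three signed-number encodings mentioned above can be illustrated with a short sketch (illustrative only, not from the source), showing how −1 comes out in a five-bit word under each scheme:

```python
# Encodings of signed integers in a 5-bit word, as discussed above.

def sign_magnitude(n, bits=5):
    """Sign bit followed by the magnitude of the number."""
    sign = '1' if n < 0 else '0'
    return sign + format(abs(n), f'0{bits - 1}b')

def ones_complement(n, bits=5):
    """Negative numbers flip every bit of the positive value."""
    if n >= 0:
        return format(n, f'0{bits}b')
    return format(~abs(n) & ((1 << bits) - 1), f'0{bits}b')

def twos_complement(n, bits=5):
    """Negative numbers wrap around modulo 2**bits."""
    return format(n & ((1 << bits) - 1), f'0{bits}b')

print(sign_magnitude(-1))   # 10001
print(ones_complement(-1))  # 11110
print(twos_complement(-1))  # 11111
```

The outputs match the three encodings of −1 quoted in the text, which is why two’s complement won out: arithmetic on it needs no special-casing of the sign.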
In contrast to early designs, contemporary CPUs, from the 1990s to the present day, incorporate new execution techniques that make the modern CPU faster, smaller and more efficient. One such technique is multi-threading. Conventional designs work best when the computer is running only a single application, yet nearly every modern operating system allows the user to run multiple applications at the same time. For the CPU to switch over and do work on another program requires expensive context switching. By contrast, multi-threaded CPUs can handle instructions from multiple applications at once.
To do this, such CPUs include several sets of registers. When a context switch occurs, the contents of the working registers are simply copied into one of a set of registers reserved for this purpose. Such designs often include thousands of registers instead of the hundreds found in a typical design. The drawback is that registers tend to be somewhat expensive in the chip space needed to implement them, and this chip space might otherwise be used for some other function. The second technique is multi-core. Multi-core CPUs are typically multiple CPU cores on the same die, connected to each other through a shared L2 or L3 cache, an on-die bus, or an on-die crossbar switch. All the CPU cores on the die share interconnect components with which to interface to the other processors and the rest of the system. These components may include a front-side bus interface, a memory controller to interface with DRAM, a cache-coherent link to other processors, and a non-coherent link to the southbridge and I/O devices. The terms multi-core and MPU (Micro-Processor Unit) have come into general usage for a single die that contains multiple CPU cores. A third technique involves the very long instruction word (VLIW) and Explicitly Parallel Instruction Computing (EPIC) approaches. VLIW refers to a processor architecture designed to exploit instruction-level parallelism (ILP). Whereas conventional processors mostly allow programs only to specify instructions to be executed one after another, a VLIW processor allows programs to explicitly specify instructions to be executed at the same time (i.e. in parallel). This type of processor architecture is intended to allow higher performance without the inherent complexity of some other approaches. Intel’s Itanium chip is based on what Intel calls an EPIC design, which supposedly provides the VLIW advantage of increased instruction throughput.
However, it avoids some of the problems of scaling and complexity by explicitly providing, in each “bundle” of instructions, information concerning their dependencies. This information is calculated by the compiler, as it would be in a VLIW design. The early versions were also backward-compatible with existing x86 software by means of an on-chip emulation mode. Integer performance was disappointing, and despite improvements, sales in volume markets continued to be low.
One of the earliest CPUs was that of the UNIVAC I (Universal Automatic Computer I) in 1951, with a speed of 0.0008 MIPS (million instructions per second). As of 2011, one of the fastest personal computer CPUs was the Intel Core i7 Extreme Edition 3960X, with a staggering speed of 53.3 MIPS. Compared with an early CPU like the UNIVAC I, the latest CPU is more than sixty-six thousand times faster. (Mostafa, E. and Hesham. 2005)
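The speed ratio can be checked with a two-line calculation (the figures are the ones quoted in the text, assumed here to share the same unit so that the ratio is unit-independent):

```python
# Speed comparison using the figures quoted in the text (same units assumed).
univac_i = 0.0008   # UNIVAC I (1951)
core_i7 = 53.3      # Intel Core i7 Extreme Edition 3960X (2011)

ratio = core_i7 / univac_i
print(f"The Core i7 is roughly {ratio:,.0f}x faster")  # roughly 66,625x faster
```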
Conclusion for question 1
The central processing unit (CPU) is a very important component in a computer because it carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output (I/O) operations of the system. That is why the CPU is also known as the brain of the computer. The CPU has a rich history dating back to 1945, before the term CPU was even in use, and its design and implementation have improved tremendously over the years, becoming ever more powerful and efficient. CPUs have been used in many types of computers, from personal computers to supercomputers.
Introduction for question 2
In computer architecture, a bus is a subsystem that transfers data between components within a computer, or between computers. Early computer buses were parallel electrical wires with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrical parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB. Buses operate in units of cycles, messages and transactions. Regarding cycles, a message takes a certain number of clock cycles to be delivered from sender to receiver over the bus. Messages are logical units of information; for instance, a write message contains an address, control signals and the write data. A transaction consists of a sequence of messages that together form a higher-level operation; for instance, a memory read requires a memory read message and a reply carrying the requested data. (http://www.webopedia.com/TERM/B/bus.html 2007)
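The cycle/message/transaction hierarchy described above can be modelled with a toy sketch (illustrative only; the class and field names are this sketch’s own, not from the source): a transaction is a sequence of messages, and each message takes some number of bus cycles to deliver.

```python
# Toy model of bus cycles, messages and transactions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Message:
    kind: str                    # e.g. "READ", "WRITE", "REPLY"
    address: int                 # where the data should go
    data: Optional[int] = None   # payload, if the message carries one
    cycles: int = 1              # clock cycles needed to deliver it

@dataclass
class Transaction:
    messages: List[Message] = field(default_factory=list)

    def total_cycles(self) -> int:
        """A transaction costs the sum of its messages' delivery cycles."""
        return sum(m.cycles for m in self.messages)

# A memory read: a read request followed by a reply carrying the data.
read = Transaction([Message("READ", 0x1000, cycles=1),
                    Message("REPLY", 0x1000, data=42, cycles=2)])
print(read.total_cycles())  # 3
```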
Answer for question 2
Buses can be parallel buses, which carry data words in parallel on many wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in 1-Wire and UNI/O. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to work around. One partial solution to this problem is to double pump the bus. Often, a serial bus can be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs. Traditional computer buses were bundles of wire that attached computer memory and peripherals. Anecdotally termed the “digit trunk”, they were named after electrical power buses, or busbars. Almost always, there was one bus for memory, and one or more separate buses for peripherals, accessed by separate instructions with completely different timings and protocols. (Null, L., & Lobur, J. 2006)
One of the first complications was the use of interrupts. Early computer programs performed I/O by waiting in a loop for the peripheral to become ready. This wasted time for programs that had other tasks to perform; moreover, if a program attempted to perform those other tasks, it might take too long to check again, resulting in loss of data. Engineers therefore arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others. (Lochan, R. and Panigrahy. 2010)
High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus. IBM introduced these on the IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and using interrupts only when necessary. This greatly reduced CPU load and provided excellent overall system performance. To provide modularity, memory and I/O buses can be combined into a unified system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases all of them. Later computers began to share memory common to several CPUs, and access to this memory bus had to be prioritized as well. The simple way to prioritize interrupts or bus access was with a daisy chain: signals pass through the bus in physical or logical order, eliminating the need for complex scheduling. (Null, L., & Lobur, J. 2006)
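The daisy-chain arbitration just described can be sketched in a few lines (a simplified model, not from the source): the grant signal propagates down the chain in physical order, so the first requesting device consumes it, and no scheduler is needed.

```python
# Minimal sketch of daisy-chain bus arbitration.

def daisy_chain_grant(requests):
    """requests[i] is True if device i is asserting a bus request.
    Returns the index of the device that receives the grant, or None."""
    for i, requesting in enumerate(requests):
        if requesting:
            return i   # grant is consumed here and not passed further
    return None        # grant propagates off the end of the chain unused

print(daisy_chain_grant([False, True, True]))  # 1: nearest requester wins
```

The design trade-off is visible in the sketch: priority is fixed by physical position, which is simple and cheap, but a device far down the chain can be starved by busier devices nearer the arbiter.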
A system bus is a single computer bus that connects the major components of a computer system. The technique was developed to reduce costs and improve modularity. It combines the functions of a data bus to carry information, an address bus to determine where it should be sent, and a control bus to determine its operation. Every mainboard has a set of wires running across it that interconnect all the devices and chips that are plugged into it; these wires are collectively known as the bus, and the number of wires determines how wide the bus is. A data bus is a computer subsystem that allows data to be transferred from one component to another on a motherboard or system board, or between two computers. This can include transferring data to and from the memory, or from the central processing unit (CPU) to other components. Each data bus is designed to handle a certain number of bits of data at a time, and the amount of data a data bus can handle is called its bandwidth. The data bus typically consists of 8, 16, or 32 parallel signal lines, and these lines are bidirectional. Many devices in a system will have their outputs connected to the data bus, but only one device at a time will have its outputs enabled. Any device connected to the data bus must therefore have three-state outputs, so that its outputs can be disabled when it is not being used to put data on the bus. An address bus is a computer bus architecture used to transfer data between devices that are identified by the hardware address of the physical memory (the physical address), which is stored in the form of binary numbers to enable the data bus to access memory storage. The address bus is used by the CPU or a direct memory access (DMA)-enabled device to locate the physical address at which to issue read/write commands. All address buses are read and written by the CPU or DMA in the form of bits. The address bus is part of the system bus architecture, which was developed to reduce costs and improve modular integration. (Ram, B. 2007)
Nevertheless, most modern computers use a variety of separate buses for specific tasks. A typical computer contains a system bus, which connects the major components of the computer system and has three main elements: the address bus, the data bus and the control bus. An address bus is measured by the amount of memory a system can address. A system with a 32-bit address bus can address 4 gigabytes of memory space, while more advanced computers use a 64-bit address bus, with a supporting operating system able to handle 16 exabytes of memory locations, which is virtually unlimited. A control bus is a computer bus that is used by the CPU to communicate with devices contained within the computer. This occurs through physical connections such as cables or printed circuits. The CPU transmits a variety of control signals to components and devices, and devices likewise use the control bus to send control signals to the CPU. One of the main objectives of a bus is to minimize the lines needed for communication: a single bus allows communication among devices over one data channel. The control bus is bidirectional and assists the CPU in synchronizing control signals to internal devices and external components. It comprises interrupt lines, byte enable lines, read/write signals and status lines. Communication between the CPU and the control bus is necessary for an efficient and functional system; without the control bus, the CPU could not determine whether the system is receiving or sending data. It is the control bus that determines which way the write and read information needs to flow. The control bus contains a control line for write instructions and a control line for read instructions. When the CPU writes data to main memory, it transmits a signal to the write command line; it likewise transmits a signal to the read command line when it needs to read.
This signal allows the CPU to receive data from, or transmit data to, main memory. (Ram, B. 2007)
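The relationship between address-bus width and addressable memory stated above follows from a simple power of two, since each extra address line doubles the number of distinct byte addresses:

```python
# Addressable memory for a given address-bus width (byte-addressed memory).
def addressable_bytes(width_bits):
    """Each of the 2**width_bits distinct addresses selects one byte."""
    return 2 ** width_bits

# 32 address lines reach 4 GiB; 64 lines reach 16 EiB.
print(addressable_bytes(32) // 2**30)  # 4  (gigabytes)
print(addressable_bytes(64) // 2**60)  # 16 (exabytes)
```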
Conclusion for question 2
The bus is a very important component in computer architecture: it is a subsystem that transfers data between components within a computer, or between computers. A system bus is a single computer bus that connects the major components of a computer system, a technique developed to reduce costs and improve modularity. It combines the functions of a data bus to carry information, an address bus to determine where it should be sent, and a control bus to determine its operation. One of the main objectives of a bus is to minimize the lines needed for communication.