Microprocessors

ABSTRACT

MICROPROCESSOR

Microprocessors have evolved dramatically over the twelve years of their development through the use of ever larger numbers of devices on single chips. Word widths have increased, along with microprogramming capabilities, addressing capabilities, and trends toward single-chip computers. One year after the introduction of 4-bit processors came the 8-bit processor, and by 1974 16-bit processors had emerged as the leading edge of the technology. The article compares the architectures and operation of the TI 9900, Intel 8086, Zilog Z8000, Motorola 68000, and NSC 16032 16-bit microprocessors. The 32-bit processor was introduced in 1981. General trends in architecture, technology, principles of operation, register organization, instruction sets, memory organization, and performance are examined in the Bellmac-32A chip, the HP 32-bit processor, and the iAPX 432 processor chip set. With the development of advanced processors have come improvements and evolution in operating systems and processor communications. Multiprocessing and special-purpose processors have also advanced computing technology. The processor technologies are illustrated by diagrams and compared in tables.

Supercomputer

A supercomputer is a computer that performs tasks at speeds far above those of other computers. With the computing world constantly changing, it should come as no surprise that most supercomputers bear their superlative titles for only a few years at best. Computer programmers are fond of saying that today's supercomputer will become tomorrow's ordinary computer; the laptop you are reading this article on is probably more powerful than most historic supercomputers.

INTRODUCTION

Microprocessor

A microprocessor incorporates most or all of the functions of a central processing unit (CPU) on a single integrated circuit. The first microprocessors emerged in the early 1970s and were used in electronic calculators, performing binary-coded decimal arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed rather quickly. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers in the mid-1970s.

Computer processors were for a long period constructed out of small- and medium-scale ICs containing the equivalent of a few to a few hundred transistors. The integration of the whole CPU onto a single chip therefore greatly reduced the cost of processing capacity. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete, with one or more microprocessors serving as the processing element in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

8-bit designs

The 4004 was followed in 1972 by the 8008, the first 8-bit microprocessor. These processors were the precursors to the very successful Intel 8080 (1974), the Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in 1974, and the similar MOS Technology 6502 followed in 1975, designed largely by the same people. The 6502 rivaled the Z80 in popularity during the 1980s.

Low overall cost, small packaging, simple computer-bus requirements, and sometimes the integration of circuitry otherwise provided by external hardware (the Z80, for example, had a built-in memory refresh) allowed the home computer "revolution" to accelerate sharply in the early 1980s, eventually delivering such inexpensive machines as the Sinclair ZX81, which sold for US$99.

The Western Design Center, Inc. introduced the CMOS 65C02 in 1983 and licensed the design to several firms. It was used as the CPU in the Apple IIc and IIe personal computers, as well as in implantable-grade medical pacemakers and defibrillators and in automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor intellectual property providers in the 1990s.

Motorola introduced the MC6809 in 1978, an ambitious and well-thought-through 8-bit design that was source compatible with the 6800 and implemented using purely hard-wired logic. Subsequent 16-bit microprocessors typically used microcode to some extent, as design requirements were becoming too complex for hard-wired logic alone.

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.

A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (also known as the CDP1802 or RCA COSMAC), introduced in 1976, which was used in NASA's Voyager and Viking space probes of the 1970s and on board the Galileo probe to Jupiter (launched 1989, arrived 1995). The COSMAC was the first microprocessor to implement CMOS technology. The CDP1802 was used because it could be run at very low power and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the 1802 is said to be the first radiation-hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Voyager, Viking, and Galileo spacecraft use minimal electric power during long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.

16-bit designs

The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. During the same year, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.

Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation in the LSI-11 OEM board set and the packaged PDP-11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were launched in the 1975 to 1976 timeframe.

The first single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI 990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080: it had the full TI 990 16-bit instruction set, used a plastic 40-pin package, and moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.

The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.

Intel followed a different path; having no minicomputers to emulate, it instead "upsized" its 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 line, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up the 8086 and 8088, Intel released the 80186, the 80286, and, in 1985, the 32-bit 80386, cementing its PC market dominance with the processor family's backwards compatibility.

The integrated microprocessor memory management unit was developed by Childs et al. of Intel, and awarded US patent number 4,442,484.

Supercomputers

A supercomputer is a computer at the front line of current processing capacity, particularly in speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing from 1985 to 1990. In the 1980s a large number of competitors entered the market, paralleling the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s 'supercomputer market crash'.

Today supercomputers are typically one-of-a-kind custom designs produced by 'traditional' companies such as Cray, IBM, and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. As of July 2009, the IBM Roadrunner, located at Los Alamos National Laboratory, was the fastest supercomputer in the world.

The term supercomputer itself is rather fluid, and today's supercomputer becomes tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at lower prices to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard, with typical numbers of processors in the range of 4 to 16. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of 'ordinary' CPUs, some being off-the-shelf units and others being custom designs.

Generally, parallel designs are based on 'off the shelf' server-class microprocessors such as the PowerPC, Xeon, and Opteron, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputers have certain distinctive features. Unlike conventional computers, they usually have more than one central processing unit, each of which contains circuits for interpreting program instructions and executing arithmetic and logic operations in proper sequence. The use of several CPUs to achieve high computational rates is necessitated by the physical limits of circuit technology. Electronic signals cannot travel faster than the speed of light, which thus constitutes a fundamental speed limit for signal transmission and circuit switching. This limit has almost been reached, owing to miniaturization of circuit components, dramatic reduction in the length of wires connecting circuit boards, and innovation in cooling techniques (e.g., in various supercomputer systems, processor and memory circuits are immersed in a cryogenic fluid to achieve the low temperatures at which they operate fastest). Rapid retrieval of stored data and instructions is required to support the extremely high computational speed of CPUs. Therefore, most supercomputers have a very large storage capacity as well as a very fast input/output capability.

Still another distinguishing characteristic of supercomputers is their use of vector arithmetic. They are able to operate on pairs of lists of numbers rather than on mere pairs of numbers. For example, a typical supercomputer can multiply a list of hourly wage rates for a group of factory workers by a list of hours worked by members of that group to produce a list of dollars earned by each worker in roughly the same time that it takes a regular computer to calculate the amount earned by just one worker.
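As a concrete illustration, here is a minimal C sketch of the wage example above; the payroll numbers are invented. On a conventional CPU the loop below performs one multiply per worker, whereas a vector processor would execute the whole loop as a handful of vector instructions operating on entire arrays at once.

#include <stdio.h>

#define WORKERS 5

int main(void)
{
    /* Hypothetical payroll data, for illustration only. */
    double rate[WORKERS]  = {12.50, 14.00, 11.25, 15.75, 13.00};
    double hours[WORKERS] = {40.0, 38.5, 42.0, 40.0, 36.0};
    double pay[WORKERS];

    /* A vector processor performs this elementwise multiply on
     * whole arrays at once instead of one element at a time. */
    for (int i = 0; i < WORKERS; i++)
        pay[i] = rate[i] * hours[i];

    for (int i = 0; i < WORKERS; i++)
        printf("worker %d earned $%.2f\n", i, pay[i]);
    return 0;
}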

Hardware and software design

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through innovative designs that allow them to perform many tasks in parallel, as well as careful detail engineering. They tend to be specialized for certain types of computation, usually numerical calculation, and perform badly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed for high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs dedicate great effort to eliminating software serialization and to using hardware to address the remaining bottlenecks.
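To see why serialization matters, here is a minimal C sketch of Amdahl's law with invented numbers: if a fraction p of a program parallelizes perfectly over n processors while the rest stays serial, the overall speedup is bounded by 1 / ((1 - p) + p / n).

#include <stdio.h>

/* Amdahl's law: best-case speedup on n processors when a
 * fraction p of the work is parallelizable. */
static double amdahl(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* Even 95% parallel code caps out near 20x on 1024 CPUs. */
    printf("p = 0.95, n = 1024: speedup = %.1f\n", amdahl(0.95, 1024.0));
    printf("p = 0.99, n = 1024: speedup = %.1f\n", amdahl(0.99, 1024.0));
    return 0;
}

With 95 percent parallel code the speedup is only about 19.6 even on 1024 processors, which is why supercomputer designers work so hard to shrink the serial fraction.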

Supercomputer challenges and technologies

A supercomputer generates huge amounts of heat, and cooling most supercomputers is a major HVAC problem.

Information cannot move faster than the speed of light between two parts of a supercomputer, and light travels only about 30 cm per nanosecond, so a machine spread over a large space necessarily has latencies between its components measured in tens of nanoseconds at least. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1 to 5 microseconds to send a message between CPUs are typical.

Supercomputers consume and produce massive amounts of data in a very short period of time. In the words of Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored or retrieved correctly.

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialized high-performance applications. They have also trickled down to the mass market in DSP architectures and in SIMD processing instructions for general-purpose computers, as in the sketch below.
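As a rough sketch of such SIMD instructions, the following C fragment uses x86 SSE intrinsics (one instruction multiplies four single-precision floats at once); it assumes an SSE-capable CPU and a compiler flag such as gcc -msse.

#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);           /* load four floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(c, _mm_mul_ps(va, vb));  /* four products in one op */

    for (int i = 0; i < 4; i++)
        printf("c[%d] = %.1f\n", i, c[i]);
    return 0;
}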

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraflops. The applications to which this power could be applied were once limited by the special-purpose nature of early video processing. As video processing has become more complicated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: general-purpose computing on graphics processing units (GPGPU).

Supercomputer operating systems, today most often variants of Linux, are at least as complex as those for smaller machines. Historically, their user interfaces were less developed, as the OS developers had limited programming resources to spend on non-essential parts of the OS. These computers, often priced at millions of dollars, are sold to a very small market, and the R&D budget for the OS was often limited. The advent of Unix and Linux allowed the reuse of conventional desktop software and user interfaces.

This has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as AMD and NVIDIA, which have been able to produce cheap, feature-rich, high-performance, and innovative products thanks to the vast number of consumers driving their R&D.

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance. For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems; the Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. In the future, the highest-performance systems are likely to use a variant of Linux, but with incompatible system-unique features (especially for the highest-end systems at secure facilities).

Programming

The parallel architectures of supercomputers often dictate the use of particular programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI are used for loosely coupled clusters and OpenMP for tightly coordinated shared-memory machines. Significant effort is necessary to optimize a program for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.
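As a minimal sketch of this message-passing style, the following C program uses the standard MPI API (it assumes an MPI implementation such as Open MPI or MPICH is installed): each rank computes a partial result and rank 0 collects the total over the interconnect.

/* Build: mpicc sum.c -o sum     Run: mpirun -np 4 ./sum */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each node works on its own slice of the problem... */
    double partial = (double)(rank + 1);   /* stand-in for real work */
    double total = 0.0;

    /* ...then the interconnect combines the partial results. */
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}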

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open-source software solutions such as Beowulf, Warewulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technologies like Zeroconf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for particular software, such as Apple's Shake compositing application. An easy programming idiom for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open-source community, which often creates disruptive technologies in this area.

Modern supercomputer architecture

[Figure: the IBM Roadrunner at LANL, and the share of CPU architectures in the Top500 rankings between 1993 and 2009; the x86 family includes x86-64.]

Supercomputers today often have a similar top-level structural design consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary mainly in the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:

-- A cluster is a set of computers that are highly interconnected via a high-speed network and switching fabric. Each computer runs under a separate instance of an operating system (OS).

-- A multiprocessing computer is a computer operating under a single OS and using more than one CPU, wherein the application-level software is indifferent to the number of processors. The processors share tasks using symmetric multiprocessing (SMP) and non-uniform memory access (NUMA); see the sketch after this list.

-- A SIMD processor executes the same instruction on more than one set of data at the same time. The processor may be a general-purpose commodity processor or a special-purpose vector processor, and may be a high-performance or a low-power design. As of 2007, such processors execute several SIMD instructions per nanosecond.
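To make the shared-memory case concrete, here is a minimal OpenMP sketch in C (build with, e.g., gcc -fopenmp): the loop is written once, and the runtime splits its iterations across however many CPUs the machine has, so the application code is indeed indifferent to the processor count.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];
    double dot = 0.0;

    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* The OpenMP runtime distributes these iterations over all
     * available cores; the source is unchanged on 1 CPU or 64. */
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];

    printf("dot = %g (up to %d threads)\n", dot, omp_get_max_threads());
    return 0;
}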

As of November 2009, the fastest supercomputer in the world was the Cray XT5 Jaguar system at the National Center for Computational Sciences, with more than 19,000 computers and 224,000 processing elements based on standard AMD processors. The fastest heterogeneous machine was the IBM Roadrunner, a cluster of 3,240 computers, each with 40 processing cores, including both AMD and Cell processors. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.

In February 2009, IBM also announced work on "Sequoia," which is expected to be a 20-petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2012.

Moore's law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a ten-year-old supercomputer, and the design concepts that allowed past supercomputers to outperform contemporary desktop machines have now been incorporated into commodity PCs. In addition, the costs of chip development and production make it uneconomical to design custom chips for a small run, favoring mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars. Supercomputing is also rising in density, allowing desktop supercomputers to become available and offering, in less than a desktop footprint, the computing power that in 1998 required a large room.

In addition, many problems carried out by supercomputers are particularly suitable for parallelization, in particular fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.

Common uses

Supercomputers are used for particularly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulations of airplanes in wind tunnels and of the detonation of nuclear weapons), and research into nuclear fusion. A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires near-unlimited computing resources.

Significant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time; often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.
