The Electronic Numerical Integrator And Computer (ENIAC)


The abbreviation ENIAC stands for Electronic Numerical Integrator And Computer [3]. The ENIAC, designed at the University of Pennsylvania, was the first large-scale general-purpose electronic computer. The proposal originated with John William Mauchly in 1942 and was given more thought in early 1943 with the help of J. Presper Eckert, one of Professor Mauchly's graduates [3]. US Army officers sponsored '$500,000 during World War II' because they needed a computer which could calculate trajectory tables for their new weapons [1]. The ENIAC managed to solve these calculations '5,000 times faster than a human using a calculator'. It was even '50 times faster than the analog differential analyzer', of which the army at that time had two [1]. In fact, it managed to solve a trajectory in '25s, while the shell itself took 30s' to reach its target [1], marking the first time a supercomputer calculated a non-linear real-time process in less than its real time. The ENIAC, which consisted of '18,000 vacuum tubes' and operated at '100,000 pulses per second', was inaugurated on 'February 15, 1946' [1]. The ENIAC weighed 30 tons and occupied 1,500 square feet of floor space. During operation it consumed 140 kilowatts of power and was capable of 5,000 additions per second. [3]

Electrical engineers in the United States showed a lot of interest in developing large-scale computers. One of the most used instruments in the world is probably the AC calculating board which, although large-scale, is not actually a general-purpose machine. Another device of this sort is the differential analyzer, which originated in the electrical engineering department of the Massachusetts Institute of Technology. This large-scale device's main objective was to solve sets of ordinary differential equations. Also of importance were the groups of business machines, such as those of the International Business Machines Corporation, whose devices were used for addition, subtraction, multiplication, recording on punch cards and transferring these punch cards from one device to another. [1]

J. Presper Eckert was the Director of the ENIAC project, which began on May 30, 1943, and John William Mauchly was his chief consultant. Eckert was an electronics expert, advising Mauchly what the expected output should be and how to design jobs to complete in half of the time allowed by the clock cycle. The project provided work for ten engineers and also for the women who wired the circuits accordingly. Part of this supercomputer was retrieved in 1968 by Arthur W. Burks from a warehouse where the ENIAC had been left. In order to remove the green oxide that coated its connections, the ENIAC was taken to a carwash for steam cleaning and sand blasting. As soon as it was reassembled it powered up and ran immediately. This portion of the ENIAC remained in the electrical engineering and computer science building until 1989. The ENIAC, apart from fulfilling its purpose of producing results during wartime, also pointed the way towards improvements in future designs [2].

Terms related to the ENIAC

Large-Scale: Large-scale does not actually mean large in physical size, although these types of computers require a considerably large room to be assembled in. Comparing the ENIAC to today's large-scale computers, one finds that the ENIAC needed much more room than equivalent large-scale computers built with today's technology. In fact, one of the main aims when designing large-scale computers is to reduce physical size and complexity. In this context, large-scale means that the machine can handle problems of rather large magnitude. A common desk calculator is a small-scale device, whilst the ENIAC can perform thousands of mathematical operations without any human intervention, therefore classifying it as a large-scale device. The ENIAC becomes useful when the particular problem at hand requires a large number of computational repetitions using large numbers. Examples of such tasks are the solution of a differential equation or the preparation of mathematical tables [2].

General-purpose: A machine like the ENIAC is classified as general-purpose since it can handle multiple types of different operations. A desk calculator is likewise a general-purpose small-scale machine, while an AC calculating board is a special-purpose large-scale calculator [2]. When the ENIAC was completed in 1946 the war was over, and its first task was a series of complex calculations used to determine the feasibility of the hydrogen bomb. The fact that the ENIAC was built for one purpose and used for another shows its character as a general-purpose machine.

Continuous (analogue) versus Digital: The differential analyzer is an analogue device; the angular displacements of its rotating shafts or other elements give direct results at each instant. Continuous variation on a desk calculator is impossible because adjacent keys in a column differ by a unit and it is impossible to go between these values. A digital device can nonetheless differentiate or integrate using extremely small intervals of the independent variable. Although this variable will not change uniformly, but in steps, these steps can be made so small that the overall errors are small as well. The ENIAC can handle numerical quantities of 10 significant figures and, with a small modification, up to 20 significant figures. On the other hand, analogue machines yield at best five significant figures, and increasing this requires substantial design changes [2].

Electronic versus Mechanical: An electronic device is one in which all the calculations and control operations are performed inside the machine by electronic circuits. "The ENIAC is the only electronic large-scale general-purpose computer now in operation." The use of electron tube circuits gave rise to the word electronic being used [2].

Amplitude versus Step Mechanisms: This term refers to the internal operations of calculators, not to continuous or digital machines. If the result of a calculation is indicated by the amplitude of some quantity, the device is classified as an amplitude mechanism. On the other hand, a step mechanism is one characterized by an on-off mechanism. The ENIAC's basic elementary circuit is a pair of tubes arranged so that only one can be conducting at any instant: the first tube is conducting if the second tube is not [2].

Synchronous versus Sequential: A desk calculator has sequential operation because as soon as an arithmetic operation is complete a result is given for that specific operation. If an operation cannot begin until an integer multiple of some fixed time interval after the previous operation, the device is synchronous. The ENIAC is a synchronous device in which operations are controlled by a group of pulses repeated every 200 microseconds. An operation cannot start except at the beginning of one of these 200-microsecond intervals [2].

Series versus Parallel Operation: Parallel operation occurs when two or more arithmetic operations can be carried out simultaneously. If a large-scale machine has series operation, only one arithmetic process can be carried out at any instant. The ENIAC can handle parallel operations, but because of the high speed of electronic machines the trend is moving more towards series operation [2].

Decimal versus Binary: The ENIAC operates on numbers in the way they occur in real life, in other words in decimal. But the on-off characteristics of relays are represented with 1s and 0s, generally with 1 meaning on and 0 meaning off. This form of representation is known as the binary system. So when a number is input in decimal form, a binary machine performs an operation which transforms it into binary notation. During operation it is used in its binary form and only translated back to decimal form before or after recording [2].
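The decimal-to-binary translation described above can be sketched in a few lines. This is a hypothetical illustration of the repeated-division method, not ENIAC circuitry:

```python
def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2, collecting remainders (least significant first)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def binary_to_decimal(bits: str) -> int:
    """Each 1 contributes a power of two according to its position."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(decimal_to_binary(13))       # 1101
print(binary_to_decimal("1101"))   # 13
```

Translating back and forth this way is exactly why decimal input can be stored and manipulated internally in binary without the user ever seeing the 1s and 0s.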

Machine components

Arithmetic component

The ENIAC's arithmetic component consisted of 20 accumulators which were used for addition or subtraction; it also had a multiplier, a divider/square-rooter, and three function tables which could hold functions that were called during the operation of a solution [2]. Today's arithmetic component has changed into what is commonly known as the Arithmetic and Logic Unit (ALU). Its main function is still to perform arithmetic together with logical operations. The ALU is based on electronic components which operate by use of simple Boolean logic operations [3]. This will be discussed in further detail when we talk about the ENIAC's accumulator [3].
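How an ALU builds addition out of simple Boolean operations can be shown with a full adder. This is an illustrative sketch of the textbook circuit, not any particular ALU's design:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built only from Boolean operations."""
    s = a ^ b ^ carry_in                              # sum bit: XOR of all inputs
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)  # majority logic
    return s, carry_out

def add_bits(x, y, width=8):
    """Ripple-carry addition: chain full adders from the lowest bit up."""
    carry = 0
    result = 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result                                     # final carry is discarded

print(add_bits(23, 42))   # 65
```

The same handful of AND/OR/XOR gates, replicated across the word width, is all an ALU needs for integer addition.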

Memory component

This consisted of the same 20 accumulators mentioned above. One of these accumulators was used to hold or store a number as long as it was not needed for any other purpose. The three

(Figure: showing a decade counter used in the ENIAC)

function tables, which are memory devices as well as arithmetic devices. Another method was to send numbers that needed to be remembered to the output device and bring them back when needed through the input device [2]. The basic element in today's semiconductor memory is the memory cell, which has three specific properties. The first is that it exhibits two stable states, which represent binary 1 or 0. The second is that it is capable of being written into, to set its state. Last but not least, it is capable of being read, in order to sense its state. RAM, which stands for Random Access Memory, provides the facility of both writing and reading data into memory easily. RAM is volatile, meaning that as soon as a constant power supply is no longer provided, it loses the data stored on it. Having 2GB of RAM on a normal PC is no big issue and it doesn't cost a fortune. There is another type of memory, Read-Only Memory, abbreviated ROM [3]. This memory is non-volatile, meaning it provides permanent storage for data [3]. ROM has important applications such as microprogramming and library subroutines for frequently wanted functions. Another form of semiconductor memory is flash memory. Flash memory uses electrical erasing technology and provides erasure of a block of data in a single action [3]. Flash memory does not offer byte-level erasure and uses one transistor per bit to achieve high density [3]. Another important aspect, which tries to give memory the speed of today's fastest memories available, is cache memory. The main characteristic of cache memory is size. Cache memory should be small enough that the overall average cost per bit is still within range of that of semiconductor memory, but at the same time large enough that the access time is mainly that of the cache alone [3]. Cache memory serves as intermediate storage between the processor and memory.
When the processor needs a word from memory, it is first looked up in the cache and, if found, transferred directly to the processor almost instantly. If it is not in the cache, a block of words from main memory is copied into the cache and then the word is transferred to the processor. There is also another architecture which is more common today. Instead of having a single cache it is possible to have what is known as the three-level cache organization [3]. L1 cache is the smallest but contains the words most used by the CPU; L2 is a bit larger but also slower; while L3 cache has the largest storage space but is the slowest. All three levels are much faster than main memory [3]. The memory technology used in the ENIAC was made up of electrostatic tubes, whilst later machines introduced cores as memory technology. Even memory size has made an enormous leap forward: from the 2-4 kilobytes which the ENIAC had [2], a typical Pentium 4 machine has 64 gigabytes of addressable memory, which sums up to 64 terabytes in virtual memory [3]. Virtual memory, in very simple terms, is memory that can be seen by the computer user as addressable main memory, in which virtual addresses are mapped onto real addresses. It is not actually limited by the size of main memory but by the addressing scheme used by the computer and by the amount of auxiliary storage available [3].

The control component

There are basically two parts: one is the control of basic operations without regard to the problem on the machine, and the other is the control of the sequence of operations needed to perform a particular job. This sequence of operations is known as programming [2]. The ENIAC is capable of controlling the basic operations independently of the problem on the machine thanks to a series of pulses which are generated every 200 microseconds. This will be discussed in further detail in the accumulator section [2].

(Figure representing the pulses used in the ENIAC [2].)

In fact, one of the main disadvantages of the ENIAC was that it had to be programmed manually by setting switches and plugging or unplugging cables according to the operation at hand. The idea today has changed. Instead of programming manually, we follow the stored-program concept proposed by von Neumann and his colleagues in 1946, a machine embodying it being ready by 1952. This IAS model consists of three basic components: main memory, which stores both data and instructions; the ALU, which operates on binary numbers; and the control unit, which interprets the instructions from main memory and causes them to be executed [3].
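The stored-program fetch-interpret-execute cycle can be sketched as follows. The tiny instruction set (LOAD/ADD/STORE/HALT) is invented purely for illustration and is not the IAS machine's actual order code; the key point is that instructions and data share one memory:

```python
def run(memory):
    """Toy stored-program machine: instructions and data live in one memory."""
    acc = 0                              # accumulator register
    pc = 0                               # program counter
    while True:
        opcode, operand = memory[pc]     # fetch the next instruction
        pc += 1
        if opcode == "LOAD":             # interpret and execute it
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

program = [
    ("LOAD", 5),     # acc <- memory[5]
    ("ADD", 6),      # acc <- acc + memory[6]
    ("STORE", 7),    # memory[7] <- acc
    ("HALT", 0),
    None,            # unused cell
    20, 22, 0,       # data region: addresses 5, 6 and 7
]
result = run(program)
print(result[7])     # 42
```

Changing the program means changing memory contents, not rewiring: exactly the advance over the ENIAC's plugboards.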

Input and output devices

The way data is fed to or obtained from the device, what is known as input/output, is independent of the computing device. Data may be recorded on punch tapes, magnetic tapes or punch cards, such as those used in business machines [2]. This needs an appropriate mechanism which signals to the machine the data being supplied to it. The same reasoning holds when a data record or result needs to be read from a storage device. By means of certain circuits used in the ENIAC, data can be read from a punch tape, magnetic tape or other medium [2]. Data storage today has made an enormous step forward. Instead of magnetic tape we have magnetic disks, which store data on a glass substrate coated with a magnetisable material, with data stored on multiple tracks on each surface [3]. One example of this type of storage is the hard disk, which can store up to terabytes of data; another is the floppy disk, which isn't that popular nowadays [3]. A more popular and cheaper way to store data is optical disk storage. The disk is formed of polycarbonate and digital information is recorded on it as a series of microscopic pits imprinted on the surface [3]. The areas without pits are called lands. Information is then retrieved using a low-powered laser which shines through the polycarbonate; according to the intensity of the reflected light, a photosensor converts it into a digital signal [3]. Unlike magnetic disks, CD-ROMs (compact disk read-only memory) do not organize information in concentric circles but have a single track which starts at the center and spirals out to the edge of the disk; once data is imprinted on the disk it cannot be erased. CD-ROMs have a capacity of about 680 megabytes [3]. Another storage medium along the same lines as CD-ROMs is the DVD (digital versatile disk), which has huge storage capacity, is still cheap and "takes video into the digital age" [3].
This amount of storage is achieved because bits are packed more closely, a second layer of pits and lands sits above the first layer, and the disk can also be double-sided. Combining these three properties, DVDs can have a capacity of up to 17 gigabytes [3].

The low speeds at which input and output devices operate are the main limiting factor on the speed of electronic computers. The machine is capable of obtaining a result in the blink of an eye but then has to wait until the recording of the previous solution is complete before recording this one [2]. The ENIAC's input and output methods were International Business Machines punch cards [2]. The procedure to store data on these punch cards is as follows. First, the result, which is in electric form, is translated into a set of mechanical relays, which in turn cause the IBM card punch to record the result on a card. During this operation the ENIAC continues processing without interruption, unless a new result is produced which needs to be recorded before the first recording has completed [2]. The function tables mentioned in the memory section could also be used to feed the ENIAC with data at high speed. The ENIAC also had a limited built-in facility which could provide particular problems with commonly used constants, such as π, very quickly [2]. Today's computers try to avoid this delay by introducing I/O registers. The two I/O registers used are the I/O AR (address register), which identifies the particular device, and the I/O BR (buffer register), whose main aim is to hold data while it is being exchanged between the device and the CPU [3]. I/O modules became processors in their own right, having a dedicated instruction set for the specific I/O module. This built-in processor fetches and executes instructions without CPU intervention until the entire operation is performed. These I/O modules also have local memory of their own, and their processor takes care of the majority of the tasks involved in controlling the terminals [3].

Basic Characteristics:

Flexibility, accuracy, speed, reliability and capacity are the main features by which large-scale computing devices are compared to each other. There are so many variations of these characteristics that one cannot simply decide that one system is better than another: a system may, for example, lack capacity but be faster than another system. The most important use of a computer is to solve a given problem in the minimum possible time and at the cheapest price. "Time of solution" is the total time taken by the system to solve a particular problem. The factors associated with the "time of solution" depend on the user's or designer's background [2].

Flexibility: for digital machines this means that the device can carry out a number of operations in order to obtain a solution for different variations of the same problem, or even for different problems altogether. This is quite the opposite of a system which is "wired" in a particular way, restricting its ability to tackle various problems. The ENIAC, classified as a general-purpose machine in the previous pages, is also flexible in the sense that it can handle all the arithmetic operations, including differentiation and integration, which are done by very accurate approximation methods that suffice for most purposes. The majority of practical problems requiring numerical solution can be handled by a general-purpose machine such as the ENIAC [2]. It goes without saying that this characteristic is also present in today's general-purpose machines.

Accuracy: This characteristic depends on the problem under solution and also on the device the problem is being solved on. It is especially essential when computations involve many differences of nearly equal numbers. The ENIAC has an accuracy of 10-digit numbers in almost all parts of the machine. The number 10 was chosen after analysis of a group of differential equations which had to be solved during the war, to ensure that the result would have 5-figure accuracy. Recall that in the ENIAC, and in general-purpose machines, thousands of operations take place before a definite solution is obtained. If during operation each number is given 11 significant figures and they all have the decimal point in the same position, the result could yield an error of order of magnitude 500, because the digits beyond the 11th are not taken into consideration. The ENIAC catered for this problem by rounding off, but having said that, accuracy is inevitably decreased. The ENIAC didn't include the floating decimal point concept, which improves accuracy [2]. Fixed-point notation offers the possibility of representing both positive and negative numbers; it assumes a fixed radix point and can also represent a fractional component. The IEEE standard 754, adopted in 1985, defines both a 32-bit single format with an 8-bit exponent and a 64-bit double format with an 11-bit exponent [3].
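The accumulation of rounding error described above can be demonstrated with Python's decimal module, which lets us round to a fixed number of significant digits after every addition, as a fixed-precision machine would. The term value and digit counts are arbitrary choices for illustration:

```python
from decimal import Decimal, localcontext

def round_sum(terms, digits):
    """Sum the terms, rounding to `digits` significant figures after
    every single addition, the way a fixed-precision machine does."""
    with localcontext() as ctx:
        ctx.prec = digits
        total = Decimal(0)
        for t in terms:
            total = +(total + Decimal(t))   # unary + applies context rounding
        return total

terms = ["0.123456789"] * 1000
exact = Decimal("0.123456789") * 1000       # 123.456789000
print(exact)
print(round_sum(terms, 20))                 # enough precision: matches exactly
print(round_sum(terms, 5))                  # per-step rounding: visibly off
```

With generous precision the thousand additions are exact, but at 5 significant figures each intermediate rounding leaves a residue that compounds over the run, which is exactly the hazard the ENIAC's designers had to budget digits for.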

Speed: Since the interconnections in the ENIAC are set up by hand, with small patch cords inserted at appropriate points in the machine and in the trunk system, the time required to set up the interconnections for different problems is longer than on today's computers, which have instructions given by the input devices and used automatically by the program as the problem proceeds [2]. In the ENIAC the time required to achieve a solution depends on the experience of the operator, and machine time is wasted during the setting up of the machine. This is extremely important because it affects the overall speed with which the ENIAC completes a problem [2].

(Figure representing the ENIAC operating speeds [2].)

Since the ENIAC has high-speed operation, it heightens the importance of the mathematical preparation, which is often the most intensive part of arriving at a solution. A problem handled by the ENIAC could be up to 50% faster than one handled with a hand-computation method like converging series [2]. Today's CPUs, in order to achieve greater speed, use a technique known as pipelining. Pipelining involves the execution of multiple instructions all at different stages of completion; the details depend on the organization of the processor [3].
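The speed-up from pipelining can be illustrated with a simple cycle count, assuming an idealized pipeline with no stalls (the instruction and stage counts are arbitrary):

```python
def sequential_cycles(num_instructions, num_stages):
    """Without pipelining, each instruction occupies the processor
    for all of its stages before the next one can start."""
    return num_instructions * num_stages

def pipelined_cycles(num_instructions, num_stages):
    """With an ideal pipeline, the first instruction takes num_stages
    cycles and each later one completes one cycle after the previous."""
    return num_stages + (num_instructions - 1)

print(sequential_cycles(100, 5))   # 500
print(pipelined_cycles(100, 5))    # 104
```

For long instruction streams the throughput approaches one instruction per cycle regardless of how many stages each instruction passes through, which is why deep pipelines pay off.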

Reliability: The designers of the machine have to ensure that the machine doesn't break down before a solution to the problem is achieved. To safeguard a result, the ENIAC stored results at intermediate stages of the problem, from which the final result could be re-calculated in the case of an error; this, however, still hinders the operating speed of the machine. The ENIAC, as mentioned previously, consisted of 18,000 vacuum tubes, and a failure in one of the tubes did not stop the entire system [2]. While the ENIAC was being used by the army for 11 months at the Moore School, it was monitored for errors as they popped up. This monitoring resulted in a tube being replaced every 20 hours. It is important to remember that the ENIAC was put to work as soon as it was completed, implying there was no time for real testing and running in. The ENIAC had 500,000 soldered joints, which were the cause of frequent intermittent errors because there wasn't enough time to test them. After a few days of monitoring it was ensured that power was continuously supplied to the heaters of the vacuum tubes, since every shutdown resulted in two or three tube failures. Various methods were used to ensure that correct results were achieved. First, before and after each run there was a test problem run; also, many problems were self-checkable, since parts of them would result in particular values known beforehand. Thus the program contained a self-checking test when each of these parts was encountered. If at any stage a result didn't match the expected one, the program would automatically halt and print out the results obtained so far, so that the operator could examine them [2]. Apart from all these ways of ensuring a correct answer, the same problem was also run twice and the two results compared [2].
Half of the time that the ENIAC was in operation it wasn't producing any productive output; this time was used for setting up the system to deal with the problem at hand. Half of this unproductive time was spent maintaining and repairing the machine [2]. Apart from vacuum tubes and bad solder joints, there were other classes of errors, for example failures in the electromechanical input and output equipment. The DC power supply and proper ventilation were another two major sources of failure during the ENIAC's lifetime [2]. Today's computers are far more reliable than the ENIAC was. One of the main advantages is that they are intensively tested before they are brought to market. A hardware device used to ensure a continuous power supply in the case of power failure is the UPS (uninterruptible power supply). This works by storing energy in its batteries; in the case of a power failure it gives you enough time to save your work instead of everything being wiped out. The Hamming code and parity bits are two of the most commonly used error detection and correction techniques at the moment [3].
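A single even-parity bit, the simplest of the detection techniques just mentioned, can be sketched in a few lines (the 4-bit data word is an arbitrary example):

```python
def add_parity(bits):
    """Even parity: append a bit so the total number of 1s is even."""
    parity = sum(bits) % 2
    return bits + [parity]

def check_parity(word):
    """Return True if the word (data + parity bit) has even parity."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(check_parity(word))    # True: no error
word[2] ^= 1                 # flip one bit to simulate a hardware fault
print(check_parity(word))    # False: single-bit error detected
```

A parity bit only detects an odd number of flipped bits and cannot locate them; the Hamming code extends the same idea with several overlapping parity bits so the erroneous position can be identified and corrected.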

Capacity: When discussing large-scale machines this term relates to the magnitude of the problems they are able to handle. The ENIAC did not give much importance to automatic programming and was kept simple; having said that, it had a lot of interconnections, which resulted in programming not being the limiting factor on capacity [2]. The ENIAC used output devices to store intermediate results, which at the end were inserted back into the machine through the input devices. It has already been pointed out that input and output require a lot of time to complete [2].


The accumulator

An accumulator's main objective is to perform arithmetic operations, such as addition and subtraction [3]. It can also store numbers which need to be transferred to another location or on which arithmetic operations are to be performed. Each of the ENIAC's accumulators contained 10 decade counters side by side, which made the ENIAC capable of dealing with 10-digit numbers, as the diagram shows. Only one stage of a decade counter could be selected at once, and the selected stages formed what is called a column. In order to step this column the ENIAC used a ring counter, which could set any stage into the abnormal state. [2]

The ENIAC's accumulator had other internal circuits, like a carry-over circuit and a pulse-shaping circuit which ensured uniform pulses. The carry-over circuits were essential because when a digit in an accumulator reached 9, the next pulse received had to turn it to 0 and increment the next digit counter by 1. To be more efficient, instead of having both increment and decrement counters, for subtraction the ENIAC used the complements of negative numbers with respect to the decimal base (10^10) [2]. Complements were easily obtained in the ENIAC and using them didn't require great circuit complexity [2]. In order to distinguish between positive and negative numbers, the ENIAC added a prefix of P to indicate a positive number and M to indicate a negative number [2].
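The complement trick described above can be sketched for 10-digit decimal numbers: subtraction becomes addition of the ten's complement, with the final carry out of the top digit discarded. This is an arithmetic illustration, not a model of the ENIAC's circuits:

```python
DIGITS = 10                  # the ENIAC worked with 10-digit decimal numbers
MODULUS = 10 ** DIGITS

def tens_complement(n):
    """Ten's complement of n with respect to 10**10."""
    return (MODULUS - n) % MODULUS

def subtract(a, b):
    """Subtract by adding the complement; the carry past the top
    digit is discarded by reducing modulo 10**10."""
    return (a + tens_complement(b)) % MODULUS

print(tens_complement(1234))   # 9999998766
print(subtract(5000, 1234))    # 3766
```

The same idea survives in today's machines as two's-complement arithmetic, which lets one adder circuit handle both addition and subtraction.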

Each counter in the accumulator wasn't a pure serial counter, since it stored the value at its position and all the digits of a number in the same column were added at the same time. The time required is therefore the time to count from 0 to 9, plus extra time if a carry-over is produced. Nine pulses of 2 microseconds each were needed in order to carry out an addition or subtraction, with an additional 110 microseconds in the case of a carry-over.

Conclusion: Past, Present and Future

During this assignment I have learnt a lot about the history of computers. The first pioneering electronic machine which kick-started the computer era was without doubt the ENIAC. It was made up of 18,000 vacuum tubes [1] and was designed specifically to solve the trajectory motion of artillery shells [1]. It proved itself a general-purpose machine since, by the time it was complete, the war was over and it was used to solve equations related to the hydrogen bomb [2]. The ENIAC weighed 30 tons and occupied 1,500 square feet of floor space [3]. Today's computers are also general-purpose machines, like the ENIAC was [3]. Instead of weighing tons, a computer weighs only a couple of kilograms and occupies only part of your desk. As time goes on, computers are becoming smaller and smaller, using complex electronic circuits.

We have seen that from vacuum tubes and punch cards as ways to store data and information, we have moved to microelectronic technology using transistors and wires on PCBs, along with different storage devices like magnetic disks, flash memory and so on. Today's computers are thoroughly tested and they guarantee the same result every time the same problem is run. The connections inside modern computers are all electronic, with no need to connect or disconnect any wires to complete different tasks. Thanks to Moore and others, computers continue to get faster, smaller and, most important of all, more affordable.

The next step is without any doubt optical, DNA and quantum computers [4]. Here is just a brief idea of what next-generation computers might be like. First there are optical computers, which utilize crystals and metamaterials to control light particles, or photons as you might call them, and operate using them [4]. The next type is DNA computing, which uses DNA molecules to perform mathematical calculations and store information [4]. DNA holds the complex blueprints of living organisms and has an enormously large storage capacity; for example, 1 gram of DNA can store as much as one trillion CD-ROMs [4]. Last but not least are quantum computers, which perform operations on data by making direct use of distinctively quantum-mechanical phenomena. Instead of bits, quantum computers measure data in qubits [4].