# Simple Computer Operates Fundamentally In Discrete Computer Science Essay


The term computer architecture refers to the structure and organization of computer hardware and system software. In computer science and computer engineering, computer architecture (or digital computer organization) is the conceptual design and fundamental operational structure of a computer system.

A typical computer contains input devices (keyboard, mouse, etc.), a computational unit, and output devices (monitors, printers, etc.). The computational unit is the computer's heart, and usually consists of a central processing unit (CPU), a memory, and an input/output (I/O) interface. The input and output devices present on a given computer vary greatly.

## A simple computer operates fundamentally in discrete time

A computer is a clocked device: computational steps occur periodically, on the ticks of a clock, and the clock speed states how often those ticks occur. For example, when a person says, "I have a 1 GHz computer," they mean that their computer takes 1 nanosecond to perform each step. That is incredibly fast! A "step", unfortunately, does not necessarily mean a complete computation like an addition; computers break such computations down into several stages, so the clock speed need not express the computational speed. Computational speed is instead expressed in millions of instructions per second (MIPS). Your 1 GHz computer (clock speed) may have a computational speed of only 200 MIPS.
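The arithmetic behind these figures can be checked directly; this sketch uses the essay's own example numbers (1 GHz, 200 MIPS), and the cycles-per-instruction figure is simply derived from them:

```python
# Relationship between clock speed and computational speed,
# using the 1 GHz / 200 MIPS example from the text.

clock_hz = 1e9                       # 1 GHz clock
period_s = 1 / clock_hz              # duration of one clock tick
print(period_s)                      # 1e-09 -> one nanosecond per tick

mips = 200                           # computational speed in the example
instructions_per_s = mips * 1e6
cycles_per_instruction = clock_hz / instructions_per_s
print(cycles_per_instruction)        # 5.0 -> each instruction spans ~5 ticks
```

So a 1 GHz clock with 200 MIPS throughput implies each instruction occupies about five clock ticks, which is exactly the "several stages" point above.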

## Computers perform integer (discrete-valued) computations

Computer calculations can be numeric (obeying the laws of arithmetic), logical (obeying the laws of an algebra), or symbolic (obeying any law you like). Each computer instruction that performs an elementary numeric calculation (an addition, a multiplication, or a division) does so only for integers. The sum or product of two integers is also an integer, but the quotient of two integers need not be. How does a computer deal with numbers that have digits to the right of the decimal point? This problem is addressed by the so-called floating-point representation of real numbers. At its heart, however, this representation still relies on integer-valued computations.
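That a floating-point number is built from integer fields can be seen by unpacking one; this sketch uses Python's standard `struct` module to expose the sign, exponent, and fraction fields of the IEEE 754 single-precision encoding of 0.1:

```python
import struct

# Reinterpret the 32-bit float encoding of 0.1 as an unsigned integer,
# then slice out the three integer-valued fields of IEEE 754 single precision.
bits = struct.unpack('>I', struct.pack('>f', 0.1))[0]

sign = bits >> 31              # 1 bit
exponent = (bits >> 23) & 0xFF # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF     # 23 fraction bits

print(sign, exponent, mantissa)  # 0 123 5033165
```

Every field is an ordinary integer; the hardware's floating-point operations manipulate these integer fields, which is the sense in which the representation "relies on integer-valued computations".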

## The Role of Computer Architecture

The general role of computer architecture is to coordinate the abstract levels of a processor under changing requirements, involving design, measurement, and evaluation. It also encompasses the fundamental working principles of the internal logical structure of a computer system.

Computer architecture is concerned with how the various gates and transistors are interconnected and made to perform functions according to the instructions given by the assembly-language programmer.

## Instruction Set Architecture

The Instruction Set Architecture (ISA) refers to the part of the processor that is visible to the programmer or compiler writer. The ISA serves as the boundary between software and hardware. The ISA of a processor can be described using five categories:

Operand storage in the CPU - Where operands are kept other than in memory.

Number of explicitly named operands - How many operands are named in a typical instruction?

Operand location - Can any ALU instruction operand be located in memory, or must all operands be kept internally in the CPU?

Operations - What operations are provided in the ISA?

Type and size of operands - What is the type and size of each operand, and how is it specified?

Of all the above, the most distinguishing factor is the first one.

The three most common types of ISAs are:

Stack - The operands are implicitly on top of the stack.

Accumulator - One operand is implicitly the accumulator.

General Purpose Register (GPR) - All operands are explicitly named; they are either registers or memory locations.
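The three styles can be contrasted with a small sketch; the mini-interpreters below are illustrative only (the mnemonics in the comments, such as PUSH and ADD, stand for hypothetical instructions, not a real ISA), but each computes A + B in the corresponding style:

```python
# Three hypothetical machines computing A + B, one per ISA style.

def stack_machine(a, b):
    stack = []
    stack.append(a)                   # PUSH A (operands implicit: top of stack)
    stack.append(b)                   # PUSH B
    stack.append(stack.pop() + stack.pop())  # ADD pops two, pushes the sum
    return stack.pop()                # POP the result

def accumulator_machine(a, b):
    acc = a                           # LOAD A (one operand implicitly the accumulator)
    acc = acc + b                     # ADD B
    return acc                        # STORE the accumulator

def gpr_machine(a, b):
    r1, r2 = a, b                     # operands explicitly named in registers
    r3 = r1 + r2                      # ADD r3, r1, r2
    return r3

print(stack_machine(2, 3), accumulator_machine(2, 3), gpr_machine(2, 3))  # 5 5 5
```

The difference is purely in how operands are named: not at all (stack), one implicitly (accumulator), or all explicitly (GPR).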

## Combinational and Sequential Logic

Combinational (Combinatorial) logic refers to a digital logic function made of primitive logic gates (AND, OR, NOT, etc.) in which all outputs of the function are directly related to the current combination of values on its inputs. Any changes to the signals being applied to the inputs will immediately propagate through the gates until their effects appear at the outputs.

Sequential logic differs from combinational logic in that the output of the logic device depends not only on the present inputs to the device, but also on past inputs; i.e., the output of a sequential logic device depends on its present internal state and the present inputs. This implies that a sequential logic device has some kind of memory of at least part of its previous inputs.

Unlike sequential logic circuits, whose outputs depend on both their present inputs and their previous output state (giving them a form of memory), the outputs of combinational logic circuits are determined only by the logical function of their current input state, logic "0" or logic "1". This holds at any instant in time: since they have no feedback, any change to the signals applied to their inputs immediately has an effect at the output. In other words, in a combinational logic circuit the output depends at all times on the combination of its inputs, and if any input changes state so does the output, because combinational circuits have no memory, timing, or feedback loops.
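The contrast can be sketched in code: a combinational circuit behaves like a pure function of its inputs, while a sequential element carries internal state. Both examples here are hypothetical illustrations, a three-input majority gate and a toggle (T) flip-flop:

```python
# Combinational: the output is a pure function of the current inputs.
def majority(a, b, c):
    # Output 1 when at least two inputs are 1.
    return (a & b) | (b & c) | (a & c)

# Sequential: the output depends on stored state as well as the input.
class TFlipFlop:
    def __init__(self):
        self.q = 0              # internal state -- the "memory"
    def clock(self, t):
        if t:                   # toggle the state when T = 1
            self.q ^= 1
        return self.q

ff = TFlipFlop()
outs = [ff.clock(1) for _ in range(4)]
print(outs)                     # [1, 0, 1, 0]
```

Calling `majority(1, 0, 1)` always yields the same answer, but the flip-flop returns different outputs for the same input on successive ticks, which is exactly the "memory" distinction drawn above.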

## Combinational Logic Circuits


Combinational logic circuits are made up of logic NAND, NOR or NOT gates that are combined to produce more complicated switching circuits. These logic gates are the building blocks of combinational logic circuits. An example of a combinational circuit is a decoder, which converts the binary code data present at its input into signals on a number of separate output lines.

Combinational logic circuits can be simple or very complicated. Any combinational circuit can be implemented with only NAND or NOR gates, because these are classed as universal gates. There are three main ways of specifying the function of a combinational logic circuit:

Truth Table - Provides a concise list that shows the output values in tabular form for each possible combination of input variables.

Boolean Algebra - Forms an output expression for each combination of input variables that represents a logic "1".

Logic Diagram - Shows the wiring and connections of each individual logic gate that implements the circuit.
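A truth table for a small combinational function can be generated mechanically by enumerating every input combination; the function F below is an arbitrary example chosen for illustration, not one from the text:

```python
from itertools import product

# An arbitrary example function: F = (A AND B) OR (NOT C).
def f(a, b, c):
    return (a & b) | (1 - c)

# Enumerate all 2^3 input combinations to build the truth table.
print("A B C | F")
for a, b, c in product((0, 1), repeat=3):
    print(a, b, c, "|", f(a, b, c))
```

The same enumeration underlies the tabular specification described above: one row per input combination, with the output value alongside.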


Combinational logic circuits are made from individual logic gates and can be considered "decision-making circuits". Combinational logic is about combining logic gates to process two or more signals and produce at least one output signal according to the logical function of each gate. Common combinational circuits built from individual logic gates include multiplexers, demultiplexers, encoders, decoders, and full and half adders.
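The half and full adders mentioned above are a standard construction from primitive gates; a sketch, modelling each gate with Python's bitwise operators:

```python
# Half adder: XOR gives the sum bit, AND gives the carry bit.
def half_adder(a, b):
    s = a ^ b
    c = a & b
    return s, c

# Full adder: two half adders chained, with an OR gate combining the carries.
def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

print(full_adder(1, 1, 1))  # (1, 1) -> 1 + 1 + 1 = 3 = binary 11
```

Both are purely combinational: the outputs follow immediately from the current inputs, with no stored state.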

## Sequential Logic

Sequential logic elements perform as many different functions as combinational logic elements.

A simple memory device can be constructed from combinational devices with which we are already familiar. By a memory device, we mean a device which can remember if a signal of logic level 0 or 1 has been connected to one of its inputs, and can make this fact available at an output. A very simple, but still useful, memory device can be constructed from a simple OR gate, as shown below.

[Figure: an OR gate whose output Q is fed back to one of its inputs, with the single external input A.]

In this memory device, if A and Q are initially at logic 0, then Q remains at logic 0. However, if the single input A ever becomes logic 1, then the output Q will be logic 1 ever after, regardless of any further changes in the input at A. In this simple memory, the output is a function of the state of the memory element only; after the memory is written, it cannot be changed back. However, it can be read. Such a device could be used as a simple read-only memory that can be programmed only once. Often a state table or timing diagram is used to describe the behavior of a sequential device. The figure below shows both a state table and a timing diagram for this simple memory. The state table shows the state which the device enters after an input (the "next state"), for all possible states and inputs. For this device, the output is the value stored in the memory.
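The write-once behaviour described above can be simulated directly: feeding the OR gate's output back to one of its inputs means that once Q becomes 1 it stays 1 forever.

```python
# The OR-gate memory from the text: Q = A OR Q (output fed back to an input).
class OrLatch:
    def __init__(self):
        self.q = 0                  # initial state: logic 0
    def step(self, a):
        self.q = a | self.q         # OR of the input with the fed-back output
        return self.q

latch = OrLatch()
outs = [latch.step(a) for a in (0, 0, 1, 0, 0)]
print(outs)                         # [0, 0, 1, 1, 1]
```

After the single 1 on input A, the output stays at 1 regardless of later inputs, matching the state table: the device has been "programmed" and can only be read from then on.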

## Register

A register is a special, high-speed storage area within the CPU. All computer data must be represented in a register before it can be processed; for example, if two numbers are to be multiplied, both numbers must be in registers, and the result is also placed in a register. A register may also contain the address of a memory location where data is stored, rather than the data itself.

Processor registers top the memory hierarchy and provide the fastest way for a central processing unit to access data. The term often refers only to the group of registers that are directly encoded as part of an instruction; in the proper sense, these are called the "architectural registers".

Allocating frequently used variables to registers can be critical to a program's performance. The compiler performs this action, referred to as register allocation, during code generation.

Registers are normally measured by the number of bits they can hold; for example, an 8-bit register can hold 8 bits at a time and a 32-bit register can hold 32. Registers are now usually implemented as a register file, but they have also been implemented using individual flip-flops, high-speed core memory, and other means in various machines.
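A sketch of what the bit-width measurement means in practice: an n-bit register can only hold values from 0 to 2^n - 1, so a write is truncated to the register's width. The class below is an illustrative model, not a description of any real hardware interface.

```python
# A hypothetical model of an n-bit register: stored values are
# masked to the register width.
class Register:
    def __init__(self, bits):
        self.mask = (1 << bits) - 1   # e.g. 8 bits -> 0xFF
        self.value = 0
    def write(self, v):
        self.value = v & self.mask    # keep only the low `bits` bits
    def read(self):
        return self.value

r8 = Register(8)
r8.write(300)           # 300 needs 9 bits, so it overflows an 8-bit register
print(r8.read())        # 44  (300 mod 256)
```

This truncation is why the same program can behave differently on machines with different register widths.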

A processor often contains several kinds of registers, which can be classified according to their content or to the instructions that operate on them, as follows:

User-accessible registers - The most common division of user-accessible registers is into data registers and address registers.

Data registers - Used to hold numeric values such as integer and floating-point values. In some older and low-end CPUs, a special data register, called the accumulator, is used implicitly for many operations.

Address registers - Hold addresses and are used by instructions that indirectly access memory.

Some processors contain registers that may only be used to hold an address or only to hold numeric values (for example, as index registers), while others allow registers to hold either kind of quantity. A wide variety of addressing modes may be used to specify the effective address of an operand.

There are also stack pointers, sometimes referred to as stack registers: the name given to a register that some instructions use to maintain a stack.

In some architectures there are model-specific registers, also known as machine-specific registers, which store data and settings related to the processor itself. Since their meanings are tied to the design of a specific processor, they cannot be expected to remain standard between processor generations.