
Development of VLSI Technology


CHAPTER 1

1. INTRODUCTION

VLSI Technology, Inc. was an important pioneer in the electronic design automation industry. The "lambda-based" design style advocated by Carver Mead and Lynn Conway offered a refined package of tools, and the company became an early vendor of standard-cell (cell-based) technology. Rapid advancement in VLSI technology has led to a new paradigm in designing integrated circuits, in which a system-on-a-chip (SOC) is constructed from predesigned and pre-verified cores such as CPUs, digital signal processors, and RAMs. Testing these cores requires a large amount of test data, which is continuously increasing with the rapid increase in the complexity of SOCs. Test compression and compaction techniques are widely used to reduce storage requirements and test time by reducing the size of the test data.

Very-large-scale integration (VLSI) refers to the design and manufacture of extremely small, complex circuits on modified semiconductor material.

In 1958, Jack St. Clair Kilby (Texas Instruments) developed the first integrated circuit, with 10 components on 9 mm². In 1959, Robert Norton Noyce (a founder of Fairchild Semiconductor) improved on Kilby's integrated circuit. In 1968, Noyce and Gordon E. Moore founded Intel, and in 1971 Ted Hoff (Intel) developed the first microprocessor, the 4004, consisting of 2,300 transistors on 9 mm². Since then, continuous improvement in technology has allowed for increased performance, as predicted by Moore's law.

The rate of development of VLSI technology has historically progressed hand-in-hand with technology innovations, and many conventional VLSI systems have as a result engendered highly specialized technologies for their support. Most of the achievements in dense system integration have derived from scaling in the silicon VLSI process. As manufacturing has improved, it has become more cost-effective in many applications to replace a chip set with a monolithic IC: package costs are decreased, interconnect paths shrink, and power loss in I/O drivers is reduced. As an example, consider integrated circuit technology: the Semiconductor Industry Association predicts that, over the next 15 years, circuit technology will advance from the current four metallization layers up to seven layers. As a result, the phase of circuit testing is moving to the forefront as a major problem in VLSI design. In fact, Kenneth M. Thompson, vice president and general manager of the Technology, Manufacturing, and Engineering Group for Intel Corporation, states that a major falsehood of testing is that "we have made a lot of progress in testing"; in reality, it is very difficult for testing to keep pace with semiconductor manufacturing technology.

Today's circuits are expected to perform a very broad range of functions while meeting very high standards of performance, quality, and reliability, and at the same time remaining practical in terms of time and cost.

1.1 Analog & Digital Electronics

In science, technology, business, and, in fact, most other fields of endeavor, we are constantly dealing with quantities. In most physical systems, quantities are measured, monitored, recorded, manipulated arithmetically, and observed. We should be able to represent their values efficiently and accurately when we deal with various quantities. There are basically two ways of representing the numerical value of quantities: analog and digital.

1.2 Analog Electronics

Analogue (analog) electronics are those electronic systems that operate on a continuously variable signal. In contrast, digital electronic signals usually take only two distinct levels. In analog representation, a quantity is represented by a voltage, current, or meter movement that is proportional to the value of that quantity. Analog quantities such as those cited above have an important characteristic: they can vary over a continuous range of values.

1.3 Digital Electronics

In digital representation the quantities are represented not by proportional quantities but by symbols called digits. As an example, consider the digital watch, which provides the time of day in the form of decimal digits which represent hours and minutes (and sometimes seconds). As we know, the time of day changes continuously, but the digital watch reading does not change continuously; rather, it changes in steps of one per minute (or per second). In other words, this digital representation of the time of day changes in discrete steps, as compared with the representation of time provided by an analog watch, where the dial reading changes continuously.

Digital electronics is often said to deal with "1s and 0s", but that is a vast oversimplification of the ins and outs of going digital. Digital electronics operates on the premise that all signals have two distinct levels. Depending on the type of devices used, these levels might be certain voltages near the power supply rail and ground. The logical meaning should not be confused with the physical signal, because the meaning of a given signal level depends on the design of the circuit. Here are some common terms used in digital electronics:

  • Logical-refers to a signal or device in terms of its meaning, such as “TRUE” or “FALSE”
  • Physical-refers to a signal in terms of voltage or current or a device's physical characteristics
  • HIGH-the signal level with the greater voltage
  • LOW-the signal level with the lower voltage
  • TRUE or 1-the signal level that results from logic conditions being met
  • FALSE or 0-the signal level that results from logic conditions not being met
  • Active High-a HIGH signal indicates that a logical condition is occurring
  • Active Low-a LOW signal indicates that a logical condition is occurring
  • Truth Table-a table showing the logical operation of a device's outputs based on the device's inputs, such as the following truth table for an OR gate:
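
A  B | A OR B
0  0 |   0
0  1 |   1
1  0 |   1
1  1 |   1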

1.4 Number Systems

Digital logic may work with "1s and 0s", but it combines them into several different groupings that form different number systems. Most of us are familiar with the decimal system, of course. That is a base-10 system in which each digit represents a power of ten. Some other number system representations are:

  • Binary-base two (each bit represents a power of two), digits are 0 and 1, numbers are denoted with a 'B' or 'b' at the end, such as 01001101B (77 in the decimal system)
  • Hexadecimal or 'hex'-base 16 (each digit represents a power of 16), digits are 0 through 9 plus A-B-C-D-E-F representing 10-15, numbers are denoted with '0x' at the beginning or 'h' at the end, such as 0x5A or 5Ah (90 in the decimal system), and each digit requires four binary bits. A dollar sign preceding the number ($01BE) is sometimes used as well.
  • Binary-coded decimal or BCD-a four-bit number similar to hexadecimal, except that the decimal value of the number is limited to 0-9.
  • Decimal-the usual number system. Decimal numbers are usually denoted by 'd' at the end, such as 24d, especially when they are combined with other numbering systems.
  • Octal-base eight (each digit represents a power of 8), digits are 0-7, and each requires three bits. It is rarely used in modern designs.
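
As a small illustration of these bases, the same value can be written as a literal in several of them in VHDL, the hardware description language used later in this dissertation. This is only a sketch; the package and constant names are invented for the example:

library ieee;
use ieee.std_logic_1164.all;

package number_examples is
  -- The same value, decimal 77, expressed in the number systems above.
  constant AS_BINARY  : std_logic_vector(7 downto 0) := B"0100_1101"; -- base 2
  constant AS_HEX     : std_logic_vector(7 downto 0) := X"4D";        -- base 16
  constant AS_OCTAL   : integer := 8#115#;                            -- base 8
  constant AS_DECIMAL : integer := 77;                                -- base 10
end package number_examples;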

1.5 Digital Construction Techniques

Building digital circuits is somewhat easier than building analog circuits: there are fewer components, and the devices tend to come in similarly sized packages. Connections are less susceptible to noise. The trade-off is that there can be many connections, so it is easy to make mistakes and harder to find them, and the uniform packages offer few visual clues.

1.5.1 Prototyping Boards

Prototyping means putting together temporary circuits, as part of the exercises, using a common workbench accessory known as a prototyping board. A typical board is shown in Figure 1, with a DIP-packaged IC plugged into the board across the centre gap. The board contains rows of sockets that are connected together, so that component leads can be plugged in and connected without soldering. The outer edges of the board contain long rows of sockets that are also connected together, so they can be used for the ground and power supply connections common to most components.

Wiring on the prototyping board should be laid out systematically, following the schematic diagram.

1.5.2 Reading Pin Connections

IC pins are almost always arranged so that pin 1 is identified by a corner mark or other identifying mark on the IC body, with the sequence increasing counter-clockwise when looking down on the IC or "chip", as shown in Figure 1. In almost all DIP packages, the identifying mark is a dot in the corner marking pin 1 or a notch at that end of the package. Both can be seen in the diagram, but on any given IC only one is likely to be used.

1.5.3 Powering Digital Logic

Where analog electronics is usually somewhat flexible in its power requirements and tolerant of variations in power supply voltage, digital logic is not nearly so carefree. Whatever logic family you choose, you will need to regulate the power supply voltages to at least ±5 percent, with adequate filter capacitors to filter out sharp sags or spikes.

Logic devices rely on stable power supply voltages to provide references to the internal electronics that sense low or high voltages and act on them as logic signals. If the device's ground voltage is pulled away from 0 volts, the device can become confused and misinterpret its inputs, causing temporary changes in the signals popularly known as glitches. It is better to ensure that the power supply is very clean, as the resulting problems can be very difficult to troubleshoot. A good technique is to connect a 10-100 µF electrolytic or tantalum capacitor and a 0.1 µF ceramic capacitor in parallel across the power supply connections on your prototyping board.

CHAPTER 2

2. REVIEW AND HISTORICAL ANALYSIS OF ITERATIVE CIRCUITS

As background research, recent work on iterative circuits was investigated. In this section, seven main proposals from the literature will be reviewed. The first, Logic Design of Switching Circuits by Douglas Lewin (1974, pg. 76, 277), states that quite often in combinational logic design the technique of expressing oral statements for a logic circuit in the form of a truth table is inadequate. For a simple network, a terminal description will often suffice; but for more complex circuits, and in particular when relay logic is to be employed, the truth table method can lead to a laborious and inelegant solution.

2.1 Example:

Suppose a logic system could be decomposed into a number of identical sub-systems. If we could produce a design for the sub-system, or cell, the complete system could then be synthesized by cascading these cells in series. The outputs of one cell form the inputs to the next one in the chain, and so on; each cell is identical except for the first one (and frequently the last one), whose cell inputs must be deduced from the initial conditions. Each cell has external inputs as well as inputs from the preceding cell, which are distinguished by defining the outputs of a cell as its state.

Figure 2.1 - Iterative Switching Systems

The second proposal to be reviewed, Introduction to Switching Theory and Logic Design by Frederick J. Hill and Gerald R. Peterson (1981, pg. 570), discusses the iterative network as a highly repetitive form of combinational logic network. The repetitive structure makes it possible to describe iterative networks using techniques already developed for sequential circuits. The authors limit their discussion to one-dimensional iterative networks, represented by the cascade of identical cells given in the figure below. A typical cell, with appropriate input and output notation, is given in figure (b) below. Note the two distinct types of inputs: primary inputs from the outside world and secondary inputs from the previous cell in the cascade. Similarly, there are two types of outputs: primary outputs to the outside world and secondary outputs to the next cell in the cascade. The boundary inputs at the left of the cascade are denoted in the same manner as secondary inputs. In some cases these inputs will be constant values.

A set of boundary outputs emerges from the rightmost cell in the cascade. Although these outputs go to the outside world, they are labelled in the same manner as secondary outputs. The boundary outputs will be the only outputs of the iterative network.

The third proposal, Digital Design Principles by Barry Wilkinson with Rafic Makki (1992, pg. 72-4), discusses the design and problems of iterative circuits. The authors state that some design problems would require a large number of gates if designed as two-level circuits. One approach is to divide each function into a number of identical sub-functions that are performed in sequence, with the result of one sub-function used in the next. A design based on the iterative approach is shown in the figure below. There are seven logic circuit cells; each cell accepts one code-word digit and the output from the preceding cell. Each cell produces one output, Z, which is 1 whenever the number of 1's on its two inputs is odd. Hence successive outputs are 1 when the number of 1's on the inputs up to that point is odd, and the final output is 1 only when the number of 1's in the whole code word is odd, as required.

To create an iterative design, the number of cells, the number of data inputs to each cell, and the number of different states that must be recognized by each cell need to be determined. The number of different states defines the number of lines to the next cell (usually carrying binary-encoded information).

The fourth proposal, Design of Logic Systems by Douglas Lewin and David Protheroe (1992, pg. 369), notes that iterative networks were widely used in the early days of switching systems, when relays were the major means of realizing logic circuits. These techniques fell into disuse when electronic logic gates became widely available. Although it is possible to implement an arbitrary logic function in the form of an iterative array, the technique is most often applied to functions which are 'regular' in the sense that the overall function may be achieved by performing the same operation upon a sequence of data bits. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with large numbers of parallel inputs.

The method is also directly applicable to the design of VLSI circuits and has the advantage of producing a modular structure based on a standard cell, which may be optimized independently in terms of layout and so on. Circuits containing any number of input variables can easily be constructed by simply extending the network with more cells. The authors examine iterative circuits through several examples.

Suppose a logic system could be decomposed into a number of identical subsystems; then, if we could produce a design for the subsystem, or cell, the complete system could be synthesized by cascading these cells in series. The problem has now been reduced to specifying and designing the cell, rather than the complete system.

The fifth proposal, Digital Logic Design by Brian Holdsworth (1993, pg. 165-166), states that iterative networks, widely used before the introduction of electronic gates, are again of some interest to logic designers as a result of developments in semiconductor technology. MOS pass transistors, which are easily fabricated, are used in LSI circuits, where they require less space and allow higher packing densities. One of the major disadvantages of hard-wired iterative networks was the long propagation delay caused by signals rippling through a chain of iterated cells. This is no longer such a significant disadvantage, since the lengths of the signal paths on an LSI chip are much reduced in comparison with the hard-wired connections between SSI and MSI circuits. However, the number of pass transistors that can be connected in series is limited because of signal degradation, and it is necessary to provide inter-cell buffers to restore the original signal levels. One additional advantage is the structural simplicity and the identical nature of the cells, which allow a more economical circuit layout.

The sixth proposal, Digital Logic Design by Brian Holdsworth and R.C. Woods (2002, pg. 135), discusses the structure of such networks: an iterative network consists of a number of identical cells interconnected in a regular manner, as shown in the figure. The variables X1...Xn are termed the primary input signals, the output signals are termed Z1...Zn, and the variables a1...an+1 are termed secondary inputs or outputs depending on whether these signals are entering or leaving a cell. An iterative circuit may be defined as one which receives the incoming primary data in parallel form; each cell processes the incoming primary and secondary data and generates a secondary output signal which is transmitted to the next cell. Secondary data is transmitted along the chain of cells, and the time taken to reach steady state is determined by the delay times of the individual cells and their interconnections.

According to Fundamentals of Logic Design by Charles H. Roth, Jr. and Larry L. Kinney (2004, pg. 519), many design procedures used for sequential circuits can be applied to the design of iterative circuits, which consist of a number of identical cells interconnected in a regular manner. Some operations, such as binary addition, naturally lend themselves to realization with an iterative circuit because the same operation is performed on each pair of input bits. The regular structure of an iterative circuit makes it easier to fabricate in integrated-circuit form than circuits with less regular structures. The simplest form of iterative circuit consists of a linear array of combinational cells with signals between cells travelling in only one direction. Each cell is a combinational circuit with one or more primary inputs and possibly one or more primary outputs. In addition, each cell has one or more secondary inputs and one or more secondary outputs; the signals produced carry information about the "state" of one cell to the next cell. The primary inputs to the cells are applied in parallel, that is, at the same time, and the signals then propagate down the line of cells. Because the circuit is combinational, the time required for the circuit to reach a steady-state condition is determined only by the delay times of the gates in the cells. As soon as steady state is reached, the outputs may be read. Thus, an iterative circuit can function as a parallel-input, parallel-output device, in contrast with a sequential circuit, in which the input and output are serial and which can be thought of as receiving its inputs as a sequence in time.

Example: a parallel adder is an iterative circuit; a four-bit version has four identical cells. A serial adder uses the same full-adder cell as the parallel adder, but it receives its inputs serially and stores the carry in a flip-flop instead of propagating it from cell to cell.

The final proposal, Digital Design Principles by John F. Wakerly (2006, pg. 459, 462, 756), states that an iterative circuit is a special type of combinational circuit, with the structure shown in the figure below. The circuit contains n identical modules, each of which has both primary inputs and outputs and cascading inputs and outputs. The leftmost cascading inputs are called boundary inputs and are connected to fixed logic values in most iterative circuits. The rightmost cascading outputs are called boundary outputs and usually provide important information. Iterative circuits are well suited to problems that can be solved by a simple iterative algorithm:

  1. Set C0 to its initial value and set i = 0.
  2. Use Ci and PIi to determine the values of POi and Ci+1.
  3. Increment i.
  4. If i < n, go to step 2.

In an iterative circuit, the loop of steps 2-4 is “unwound” by providing a separate combinational circuit that performs step 2 for each value of i.

Each of the works reviewed makes an important contribution to addressing the disadvantages and problems of iterative circuits, and thereby to improving them. This motivates a further investigation of sequential circuits for a better understanding of iterative circuits.

CHAPTER 3

3. OVERVIEW OF DESIGN METHODS FOR ITERATIVE CIRCUITS

3.1 Iterative design

Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Changes and refinements are made to the most recent iteration of a design based on the results of testing. This process can improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research to inform and evolve the project through successive versions.

3.2 Iterative Design Process

The iterative design process may be applied throughout the new product development process. In the early stages of development, changes are easy and affordable to implement. The first step in the iterative design process is to develop a prototype. To obtain unbiased opinions, the prototype should be examined by a focus group that is not associated with the product. The information gained from the focus group should be integrated and synthesized into the next stage of the iterative design. This process should be repeated until a level acceptable to the user is achieved.

Figure 3.1 - Iterative Design Process

3.3 Iterative Circuits

Iterative Circuits may be classified as,

  • Combinational Circuits
  • Sequential Circuits.

A generalized combinatorial circuit built from gates has m inputs and n outputs. Such a circuit can be built as n different combinatorial circuits, each with exactly one output. If the entire n-output circuit is constructed at once, however, some important sharing of intermediate signals may take place. This sharing can drastically decrease the number of gates needed to construct the circuit.

In some cases, we might want to minimize the number of transistors. In others, we might want low delay, or we may need to reduce the power consumption. Normally a mixture of such criteria must be applied.

In combinational logic design, the technique of expressing oral statements for a logic circuit in the form of a truth table can be inadequate. For a simple network, a terminal description will often suffice, but for more complex circuits, and in particular when relay logic is to be employed, the truth table method can lead to laborious and inelegant solutions. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with a large number of parallel inputs: circuit specification is simplified, and problems with many variables are reduced to a more tractable size. The method is directly applicable to the design of VLSI circuits. It should be pointed out, though, that the speed of the circuit is reduced because of the time required for the signals to propagate along the network, and the number of interconnections is considerably increased; in general, iterative design does not necessarily result in a more minimal circuit. Because the approach produces a modular structure, circuits containing any number of input variables can easily be constructed by simply extending the network with more cells. Suppose, for example, that a logic system could be decomposed into a number of identical subsystems; then, if we could produce a design for the subsystem, or cell, the complete system could be synthesized by cascading these cells in series. The problem is thereby reduced to specifying and designing the cell, rather than the complete system.

In general, we define a synchronous sequential circuit, or just sequential circuit, as a circuit with m inputs, n outputs, and a distinguished clock input. The circuit is described with the help of a state table; latches and flip-flops are the building blocks of sequential circuits.

The definition of a sequential circuit is simplified here in that the number of different states of the circuit is completely determined by the number of outputs. Hence, building on combinational circuits, we can use a general method that in the worst case may waste a large number of transistors. For a sequential circuit with m inputs and n outputs, this method uses n D flip-flops (one for each output) and a combinatorial circuit with m + n inputs and n outputs.

3.4 Iterative Circuits-Example

An iterative circuit is a special type of combinational circuit, with the structure shown in the diagram: it contains n identical modules, each of which has both primary inputs and outputs and cascading inputs and outputs. The leftmost cascading inputs are called boundary inputs and are connected to fixed logic values in most iterative circuits. The rightmost cascading outputs are called boundary outputs and usually provide important information.

Quite often in combinational logic design, the technique of expressing oral statements for a logic circuit in the form of a truth table is inadequate. Iterative circuits are well suited to problems that can be solved by a simple iterative algorithm:

  • Set C0 to its initial value and set i to 0.
  • Use Ci and PIi to determine the values of POi and Ci+1.
  • Increment i.
  • If i < n, go to step 2.

In an iterative circuit, the loop of steps 2-4 is "unwound" by providing a separate combinational circuit that performs step 2 for each value of i, as in the sketch below.
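
To make the unwinding concrete, here is a minimal VHDL sketch of a generic one-dimensional cell cascade. The entity and signal names are hypothetical, and the cell function (an XOR/AND pair) is only a placeholder for whatever computation step 2 performs in a particular design:

library ieee;
use ieee.std_logic_1164.all;

entity iterative_array is
  generic (N : integer := 8);
  port (
    pi : in  std_logic_vector(N - 1 downto 0); -- primary inputs, one per cell
    c0 : in  std_logic;                        -- boundary cascading input
    po : out std_logic_vector(N - 1 downto 0); -- primary outputs
    cn : out std_logic                         -- boundary cascading output
  );
end entity iterative_array;

architecture unwound of iterative_array is
  signal c : std_logic_vector(N downto 0); -- c(i) plays the role of Ci
begin
  c(0) <= c0; -- step 1: set C0 to its initial value

  -- Steps 2-4 unwound: one copy of the step-2 logic per value of i.
  cells : for i in 0 to N - 1 generate
    po(i)    <= pi(i) xor c(i); -- placeholder cell output PO(i)
    c(i + 1) <= pi(i) and c(i); -- placeholder next cascading value C(i+1)
  end generate cells;

  cn <= c(N);
end architecture unwound;

Each iteration of the generate statement instantiates one copy of the step-2 logic, so the chain c(0)...c(N) plays the role of C0...Cn in the algorithm.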

3.5 Improving the testability of Iterative Circuits

As stated by A. Rubio et al. (1989, pg. 240-245), the increase in the complexity of integrated circuits, and the inherent increase in the cost of the tests carried out on them, are making it necessary to look for ways of improving the testability of iterative circuits. Integrated circuits structured as iterations of identical cells have, because of their regularity, a set of advantages that make them attractive for many applications. Among these advantages are simplicity of design (owing to the structural repetition of the basic cell), manufacturing, test, fault tolerance, and their suitability for implementing concurrent algorithmic structures. The testability of iterative circuits is also studied in this work: the figure below illustrates the typical organization of an N-cell one-dimensional iterative circuit (all the signals go from left to right); however, the results can be extended to a stable class of bilateral circuits.

The N cells have identical functionality. Every cell i has an external input yi and an internal input xi coming from the previous cell (i-1). Every cell generates a circuit output signal ^yi and an internal output ^xi that goes to the following cell (i+1). The following assumptions are made about these signals:

  1. All the yi vectors are independent.
  2. Only the x1, y1, y2, ..., yn signals are directly controllable for test procedures.
  3. Only the ^y1, ^y2, ..., ^yn signals are directly observable.
  4. The xi and ^xi signals are called the states (input and output states, respectively) of the i-th cell and are not directly controllable (except x1) nor observable (except ^xn).

Kautz gives conditions on the basic cell functionality that guarantee exhaustive testing of each of the cells of the array. These conditions assure the controllability and observability of the states. In circuits that satisfy these conditions, the length of the test increases linearly with the number of cells in the array, with a resulting length that is shorter than the corresponding length for other implementation structures.

A fundamental contribution to the easy testability of iterative circuits was made by Friedman. In his work the concept of C-testability is introduced: an iterative circuit is C-testable if a cell-level exhaustive test with a constant length can be generated, meaning that the length is independent of the number of cells composing the array (N). The results have been generalised in several ways. In all these works it is assumed that there is only one faulty cell in the array. Cell-level stuck-at (single or multiple) and truth-table fault models are considered. The set T of test vectors of the basic cell is formed by a sequence (whatever the order may be) of input vectors to the cell.

Kautz proposed the cell fault model (CFM), which has been adopted by most researchers in testing ILAs. Under CFM it is assumed that only one cell can be faulty at a time, and the fault may affect the output functions of the faulty cell as long as the cell remains combinational. To test an ILA under CFM, every cell should be supplied with all its input combinations. In addition, the output of the faulty cell should be propagated to some primary output of the ILA. Friedman introduced C-testability: an ILA is C-testable if it can be tested with a number of test vectors that is independent of the size of the ILA.

The target of research in ILA testing has been the derivation of necessary and sufficient conditions for many types of ILAs (one-dimensional with or without vertical outputs, two-dimensional, unilateral, bilateral) to be C-testable. The derivations of these conditions are based on the study of the flow table of the basic cells of the array. In the case of an ILA which is not C-testable, modifications to its flow table (and therefore to its internal structure) and/or modifications to the overall structure of the array have been proposed to make it C-testable. Otherwise, a test set with length usually proportional to the ILA size is derived (linear testability). In most cases, modifications to the internal structure of the cells and/or the overall structure of the ILA increase the area occupied by the ILA and also affect its performance.

ILA testing considering sequential faults has also been studied. Sequential fault detection in ripple-carry adders has been considered, with the target of constructing a shortest-length test sequence, and sufficient conditions for testing one-dimensional ILAs for sequential faults have been given. It has been shown that whenever the function of the basic cell of an ILA is bijective, the ILA can be tested with a constant number of tests for sequential faults, and a procedure to construct such a test set has also been introduced.

The following considerations form the basis of this work. Many computer-aided design tools are based on standard cell libraries. While testing an ILA, the best that can be done is to test each of its cells exhaustively with respect to the CFM. The derivation of such a test set (either a C-test set or a linear test set, depending on whether the conditions for C-testability for the particular type of ILA are satisfied) has been extensively studied. However, the functional verification of each cell under CFM is not adequate for CMOS ILAs, since it does not detect sequential faults such as stuck-open and bridging faults. The existence of such faults, which transform the single faulty cell into a sequential one, creates the need to apply sequences of cell input patterns to each cell in order to detect them. In the case of standard cell libraries, where the physical design of a cell is not given, the test of a cell for realistic faults should be derived by the library designer and provided along with every cell.

The above work is based on the fact that a complete test set for either a one-dimensional or a two-dimensional ILA, with respect to the CFM, can be constructed using any of the procedures proposed in the open literature, and it gives a method to transform this test set into a complete test set for more realistic fault models (for example, stuck-open faults in CMOS ILAs). Conditions are given so that useful properties, such as C-testability or linear testability, are preserved. The authors also consider optimization of the length of the resulting test set. The results of this work can be used with various types of ILAs and with cells implementing various functions (not only bijective ones). Extensions can be made for the general case of n-pattern testing of each cell.

CHAPTER 4

4. DESIGN OF ITERATIVE BUILDING BLOCKS

4.1 Binary Arithmetic-Circuits

Binary arithmetic is a combinatorial problem. It may seem straightforward to obtain arithmetic circuits using the methods we have already seen for designing combinatorial circuits. But a problem persists: the general method for creating these kinds of circuits would use too many gates. We must look for different routes.

4.2 Adder Circuits

In electronics, an adder or summer performs addition, one of the most commonly performed arithmetic operations in digital systems. An adder combines two arithmetic operands using the rules of addition. In present-day computers, adders reside in the arithmetic logic unit (ALU), where other operations are also performed. Adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, but the basic adders operate on binary numbers. It is trivial to turn an adder into an adder-subtractor, in which two's complement or ones' complement is used to represent negative numbers.

4.3 Types of adders

Adder circuits can be classified as,

  • A Half Adder
  • A Full Adder

A half adder can add two bits. It has two inputs, generally labelled A and B, and two outputs, the sum S and carry C. C is the AND of A and B, and S is the XOR of A and B. Fundamentally, the half adder's outputs are the sum of two one-bit numbers, with C being the more significant of the two output bits.

Figure 4.1 - Half Adder
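
Following these equations, a half adder can be captured in a few lines of VHDL; this is a minimal sketch with illustrative entity and port names:

library ieee;
use ieee.std_logic_1164.all;

entity half_adder is
  port (
    a, b : in  std_logic;
    s    : out std_logic; -- sum
    c    : out std_logic  -- carry
  );
end entity half_adder;

architecture rtl of half_adder is
begin
  s <= a xor b; -- S is the XOR of A and B
  c <= a and b; -- C is the AND of A and B
end architecture rtl;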

A full adder is a combinatorial circuit (or actually two combinatorial circuits) whose function is to add two binary digits plus a carry from the previous position, giving a two-bit result: the normal sum output and the carry to the next position. It thus has three inputs and two outputs. In this particular instance we have used the labels x and y for the inputs, c-in for the carry-in, c-out for the carry-out, and s for the sum output.

A full adder can be trivially built using our ordinary design methods for combinatorial circuits. Here is the resulting circuit diagram:
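
The same function can also be written directly in VHDL. In this minimal sketch (illustrative names), the sum is the three-input XOR of the operands and the incoming carry, and the carry-out is a majority function of the three inputs:

library ieee;
use ieee.std_logic_1164.all;

entity full_adder is
  port (
    x, y  : in  std_logic;
    c_in  : in  std_logic;
    s     : out std_logic;
    c_out : out std_logic
  );
end entity full_adder;

architecture rtl of full_adder is
begin
  s     <= x xor y xor c_in;
  -- A carry is generated when at least two of the three inputs are 1.
  c_out <= (x and y) or (x and c_in) or (y and c_in);
end architecture rtl;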

The next step is to combine a series of such full adders into a circuit that can add (say) two 8-bit positive numbers. We proceed by linking the carry-out of one full adder to the carry-in of the full adder immediately to its left. The rightmost full adder takes a 0 on its carry-in.

Figure 4.3 - Series of Full Adders

For the i-th binary position, we have used subscript i.

The depth of this circuit is quite large: the output and carry of position 7 are determined in part by the inputs of position 0, so the signal must traverse all the full adders, with a corresponding delay.

Intermediate solutions exist between the two extremes (an iterative combinational circuit with one-bit adders as elements, and a single combinatorial circuit for the entire 32-bit adder). For example, an 8-bit adder can be built as a normal two-level combinatorial circuit, and a 32-bit adder from four such 8-bit adders. In principle, an 8-bit adder could be built from 65,536 (2^16) AND gates and a giant 65,536-input OR gate.

4.4 Coding

4.4.1 Program for 16 bit Adder
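
Given the ripple-carry structure described above, a 16-bit adder can be sketched in VHDL as follows (entity and signal names are illustrative), with one identical full-adder cell generated per bit position:

library ieee;
use ieee.std_logic_1164.all;

entity adder16 is
  port (
    a, b  : in  std_logic_vector(15 downto 0);
    c_in  : in  std_logic;
    sum   : out std_logic_vector(15 downto 0);
    c_out : out std_logic
  );
end entity adder16;

architecture iterative of adder16 is
  -- carry(i) is the carry into bit position i; carry(16) is the final carry-out.
  signal carry : std_logic_vector(16 downto 0);
begin
  carry(0) <= c_in;

  -- One identical full-adder cell per bit position.
  ripple : for i in 0 to 15 generate
    sum(i)       <= a(i) xor b(i) xor carry(i);
    carry(i + 1) <= (a(i) and b(i)) or (a(i) and carry(i)) or (b(i) and carry(i));
  end generate ripple;

  c_out <= carry(16);
end architecture iterative;

The for ... generate statement is the textual counterpart of cascading identical cells: each iteration produces one full adder, and the carry vector threads them together.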

4.5 Binary Subtraction

Binary subtraction can be done by noticing that, in order to compute the expression x - y, we can instead compute x + (-y). From the section above on binary arithmetic, we can negate a number by inverting all its bits and adding 1; thus, we can compute the expression as x + inv(y) + 1. To add the 1, we can use the otherwise unused carry-in signal of position 0: putting a 1 on this input adds one to the result. The complete circuit with addition and subtraction looks like this:
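
One common way to wire this up, sketched here in VHDL with illustrative names, is to let a single subtract control line both XOR every bit of B (inverting it) and drive the carry-in of position 0 (adding the 1 that completes the two's complement):

library ieee;
use ieee.std_logic_1164.all;

entity addsub16 is
  port (
    a, b   : in  std_logic_vector(15 downto 0);
    sub    : in  std_logic; -- '0' = add, '1' = subtract
    result : out std_logic_vector(15 downto 0);
    c_out  : out std_logic
  );
end entity addsub16;

architecture rtl of addsub16 is
  signal b_mux : std_logic_vector(15 downto 0);
  signal carry : std_logic_vector(16 downto 0);
begin
  carry(0) <= sub; -- the "+1" of the two's complement when subtracting

  stage : for i in 0 to 15 generate
    b_mux(i)     <= b(i) xor sub; -- invert B bitwise when subtracting
    result(i)    <= a(i) xor b_mux(i) xor carry(i);
    carry(i + 1) <= (a(i) and b_mux(i)) or (a(i) and carry(i)) or (b_mux(i) and carry(i));
  end generate stage;

  c_out <= carry(16);
end architecture rtl;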

4.6 Binary multiplication and Division

Binary multiplication is even harder than binary addition. In this case we do not have a good iterative combinatorial circuit, so we have to make use of heavier artillery. The key is to use a sequential circuit which computes one addition for every clock pulse.

4.7 Binary Comparator

The purpose of a two-bit binary comparator is quite simple: it determines whether one 2-bit input number is larger than, equal to, or less than the other. It has a comparison unit that receives a first bit and a second bit and compares them, and an enable unit that outputs the comparison result of the comparison unit as the output of the 2-bit binary comparator according to an enable signal. The first step in the creation of the comparator circuit is the generation of a truth table that lists the input variables, their possible values, and the resulting outputs for each of those values. The truth table used for this experiment is shown below in Table 4 - Binary Comparator.
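
The comparison itself can be described very compactly in VHDL. This minimal sketch (illustrative names, and omitting the enable unit mentioned above) compares the two 2-bit inputs as unsigned numbers:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity comparator2 is
  port (
    a, b : in  std_logic_vector(1 downto 0);
    gt   : out std_logic; -- '1' when a > b
    eq   : out std_logic; -- '1' when a = b
    lt   : out std_logic  -- '1' when a < b
  );
end entity comparator2;

architecture rtl of comparator2 is
begin
  gt <= '1' when unsigned(a) > unsigned(b) else '0';
  eq <= '1' when a = b                     else '0';
  lt <= '1' when unsigned(a) < unsigned(b) else '0';
end architecture rtl;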

4.8 Parity Generation or Even-Odd Detection

Parity bits are extra signals added to a data word to enable error checking. Parity can be even or odd. An even-parity generator produces a logic 1 at its output if the data word has an odd number of 1's, and a logic 0 if the data word has an even number of 1's. Concatenating this parity bit to the data word forms a word which always has an even number of ones, i.e., has even parity. Parity is used in memory systems and modem lines. If a data word is sent with even parity and received with odd parity, the data is said to be corrupted, and in this case the data is resent. As the name implies, an odd-parity generator operates similarly but produces odd parity. The parity generator outputs for various 8-bit data words can be seen in the table shown below.

4.9 Iterative Approach

A circuit which could be used to generate even parity for 4-bit data is shown on the left. If we follow this approach for 16- or 32-bit data buses, the resulting circuit will be hefty, with complex interconnect. A linear architecture, suitable for VLSI implementation and use, is shown below.

In this approach, a one-bit cell is designed that can be replicated to form an n-bit circuit. Each cell in the circuit below is identical, with one bit of the data word fed in from below and a 1,0 pair fed in from the left-hand side. A 1,0 pair passes from the right-hand edge of each cell into the left-hand side of the next cell. If the data input is low, the 1,0 pair passes through the cell unchanged; if the data input is high, the pair is swapped (i.e., becomes 0,1).

Thus X=0 and Y=1 when all data inputs are low.

If one data input is high then X=1 and Y = 0

If two data inputs are high then X=0, Y=1 and so on

i.e., for an odd number of 1s: X=1, for an even number of 1s: X=0.

4.9.1 Program
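
The cell-swapping scheme of section 4.9 translates almost line for line into VHDL. In this minimal sketch (illustrative names), each cell passes the (X, Y) pair through unchanged when its data bit is 0 and swaps the pair when its data bit is 1, so the final X output is 1 exactly when the data word contains an odd number of 1s:

library ieee;
use ieee.std_logic_1164.all;

entity parity_chain is
  generic (N : integer := 8);
  port (
    d : in  std_logic_vector(N - 1 downto 0);
    x : out std_logic; -- '1' for an odd number of 1s in d
    y : out std_logic  -- complement of x
  );
end entity parity_chain;

architecture iterative of parity_chain is
  signal xc, yc : std_logic_vector(N downto 0);
begin
  -- Boundary pair fed into the leftmost cell: X = 0, Y = 1,
  -- so that an all-low data word gives X = 0 (even parity).
  xc(0) <= '0';
  yc(0) <= '1';

  -- One identical cell per data bit: pass through or swap the pair.
  cell : for i in 0 to N - 1 generate
    xc(i + 1) <= xc(i) when d(i) = '0' else yc(i);
    yc(i + 1) <= yc(i) when d(i) = '0' else xc(i);
  end generate cell;

  x <= xc(N);
  y <= yc(N);
end architecture iterative;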

CHAPTER 5

5. NEED FOR TESTING

As the density of VLSI products increases, their testing becomes more difficult and costly. Generating test patterns has shifted from a deterministic approach, in which a test pattern is generated automatically based on a fault model and an algorithm, to a random selection of test signals. While in real estate the refrain is "Location! Location! Location!", the comparable advice in IC design should be "Testing! Testing! Testing!". Whether deterministic or random generation of test patterns is used, the patterns applied to VLSI chips can no longer cover all possible defects. Consider the manufacturing process for VLSI chips shown in Fig. 1. Two kinds of cost are incurred by the test process: the cost of testing and the cost of accepting an imperfect chip. The first is a function of the time spent on testing or, equivalently, the number of test patterns applied to the chip; this cost adds to the cost of the chips. The second reflects the fact that, when a defective chip is passed as good, its failure may become very costly after the chip is embedded in its application. An optimal testing strategy should trade off both costs and determine an adequate test length (in terms of testing period or number of test patterns).

Apart from cost, two factors need to be considered when determining test lengths. The first is the production yield, which is the probability that a product is functionally correct at the end of the manufacturing process. If the yield is high, we may not need to test extensively, since most chips tested will be "good," and vice versa. The other factor is the coverage function of the test process, defined as the probability of detecting a defective chip given that it has been tested for a particular duration or a given number of test patterns. If we assume that all possible defects can be detected by the test process, the coverage function can be regarded as a probability distribution function of the detection time, given that the chip under test is bad. Thus, by investigating the density function or probability mass function, we can calculate the marginal gain in detection if the test continues. In general, the coverage function of a test process can be obtained through theoretical analysis or through experiments on simulated fault models. With a given production yield, one can then determine the fault coverage required to attain a specified defect level, which is defined as the probability of having a "bad" chip among all chips passed by a test process. While most problems in VLSI design have been reduced to algorithms in readily available software, the responsibility for the various levels of testing and testing methodology can be a significant burden on the designer.
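
A widely quoted relation connecting these quantities, due to Williams and Brown and offered here only as a reference point (the dissertation itself does not derive it), is

DL = 1 - Y^(1 - T)

where Y is the production yield, T is the fault coverage, and DL is the resulting defect level. For example, with a yield of 50% and a fault coverage of 99%, DL = 1 - 0.5^0.01 ≈ 0.7%, i.e., roughly seven defective parts per thousand shipped.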

The yield of a particular IC is the number of good die divided by the total number of die per wafer. Due to the complexity of the manufacturing process, not all die on a wafer operate correctly. Small imperfections in the starting material, in processing steps, or in photomasking may result in bridged connections or missing features. It is the aim of a test procedure to determine which die are good and may be used in end systems.

Testing a die can occur:
  • At the wafer level
  • At the packaged level
  • At the board level
  • At the system level
  • In the field

By detecting a malfunctioning chip at an earlier level, the manufacturing cost may be kept low. For instance, the approximate cost to a company of detecting a fault at each of the above levels is:

  • Wafer $0.01-$0.10
  • Packaged-chip $0.10-$1
  • Board $1-$10
  • System $10-$100
  • Field $100-$1000

Obviously, if faults can be detected at the wafer level, the cost of manufacturing is kept lowest. In some circumstances, the cost of developing adequate tests at the wafer level, mixed-signal requirements, or speed considerations may require that further testing be done at the packaged-chip level or the board level. A component vendor can test only at the wafer or packaged-chip level. Special systems, such as satellite-borne electronics, might be tested exhaustively at the system level.

Tests fall into two main categories. The first set of tests verifies that the chip performs its intended function; that is, that it performs a digital filtering function, acts as a microprocessor, or communicates using a particular protocol. In other words, these tests assert that all the gates in the chip, acting in concert, achieve a desired function. They are usually used early in the design cycle to verify the functionality of the circuit and will be called functionality tests. The second set of tests verifies that every gate and register in the chip functions correctly. These tests are used after the chip is manufactured to verify that the silicon is intact; they will be called manufacturing tests. In many cases the two sets of tests may be one and the same, although the natural flow of design usually has a designer considering function before manufacturing concerns.

5.1 Manufacturing Test Principles

A critical factor in all LSI and VLSI design is the need to incorporate methods of testing circuits. This task should proceed concurrently with any architectural considerations and not be left until fabricated parts are available.

Figure 5.1(a) shows a combinational circuit with n inputs. To test this circuit exhaustively, a sequence of 2^n inputs must be applied and observed to fully exercise the circuit. The combinational circuit is converted to a sequential circuit with the addition of m storage registers, as shown in Figure 5.1(b); the state of the circuit is then determined by the inputs and the previous state. A minimum of 2^(n+m) test vectors must be applied to test the circuit exhaustively. For example, a circuit with n = 25 and m = 50 would need 2^75 ≈ 3.8 × 10^22 vectors; even at a million vectors per second, the test would take over a billion years. Clearly, this is an important area of design that has to be well understood.

5.2 Optimal Testing

With the increased complexity of VLSI circuits, testing has become more costly and time-consuming. Designing a testing strategy involves trading off the cost of testing against the penalty of passing a bad chip as good. First, the optimal testing period is derived, assuming that the production yield is known. Since the production yield might in fact be unknown, an optimal sequential testing strategy, which estimates the yield from ongoing testing results and in turn determines the optimal testing period, is developed next. Lastly, an optimal sequential testing strategy for batches, in which N chips are tested concurrently, is presented. These results are useful whether the yield stays stable or changes from one manufacturing run to another.

CHAPTER 6

6. DESIGN AND IMPLEMENTATION OF CIRCUITS ON FPGA

6.1 VLSI Design

VLSI design and implementation styles can be classified, based on the prototype, as:

  • ASIC
  • FPGA

6.2 Implementation Styles

Although implementations of electronic systems can span a number of levels of physical hierarchy, in reality implementations in semiconductor components and boards dominate. Most designs lead to PCBs populated with standard components. Progress in component and board technology has spawned particular implementation styles.

The design and implementation process is described by the flow chart below. This flow chart clearly separates a technology-independent design phase from a technology-dependent implementation phase. It also suggests an oversimplified view of the world, in which a design is implemented in a single FPGA, gate array, cell-based ASIC, and so forth.

The simplified view of the process tends to be presented dogmatically, since it provides protection against designs being taken hostage by implementation style, and hence by supplier. But recognizably great designs exhibit a nebulous elegance due to some technology dependence: engineers have usually exploited some technology-specific feature to meet or exceed required measures in an economical way. This implies information flow across the interface from implementation to design, whether by formal or informal means. It may be in the form of design rules, recommended practice, and so on, but is more likely to consist of knowledge from design experience and practice.

6.3 ASIC- Application Specific Integrated Circuits

An application-specific integrated circuit (ASIC) is an integrated circuit (IC) tailored for a particular use rather than for general-purpose use. A good example is a chip designed to run a cell phone. Intermediate between ASICs and industry-standard integrated circuits, such as the 7400 or 4000 series, are application-specific standard products (ASSPs). Over the years, the maximum functionality (complexity) possible in an ASIC has increased from 5,000 gates to over 100 million. Twenty-first-century ASICs include ROM, RAM, EEPROM, flash, entire 32-bit processors, and other large building blocks. This kind of ASIC is often called an SOC (system-on-a-chip). Designers describe the functionality of ASICs using a hardware description language (HDL), such as Verilog or VHDL.

6.4 Reviews on FPGA

In this chapter, an investigation is made of design and implementation styles for FPGAs within VLSI design, with a quick look at ASICs (application-specific integrated circuits). The discussion is mainly targeted at FPGAs, with a relevant example in section 6.5.2; the benefits of FPGA technology are discussed in section 6.6, and the development software and testing are covered in sections 6.8 and 6.9.

The sections below investigate the historical background of the FPGA, its architecture, and its distinctive new possibilities as a concept within VLSI technology.

6.4.1 Historical background

The origins of much of today's FPGA research lie in work from the late 1960s and early 1970s on cellular arrays. This work was mainly concerned with improving the fault tolerance of logic structures, thus allowing larger silicon areas or whole wafers to be used to implement logic. The method proposed was to cover the wafer with a regular array of restructurable cells capable of implementing general logic functions. Both fuse- and flip-flop-programmed structures were proposed and investigated. Important early work in this area was done by Manning [Mann77], Minnick [Minn64], Wahlstrom [Wahl67], and Shoup [Shoup70]. A good survey of the early research appears in [Minn67].

As background research on FPGAs, the first proposal reviewed is by John V. Oldfield and Richard C. Dorf (1995, pg. 95). They discuss the field-programmable gate array (FPGA) as a relatively new type of component for the construction of electronic systems, particularly those using digital, or more correctly logical, circuit principles. The FPGA consists of an array of functional blocks together with an interconnection network; as the name itself implies, its configuration can be determined in the field, that is, at the point of application. The specific function of each block and the connections between blocks are prescribed by the user. The FPGA market offers a wide range of architectures and alternative ways of controlling configurations. The FPGA takes its place in the continuing evolution of very-large-scale integrated (VLSI) circuit technology toward denser and faster circuits. Although present-day FPGA components have only a few thousand or tens of thousands of gates, future ones will have hundreds of thousands and eventually millions of gates.

We may expect to see the development of pipelined and parallel processing in which the system configuration changes dynamically. An image-processing system could divide computations into parallel tasks for a large number of processors realized within the same FPGA chip. While such schemes may seem futuristic, they suggest that the FPGA should not be thought of as just a VLSI component that replaces others, but as a concept with distinctive new possibilities. In present-day realizations, the FPGA has significant advantages for the development of prototype systems and early introduction to the market. The benefits are similar to those associated with the introduction of the microprocessor in the late 1970s, such as programmability and adaptability, but with additional advantages in speed, compactness, and design protection. For educational purposes, designing with FPGAs requires computer assistance at almost every stage, including detailed specification, simulation, placement, and routing, and calls for an overall systematic design methodology. This is an important aspect of the education of future engineers, who will need to improve both the performance and the quality of the systems they produce. At the same time, the increase in designer productivity makes it possible to consider alternative implementations at a higher level than previously, and should not cramp the creativity of a system designer.

The second proposal, System-on-Chip Test Architectures by Charles E. Stroud and Nur A. Touba (2008, pg. 549-550), notes that since the mid-1980s, field-programmable gate arrays (FPGAs) have become a dominant implementation medium for digital systems. Over that time, FPGAs have developed into complex devices whose capacity has grown from a few thousand to tens of millions of logic gates. The largest FPGAs currently available exceed a billion transistors. FPGAs can be programmed and configured to perform any digital logic function, which makes them an attractive choice not only for fast time-to-market systems but also for rapid prototyping of digital systems.

The final proposal, FPGA Prototyping by VHDL Examples: Xilinx Spartan-3 Version by Pong P. Chu (2008, pg. 11), states that a field-programmable gate array (FPGA) is a logic device that contains a two-dimensional array of general logic cells and programmable switches. A logic cell can be programmed to perform a simple function, and a programmable switch can be customized to provide interconnections among the logic cells. A custom design can be implemented by specifying the function of each logic cell and selectively setting the connection of each programmable switch. Once the design and synthesis are completed, we can use a simple adaptor cable to download the desired logic-cell and switch configuration to the FPGA device and obtain the custom circuit. Since this process can be done "in the field" rather than in a fabrication facility (fab), the device is known as field programmable.

6.5 Architecture

According to R.C. Seals and G.F. Whapshott (1997), the variety of FPGA architectures is large, as each manufacturer develops concepts for particular niche markets, and this, coupled with the non-deterministic nature of the timing for place and route, makes FPGAs more difficult to incorporate into designs. Because of the considerable differences in internal architecture, it is difficult to make appropriate comparisons, so a number of different architectures must be considered and their specific properties outlined. The development tools available range from low-cost PC-based packages to costly high-end workstation packages, and there is an encouraging move towards standardisation through two avenues: standard formats for intermediate-stage files, and standardised design languages such as VHDL. Because FPGAs are non-deterministic in the propagation delays of synthesized designs, extensive use is made of simulators to verify that the design, and its place-and-route result, implement the specified design. Simulation can be considered to take place at two levels: functional simulation, which confirms that the design is correct, and then gate-level simulation, which confirms that the design after place and route still implements the design correctly. The sequence of design and simulation is as follows. The initial design is followed by functional simulation and then place and route. A further simulation at the gate level follows and, if all is correct, the device is programmed. Any faults identified are corrected before proceeding to the next stage. The use of FPGAs in digital system design is very much a 'hands-on' process, and the quickest approach is to tackle a non-trivial design.

FPGAs generally consist of a two-dimensional array of programmable logic blocks (PLBs) interconnected by a programmable routing network, with programmable input/output (I/O) cells at the periphery of the device, as illustrated in the diagram below.

In some FPGAs, two I/O cells can be combined to support differential-pair I/O standards. A trend in FPGAs is to include cores for specialized functions such as single-port and dual-port RAMs, first-in first-out (FIFO) memories, multipliers, and DSPs. Within any given FPGA, all memory cores are usually of the same size in terms of the total number of memory bits, but each memory core in the array is individually programmable.

6.5.1 Example

RAM cores can be individually configured to select the mode of operation (single-port, dual-port, FIFO, etc.), the architecture in terms of number of address locations versus number of data bits per address location, the active edge of clocks, and the active level of control signals (clock enables, set/reset, etc.), as well as optional registered input and output signals. Another trend is the incorporation of one or more embedded processor cores, which range in size from 8-bit to 64-bit architectures. For configuration, the first FPGA in a daisy chain operates in master mode to supply the configuration clock (CCLK) and read the configuration data from the PROM; the data are passed on to the next FPGA in the chain by connecting the configuration data output (Dout) of one device to the configuration data input (Din) of the next. Many FPGAs provide serial and parallel configuration interfaces for both master and slave configuration modes. The parallel interface is used to speed up the configuration process by reducing the download time via parallel data transfer. The most common parallel interfaces use 8-bit or 32-bit configuration data buses.

The parallel interface also facilitates configuration of FPGAs operating in slave mode by a processor, which can then perform reconfiguration or dynamic partial reconfiguration of the FPGAs once the system is operational. To avoid dedicating a large number of pins to a parallel configuration interface, the parallel data bus pins can optionally be reused for normal system function signals once the device is programmed. The IEEE 1149.1 standard boundary-scan interface, in conjunction with associated instructions to access the configuration memory, can also be used for slave serial configuration in many FPGAs.
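
As an illustration of the configurable RAM cores described above, the following is a minimal VHDL sketch of a simple dual-port RAM written behaviourally; most FPGA synthesis tools map this pattern onto an embedded block-RAM core with the requested geometry (here 256 addresses of 16 data bits, with a registered read output). The entity name and widths are chosen for illustration only.

    -- A minimal behavioural dual-port RAM (one write port, one read
    -- port). Synthesis tools typically infer a block-RAM core.
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity dp_ram is
      port (clk          : in  std_logic;
            we           : in  std_logic;
            waddr, raddr : in  std_logic_vector(7 downto 0);
            din          : in  std_logic_vector(15 downto 0);
            dout         : out std_logic_vector(15 downto 0));
    end entity dp_ram;

    architecture rtl of dp_ram is
      type ram_t is array (0 to 255) of std_logic_vector(15 downto 0);
      signal ram : ram_t;
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if we = '1' then
            ram(to_integer(unsigned(waddr))) <= din;  -- write port
          end if;
          dout <= ram(to_integer(unsigned(raddr)));   -- registered read
        end if;
      end process;
    end architecture rtl;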

6.6 Benefits of FPGA Technology

  1. Performance
  2. Time to Market
  3. Cost
  4. Reliability
  5. Long-Term Maintenance

6.6.1 Performance - By taking advantage of hardware parallelism, FPGAs exceed the computing power of digital signal processors (DSPs), breaking the paradigm of sequential execution and accomplishing more per clock cycle. Benchmarks from BDTI, a noted analyst and benchmarking firm, show how FPGAs can deliver many times the processing power per dollar of a DSP solution in some applications. Managing inputs and outputs (I/O) at the hardware level also provides faster response times and specialized functionality to closely match application requirements.
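
As a sketch of what accomplishing more per clock cycle means in practice, the following VHDL fragment (names and widths invented for illustration) computes a four-term dot product in a single clock cycle: all four multiplications and the adder tree are evaluated by parallel hardware, where a sequential processor would need several instructions. On most FPGAs the multiplications would be mapped onto the dedicated multiplier or DSP cores mentioned earlier.

    -- A minimal sketch of hardware parallelism: four products and an
    -- adder tree evaluated in one clock cycle.
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity dot4 is
      port (clk            : in  std_logic;
            a0, a1, a2, a3 : in  signed(15 downto 0);
            b0, b1, b2, b3 : in  signed(15 downto 0);
            acc            : out signed(33 downto 0));
    end entity dot4;

    architecture rtl of dot4 is
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          -- all four 32-bit products are computed by parallel
          -- multipliers; resize() extends each before summing
          acc <= resize(a0 * b0, 34) + resize(a1 * b1, 34) +
                 resize(a2 * b2, 34) + resize(a3 * b3, 34);
        end if;
      end process;
    end architecture rtl;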

6.6.2 Time to market - FPGA technology offers flexibility and rapid prototyping capabilities in the face of increased time-to-market concerns. Instead of following the long fabrication process of conventional ASIC design, an idea can be tested and verified in hardware, and incremental changes can then be implemented and iterated on an FPGA design within hours instead of weeks. Commercial off-the-shelf (COTS) hardware is also available, with different types of I/O already connected to a user-programmable FPGA chip. The growing availability of high-level software tools decreases the learning curve with layers of abstraction, and these tools often include valuable prebuilt functions for advanced signal and control processing.

6.6.3 Cost - The nonrecurring engineering (NRE) expense of custom ASIC design far exceeds that of FPGA-based hardware solutions. The large initial investment in ASICs is easy to justify for OEMs shipping thousands of chips per year, but many end users need custom hardware functionality for only the tens to hundreds of systems in development. Because of the very nature of programmable silicon, there is no cost of fabrication and no long lead time for assembly. As system requirements change over time, the cost of respinning an ASIC is very high compared with making incremental changes to an FPGA design.

6.6.4 Reliability - While software tools provide the programming environment, FPGA circuitry is truly a “hard” implementation of program execution. Processor-based systems involve several layers of abstraction to share resources among multiple processes and to help schedule tasks: the operating system manages processor and memory bandwidth, and the driver layer controls hardware resources. For any given processor core, only one instruction can execute at a time, so processor-based systems are continually at risk of time-critical tasks pre-empting one another. FPGAs, which do not use operating systems, reduce these reliability concerns with deterministic hardware dedicated to every task and true parallel execution.

6.6.5 Long-term maintenance - As mentioned earlier, FPGA chips are field-upgradable and do not require the time and expense involved in an ASIC redesign. Digital communication protocols, for example, change over time, and ASIC-based interfaces may pose maintenance and forward-compatibility challenges. Because FPGA chips are reconfigurable, they are able to keep up with future modifications that may become necessary. As the system matures, functional enhancements can be made without redesigning the hardware or modifying the board layout.

6.7 Choosing an FPGA

When examining the specifications of an FPGA chip, note that they are often divided into configurable logic (slices or logic cells), fixed-function logic (such as multipliers), and memory resources (such as embedded block RAM). These resources are typically the most important factors when comparing and selecting FPGAs for a specific application.

Table 1 shows the resource specifications used to compare FPGA chips within various Xilinx families. Note that the number of individual components inside an FPGA cannot be used to compare its size with that of ASIC technology; this is one reason Xilinx does not specify a number of equivalent system gates for the newer Virtex-5 family.

6.8 Development Software

As FPGAs are much more complex than PLDs, the software used during the development process is correspondingly more complex and has to perform additional functions. The designs are larger and more intricate, and are usually described in abstract terms using a hardware description language or schematic diagrams. In comparison, many PLD designs can be described adequately using Boolean equations.
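
The contrast can be illustrated with a small VHDL sketch (entity name eq2 invented for illustration): the same two-bit equality comparator written first as a PLD-style Boolean equation and then as a more abstract behavioural description that the development software must synthesize. Both architectures are legal VHDL for the one entity; a configuration selects which to use.

    -- The same function in two styles: explicit Boolean equations
    -- (PLD style) versus an abstract behavioural description.
    library ieee;
    use ieee.std_logic_1164.all;

    entity eq2 is
      port (a, b : in  std_logic_vector(1 downto 0);
            eq   : out std_logic);
    end entity eq2;

    -- PLD style: the Boolean equation is written out directly
    architecture equations of eq2 is
    begin
      eq <= (a(1) xnor b(1)) and (a(0) xnor b(0));
    end architecture equations;

    -- Abstract style: the comparison is left to the tools
    architecture behavioural of eq2 is
    begin
      eq <= '1' when a = b else '0';
    end architecture behavioural;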

The design development process can be considered to consist of a sequence of six steps of decreasing abstraction, as listed below:

  1. Design specification
  2. Conversion of the specification into a logically consistent description suitable for entry into a CAD system.
  3. Compiling the entered design.
  4. Simulating the design.
  5. Programming the target device.
  6. System commissioning.
