


VLSI Technology was an important pioneer in the electronic design automation industry. The "lambda-based" design style, which was advocated by Carver Mead and Lynn Conway, offered a refined package of tools. VLSI Technology also became an early vendor of standard-cell (cell-based) technology. Rapid advancement in VLSI technology has led to a new paradigm in designing integrated circuits, in which a system-on-a-chip (SOC) is constructed from predesigned and pre-verified cores such as CPUs, digital signal processors, and RAMs. Testing these cores requires a large amount of test data, which is continuously increasing with the rapid increase in the complexity of SOCs. Test compression and compaction techniques are widely used to reduce storage requirements and test time by reducing the size of the test data.

Very-large-scale integration is the design and manufacture of extremely small, complex circuitry on modified semiconductor material.

In 1958, Jack St. Clair Kilby (Texas Instruments) developed the first integrated circuit, with 10 components on 9 mm2. In 1959, Robert Norton Noyce (co-founder of Fairchild Semiconductor) improved on the integrated circuit developed by Kilby. In 1968, Noyce and Gordon E. Moore founded Intel, and in 1971 Ted Hoff (Intel) developed the first microprocessor, the 4004, consisting of 2300 transistors on 9 mm2. Since then, continuous improvement in technology has allowed for increased performance as predicted by Moore's law.

The rate of development of VLSI technology has historically progressed hand-in-hand with technology innovations. As a result, many conventional VLSI systems have engendered highly specialized technologies for their support. Most of the achievements in dense systems integration have derived from scaling in the silicon VLSI process. As manufacturing has improved, it has become more cost-effective in many applications to replace a chip set with a monolithic IC: package costs are decreased, interconnect paths shrink, and power loss in I/O drivers is reduced. As an example, consider integrated circuit technology: the Semiconductor Industry Association predicts that, over the next 15 years, circuit technology will advance from the current four metallization layers up to seven layers. As a result, the circuit-testing phase of the design process is moving to the fore as a major problem in VLSI design. In fact, Kenneth M. Thompson, vice president and general manager of the Technology, Manufacturing, and Engineering Group for Intel Corporation, states that a major falsehood of testing is that "we have made a lot of progress in testing"; in reality, it is very difficult for testing to keep pace with semiconductor manufacturing technology.

Today's circuits are expected to perform a very broad range of functions while also meeting very high standards of performance, quality, and reliability, and at the same time remaining practical in terms of time and cost.

1.1 Analog & Digital Electronics

In science, technology, business, and, in fact, most other fields of endeavor, we are constantly dealing with quantities. In most physical systems, quantities are measured, monitored, recorded, manipulated arithmetically, and observed. We should be able to represent their values efficiently and accurately when we deal with various quantities. There are basically two ways of representing the numerical value of a quantity: analog and digital.

1.2 Analog Electronics

Analog (or analogue) electronics comprises those electronic systems with a continuously variable signal; in contrast, digital electronic signals usually take only two distinct levels. In analog representation a quantity is represented by a voltage, current, or meter movement that is proportional to the value of that quantity. Analog quantities such as those cited above have an important characteristic: they can vary over a continuous range of values.

1.3 Digital Electronics

In digital representation the quantities are represented not by proportional quantities but by symbols called digits. As an example, consider the digital watch, which provides the time of day in the form of decimal digits which represent hours and minutes (and sometimes seconds). As we know, the time of day changes continuously, but the digital watch reading does not change continuously; rather, it changes in steps of one per minute (or per second). In other words, this digital representation of the time of day changes in discrete steps, as compared with the representation of time provided by an analog watch, where the dial reading changes continuously.

Digital electronics deals with "1s and 0s", but that is a vast oversimplification of the ins and outs of going digital. Digital electronics operates on the premise that all signals have two distinct levels; depending on the type of devices used, these levels might be certain voltages near the power supply level and ground. The logical meaning should not be confused with the physical signal, because the meaning of a signal level depends on the design of the circuit. Here are some common terms used in digital electronics:

  • Logical-refers to a signal or device in terms of its meaning, such as “TRUE” or “FALSE”
  • Physical-refers to a signal in terms of voltage or current or a device's physical characteristics
  • HIGH-the signal level with the greater voltage
  • LOW-the signal level with the lower voltage
  • TRUE or 1-the signal level that results from logic conditions being met
  • FALSE or 0-the signal level that results from logic conditions not being met
  • Active High-a HIGH signal indicates that a logical condition is occurring
  • Active Low-a LOW signal indicates that a logical condition is occurring
  • Truth Table-a table showing the logical operation of a device's outputs based on the device's inputs, such as a table for an OR gate listing the output for every input combination
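As an illustration, the truth table for a two-input OR gate can be generated with a short Python sketch (the tabular layout here is just one common convention):

```python
# Print the truth table of a two-input OR gate.
# Inputs A and B take the logical values 0 (FALSE) and 1 (TRUE);
# the output is TRUE whenever at least one input is TRUE.
print("A B | A OR B")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |   {a | b}")
```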

1.4 Number Systems

Digital logic may work with "1s and 0s", but it combines them into several different groupings that form different number systems. Most of us are familiar with the decimal system, of course: a base-10 system in which each digit represents a power of ten. There are several other number-system representations:

  • Binary-base two (each bit represents a power of two), digits are 0 and 1, numbers are denoted with a ‘B' or ‘b' at the end, such as 01001101B (77 in the decimal system)
  • Hexadecimal or ‘Hex'-base 16 (each digit represents a power of 16), digits are 0 through 9 plus A-B-C-D-E-F representing 10-15, numbers are denoted with ‘0x' at the beginning or ‘h' at the end, such as 0x5A or 5Ah (90 in the decimal system) and require four binary bits each. A dollar sign preceding the number ($01BE) is sometimes used, as well.
  • Binary-coded decimal or BCD-a four-bit number similar to hexadecimal, except that the decimal value of the number is limited to 0-9.
  • Decimal-the usual number system. Decimal numbers are usually denoted by a 'd' at the end, like 24d, especially when they are combined with other numbering systems.
  • Octal-base eight (each digit represents a power of 8), digits are 0-7, and each requires three bits. It is rarely used in modern designs.
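The relationships between these number systems can be checked with Python's built-in conversions; the `to_bcd` helper below is a hypothetical illustration, not a standard library function:

```python
# Built-in conversions between decimal, binary, hexadecimal and octal.
n = 77
print(bin(n))   # '0b1001101' - binary, 77 = 64 + 8 + 4 + 1
print(hex(n))   # '0x4d'      - hexadecimal
print(oct(n))   # '0o115'     - octal

# Parsing the notations used above:
assert int("01001101", 2) == 77    # 01001101B
assert int("5A", 16) == 90         # 0x5A or 5Ah

def to_bcd(n):
    """Encode a decimal number as binary-coded decimal, four bits per digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(90))  # '1001 0000' - one 4-bit group per decimal digit
```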

1.5 Digital Construction Techniques

Building digital circuits is somewhat easier than building analog circuits: there are fewer components, and the devices tend to come in similarly sized packages. Connections are less susceptible to noise. The trade-off is that there can be many connections, so it is easy to make mistakes and harder to find them, and the uniform packages offer few visual clues.

1.5.1 Prototyping Boards

Prototyping simply means putting together temporary circuits, as in the exercises, using a common workbench accessory known as a prototyping board. A typical board is shown in Figure 1 with a DIP-packaged IC plugged in across the centre gap. The board contains rows of sockets that are connected together internally, so component leads can be connected simply by plugging them in, without soldering. The long rows of sockets along the outer edges of the board are also connected together, so they can be used for the power supply and ground connections common to most components.

Wiring on the prototyping board should be laid out systematically, following the schematic diagram.

1.5.2 Reading Pin Connections

IC pins are almost always arranged so that pin 1 is in a corner or by an identifying mark on the IC body and the sequence increases in a counter-clockwise sequence looking down on the IC or “chip” as shown in Figure 1. In almost all DIP packages, the identifying mark is a dot in the corner marking pin 1. Both can be seen in the diagram, but on any given IC only one is expected to be utilised.

1.5.3 Powering Digital Logic

Where analog electronics is usually somewhat flexible in its power requirements and tolerant of variations in power supply voltage, digital logic is not nearly so carefree. Whatever logic family you choose, you will need to regulate the power supply voltages to at least ±5 percent, with adequate filter capacitors to filter out sharp sags or spikes.

Logic devices rely on stable power supply voltages to provide references to the internal electronics that sense the low or high voltages and act on them as logic signals. If the device's ground voltage is pulled away from 0 volts, the device can become confused and misinterpret its inputs, causing temporary changes in the signals popularly known as glitches. The resulting problems can be very difficult to troubleshoot, so it is best to ensure that the power supply is very clean. A good technique is to connect a 10-100 µF electrolytic or tantalum capacitor and a 0.1 µF ceramic capacitor in parallel across the power supply connections on your prototyping board.



As background research, recent work on iterative circuits was investigated. In this section, eight main proposals from the literature will be reviewed. The first, by Douglas Lewin (1974, pg. 76, 277), titled Logic Design of Switching Circuits, states that quite often in combinational logic design the technique of expressing oral statements for a logic circuit in the form of a truth table is inadequate. He stated that for a simple network a terminal description will often suffice, but for more complex circuits, and in particular when relay logic is to be employed, the truth-table method can lead to a laborious and inelegant solution.

2.1 Example:

If a logic system could be decomposed into a number of identical sub-systems, then if we could produce a design for the sub-system, or cell, the complete system could be synthesized by cascading these cells in series. The outputs of one cell form the inputs to the next in the chain, and so on; each cell is identical except for the first (and frequently the last), whose cell inputs must be deduced from the initial conditions. Each cell has external inputs as well as inputs from the preceding cell, which are distinguished by defining the outputs of a cell as its state.

Figure 2.1 - Iterative Switching Systems
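The cell-cascade idea can be sketched in Python. The cell function below (an iterative magnitude comparator, examining two words bit by bit from the most significant end) is only an illustrative choice:

```python
# One cell of an iterative comparator: it receives the state from the
# previous cell plus one bit of each word, and passes on a new state.
def compare_cell(state, a, b):
    if state != "equal":      # a decision made upstream is simply passed on
        return state
    if a > b:
        return "A>B"
    if a < b:
        return "A<B"
    return "equal"

# The complete system: identical cells cascaded in series; the boundary
# input "equal" plays the role of the first cell's deduced initial state.
def compare(A, B):
    state = "equal"
    for a, b in zip(A, B):    # MSB first
        state = compare_cell(state, a, b)
    return state

print(compare([1, 0, 1], [1, 1, 0]))  # prints "A<B"
```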

The second proposal reviewed was presented by Fredrick J. Hill and Gerald R. Peterson (1981, pg. 570), titled Introduction to Switching Theory and Logic Design. In this book they discuss the iterative network, a highly repetitive form of combinational logic network. The repetitive structure makes it possible to describe iterative networks using techniques already developed for sequential circuits. The authors limit their discussion to one-dimensional iterative networks represented by the cascade of identical cells shown in the figure below; a typical cell with appropriate input and output notation is given in figure (b). Note the two distinct types of inputs: primary inputs from the outside world and secondary inputs from the previous cell in the cascade. Similarly, there are two types of outputs: primary outputs to the outside world and secondary outputs to the next cell in the cascade. The boundary inputs at the left of the cascade are denoted in the same manner as secondary inputs; in some cases these inputs will be constant values.

A set of boundary outputs emerges from the rightmost cell in the cascade. Although these outputs go to the outside world, they are labelled in the same manner as secondary outputs. The boundary outputs will be the only outputs of the iterative network.

The third proposal, by Barry Wilkinson with Rafic Makki (1992, pg. 72-4), titled Digital Design Principles, discusses the design and problems of iterative circuits. The authors state that some design problems would require a large number of gates if designed as two-level circuits. One approach is to divide each function into a number of identical sub-functions which are performed in sequence, with the result of one sub-function used in the next. A design based on the iterative approach is shown in the figure below. There are seven logic circuit cells; each cell accepts one code-word digit and the output from the preceding cell. The cell produces one output, Z, which is 1 whenever the number of 1's on its two inputs is odd. Hence successive outputs are 1 when the number of 1's up to that point is odd, and the final output is 1 only when the number of 1's in the whole code word is odd, as required.
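The seven-cell parity chain described above can be sketched directly, with each cell's output being the XOR of its input bit and the previous cell's output:

```python
# Iterative parity checker: cell i outputs 1 when the number of 1s
# among inputs 1..i is odd. The first cell's carry-in is the boundary 0.
def parity_chain(code_word):
    z = 0
    outputs = []
    for bit in code_word:
        z = z ^ bit            # odd number of 1s on the two inputs -> 1
        outputs.append(z)
    return outputs

word = [1, 0, 1, 1, 0, 0, 1]   # seven-bit code word with four 1s
print(parity_chain(word))      # final output 0: even parity
```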

To create an iterative design, the number of cells and the number of data inputs to each cell need to be determined and also the number of different states that must be recognized by the cell. The number of different states will define the number of lines to the next cell (usually carrying binary encoded information).

The fourth proposal was authored by Douglas Lewin and David Protheroe (1992, pg. 369), titled Design of Logic Systems. According to the authors, iterative networks were widely used in the early days of switching systems, when relays were the major means of realizing logic circuits; the technique fell into disuse when electronic logic gates became widely available. Although it is possible to implement an arbitrary logic function in the form of an iterative array, the technique is most often applied to functions which are 'regular' in the sense that the overall function may be achieved by performing the same operation upon a sequence of data bits. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with large numbers of parallel inputs.

The method is also directly applicable to the design of VLSI circuits and has the advantage of producing a modular structure based on a standard cell which may be optimized independently in terms of layout and so on. Circuits containing any number of input variables can easily be constructed by simply extending the network with more cells. The authors examine iterative circuits through a number of examples.

Suppose a logic system could be decomposed into a number of identical subsystems; then if we could produce a design for the subsystem, or cell, the complete system could be synthesized by cascading these cells in series. The problem has now been reduced to that of specifying and designing the cell, rather than the complete system.

The fifth proposal, presented by Brian Holdsworth (1993, pg. 165-166), titled Digital Logic Design, stated that iterative networks, widely used before the introduction of electronic gates, are again of interest to logic designers as a result of developments in semiconductor technology. MOS pass transistors, which are easily fabricated, are used in LSI circuits, where they require less space and allow higher packing densities. One of the major disadvantages of hard-wired iterative networks was the long propagation delay caused by the time taken for signals to ripple through a chain of iterated cells. This is no longer such a significant disadvantage, since the lengths of the signal paths on an LSI chip are much reduced in comparison with the hard-wired connections between SSI and MSI circuits. However, the number of pass transistors that can be connected in series is limited because of signal degradation, and it is necessary to provide inter-cell buffers to restore the original signal levels. An additional advantage is the structural simplicity and identical nature of the cells, which allows a more economical circuit layout.

A sixth book, by Brian Holdsworth and R.C. Woods (2002, pg. 135), also titled Digital Logic Design, discusses the structure of iterative networks. An iterative network consists of a number of identical cells interconnected in a regular manner, as shown in the figure; the variables X1, ..., Xn are termed the primary input signals, the output signals are termed Z1, ..., Zn, and further variables are termed secondary inputs or outputs depending on whether the signals are entering or leaving a cell. The structure of an iterative circuit may be defined as one which receives the incoming primary data in parallel form, where each cell processes the incoming primary and secondary data and generates a secondary output signal which is transmitted to the next cell. Secondary data is transmitted along the chain of cells, and the time taken to reach steady state is determined by the delay times of the individual cells and their interconnections.

According to Charles H. Roth, Jr. and Larry L. Kinney (2004, pg. 519), in Fundamentals of Logic Design, many design procedures used for sequential circuits can be applied to the design of iterative circuits, which consist of a number of identical cells interconnected in a regular manner. Some operations, such as binary addition, naturally lend themselves to realization with an iterative circuit because the same operation is performed on each pair of input bits. The regular structure of an iterative circuit makes it easier to fabricate in integrated-circuit form than circuits with less regular structures. The simplest form of iterative circuit consists of a linear array of combinational cells with signals between cells travelling in only one direction. Each cell is a combinational circuit with one or more primary inputs and possibly one or more primary outputs; in addition, each cell has one or more secondary inputs and one or more secondary outputs. The signals produced carry information about the "state" of one cell to the next. The primary inputs to the cells are applied in parallel, that is, at the same time, and the signals then propagate down the line of cells. Because the circuit is combinational, the time required for the circuit to reach a steady-state condition is determined only by the delay times of the gates in the cells. As soon as steady state is reached, the outputs may be read. Thus an iterative circuit can function as a parallel-input, parallel-output device, in contrast with a sequential circuit, in which the input and output are serial. One can think of an iterative circuit as receiving its inputs as a sequence in time.

Example: the parallel adder is an iterative circuit with four identical cells. The serial adder uses the same full-adder cell as the parallel adder, but it receives its inputs serially and stores the carry in a flip-flop instead of propagating it from cell to cell.
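The four-cell parallel (ripple-carry) adder can be sketched as an iterative circuit in which each full-adder cell passes its carry to the next:

```python
# One full-adder cell: sum and carry-out from two operand bits and carry-in.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# Cascade of identical cells; operands are bit lists given LSB first.
def ripple_add(A, B):
    carry = 0                  # boundary carry-in
    total = []
    for a, b in zip(A, B):
        s, carry = full_adder(a, b, carry)
        total.append(s)
    return total, carry        # sum bits (LSB first) and final carry

# 6 + 7 = 13: 0110 + 0111 = 1101, written LSB first below.
print(ripple_add([0, 1, 1, 0], [1, 1, 1, 0]))  # ([1, 0, 1, 1], 0)
```

A serial adder would instead reuse the single `full_adder` cell over successive clock ticks, holding `carry` in a flip-flop rather than wiring it to a next cell.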

The final proposal was authored by John F. Wakerly (2006, pg. 459, 462, 756), titled Digital Design Principles. He notes that an iterative circuit is a special type of combinational circuit, with the structure shown in the figure below. The circuit contains n identical modules, each of which has both primary inputs and outputs and cascading inputs and outputs. The leftmost cascading inputs are called boundary inputs and are connected to fixed logic values in most iterative circuits. The rightmost cascading outputs are called boundary outputs and usually provide important information. Iterative circuits are well suited to problems that can be solved by a simple iterative algorithm:

  1. Set C0 to its initial value and set i to 0.
  2. Use Ci and PIi to determine the values of POi and Ci+1.
  3. Increment i.
  4. If i < n, go to step 2.

In an iterative circuit, the loop of steps 2-4 is “unwound” by providing a separate combinational circuit that performs step 2 for each value of i.
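The unwinding can be mimicked in Python; the cell function `step` here is a hypothetical stand-in for whatever step 2 computes:

```python
# "Unwinding" the iterative algorithm: each loop pass stands for one
# physical module that performs step 2 for its own value of i.
def step(c, pi):
    """Hypothetical module logic: (cascading input, primary input)
    -> (primary output, cascading output)."""
    return c ^ pi, c & pi

def unwound(c0, primary_inputs):
    c = c0                       # boundary input C0
    primary_outputs = []
    for pi in primary_inputs:    # one module per input, not a reused loop body
        po, c = step(c, pi)
        primary_outputs.append(po)
    return primary_outputs, c    # boundary output
```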

Each of the works reviewed makes an important contribution toward overcoming the disadvantages and problems of iterative circuits, thereby improving them; this motivates the present investigation of sequential circuits for a better understanding of iterative circuits.



3.1 Iterative design

Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Changes and refinements are made to the most recent iteration of a design based on the results of testing. The quality and functionality of a design can be improved by this process. In iterative design, interaction with the designed system is used as a form of research to inform and evolve the project through successive versions.

3.2 Iterative Design Process

The iterative design process may be applied throughout the new-product development process. In the early stages of development, changes are easy and affordable to implement. The first step in the iterative design process is to develop a prototype. To obtain unbiased opinions, the prototype should be examined by a focus group that is not associated with the product. Information gained from the focus group should be synthesized and incorporated into the next iteration of the design. The process should be repeated until a level acceptable to the user is achieved.

Figure 3.1 Iterative Design Process

3.3 Iterative Circuits

Iterative Circuits may be classified as,

  • Combinational Circuits
  • Sequential Circuits.

A generalized combinational circuit built from gates has m inputs and n outputs. Such a circuit can be built as n separate combinational circuits, each with exactly one output. If the entire n-output circuit is constructed at once, however, some important sharing of intermediate signals may take place; this sharing can drastically decrease the number of gates needed to construct the circuit.

In some cases we might want to minimize the number of transistors; in others we might want minimal delay, or we may need to reduce power consumption. Normally a mixture of such criteria must be applied.

In combinational logic design, the technique of expressing oral statements for a logic circuit in the form of a truth table is inadequate. For a simple network a terminal description will often suffice, but for more complex circuits, and in particular when relay logic is to be employed, the truth-table method can lead to laborious and inelegant solutions. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with a large number of parallel inputs; circuit specification is simplified, and large-variable problems are reduced to a more tractable size. The method is directly applicable to the design of VLSI circuits. It should be pointed out, though, that the speed of the circuit is reduced because of the time required for the signals to propagate along the network, and the number of interconnections is considerably increased. In general, iterative design does not necessarily result in a more minimal circuit. Because it produces a modular structure, circuits containing any number of input variables can easily be constructed by simply extending the network with more cells. Suppose, for example, that a logic system could be decomposed into a number of identical subsystems; then if we could produce a design for the subsystem, or cell, the complete system could be synthesized by cascading these cells in series. The problem has now been reduced to that of specifying and designing the cell, rather than the complete system.

In general, we define a synchronous sequential circuit, or simply a sequential circuit, as a circuit with m inputs, n outputs, and a distinguished clock input. The circuit is described with the help of a state table; latches and flip-flops are the building blocks of sequential circuits.

The definition of a sequential circuit can be simplified when the number of different states of the circuit is completely determined by the number of outputs. Hence, alongside these combinational circuits, we discuss a general method that in the worst case may waste a large number of transistors. For a sequential circuit with m inputs and n outputs, this method uses n D flip-flops (one for each output) and a combinational circuit with m + n inputs and n outputs.
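That construction can be sketched as follows; the next-state function is a hypothetical example (a 1-bit toggle), and the tuple `state` stands for the n D flip-flops:

```python
# Sequential circuit = n D flip-flops + a combinational circuit with
# m + n inputs and n outputs. Each clock tick latches the new state.
def tick(state, inputs, comb):
    return comb(inputs, state)   # combinational next-state logic

# Example: n = 1, m = 1; next state = state XOR input (a toggle cell).
comb = lambda inp, st: (st[0] ^ inp[0],)

state = (0,)                     # initial flip-flop contents
trace = []
for x in [1, 1, 0, 1]:           # input applied on successive clock ticks
    state = tick(state, (x,), comb)
    trace.append(state[0])
print(trace)  # [1, 0, 0, 1]
```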

3.4 Iterative Circuits-Example

An iterative circuit is a special type of combinational circuit, with the structure shown in the figure. The circuit contains n identical modules, each of which has both primary inputs and outputs and cascading inputs and outputs. The leftmost cascading inputs are called boundary inputs and are connected to fixed logic values in most iterative circuits. The rightmost cascading outputs are called boundary outputs and usually provide important information.

Quite often in combinational logic design, the technique of expressing oral statements for a logic circuit in the form of a truth table is inadequate. Iterative circuits are well suited to problems that can be solved by a simple iterative algorithm:

  • Set C0 to its initial value and set i to 0.
  • Use Ci and PIi to determine the values of POi and Ci+1.
  • Increment i.
  • If i < n, go to step 2.

In an iterative circuit, the loop of steps 2-4 is "unwound" by providing a separate combinational circuit that performs step 2 for each value of i.

3.5 Improving the testability of Iterative Circuits

As stated by A. Rubio et al. (1989, pg. 240-245), the increase in the complexity of integrated circuits, and the inherent increase in the cost of the tests carried out on them, make it necessary to look for ways of improving the testability of iterative circuits. Integrated circuits structured as iterations of identical cells have, because of their regularity, a set of advantages that make them attractive for many applications: simplicity of design (owing to the structural repetition of the basic cell), manufacturing, test, fault tolerance, and their suitability for implementing concurrent algorithmic structures. This work also studies the testability of iterative circuits; the figure below illustrates the typical organization of an N-cell one-dimensional iterative circuit (all signals go from left to right), although the results can be extended to a stable class of bilateral circuits.

The N cells have identical functionality. Every cell (i) has an external input yi and an internal input xi coming from the previous cell (i-1). Every cell generates an external output signal ŷi and an internal output x̂i that goes to the following cell (i+1). The following assumptions are made about these signals:

  1. All the yi vectors are independent.
  2. Only the x1, y1, y2, ..., yn signals are directly controllable for test procedures.
  3. Only the ŷ1, ŷ2, ..., ŷn signals are directly observable.
  4. The xi and x̂i signals are called the states (input and output states, respectively) of the ith cell and are neither directly controllable (except x1) nor directly observable (except x̂n).

Kautz gives the conditions on the basic cell's functionality that warrant exhaustive testing of each of the cells of the array. These conditions assure the controllability and observability of the states. In circuits that satisfy these conditions, the length of the test increases linearly with the number of cells in the array, with a resulting length that is shorter than the corresponding length for other implementation structures.

A fundamental contribution to the easy testability of iterative circuits was made by Friedman. In his work the concept of C-testability is introduced: an iterative circuit is C-testable if a cell-level exhaustive test with a constant length can be generated, meaning the length is independent of the number of cells composing the array (N). The results have been generalised in several ways. In all these works it is assumed that there is only one faulty cell in the array. Cell-level stuck-at (single or multiple) and truth-table fault models are considered. The set T of test vectors of the basic cell is formed by a sequence (whatever the order may be) of input vectors to the cell.

Kautz proposed the cell fault model (CFM), which was adopted by most researchers in testing ILAs. Under CFM, only one cell is assumed to be faulty at a time; the fault may affect the output functions of the faulty cell in any way, as long as the cell remains combinational. To test an ILA under CFM, every cell should be supplied with all of its input combinations; in addition, the output of the faulty cell should be propagated to some primary output of the ILA. Friedman introduced C-testability: an ILA is C-testable if it can be tested with a number of test vectors that is independent of the size of the ILA.
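The CFM requirement that every cell receive all of its input combinations can be checked with a small sketch; the parity cell and the test vectors below are illustrative assumptions, not taken from the cited works:

```python
# For each cell of a one-dimensional ILA, collect the (state, input)
# pairs that a given test set actually applies to it.
def cell_inputs_covered(cell, test_vectors, n_cells, boundary):
    covered = [set() for _ in range(n_cells)]
    for vec in test_vectors:       # vec holds one primary input per cell
        state = boundary
        for i, y in enumerate(vec):
            covered[i].add((state, y))
            state = cell(state, y) # ripple the state to the next cell
    return covered

# Parity cell: 2 states x 2 inputs = 4 combinations to cover per cell.
cell = lambda s, y: s ^ y
tests = [(0, 0, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1)]
print([len(c) for c in cell_inputs_covered(cell, tests, 3, 0)])
# [2, 4, 4]: the first cell is only partly covered from this boundary
# value alone (in practice its state input x1 is directly controllable).
```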

A target of research in ILA testing has been the derivation of necessary and sufficient conditions for many types of ILAs (one-dimensional with or without vertical outputs, two-dimensional, unilateral, bilateral) to be C-testable. The derivations of these conditions were based on the study of the flow table of the basic cell of the array. In the case of an ILA which is not C-testable, modifications to its flow table (and therefore to its internal structure) and/or modifications to the overall structure of the array were proposed to make it C-testable. Otherwise, a test set with length usually proportional to the ILA size was derived (linear testability). In most cases, modifications to the internal structure of the cells and/or the overall structure of the ILA increase the area occupied by the ILA and also affect its performance.

ILA testing considering sequential faults has also been studied. Sequential fault detection in ripple-carry adders was considered with the aim of constructing a shortest-length test sequence, and sufficient conditions for testing one-dimensional ILAs for sequential faults have been given. It has been shown that whenever the function of the basic cell of an ILA is bijective, the ILA can be tested with a constant number of tests for sequential faults; a procedure to construct such a test set was also introduced.

The following considerations form the basis of this work. Many computer-aided design tools are based on standard-cell libraries. When testing an ILA, the best that can be done is to test each of its cells exhaustively with respect to the CFM. The derivation of such a test set (either a C-test set or a linear test set, depending on whether the conditions for C-testability for the particular type of ILA are satisfied) has been extensively studied. However, the functional verification of each cell under CFM is not adequate in CMOS ILAs, since it does not detect sequential faults such as stuck-open and bridging faults. The existence of such faults, which transform the single faulty cell into a sequential one, creates the need to apply sequences of cell input patterns to each cell in order to detect them. In the case of standard-cell libraries, where the physical design of a cell is not given, the test of a cell for realistic faults should be derived by the library designer and provided along with every cell.

The above work is based on the fact that a complete test set for either a one-dimensional or a two-dimensional ILA, with respect to the CFM, can be constructed using any of the procedures proposed in the open literature, and gives a method to transform this test set into a complete test set for more realistic fault models (for example, stuck-open faults in CMOS ILAs). Conditions are given so that useful properties such as C-testability or linear testability are preserved. The author also considered the optimization of the length of the resulting test set. The results of this work can be used for various types of ILAs and for cells with various functions (not only bijective ones). Extensions can be made for the general case of n-pattern testing of each cell.



4.1 Binary Arithmetic-Circuits

Binary arithmetic is a combinatorial problem, so it may seem natural to apply the methods we have already seen for designing combinatorial circuits. The problem is that the general method would use far too many gates, so we must look for different approaches.

4.2 Adder Circuits

In electronics, an adder (or summer) is a circuit that performs addition, one of the most common arithmetic operations in digital systems. An adder combines two arithmetic operands using the rules of addition. In present-day computers, adders reside in the arithmetic logic unit (ALU), where other operations are also performed. Adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, but the basic adders operate on binary numbers. It is trivial to turn an adder into an adder-subtractor when two's complement or one's complement is used to represent negative numbers.

4.3 Types of adders

Adder circuits can be classified as,

  • A Half Adder
  • A Full Adder

A half adder can add two bits. It has two inputs, generally labelled A and B, and two outputs, the sum S and carry C. C is the AND of A and B, and S is the XOR of A and B. Essentially, the half adder outputs the sum of two one-bit numbers, with C being the more significant of the two output bits. Figure 4.1 - Half Adder
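The half-adder behaviour described above can be sketched in a few lines of Python; this is an illustrative software model added here for clarity, not part of the original design:

```python
def half_adder(a, b):
    """Add two one-bit values: S = A XOR B, C = A AND B."""
    s = a ^ b      # sum bit
    c = a & b      # carry bit
    return s, c

# Exhaustive truth table for the two inputs
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Note that only the input pair (1, 1) produces a carry, exactly as the AND term dictates.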

A full adder is a combinatorial circuit (or actually two combinatorial circuits, one per output) whose function is to add two binary digits plus a carry from the previous position, giving a two-bit result: three inputs and two outputs, the sum output and the carry to the next position. Here we have used the labels x and y for the inputs, c-in for the carry-in, c-out for the carry-out, and s for the sum output.

A full adder can be trivially built using our ordinary design methods for combinatorial circuits. Here is the resulting circuit diagram:

The next step is to combine a series of such full adders into a circuit that can add (say) two 8-bit positive numbers. We proceed by linking the carry-out of one full adder to the carry-in of the full adder to its immediate left. The rightmost full adder takes a 0 on its carry-in. Figure 4.3 - Series of Full Adders
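The full-adder equations and the ripple-carry chain just described can be modelled behaviourally. The following Python sketch is our illustration (function names are ours); bit lists are least-significant-bit first, matching the rightmost-adder-first wiring:

```python
def full_adder(x, y, c_in):
    """One-bit full adder: sum and carry-out of x + y + c_in."""
    s = x ^ y ^ c_in
    c_out = (x & y) | (x & c_in) | (y & c_in)
    return s, c_out

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (index 0 = least significant).
    The carry-out of each stage feeds the carry-in of the next;
    the rightmost (least significant) stage takes carry-in = 0."""
    result, carry = [], 0
    for x, y in zip(a_bits, b_bits):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    return result, carry

# 8-bit example: 23 + 45 = 68
a = [(23 >> i) & 1 for i in range(8)]
b = [(45 >> i) & 1 for i in range(8)]
s, c = ripple_carry_add(a, b)
print(sum(bit << i for i, bit in enumerate(s)))  # 68
```

The loop makes the depth problem discussed below visible: each stage must wait for the previous stage's carry.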

For the i-th binary position, we have used subscript i.

The depth of this circuit is large: the sum and carry outputs of position 7 are determined in part by the inputs of position 0, so a signal must traverse all the full adders, with a corresponding delay.

Intermediate solutions exist between the two extremes (i.e., an iterative combinational circuit with one-bit adders as elements, and a single combinatorial circuit for the entire 32-bit adder). An 8-bit adder can be built as a normal two-level combinatorial circuit, and a 32-bit adder from four such 8-bit adders. A two-level 8-bit adder would, however, require on the order of 65536 (2^16) AND gates and a giant 65536-input OR gate.

4.4 Coding

4.4.1 Program for 16 bit Adder
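The program itself would be written in an HDL; as a hedged stand-in, the following Python sketch (our illustration, with made-up names) models what a 16-bit adder computes, including the carry-in and carry-out signals:

```python
MASK16 = 0xFFFF  # 16-bit word mask

def add16(a, b, c_in=0):
    """Behavioural model of a 16-bit adder with carry-in and carry-out.
    Returns (sum, carry_out); the sum wraps modulo 2^16."""
    total = (a & MASK16) + (b & MASK16) + (c_in & 1)
    return total & MASK16, (total >> 16) & 1

print(add16(0xFFFF, 0x0001))  # wraps around: (0, 1)
```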

4.5 Binary Subtraction

Binary subtraction can be done by noticing that, in order to compute x - y, we can instead compute x + (-y). As the section on binary arithmetic above shows, we can negate a number by inverting all its bits and adding 1. Thus, we can compute x - y as x + inv(y) + 1. To add the 1, we can use the otherwise unused carry-in signal of position 0: asserting this input adds one to the result. The complete circuit with addition and subtraction looks like this:
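The x + inv(y) + 1 scheme can be checked with a small Python model; this is our illustration, assuming an 8-bit word width:

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF for 8 bits

def subtract(x, y):
    """Compute x - y as x + inv(y) + 1, where the +1 comes from
    asserting the adder's carry-in at position 0."""
    inv_y = ~y & MASK             # invert all bits of y
    return (x + inv_y + 1) & MASK # carry-in = 1 supplies the +1

print(subtract(9, 4))   # 5
print(subtract(4, 9))   # 251, i.e. -5 in 8-bit two's complement
```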

4.6 Binary multiplication and Division

Binary multiplication is harder than binary addition. We do not have a good iterative combinatorial circuit for it, so we must bring in heavier artillery: the key is to use a sequential circuit that computes one addition for every clock pulse.
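The sequential shift-and-add scheme, one conditional addition per clock pulse, can be sketched behaviourally; this Python loop is our illustration, with each iteration standing in for one clock cycle:

```python
def shift_add_multiply(a, b, bits=8):
    """Sequential shift-and-add multiplier: one conditional addition
    per 'clock pulse' (loop iteration)."""
    product = 0
    for _ in range(bits):
        if b & 1:          # if the current multiplier bit is 1 ...
            product += a   # ... add the (shifted) multiplicand
        a <<= 1            # shift the multiplicand left
        b >>= 1            # examine the next multiplier bit
        # a real circuit would latch product, a and b in registers here
    return product

print(shift_add_multiply(13, 11))  # 143
```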

4.7 Binary Comparator

The purpose of a 2-bit binary comparator is quite simple: it has a comparison unit that receives a first and a second number and compares them, and an enable unit that outputs the comparison result as the output of the 2-bit binary comparator according to an enable signal. It determines whether one 2-bit input number is larger than, equal to, or less than the other. The first step in the creation of the comparator circuit is the generation of a truth table that lists the input variables, their possible values, and the resulting outputs for each of those values. The truth table used for this experiment is shown below. Table 4 - Binary Comparator
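The three comparator outputs can be modelled behaviourally; this Python sketch is our illustration and enumerates the same truth table over all 2-bit input pairs:

```python
def compare_2bit(a, b):
    """Compare two 2-bit numbers (0..3); returns the three outputs
    (greater, equal, less), exactly one of which is asserted."""
    gt = int(a > b)
    eq = int(a == b)
    lt = int(a < b)
    return gt, eq, lt

# Truth table over all sixteen 2-bit input pairs
for a in range(4):
    for b in range(4):
        print(f"A={a:02b} B={b:02b} -> G,E,L = {compare_2bit(a, b)}")
```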

4.8 Parity Generation or Even-Odd Detection

Parity bits are extra signals added to a data word to enable error checking, and they come in two types: even and odd. An even parity generator produces a logic 1 output if the data word has an odd number of 1's, and a logic 0 output if the data word has an even number of 1's. By concatenating the parity bit to the data word, a word is formed which always has an even number of ones, i.e. has even parity. Parity is used in memory systems and modem lines. If a data word is sent with even parity but received with odd parity, the data is said to be corrupted and is resent. As the name implies, an odd parity generator operates similarly but produces odd parity. The parity generator outputs for various 8-bit data words can be seen in the table shown below.
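The even-parity rule amounts to an XOR reduction of the data bits; the following Python sketch is our illustration of the behaviour described above:

```python
def even_parity_bit(data, bits=8):
    """Even-parity generator: output 1 when the data word holds an odd
    number of 1s, so word + parity bit always has even parity."""
    p = 0
    for i in range(bits):
        p ^= (data >> i) & 1   # XOR-reduce the data bits
    return p

word = 0b10110110             # five 1s -> odd count -> parity bit = 1
print(even_parity_bit(word))  # 1
```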

4.9 Iterative Approach

A circuit which could be used to generate even parity for 4-bit data is shown on the left. If this approach were followed for 16- or 32-bit data buses, the resulting circuit would be large, with complex interconnect. A linear architecture suited to VLSI implementation is shown below.

In this approach, a one-bit cell is designed that can be replicated to form an n-bit circuit. Each cell in the circuit below is identical, with one bit of the data word fed in from below and a 1,0 pair fed in from the left-hand side. The pair passes out of the right-hand edge of each cell into the left-hand side of the next. If the data input is low, the 1,0 pair passes through the cell unchanged; if the data input is high, the pair is swapped (i.e. becomes 0,1).

Thus X=0 and Y=1 when all data inputs are low.

If one data input is high then X=1 and Y = 0

If two data inputs are high then X=0, Y=1 and so on

i.e., for an odd number of 1s: X=1, for an even number of 1s: X=0.

4.9.1 Program
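The actual program would be an HDL description; as a hedged behavioural sketch (our Python illustration), the iterative cell chain of section 4.9 can be modelled like this:

```python
def parity_cell(x, y, d):
    """One cell: pass the (X, Y) pair through unchanged if the data
    input d is 0; swap the pair if d is 1."""
    return (y, x) if d else (x, y)

def iterative_parity(data_bits):
    """Chain identical cells; feed (X, Y) = (0, 1) into the leftmost cell.
    After the last cell, X = 1 for an odd number of 1s, X = 0 for even."""
    x, y = 0, 1
    for d in data_bits:
        x, y = parity_cell(x, y, d)
    return x, y

print(iterative_parity([1, 0, 1, 1]))  # three 1s -> (1, 0)
```

Because every cell is identical, the same description scales to any bus width simply by lengthening the chain.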



As the density of VLSI products increases, their testing becomes more difficult and costly. Generating test patterns has shifted from a deterministic approach, in which a test pattern is generated automatically based on a fault model and an algorithm, to a random selection of test signals. While in real estate the refrain is "Location! Location! Location!", the comparable advice in IC design should be "Testing! Testing! Testing!". Whether deterministic or random generation of test patterns is used, the patterns applied to VLSI chips can no longer cover all possible defects. Consider the manufacturing process for VLSI chips shown in Fig. 1. Two kinds of cost can be incurred by the test process: the cost of testing and the cost of accepting an imperfect chip. The first is a function of the time spent on testing or, equivalently, of the number of test patterns applied to the chip; this cost adds to the cost of the chips. The second reflects the fact that, when a defective chip is passed as good, its failure may become very costly after it is embedded in its application. An optimal testing strategy should trade off both costs and determine an adequate test length (in terms of testing period or number of test patterns).

Apart from the cost, two factors need to be considered when determining the test length. The first is the production yield, which is the probability that a product is functionally correct at the end of the manufacturing process. If the yield is high, we may not need to test extensively since most chips tested will be "good," and vice versa. The other factor is the coverage function of the test process, defined as the probability of detecting a defective chip given that it has been tested for a particular duration or a given number of test patterns. If we assume that all possible defects can be detected by the test process, the coverage function can be regarded as a probability distribution function of the detection time given that the chip under test is bad. Thus, by investigating the density function or probability mass function, we can calculate the marginal gain in detection if the test continues. In general, the coverage function of a test process can be obtained through theoretical analysis or experiments on simulated fault models. With a given production yield, the fault coverage requirement to attain a specified defect level (defined as the probability of having a "bad" chip among all chips passed by a test process) can then be determined. While most problems in VLSI design have been reduced to algorithms in readily available software, the responsibilities for the various levels of testing and for testing methodology can be a significant burden on the designer.
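A commonly cited relation connecting these quantities is the Williams-Brown model; we add it here for illustration, as it is not derived in this text. With production yield $Y$ and fault coverage $T$, the defect level among shipped parts is approximately

```latex
DL = 1 - Y^{(1 - T)}
```

For example, a yield of 50% with 99% fault coverage gives DL = 1 - 0.5^0.01, roughly 0.7% of shipped chips defective.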

The yield of a particular IC is the number of good die divided by the total number of die per wafer. Due to the complexity of the manufacturing process, not all die on a wafer operate correctly. Small imperfections in the starting material, in processing steps, or in photomasking may result in bridged connections or missing features. The aim of a test procedure is to determine which die are good and should be used in end systems.

Testing a die can occur:

  • At the wafer level
  • At the packaged level
  • At the board level
  • At the system level
  • In the field

By detecting a malfunctioning chip at an earlier level, the manufacturing cost may be kept low. For instance, the approximate cost to a company of detecting a fault at each of the above levels is:

  • Wafer $0.01-$0.10
  • Packaged-chip $0.10-$1
  • Board $1-$10
  • System $10-$100
  • Field $100-$1000

Obviously, if faults can be detected at the wafer level, the cost of manufacturing is kept lowest. In some circumstances, the cost of developing adequate tests at the wafer level, mixed-signal requirements, or speed considerations may require that further testing be done at the packaged-chip level or the board level. A component vendor can test only at the wafer or chip level. Special systems, such as satellite-borne electronics, might be tested exhaustively at the system level.

Tests fall into two main categories. The first set of tests verifies that the chip performs its intended function; that is, that it performs a digital filtering function, acts as a microprocessor, or communicates using a particular protocol. In other words, these tests assert that all the gates in the chip, acting in concert, achieve a desired function. These tests are usually used early in the design cycle to verify the functionality of the circuit, and will be called functionality tests. The second set of tests verifies that every gate and register in the chip functions correctly. These tests are used after the chip is manufactured to verify that the silicon is intact, and will be called manufacturing tests. In many cases these two sets of tests may be one and the same, although the natural flow of design usually has a designer considering function before manufacturing concerns.

5.1 Manufacturing Test Principles

A critical factor in all LSI and VLSI design is the need to incorporate methods of testing circuits. This task should proceed concurrently with any architectural considerations and not be left until fabricated parts are available.

Figure 5.1(a) shows a combinational circuit with n inputs. To test this circuit exhaustively, a sequence of 2^n inputs must be applied and observed to fully exercise the circuit. If this combinational circuit is converted to a sequential circuit by the addition of m storage registers, as shown in Figure 5.1(b), the state of the circuit is determined by the inputs and the previous state, and a minimum of 2^(n+m) test vectors must be applied to test the circuit exhaustively. Clearly, this is an important area of design that has to be well understood.
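To see the scale of the problem, compare the exhaustive test counts for illustrative values of n and m (the numbers below are our example, not from the figure):

```python
# Exhaustive test-vector counts for the circuits of Figure 5.1,
# illustrating why adding state explodes the test length.
n, m = 20, 10                 # illustrative input and register counts

combinational = 2 ** n        # 2^n vectors for n inputs
sequential = 2 ** (n + m)     # 2^(n+m) vectors once m state bits are added

print(combinational)          # 1048576   (~1 million)
print(sequential)             # 1073741824 (~1 billion)
```

Ten added state bits multiply the exhaustive test length by 2^10 = 1024, which is why structured design-for-test methods are needed.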

5.2 Optimal Testing

With the increased complexity of VLSI circuits, testing has become more costly and time-consuming. Designing a testing strategy involves a trade-off between the cost of testing and the penalty of passing a bad chip as good. First, the optimal testing period is derived under the assumption that the production yield is known. Since the production yield may in fact be unknown, an optimal sequential testing strategy is then developed which estimates the yield from ongoing testing results and uses it to determine the optimal testing period. Lastly, an optimal sequential testing strategy is given for batches in which N chips are tested concurrently. These results are useful whether the yield stays stable or changes from one manufacturing run to another.



6.1 VLSI Design

VLSI Design and implementation can be classified based on the Prototype,

  • ASIC
  • FPGA

6.2 Implementation Styles

Although implementations of electronic systems can span a number of levels of physical hierarchy, in reality implementations in semiconductor components and boards dominate. Most design activity leads to PCBs populated with standard components. Progress in component and board technology has spawned particular implementation styles.

The design and implementation flow is described in the flow chart below, which clearly separates a technology-independent design phase from a technology-dependent implementation phase. It also suggests an oversimplified view of the world in which a design is implemented in a single FPGA, gate array, cell-based ASIC, and so forth.

This simplified view of the process tends to be presented dogmatically, since it provides protection against designs being taken hostage by implementation style, and hence by supplier. But recognizably great designs exhibit a certain elegance due to some technology dependence: engineers have usually exploited some technology-specific feature to meet or exceed required measures in an economic way. This implies information flow across the interface from implementation to design, whether by formal or informal means. It may take the form of design rules, recommended practice, and so on, but is more likely to consist of knowledge from design experience and practice.

6.3 ASIC- Application Specific Integrated Circuits

An application-specific integrated circuit (ASIC) is an integrated circuit (IC) tailored for a particular use rather than for general-purpose use. A good example is a chip designed to run a cell phone. Intermediate between ASICs and industry-standard integrated circuits such as the 7400 or 4000 series are application-specific standard products (ASSPs). Over the years, the maximum functionality (complexity) possible in an ASIC has increased from 5,000 gates to over 100 million. 21st-century ASICs include ROM, RAM, EEPROM, Flash, entire 32-bit processors, and other large building blocks; this kind of ASIC is often called a system-on-a-chip (SOC). Designers describe the functionality of ASICs using a hardware description language (HDL) such as Verilog or VHDL.

6.4 Reviews on FPGA

Chapter 6 investigates design and implementation styles on FPGAs in VLSI design, with a brief look at ASICs (application-specific integrated circuits). Since the discussion centres on FPGAs, it is mainly targeted at FPGAs, with a relevant example in section 6.5.2. The corresponding benefits of FPGA technology are discussed in section 6.6, and the development software and testing are covered in sections 6.8 and 6.9.

In the sections below, the historical background of the FPGA, its architecture, and its possibilities as a concept with distinctive new capabilities in VLSI technology are discussed.

6.4.1 Historical background

The origins of much of today's FPGA research lie in work in the late 1960s and early 1970s on cellular arrays. This work was mainly concerned with improving the fault tolerance of logic structures, thus allowing larger silicon areas or whole wafers to be used to implement logic. The method proposed was to cover the wafer with a regular array of restructurable cells capable of implementing general logic functions. Both fuse- and flip-flop-programmed structures were proposed and investigated. Important early work in this area was done by Manning [Mann77], Minnick [Minn64], Wahlstrom [Wahl67], and Shoup [Shoup70]. A good survey of the early research appears in [Minn67].

As background research on FPGAs, the first proposal reviewed is by John V. Oldfield and Richard C. Dorf (1995, pg. 95). In this book they discuss the field-programmable gate array (FPGA) as a relatively new type of component for the construction of electronic systems, particularly those using digital, or more correctly logical, circuit principles. An FPGA consists of an array of functional blocks along with an interconnection network and, as the name implies, its configuration can be determined in the field, that is, at the point of application. The specific function of each block and the connections between blocks are prescribed by the user. The FPGA market has a wide range of architectures and alternative ways of controlling configurations. The FPGA takes its place in the continuing evolution of very-large-scale integrated (VLSI) circuit technology toward denser and faster circuits. Although present-day FPGA components have only a few thousand or tens of thousands of gates, future ones will have hundreds of thousands and eventually millions of gates.

We may expect to see the development of pipelined and parallel processing in which the system configuration changes dynamically. An image-processing system, for example, could divide computations into parallel tasks for a large number of processors realized within the same FPGA chip. While such schemes may seem futuristic, they suggest that the FPGA should not be thought of as just a VLSI component that replaces others, but as a concept with distinctive new possibilities. In present-day realizations, the FPGA has significant advantages for the development of prototype systems and early introduction to the market. The benefits are similar to those associated with the introduction of the microprocessor in the late 1970s, such as programmability and adaptability, but with additional advantages in speed, compactness, and design protection. For educational purposes, designing with FPGAs requires computer assistance at almost every stage, including detailed specification, simulation, placement, and routing, and calls for an overall systematic design methodology. This is an important aspect of the education of future engineers, who will need to improve both the performance and the quality of the systems they produce. At the same time, the increase in designer productivity makes it possible to consider alternative implementations at a higher level than previously, and should not cramp the creativity of a system designer.

The second proposal was presented by Charles E. Stroud and Nur A. Touba (2008, pg. 549-550), titled System-on-Chip Test Architectures. In this e-book the authors mention that, since the mid-1980s, field-programmable gate arrays (FPGAs) have become a dominant implementation medium for digital systems. Over that time, FPGAs have developed into complex devices, and their capacity has grown from a few thousand to tens of millions of logic gates; the largest FPGAs currently available exceed a billion transistors. FPGAs can be programmed and configured to perform any digital logic function, which makes them an attractive choice not only for fast time-to-market systems but also for rapid prototyping of digital systems.

The final proposal, by Pong P. Chu (2008, pg. 11), titled FPGA Prototyping by VHDL Examples: Xilinx Spartan-3 Version, states that a field-programmable gate array (FPGA) is a logic device that contains a two-dimensional array of general logic cells and programmable switches. A logic cell can be programmed to perform a simple function, and a programmable switch can be customized to provide interconnections among the logic cells. A custom design can be implemented by specifying the function of each logic cell and selectively setting the connection of each programmable switch. Once the design and synthesis are completed, we can use a simple adaptor cable to download the desired logic cell and switch configuration to the FPGA device and obtain the custom circuit. Since this process can be done "in the field" rather than "in a fabrication facility (fab)," the device is known as field-programmable.

6.5 Architecture

According to R.C. Seals and G.F. Whapshott (1997), the variety of FPGA architectures is large, as each manufacturer develops concepts for particular niche markets, and this, coupled with the non-deterministic nature of the timing for place and route, makes them more difficult to incorporate into designs. Due to the considerable differences in internal architecture, it is difficult to make appropriate comparisons, so a number of different architectures must be considered and their specific properties outlined. The development tools available range from low-cost PC-based packages to costly high-end workstation packages, and there is an encouraging move towards standardisation through two avenues: standard formats for intermediate-stage files, and standardised design languages such as VHDL. Because FPGAs are non-deterministic in the propagation delays of synthesized designs, extensive use is made of simulators to verify that the design, and its place-and-route result, implement the specified design. Simulation can be considered to take place at two levels: functional simulation, which confirms that the design is correct, and gate-level simulation, which confirms that the design after place and route still implements the design correctly. The sequence of design and simulation is as follows: the initial design is followed by functional simulation and then place and route; a further simulation at the gate level follows and, if all is correct, the device is programmed. Any faults identified are corrected before proceeding to the next stage. The use of FPGAs in digital system design is very much a 'hands-on' process, and the quickest approach is to tackle a non-trivial design.

FPGAs generally consist of a two-dimensional array of programmable logic blocks (PLBs) interconnected by a programmable routing network, with programmable input/output (I/O) cells at the periphery of the device, as illustrated in the diagram below.

In some FPGAs, two I/O cells can be combined to support differential-pair I/O standards. A trend in FPGAs is to include cores for specialized functions such as single-port and dual-port RAMs, first-in-first-out (FIFO) memories, multipliers, and DSPs. Within any given FPGA, all memory cores are usually of the same size in terms of the total number of memory bits, but each memory core in the array is individually programmable.

6.5.1 Example

RAM cores can be individually configured to select the mode of operation (single-port, dual-port, FIFO, etc.), the architecture in terms of number of address locations versus number of data bits per address location, the active edge of clocks and active level of control signals (clock enables, set/reset, etc.), as well as optional registered input and output signals. Another trend is the incorporation of one or more embedded processor cores, ranging in size from 8-bit to 64-bit architectures. For configuration, the first FPGA in a daisy chain operates in master mode to supply the configuration clock (CCLK) and read the configuration data from the PROM; the data are passed on to the next FPGA in the chain by connecting the configuration data input (Din) to the data output (Dout). Many FPGAs provide serial and parallel configuration interfaces for both master and slave configuration modes. The parallel interface is used to speed up the configuration process by reducing the download time via parallel data transfer; the most common parallel interfaces use 8-bit or 32-bit configuration data buses.

This facilitates configuration of FPGAs operating in slave mode by a processor, which can also perform reconfiguration or dynamic partial reconfiguration of the FPGAs once the system is operational. To avoid dedicating a large number of pins to a parallel configuration interface, these parallel data bus pins can optionally be reused for normal system function signals once the device is programmed. The IEEE standard boundary-scan interface, in conjunction with associated instructions to access the configuration memory, can also be used for slave serial configuration in many FPGAs.

6.6 Benefits of FPGA Technology

  1. Performance
  2. Time to Market
  3. Cost
  4. Reliability
  5. Long-Term Maintenance

6.6.1 Performance - By taking advantage of hardware parallelism, FPGAs exceed the computing power of digital signal processors (DSPs), breaking the paradigm of sequential execution and accomplishing more per clock cycle. Benchmarks from BDTI, a noted analyst and benchmarking firm, show how FPGAs can deliver many times the processing power per dollar of a DSP solution in some applications. Controlling inputs and outputs (I/O) at the hardware level provides faster response times and specialized functionality to closely match application requirements.

6.6.2 Time to market - FPGA technology offers flexibility and rapid prototyping capabilities in the face of increased time-to-market concerns. Instead of going through the long fabrication process of conventional ASIC design, an idea can be tested and verified in hardware, and incremental changes can then be implemented and iterated on an FPGA design within hours instead of weeks. Commercial off-the-shelf (COTS) hardware is also available, with different types of I/O already connected to a user-programmable FPGA chip. The availability of high-level software tools with layers of abstraction decreases the learning curve and provides valuable prebuilt functions for advanced signal and control processing.

6.6.3 Cost - The nonrecurring engineering (NRE) expense of custom ASIC design far exceeds that of FPGA-based hardware solutions. The large initial investment in ASICs is easy to justify for OEMs shipping thousands of chips per year, but many end users need custom hardware functionality for only the tens to hundreds of systems in development. The very nature of programmable silicon means there are no long lead times for assembly and no cost of fabrication. As system requirements change over time, the cost of respinning an ASIC is very high compared to making incremental changes to an FPGA design.

6.6.4 Reliability - While software tools provide the programming environment, FPGA circuitry is truly a "hard" implementation of program execution. Processor-based systems involve several layers of abstraction to share resources among multiple processes and to help schedule tasks: the operating system manages processor and memory bandwidth, and the driver layer controls hardware resources. For any given processor core, only one instruction can execute at a time, and processor-based systems are continually at risk of time-critical tasks pre-empting one another. FPGAs, which do not use operating systems, minimize reliability concerns with deterministic hardware dedicated to every task and true parallel execution.

6.6.5 Long-term maintenance - As mentioned earlier, FPGA chips are field-upgradable and do not require the time and expense involved in an ASIC redesign. Digital communication protocols can change over time, and ASIC-based interfaces may pose maintenance and forward-compatibility challenges. Because FPGA chips are reconfigurable, they are able to keep up with future modifications that may be necessary. As the system matures, functional enhancements can be made without modifying the board layout or redesigning the hardware.

6.7 Choosing an FPGA

When examining the specifications of an FPGA chip, note that they are often divided into configurable logic blocks such as slices or logic cells, fixed-function logic such as multipliers, and memory resources such as embedded block RAM. These resources are typically the most important when comparing and selecting FPGAs for a specific application.

Table 1 shows resource specifications used to compare FPGA chips within various Xilinx families. The number of individual components inside an FPGA cannot be used for a size comparison with ASIC technology; for this reason, Xilinx did not specify the number of equivalent system gates for the new Virtex-5 family.

6.8 Development Software

As FPGAs are much more complex than PLDs, the software used during the development process is correspondingly more complex and has to perform additional functions. The designs are larger and more complex, and are usually described abstractly using a hardware description language or schematic diagrams. In comparison, many PLD designs can be described adequately using Boolean equations.

The design development process can be considered to consist of a sequence of six steps of decreasing abstraction, as listed below:

  1. Design specification
  2. Conversion of the specification into a logically consistent description suitable for entry into a CAD system.
  3. Compiling the entered design.
  4. Simulating the design.
  5. Programming the target device.
  6. System commissioning and testing.

6.9 FPGA Testing

Coming to FPGA testing, there is no special challenge in testing an FPGA; the challenges faced are common to testing any other digital VLSI circuit. The DFT techniques developed for ASIC/custom digital circuits are directly applicable to FPGAs. ASIC test methodology is a fairly well developed area in both theory (science) and practice (art); the software tool support is mature and is an industry in its own right. It therefore makes a lot of sense to adopt the proven practices of the ASIC world in testing FPGAs. After all, FPGAs use the same CMOS technology that is standard in ASICs, and, as mentioned earlier, the end applications of FPGAs are mid-range "ASICs". FPGA design tools have the same look and feel as ASIC tools (design entry in HDL, synthesis, place & route, and verification). Since a specific design on an FPGA results in an ASIC-like netlist, an appropriate FPGA architecture with dedicated scan registers could easily adopt the ASIC scan-insertion technique. However, an FPGA is a standard device: thousands of different designs are done on the same device, so the FPGA test procedure must be generic, uniform, and independent of the end application. This is our first requirement for FPGA test.

It follows that we cannot directly apply the ASIC test methodology to FPGAs unchanged. As any treatment of VLSI design and test explains, the problem of testability boils down to having controllability and observability at every node in the circuit. This requirement is what led ASIC test methodology to the scan insertion method. In an FPGA, because of its configurability, every gate-level input node can be observed at a primary output using the FPGA's own resources, so no scan insertion is needed: the FPGA already carries all the extra test baggage with it. Knowing that every gate-level input node is observable, all gate-level structural faults are testable; in practice it must be possible to devise a series of test circuits to exercise all nodes.

However, FPGA test time is dominated by the number of configuration patterns (test circuits) that are employed. The simplest test schemes require the number of configurations to grow with the FPGA array size: it seems only natural that for bigger arrays with more resources, we must spend more configuration test circuits and more time testing them. Yet the FPGA test style must be scalable and independent of array size. This is our second requirement for FPGA test.

Several researchers have noted the importance and usefulness of the concept of iterative logic arrays (ILAs) in FPGA test. The FPGA especially lends itself to this implementation because of its inherent regularity. Some ILAs have the C-testability property, which means that the test suite is independent of the array size.

Finally, the FPGA test methodology must carry readily measurable test quality metrics. This is our last requirement for FPGA test.

In summary, an FPGA test methodology must be:

  1. Generic, uniform and independent of end application
  2. Scalable and independent of array size
  3. Reusable and lends itself to automation
  4. Carries readily measurable test quality metrics



Verilog was started in 1984 by Gateway Design Automation Inc. as a proprietary hardware modeling language. It is rumoured that the original language was designed by taking features from the most popular HDL of the time, called HiLo, as well as from traditional computer languages such as C. Verilog was not standardized at that time, and the language evolved through the versions released between 1984 and 1990. The Verilog simulator was first used in 1985 and was extended substantially through 1987; the implementation was the simulator sold by Gateway. The first major extension was Verilog-XL, which added a few features and implemented the infamous "XL algorithm", a very efficient method for doing gate-level simulation.

In 1990, Cadence Design Systems, whose primary products at that time included a thin-film process simulator, decided to acquire Gateway Design Automation, along with its other products. Cadence thus became the owner of the Verilog language and continued to market Verilog as both a language and a simulator. During this same period, Synopsys was marketing its top-down design methodology with Verilog, which proved to be a powerful combination.

In 1990, Cadence organized the Open Verilog International (OVI), and in 1991 gave it the documentation for the Verilog Hardware Description Language. This event “opened” the language.

7.1 Basic Concepts

7.1.1 Hardware Description Language

Two things distinguish an HDL from a linear language like “C”:


  • The ability to do several things simultaneously i.e. different code-blocks can run concurrently.


  • Ability to represent the passing of time and sequence events accordingly

7.1.2 VERILOG Introduction

  • Verilog HDL is a Hardware Description Language (HDL).
  • A Hardware Description Language is a language used to describe a digital system; one may describe a digital system at several levels.
  • An HDL may describe the layout of the transistors, resistors and wires on an Integrated Circuit (IC) chip; this is the switch level.
  • It may describe the flip-flops and logic gates in the digital system; this is the gate level.
  • At a higher level, the transfer of vectors of information between registers is described; this is known as the Register Transfer Level (RTL).
  • All these levels are supported by Verilog.
  • A powerful feature of the Verilog HDL is that you can use the same language for describing, testing and debugging your system.

7.1.3 VERILOG Features

  • Strong background: supported by OVI, and standardized in 1995 as IEEE Std 1364.
  • Industrial support: fast simulation and effective synthesis (per EE Times, used in 85% of ASIC foundries).
  • Universal: allows the entire process in one design environment (including analysis and verification).
  • Extensibility: the Verilog PLI allows for extension of Verilog's capabilities.

7.1.4 Design Flow

The typical design flow is shown in Figure 7.1.

Design Specification

  • Specifications are written first: the requirements and needs of the project.
  • They describe the functionality and overall architecture of the digital circuit to be designed.
  • Tools: word processors such as Word, KWriter or AbiWord; for drawing waveforms, tools such as WaveFormer, TestBencher or Word.

RTL Description

  • Conversion of the specification into coding form using CAD tools.
  • Coding styles: gate level modeling, dataflow modeling, behavioral modeling.
  • RTL coding editors: Vim, Emacs, ConTEXT, HDL Turbo Writer.

Functional Verification & Testing

  • Comparing the code with the specifications.
  • Testing the code with the corresponding inputs and outputs.
  • If testing fails, revisit the RTL description.
  • Simulation tools: ModelSim, VCS, Verilog-XL, Xilinx.

Logic Synthesis

  • Conversion of the RTL description into gate-level netlist form.
  • Description of the circuit in terms of gates and connections.
  • Synthesis tools: Design Compiler, FPGA Compiler, Synplify Pro, Leonardo Spectrum, and the Altera and Xilinx tools.

Logical Verification and Testing

  • Functional checking of the HDL code by simulation and synthesis. If it fails, revisit the RTL description.

Floor Planning, Automatic Place and Route

  • Creation of the layout from the corresponding gate-level netlist.
  • Arrange the blocks of the netlist on the chip.
  • Place & route: for FPGAs, use the FPGA vendor's P&R tool; ASIC flows need costly P&R tools such as Apollo. Students can use LASI or Magic.

Physical Layout

  • Physical design is the process of transforming a circuit description into the physical layout, which describes the position of cells and the routes for the interconnections between them.

Layout Verification

  • Verifying the physical layout structure.
  • If any modification is needed, revisit floor planning / place and route and the RTL description.

Implementation

  • Final stage in the design process.
  • Implementation of the design in the form of an IC.

7.1.5 Design Hierarchies

Bottom-Up Design

The traditional method of electronic design is bottom-up. Each design is performed at the gate level using standard gates. With the increasing complexity of new designs this approach is nearly impossible to maintain, so it has given way to new structural, hierarchical design methods; without these new design practices it would be impossible to handle the new complexity.

Top-Down Design

A true top-down design allows early testing, easy change of technologies and a structured system design, and offers many other advantages. However, a pure top-down design is very difficult to follow; because of this, most designs are a mix of both methods, implementing some key elements of each design style.

7.2 Modules

A module in Verilog consists of distinct parts, as shown in Figure 7.7. A module definition always begins with the keyword module. The module name, port list, port declarations and optional parameters must come first in a module definition; the port list and port declarations are present only if the module interacts with the external environment through ports. The five components within a module are:

  • variable declarations,
  • dataflow statements
  • instantiation of lower modules
  • behavioral blocks
  • tasks or functions.

These components can appear anywhere and in any order within the module definition, but the definition must finish with the endmodule statement. Except for the keywords module and endmodule and the module name, all components are optional and can be mixed and matched to suit design needs. Verilog allows multiple modules to be defined in a single file, and they may be defined in any order.
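A skeleton showing where these components sit inside a module might look as follows (a hypothetical sketch; all names are illustrative):

```verilog
module my_module(out, in1, in2);        // keyword module, name, port list
output out;                             // port declarations
input  in1, in2;

wire   w;                               // variable declarations
reg    state;

assign w = in1 & in2;                   // dataflow statement

// sub_block u1(w, in2);                // instantiation of a lower module
                                        // (commented out: sub_block not defined here)

always @(in1 or in2)                    // behavioural block
    state = in1 | in2;

assign out = state & w;

// tasks and functions could also be declared here
endmodule                               // definition ends with endmodule
```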

7.2.1 Instances

A module provides a template from which you can create actual objects. When a module is invoked, Verilog creates a unique object from the template; every object has its own name, variables, parameters and I/O interface. The process of creating objects from a module template is called instantiation, and the objects are called instances. In the example below, the top-level block creates four instances from the T-flip-flop (T_FF) template, and every T_FF in turn instantiates an inverter gate and a D_FF. Each instance must be given a unique name.
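The example referred to can be sketched as follows; this is a hypothetical reconstruction of the classic ripple-carry counter, and the D_FF shown is only one possible definition:

```verilog
// Top level: a 4-bit ripple-carry counter built from four T_FF instances.
module ripple_carry_counter(q, clk, reset);
output [3:0] q;
input clk, reset;

// Four instances, each with a unique name.
T_FF tff0(q[0], clk,  reset);
T_FF tff1(q[1], q[0], reset);
T_FF tff2(q[2], q[1], reset);
T_FF tff3(q[3], q[2], reset);
endmodule

// Each T_FF instantiates an inverter gate and a D_FF.
module T_FF(q, clk, reset);
output q;
input clk, reset;
wire d;

D_FF dff0(q, d, clk, reset);
not  n1(d, q);                 // toggle: d is always ~q
endmodule

// A simple D flip-flop with asynchronous reset (assumed behaviour).
module D_FF(q, d, clk, reset);
output q;
input d, clk, reset;
reg q;

always @(posedge reset or negedge clk)
    if (reset) q <= 1'b0;
    else       q <= d;
endmodule
```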

7.3 Ports

Ports provide the interface by which a module can communicate with its environment. For example, the input/output pins of an IC chip are its ports. The environment can interact with the module only through its ports. The internals of the module are not visible to the environment. This provides a very powerful flexibility to the designer. The internals of the module can be changed without affecting the environment as long as the interface is not modified. Ports are known as terminals as well.

7.3.1 Port Declaration

7.3.2 Port Connection Rules

One can visualize a port as consisting of two units: one internal to the module and one external to it, with the two units linked. There are rules governing port connections when modules are instantiated within other modules, and the Verilog simulator complains if any of them are violated. These rules, summarized in Figure 7.8, are as follows:


Inputs:

  • Internally, must be of net data type (e.g. wire)
  • Externally, may be connected to a reg or net data type

Outputs:

  • Internally, may be of net or reg data type
  • Externally, must be connected to a net data type

Inouts:

  • Internally, must be of net data type (tri recommended)
  • Externally, must be connected to a net data type (tri recommended)
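A short sketch of how these rules play out when one module instantiates another (hypothetical names; the commented-out line shows a violation):

```verilog
module dff(q, d, clk);
output q;
input  d, clk;
reg    q;                      // an output may be a reg internally
always @(posedge clk) q <= d;
endmodule

module top;
reg  d_in, clk;                // legal: inputs may be driven by regs externally
wire q_out;                    // outputs must be connected to nets externally

dff u1(q_out, d_in, clk);      // follows the rules above
// reg q_bad;
// dff u2(q_bad, d_in, clk);   // illegal: output connected to a reg
endmodule
```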

7.3.3 Ports Connection to External Signals

There are two methods of making connections between the signals specified in a module instantiation and the ports in a module definition. The two methods cannot be mixed:

  • Port by order list
  • Port by name

Port by order list

Connecting ports by order list is the most intuitive method for beginners: the signals to be connected must appear in the module instantiation in the same order as the ports in the port list of the module definition.

Port by name

For larger designs, where a module may have, say, 50 ports, remembering the order of the ports in the module definition is impractical and error prone. Verilog therefore provides the capability to connect external signals to ports by port name rather than by position.
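The two connection styles can be sketched with a hypothetical full adder (the names are illustrative):

```verilog
module fulladd(sum, c_out, a, b, c_in);
output sum, c_out;
input  a, b, c_in;
assign {c_out, sum} = a + b + c_in;
endmodule

module top;
reg  A, B, CIN;
wire S1, C1, S2, C2;

// Port by order list: signals appear in the same order as the port list.
fulladd fa1(S1, C1, A, B, CIN);

// Port by name: each .port(signal) pair is explicit, so order is irrelevant.
fulladd fa2(.c_in(CIN), .a(A), .b(B), .c_out(C2), .sum(S2));
endmodule
```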

7.4 Modelling Concepts

Verilog is both a behavioral and a structural language. The internals of each module can be defined at four levels of abstraction, depending on the needs of the design. Irrespective of the level of abstraction at which a module is described, it behaves identically toward the external environment; since the internals of the module are hidden from the environment, the level of abstraction used to describe a module can be changed without any change to the environment. The levels are defined below.

7.4.1 Behavioural or algorithmic level

This is the highest level of abstraction provided by Verilog HDL. A module can be implemented in terms of the desired design algorithm without concern for the hardware implementation details. Designing at this level is very similar to C programming.

7.4.2 Dataflow level

At this level the module is designed by specifying the data flow. The designer is aware of the data processing in the design and the data flow between hardware registers.

7.4.3 Gate level

The module is implemented in terms of logic gates and interconnections between these gates. Design at this level is similar to describing a design in terms of a gate-level logic diagram.

7.4.4 Switch level

This is the lowest level of abstraction provided by Verilog. A module can be implemented in terms of storage nodes, switches, and the interconnections between them; design at this level requires knowledge of switch-level implementation details. A designer can mix and match all four levels of abstraction in a design. In the digital design community, the term register transfer level (RTL) is frequently used for a Verilog description that uses a combination of behavioral and dataflow constructs and is acceptable to logic synthesis tools. If a design contains four modules, Verilog allows each of the modules to be written at a different level of abstraction; as the design matures, most modules are replaced with gate-level implementations.

Normally, the higher the level of abstraction, the more flexible and technology-independent the design; as one goes lower toward switch-level design, the design becomes technology dependent and inflexible, and a small modification can cause a significant number of changes. Consider the analogy with C and assembly language programming: it is easier to program in a higher-level language such as C, and the program can be easily ported to any machine, whereas a design at the assembly level is specific to one machine and cannot be easily ported to another.

7.5 Gate Level Modelling

Verilog has built-in primitives such as gates, transmission gates, and switches. These are rarely used in RTL coding; in the post-synthesis world they are used for modelling the ASIC/FPGA cells, and these cells are then used for gate-level simulation. The output netlist from the synthesis tool, which is imported into the place and route tool, also consists of Verilog gate-level primitives.

7.5.1 Gate Types

A logic circuit can be designed with logic gates, and Verilog supports logic gates as predefined primitives. These primitives are instantiated like modules, except that they are predefined in Verilog and do not need a module definition. All circuits can be designed using basic gates, of which there are two classes: and/or gates and buf/not gates.

And/Or Gates

And/or gates have one scalar output and multiple scalar inputs. The first terminal in the port list is the output; the other terminals are inputs. The output of a gate is evaluated as soon as one of its inputs changes. The and/or gates available in Verilog, shown in 7.9, are and, nand, or, nor, xor and xnor.


These gates are instantiated to build logic circuits in Verilog. Examples of gate instantiations are shown below; in each instance, OUT is connected to the gate's output out, and IN1 and IN2 are connected to its two inputs i1 and i2.
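The instantiations described can be sketched as follows (a hypothetical example, with one output net per gate to keep the sketch unambiguous):

```verilog
module gates_demo(OUT_AND, OUT_OR, OUT_XOR, IN1, IN2);
output OUT_AND, OUT_OR, OUT_XOR;
input  IN1, IN2;

// The first terminal is the output; the remaining terminals are inputs.
and a1(OUT_AND, IN1, IN2);
or  o1(OUT_OR,  IN1, IN2);
xor x1(OUT_XOR, IN1, IN2);

// A gate with more than two inputs: simply add more ports.
wire OUT4;
and a4(OUT4, IN1, IN2, OUT_AND, OUT_OR);
endmodule
```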

7.5.2 Gates symbol

The instance name does not need to be specified for primitives. More than two inputs can be specified in a gate instantiation: gates with more than two inputs are instantiated by simply adding more ports to the instantiation, and Verilog automatically instantiates the appropriate gate.

7.6 Behavioural & RTL Modelling

Verilog gives designers the ability to describe design functionality in an algorithmic manner; in other words, the designer describes the behaviour of the circuit. The circuit is thus represented at a very high level of abstraction in behavioural modelling. In many ways behavioural Verilog constructs resemble C language constructs, and Verilog's rich set of behavioural constructs provides the designer with a great amount of flexibility.
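As a small illustration of the behavioural style (a hypothetical sketch), a counter can be stated algorithmically, with no reference to gates:

```verilog
module counter(q, clk, reset);
output [3:0] q;
input clk, reset;
reg [3:0] q;

// The behaviour is stated directly, much as it would be in C.
always @(posedge clk or posedge reset) begin
    if (reset)
        q <= 4'b0000;   // asynchronous reset
    else
        q <= q + 1;     // count up on every clock edge
end
endmodule
```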

7.6.1 Operators

Verilog provides many different operator types:

  • Arithmetic Operators
  • Relational Operators
  • Bit-wise Operators
  • Logical Operators
  • Reduction Operators
  • Shift Operators
  • Concatenation Operator
  • Replication Operator
  • Conditional Operator
  • Equality Operator

7.7 Data Flow Modelling & RTL

For small circuits, the gate-level modeling approach works very well because the number of gates is limited and the designer can instantiate and connect every gate individually; with a basic knowledge of digital logic design, gate-level modeling is very intuitive. In complex designs, however, the number of gates is very large, so designers can work more effectively by implementing the function at a level of abstraction higher than gate level. Dataflow modeling is a powerful way to do this: rather than instantiating individual gates, Verilog allows a circuit to be designed in terms of the data flow between registers and the way the design processes data.

As logic synthesis tools have become more sophisticated, dataflow modeling has become a popular design approach, since it lets the designer focus on optimizing the circuit in terms of data flow. For maximum flexibility in the design process, designers typically use a Verilog description style that combines the concepts of gate-level, dataflow, and behavioral design. In the digital design community, the term RTL (Register Transfer Level) design is normally used for a mixture of behavioural and dataflow modeling.

7.8 Continuous Assignment Statements

A continuous assignment is the most basic statement in dataflow modeling, used to drive a value onto a net. A continuous assignment replaces gates in the description of the circuit and describes the circuit at a higher level of abstraction. A continuous assignment statement starts with the keyword assign, and such statements represent structural connections.

  • Continuous assignment statements are used for modeling tri-state buffers.
  • Continuous assignment statements are used for modeling combinational logic.
  • Continuous assignment statements sit outside procedural blocks.
  • A continuous assignment overrides any procedural assignment.
  • The left-hand side of a continuous assignment must be a net data type.

Syntax: assign (strength, strength) #(delay) net = expression;
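A few hypothetical instances of this syntax, covering combinational logic, an optional delay, and a tri-state buffer:

```verilog
module assign_demo(out, sum, bus, i1, i2, a, b, enable);
output       out;
output [3:0] sum;
output       bus;
input        i1, i2, enable;
input  [3:0] a, b;

// Combinational logic: out is re-evaluated whenever i1 or i2 changes.
assign out = i1 & i2;

// Optional delay: sum takes its new value 10 time units after a or b changes.
assign #10 sum = a + b;

// Tri-state buffer: drives a[0] when enable is high, high impedance otherwise.
assign bus = enable ? a[0] : 1'bz;
endmodule
```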

7.9 ModelSim

ModelSim is a verification and simulation tool for VHDL, Verilog, System Verilog, and mixed language designs.

7.9.1 Basic Simulation Flow

  • Creating the Working Library
  • In ModelSim, all designs are compiled into a library. You typically start a new simulation in ModelSim by creating a working library called “work”. “Work” is normally the library name which is used by the compiler as the default location for compiled design units.

  • Compiling Your Design
  • After creating the working library, you compile your design units into it. The ModelSim library format is compatible across all supported platforms. You can simulate your design on any platform without having to recompile your design.

  • Loading the Simulator with Your Design and Running the Simulation
  • With the design compiled, you load the simulator with your design by invoking the simulator on a top-level module (Verilog) or a configuration or entity/architecture pair (VHDL).

    Assuming the design loads successfully, the simulation time is set to zero, and you enter a run command to begin simulation.

  • Debugging Your Results
  • If you don't get the results you expect, you can use ModelSim's robust debugging environment to track down the cause of the problem.

7.10 Project Flow

A project is a collection mechanism for an HDL design under specification or test. Even though you don't have to use projects in ModelSim, they may ease interaction with the tool and are useful for organizing files and specifying simulation settings. The following diagram shows the basic steps for simulating a design within a ModelSim project.

As you can see, the flow is similar to the basic simulation flow. However, there are two important differences:

  • You do not have to create a working library in the project flow; it is done for you automatically.
  • Projects are persistent. In other words, they will open every time you invoke ModelSim unless you specifically close them.

7.11 Debugging Tools

ModelSim offers numerous tools for debugging and analyzing your design. Several of these tools are covered in subsequent lessons, including:

  • Using projects
  • Working with multiple libraries
  • Setting breakpoints and stepping through the source code
  • Viewing waveforms and measuring time
  • Viewing and initializing memories
  • Creating stimulus with the Waveform Editor
  • Automating simulation

7.12 Basic Commands of ModelSim

Adding the signals to the waveform window

  • First compile the top-level entity and the associated testbench.
  • Go to Simulate > Start Simulation and select the testbench from the "work" library drop-down.
  • In the Console window, type the following command: add wave -r *

This command should open a waveform window, with all the signals already added to the window.

7.12.1 Viewing the Waveforms

After using the above command, if the waveform window doesn't appear on screen or is hidden behind other windows, use the following command to view the waveforms.

  • view wave

Running Simulations

The following commands can be used to run the simulation:

run 100 or run 100 ns runs the simulation for 100 ns.

run 1 us runs the simulation for 1 microsecond.

Use ps for picoseconds and s for seconds.

7.13 About Xilinx ISE Tool

Overview of the Xilinx ISE project navigator

Xilinx ISE (Integrated Software Environment) controls all aspects of the development flow. Project Navigator is a graphical interface through which users access the software tools and the relevant files associated with the project; we use it to launch all development tasks except ModelSim simulation.

Source window: hierarchically displays the files included in the project

Process window: displays available processes for the source file currently selected

Transcript window: displays status messages, errors and warnings

Workspace window: contains multiple document windows (such as HDL code, reports, schematics, and so on) for viewing and editing

Each subwindow may be resized, moved, docked, or undocked. The default layout can be restored by selecting View > Restore Default Layout.

Xilinx ISE project navigator:

Xilinx ISE consists of an array of software tools; here we illustrate the basic development process. The four major steps are:

  1. Create the project design and HDL codes.
  2. Create a test bench and perform RTL simulation.
  3. Add a constraint file, then synthesize and implement the code.
  4. Generate and download the configuration file to an FPGA device.

7.14 Overview of ISE Tool

ISE controls all aspects of the design flow. Through the Project Navigator interface, you can access all of the design entry and design implementation tools, as well as the files and documents associated with your project. Project Navigator maintains a flat directory structure; therefore, maintain revision control through the use of snapshots.

7.14.1 Project Navigator Interface

The Project Navigator interface is divided into four main subwindows, as seen in the figure below. On the top left is the Sources window, which hierarchically displays the elements included in the project. Beneath the Sources window is the Processes window, which displays the available processes for the currently selected source. The third window, at the bottom of Project Navigator, is the Transcript window, which displays status messages, errors, and warnings and also contains interactive tabs for Tcl scripting and the Find in Files function. The fourth window, to the right, is a multi-document interface (MDI) window referred to as the Workspace; it enables you to view HTML reports, ASCII text files, schematics, and simulation waveforms. Each window may be resized, undocked from Project Navigator, or moved to a new location within the main Project Navigator window. The default layout can always be restored by selecting View > Restore Default Layout. These windows are discussed in more detail in the following sections.

7.14.2 Sources Window

This window consists of three tabs which provide information for the user. Each tab is discussed in further detail below.

7.14.3 Sources Tab

The Sources tab displays the project name, the specified device, and user documents and design source files associated with the selected Design View. The Design View (“Sources for”) drop-down list at the top of the Sources tab allows you to view only those source files associated with the selected Design View, such as Synthesis/Implementation standards.

7.14.4 Devices in the Spartan-3 Subfamily

Even though Spartan-3 FPGA devices have similar types of logic cells and macro cells, their densities differ. Each subfamily contains an array of devices of various densities.

7.14.5 Macro Cell

The Spartan-3 device contains four types of macro blocks: combinational multiplier, block RAM, digital clock manager (DCM), and input/ output block (IOB). The combinational multiplier accepts two 18-bit numbers as inputs and calculates the product. The block RAM is an 18k-bit synchronous SRAM that can be arranged in various types of configurations. A DCM uses a digital-delayed loop to reduce clock skew and to control the frequency and phase shift of a clock signal. An IOB controls the flow of data between the device's I/O pins and the internal logic. It can be configured to support a wide variety of I/O signalling standards.

7.15 Development Flow

The simplified development flow of an FPGA-based system is shown in the figure below; to facilitate further reading, we follow the terms used in the Xilinx documentation. The left portion of the flow is the refinement and programming process, in which a system is transformed from an abstract textual HDL description to a device cell-level configuration and then downloaded to the FPGA device. The right portion is the validation process, which checks whether the system meets the functional specification and performance goals. The major steps in the flow are:

  1. Design the system and derive the HDL file(s). We may need to add a separate constraint file to specify certain implementation constraints.
  2. Develop the test bench in HDL and perform RTL simulation. The term RTL reflects the fact that the HDL code is written at the register transfer level.
  3. Perform synthesis and implementation. The synthesis process, generally known as logic synthesis, transforms the HDL constructs into generic gate-level components, such as simple logic gates and FFs. The implementation process consists of three smaller processes: translate, map, and place and route.
  4. Translate: merges multiple design files into a single netlist. Map: generally known as technology mapping, this process maps the generic gates in the netlist to the FPGA's logic cells and IOBs. Place and route: generally known as placement and routing, this process derives the physical layout inside the FPGA chip; it places the cells in physical locations and determines the routes to connect the various signals. In the Xilinx flow, static timing analysis is performed at the end of the implementation process; it determines various timing parameters, such as the maximal propagation delay and the maximal clock frequency.
  5. Finally, generate and download the programming file. In this process, a configuration file is generated according to the final netlist and downloaded serially to the FPGA device to configure its logic cells and switches, after which the physical circuit can be verified. An optional functional simulation can be performed after synthesis, and an optional timing simulation after implementation. Functional simulation uses the synthesized netlist in place of the RTL description and checks the correctness of the synthesis process; timing simulation uses the final netlist, along with detailed timing data. Because of the complexity of the netlist, functional and timing simulation may require a significant amount of time. If we follow good design and coding practices, the HDL code will be synthesized and implemented correctly; we then only need RTL simulation to check the correctness of the HDL code and static timing analysis to examine the relevant timing information, and both functional and timing simulations can be omitted from the development flow.

7.15.1 Snapshots Tab

The Snapshots tab displays all snapshots associated with the project currently open in Project Navigator. A snapshot is a copy of the project, including all files in the working directory and the synthesis and simulation subdirectories. A snapshot is stored with the project for which it was taken and can be viewed in the Snapshots tab.

7.15.2 Libraries Tab

The Libraries tab displays all libraries associated with the project open in Project Navigator.

7.15.3 Processes Window

This window contains one default tab called the Processes tab.

7.15.4 Processes Tab

The Processes tab is context sensitive and changes based upon the source type selected in the Sources tab and the top-level source type of your project. From the Processes tab, you can run the functions necessary to define, run, and view your design. The Processes tab provides access to the following functions:

  • Add an Existing Source
  • Create New Source
  • View Design Summary
  • Design Entry Utilities

Provides access to symbol generation, instantiation templates, HDL Converter, View command line Log File, and simulation library compilation.

7.15.5 User Constraints

Provides access to editing location and timing constraints.

7.15.6 Synthesis

Provides access to Check Syntax, Synthesis, View RTL or Technology Schematic, and synthesis reports.

7.15.7 Implement Design

Provides access to implementation tools, design flow reports, and point tools.

7.15.8 Generate Programming File

Provides access to the configuration tools and bit stream generation. The Processes tab incorporates automake technology, which enables the user to select any process in the flow and have the software automatically run the processes necessary to reach the desired step. For example, when you run the Implement Design process, Project Navigator also runs the Synthesis process, because implementation depends on up-to-date synthesis results.

7.15.9 Transcript Window

The Transcript window contains five default tabs: Console, Errors, Warnings, Tcl Console, Find in Files.

7.15.10 Console

Displays errors, warnings, and information messages. Errors are signified by a red (X) next to the message, while warnings have a yellow exclamation mark (!).

7.15.11 Warnings

Displays only warning messages. Other console messages are filtered out.

7.15.12 Errors

Displays only error messages. Other console messages are filtered out.

7.15.13 Tcl Console

The Tcl Console is a user-interactive console. In addition to displaying errors, warnings, and informational messages, it allows the user to enter Project Navigator-specific Tcl commands. For more information on Tcl commands, see the ISE Help.

7.15.14 Find in Files

Displays the results of the Edit > Find in Files function.

7.16 Workspace

7.16.1 Design Summary

The Design Summary lists high-level information about the project, including overview information, a device utilization summary, performance data gathered from the Place & Route (PAR) report, constraints information, and summary information from all reports, with links to the individual reports.

7.16.2 Text Editor

Source files and other text documents can be opened in a user-designated editor. The editor is determined by the setting found by selecting Edit > Preferences, expanding ISE General, and clicking Editor; the default is the ISE Text Editor, which enables you to edit source files and user documents. You can also access the Language Templates, a catalog of ABEL, Verilog, VHDL, and User Constraints File templates that you can use and modify in your own design.

7.16.3 ISE Simulator / Waveform Editor

ISE Simulator / Waveform Editor is a test bench and test fixture creation tool integrated in the Project Navigator framework. Waveform Editor can be used to graphically enter stimuli and the expected response, then generate a VHDL test bench or Verilog test fixture.
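As an illustration, a Verilog test fixture of the kind this tool produces typically instantiates the design, drives stimuli, and observes responses. The sketch below is hand-written for a hypothetical 1-bit full adder; the module and signal names are illustrative, not actual Waveform Editor output.

```verilog
// Hypothetical test fixture for a 1-bit full adder (names are
// illustrative, not generated Waveform Editor output).
`timescale 1ns / 1ps

module full_adder_tb;
    reg  a, b, cin;      // stimulus registers driving the unit under test
    wire sum, cout;      // responses observed from the unit under test

    // Instantiate the unit under test (UUT)
    full_adder uut (.a(a), .b(b), .cin(cin), .sum(sum), .cout(cout));

    initial begin
        // Apply stimuli, waiting 10 ns between patterns
        a = 0; b = 0; cin = 0; #10;   // expect sum = 0, cout = 0
        a = 1; b = 1; cin = 0; #10;   // expect sum = 0, cout = 1
        a = 1; b = 1; cin = 1; #10;   // expect sum = 1, cout = 1
        $finish;
    end
endmodule
```

A generated fixture follows the same shape: stimulus registers, wire outputs, one instantiation, and a timed stimulus process derived from the drawn waveform.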

7.16.4 Schematic Editor

The Schematic Editor is integrated in the Project Navigator framework. The Schematic Editor can be used to graphically create and view logical designs.



This chapter reviews the major contributions of this thesis and discusses some directions for future research.

8.1 Dissertation Contributions

Test engineers and researchers continue to search for iterative circuit design schemes well suited to test methodology. Although many of these approaches deliver strong results and a number of well-known advantages, one central question persists in the development of iterative circuit design: the reliability of iterative circuits. In this chapter, a brief review of the contributions of the dissertation is presented.

In Chapter 2, iterative circuits were reviewed and investigated. As is well known, iterative networks were widely used in the early days of switching systems, when relays were the major means of realizing logic circuits. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with large numbers of parallel inputs. The method is also directly applicable to the design of VLSI circuits and has the advantage of producing a modular structure based on a standard cell which may be optimized independently in terms of layout.

In Chapter 3, design methods for iterative circuits were discussed: a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Changes and refinements are made in the most recent iteration of a design based on the results of testing. The iterative design process can be applied throughout new product development; in the early stages of development, changes are easy and affordable to implement, and the first step is to develop a prototype. The chapter also classified iterative circuits and illustrated the classification with an example.

8.2 Testability of Iterative Circuits

The increase in the complexity of integrated circuits, and the inherent increase in the cost of the tests carried out on them, make it necessary to look for ways of improving the testability of iterative circuits. The results can, however, be extended to a stable class of bilateral circuits. Kautz proposed the cell fault model (CFM), which was adopted by most researchers in testing ILAs. Under the CFM, only one cell can be faulty at a time, and as long as the faulty cell remains combinational, the fault may affect any of its output functions.

In Chapter 4, the design of iterative building blocks was investigated through different binary arithmetic circuits, since binary arithmetic is a combinational problem. It may seem natural to apply the general combinational design methods seen earlier to obtain these circuits, but the problem persists that the general method would use too many gates, so different routes must be sought. Coding for the building blocks was also developed.
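The bit-sliced approach described above can be sketched in Verilog: a single full-adder cell is defined once, and identical instances are cascaded so that each cell passes its carry to the next. This is an illustrative sketch of the technique, not code taken from the dissertation's appendix.

```verilog
// One cell of the iterative array: a 1-bit full adder (illustrative)
module full_adder (input a, b, cin, output sum, cout);
    assign sum  = a ^ b ^ cin;               // sum output of the cell
    assign cout = (a & b) | (cin & (a ^ b)); // carry passed to the next cell
endmodule

// Four identical cells cascaded into a 4-bit ripple-carry adder
module ripple_adder4 (input  [3:0] a, b,
                      input        cin,
                      output [3:0] sum,
                      output       cout);
    wire [2:0] c;  // internal carries between neighbouring cells
    full_adder fa0 (a[0], b[0], cin,  sum[0], c[0]);
    full_adder fa1 (a[1], b[1], c[0], sum[1], c[1]);
    full_adder fa2 (a[2], b[2], c[1], sum[2], c[2]);
    full_adder fa3 (a[3], b[3], c[2], sum[3], cout);
endmodule
```

The modularity is the point: the array is extended to any width by instantiating more copies of the same standard cell, which is exactly the property that makes iterative structures attractive for VLSI layout.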

In Chapter 5, the need for testing was investigated. As the density of VLSI products increases, their testing becomes more difficult and costly. Test pattern generation has shifted from a deterministic approach, in which a test pattern is generated automatically based on a fault model and an algorithm, to a random selection of test signals. While most problems in VLSI design have been reduced to algorithms in readily available software, responsibility for the various levels of testing and for the testing methodology can be a significant burden on the designer. A critical factor in all LSI and VLSI design is the need to incorporate methods of testing circuits; this task should proceed concurrently with any architectural considerations and not be left until fabricated parts are available. With the increased complexity of VLSI circuits, testing has become more costly and time-consuming.

In Chapter 6, the design and implementation of circuits on FPGAs, based on VLSI design, was discussed. Design and implementation follow two broad styles, ASIC and FPGA; the FPGA was treated in detail, including its historical background. In some FPGAs, two I/O cells can be combined to support differential-pair I/O standards. A trend in FPGAs is to include cores for specialized functions such as single-port and dual-port RAMs, first-in first-out (FIFO) memories, multipliers, and DSPs. Within any given FPGA, all memory cores are usually of the same size in terms of the total number of memory bits, but each memory core in the array is individually programmable. The chapter also covered implementation styles, an FPGA example, the benefits of FPGA technology, the question of why an FPGA should be chosen, software development for FPGAs, and finally testing on FPGAs.

In Chapter 7, Verilog was reviewed, along with its basic concepts and the role of hardware description languages. Two things distinguish an HDL from a sequential language like C: concurrency and timing. The chapter introduced Verilog and discussed its features, design specification, design styles, functional verification and testing, logic synthesis, floor planning, automatic place and route, physical layout, design hierarchies, modules, instances, ports, behavioural and RTL modelling, ModelSim, the project flow, commands, and graph verifying. It concluded with the Xilinx ISE tools, showing how to create a project, run synthesis and implementation, and interpret the resulting errors and warnings. The ISE Simulator / Waveform Editor is a test bench and test fixture creation tool integrated in the Project Navigator framework; it can be used to graphically enter stimuli and the expected response, and then generate a VHDL test bench or Verilog test fixture. The chapter ended with the Schematic Editor, also integrated in the Project Navigator framework, which can be used to graphically create and view logical designs.
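The concurrency that distinguishes an HDL from a sequential language like C can be shown with a minimal, hypothetical Verilog module in which two processes are simultaneously active; the module name and signals are illustrative only.

```verilog
// Minimal sketch of HDL concurrency: both always blocks are active
// at the same time, unlike statements in a sequential C program.
module concurrency_demo (input clk, input d, output reg q1, q2);
    // These two processes execute in parallel on every rising clock edge.
    // Nonblocking assignments (<=) make both sample their inputs at the
    // same instant, forming a two-stage shift register.
    always @(posedge clk) q1 <= d;    // first flip-flop samples d
    always @(posedge clk) q2 <= q1;   // second samples the old q1
endmodule
```

In C the second statement would see the updated value of the first; in Verilog the two blocks describe hardware operating simultaneously, which is exactly the concurrency and timing behaviour the chapter emphasizes.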



9. References

  • A. Rubio, R. Anglada and J. Figueras (1989), “Easily Testable Iterative Uni-dimensional CMOS Circuits”, Proceedings of the European Test Conference, pp. 240-245.
  • Barry Wilkinson and Rafic Makki (1992), Digital System Design, 2nd Ed, Prentice Hall.
  • B. Holdsworth (1993), Digital Logic Design, 3rd Ed, Butterworth-Heinemann.
  • Brian Holdsworth and Clive Woods (2002), Digital Logic Design, 4th Ed, Newnes Publications.
  • D. D. Givone and R. P. Roesser (1972), “Multidimensional Linear Iterative Circuits—General Properties”, IEEE Transactions on Computers, Vol. C-21, No. 10, pp. 1067-1073.
  • Douglas Lewin (1974), Logical Design of Switching Circuits, 2nd Ed, Thomas Nelson & Sons Ltd.
  • Douglas Lewin and David Protheroe (1994), Design of Logic Systems, 2nd Ed, Chapman & Hall.
  • Fredrick J. Hill and Gerald R. Peterson (1981), Introduction to Switching Theory & Logic Design, 3rd Ed, John Wiley & Sons, Inc.
  • Charles H. Roth, Jr. and Larry L. Kinney (2004), Fundamentals of Logic Design, 6th Ed, Cengage Learning.
  • Hassan A. Farhat (2004), Digital design and computer organization, CRC Press LLC.
  • John F. Wakerly (2006), Digital Design: Principles & Practices, 4th Ed, Prentice Hall.
  • John V. Oldfield and Richard C. Dorf (1995), Field Programmable Gate Arrays: Reconfigurable Logic for Rapid Prototyping and Implementation of Digital Systems, John Wiley & Sons, Inc.
  • Laung-Terng Wang, Charles E. Stroud, Nur A. Touba (2008), System-on-chip test architectures: nanometer design for testability, Elsevier Inc.
  • M. Morris Mano and Charles R. Kime (2004), Logic and Computer Design Fundamentals, 3rd Ed, Prentice Hall.
  • Pong P. Chu (2008), FPGA Prototyping by VHDL Examples: Xilinx Spartan-3 Version, John Wiley & Sons, Inc.
  • R. C. Seals & G. F. Whapshott (1997), Programmable Logic: PLDs & FPGAs, 1st Ed, Macmillan Press Ltd.
  • Wai-Kai Chen (2003), VLSI technology, CRC Press LLC.



10. Appendix

10.1 ADDER x1

10.2 ADDER x2

10.3 Adder Sum

10.4 Carry