VLSI Floorplanning Based on an Evolutionary Algorithm



This paper presents a hybrid particle swarm optimization (HPSO) algorithm for floorplanning optimization. A B*-tree is used at the initial stage to avoid overlapping of modules; then a PSO algorithm, augmented with the crossover and mutation operators of the genetic algorithm, is used to obtain an optimal placement solution. The main objectives of floorplanning are to minimize the chip area and interconnection wire length and to obtain a thermally balanced design. When only chip area and wire length are optimized, blocks with high power density tend to be packed tightly together, causing an uneven temperature distribution. Therefore, the temperature of each block is calculated with the HOTSPOT tool, which estimates temperature from block dimensions and power dissipation. Experimental results on the Microelectronics Center of North Carolina (MCNC) benchmark circuits show that our algorithm converges better than other methods.

Keywords- Hybrid Particle Swarm Optimization (HPSO), Genetic Algorithm (GA), Crossover, Mutation, Microelectronics Center of North Carolina (MCNC), Very Large Scale Integration (VLSI)


As the demand for new technology surges in the market, more research has been carried out in the field of IC design, which has made VLSI designs more complex, reliable, and compact, with better performance. Since a single chip now contains an enormous number of transistors, IC design has become much more complex. A key step in managing this complexity is floorplanning, an important physical design stage in the hierarchical, building-block design methodology. As circuit size increases, the solution space grows so quickly that exhaustively searching it for the global optimum becomes impossible. The HPSO algorithm can be adopted to solve the floorplanning problem.

Recently, many researchers have resorted to stochastic optimization algorithms based on slicing [12,15] and non-slicing floorplans [1,6,9,4]. In [2][3], bounded slicing grid and sequence pair approaches were used to represent non-slicing floorplans with a smaller solution space. Both approaches use constraint graphs to manipulate the transformation between the representation and its placement, which is complicated. To overcome this problem, Pei-Ning Guo et al. [4] proposed an O-tree representation for non-slicing floorplans; experimental results show that it achieves minimum wire length and very little dead space. Yun-Chih Chang et al. [1] presented B*-tree, an efficient, flexible, and effective data structure for representing non-slicing floorplans, together with simulated-annealing-based optimization schemes. This representation runs about 4.5 times faster than the O-tree, consumes about 60% less memory, and achieves near-optimal area utilization even for rectilinear modules. Finally, like the O-tree, the size of the search space of the floorplanning problem under the B*-tree representation is only O(n! 2^{2n-2} / n^{1.5}) [3], while it is O((n!)^2) under the sequence pair representation [8]. Hence, the B*-tree representation is adopted in this paper.

Since VLSI floorplanning is NP-hard, different heuristic methods have been proposed. They fall into two classes: constructive methods and iterative methods. A constructive method uses a heuristic strategy to build a floorplan directly; the bottom-left (BL) and BL-fill methods [14], [15] are the most common constructive approaches. Huang et al. [19] presented a very effective heuristic algorithm in which two important concepts, the corner-occupying action and the caving degree, guide the packing process. An iterative method uses a metaheuristic such as simulated annealing, tabu search, a genetic algorithm, or particle swarm optimization to produce the final best solution. Jianli Chen et al. [5] proposed a new heuristic that uses hybrid simulated annealing on a non-slicing floorplan representation. In [20], a genetic algorithm was applied to tackle the VLSI floorplanning problem. In [22], S. Anand et al. proposed the Simulated Spheroidizing Annealing Algorithm (SSAA), which is well suited to larger floorplans. In [6], W.-L. Hung proposed genetic-algorithm-based thermal-aware floorplanning. That was the first paper to consider thermal-aware floorplanning, but it considers only the area constraint and returns only the single fittest solution.

Particle swarm optimization (PSO) was developed by James Kennedy and Russell Eberhart in 1995. It is a stochastic optimization technique based on movement and swarm intelligence. PSO is initialized with a population of random solutions and applies the concept of social interaction to problem solving. Unlike most other population-based evolutionary algorithms, PSO is motivated by the simulation of social behaviour rather than survival of the fittest. The advantages of PSO are its simplicity of implementation and its very quick convergence. Sun et al. (2006) originally introduced PSO into the floorplanning problem. That paper adopts the B*-tree floorplanning structure (Chang et al. 2000) to generate an overlap-free initial placement and utilizes PSO to find a potentially optimal placement solution. However, no implementation details of the algorithm are given, and only area optimization is considered; it cannot optimize area and wire length simultaneously, so it is not an effective PSO algorithm for general floorplanning problems. To overcome this problem, a hybrid particle swarm optimization is introduced here in which both area and wire length are considered and optimized results are obtained.

The rest of this paper is organized as follows. Section 2 describes the floorplanning problem. Section 3 depicts the mechanism of PSO. The details of the PSO-based intelligent decision algorithm and the simulation results are given in Sections 4 and 5. A conclusion is drawn in Section 6.


A. Data Structure

Let the set of blocks be A = {a1, a2, ..., aN}, where N is the number of modules. Each block ai, where 1 <= i <= N, is rectangular with width Wi and height Hi. The modules are placed so that no two modules overlap. The aspect ratio of each module is Wi/Hi. There are two types of modules in floorplanning.

Hard module: a module whose shape (i.e., width and height) is fixed; it is denoted (W, H), where W is the width and H is the height of the module.

Soft module: a module whose width (W) and height (H) can vary as long as the aspect ratio stays within a given range, while the area (A) is fixed. In this paper we use hard modules to optimize the floorplan.

Fig. 1 Slicing floorplan and its slicing tree: (a) slicing floorplan; (b) slicing tree

B. Floorplan Structure

Floorplans can be represented in two ways: slicing and non-slicing. A slicing structure can be bisected horizontally or vertically, recursively, until each part contains only one module; a non-slicing floorplan cannot be bisected in this manner. In Fig. 1(a), letters denote modules and numbers denote horizontal and vertical cuts. Fig. 2 shows a non-slicing floorplan, where the numbers denote the modules. This paper considers a non-slicing floorplan representation.

Fig.2 Non-slicing floorplan


This paper uses a B*-tree representation at the initial stage to avoid overlapping of modules. The main advantage of the B*-tree is that it handles non-slicing structures and deals flexibly with hard modules in O(n) time. The construction of a B*-tree follows a depth-first search (DFS): the left subtree is constructed first, then the right subtree, recursively, with the root representing the module at the bottom-left corner of the floorplan, so the root module's coordinate is (xroot, yroot) = (0, 0). Let ni be a node and nj be its left or right child. If nj is the left child of ni, module j is placed adjacent to the right-hand side of module i, i.e., xj = xi + wi, where wi is the width of module i. Similarly, if nj is the right child of ni, module j is placed above module i with the same x-coordinate, i.e., xj = xi.

Fig. 3 (a) Admissible placement (b) B*-tree representing the placement

Since n0 is the root node in Fig. 3(b), module b0 is placed at the bottom-left corner. Following the DFS procedure, n7 is the left child of n0, so module b7 is placed on the right-hand side of b0. Since no left subtree exists below n7, construction continues with the right subtree of n7, i.e., n8. This procedure continues recursively until all nodes in the tree have been processed. The whole construction takes only linear time.
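To make the placement rules concrete, the following minimal sketch packs a B*-tree by DFS. The `Node` class and the contour-based y-coordinate computation are our own illustrative assumptions; the text only specifies the x-coordinate rules (left child: xj = xi + wi; right child: xj = xi).

```python
class Node:
    """Hypothetical B*-tree node: a hard module with width w and height h."""
    def __init__(self, w, h, left=None, right=None):
        self.w, self.h = w, h
        self.left, self.right = left, right
        self.x = self.y = 0  # placement coordinates, filled in by pack()

def pack(root):
    """DFS placement: the left child goes to the right of its parent
    (x_j = x_i + w_i); the right child is stacked above it (x_j = x_i).
    A simple horizontal contour of (x_start, x_end, top) segments gives
    the y-coordinate of each module."""
    contour = []  # segments of the current top profile of placed modules

    def y_at(x0, x1):
        # highest contour segment overlapping the interval [x0, x1)
        return max((top for (s, e, top) in contour if s < x1 and e > x0),
                   default=0)

    def place(node, x):
        node.x = x
        node.y = y_at(x, x + node.w)
        contour.append((x, x + node.w, node.y + node.h))
        if node.left:                    # left child: adjacent on the right
            place(node.left, x + node.w)
        if node.right:                   # right child: same x, stacked above
            place(node.right, x)

    place(root, 0)
    return root
```

For example, a root module of size 4x2 with a 3x2 left child and a 2x3 right child places the left child at (4, 0) and the right child at (0, 2), matching the rules above.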


Particle Swarm Optimization

Particle swarm optimization (PSO) is an optimization method that iteratively tries to improve a candidate solution with respect to a given measure of quality. PSO works on a population of candidate solutions (here obtained from the B*-tree); the individuals in the population are called particles. The particles move around the search space according to simple mathematical formulae over each particle's position and velocity. Each particle's movement is influenced by its local best-known position and is also guided toward the best-known position in the search space, which is updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. The position and velocity of each particle are updated according to the following equations.
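The equations themselves do not survive in the source; the standard Kennedy-Eberhart formulation, which the surrounding description matches, is:

```latex
v_i^{t+1} = w\, v_i^{t} + c_1 r_1 \left( pbest_i - x_i^{t} \right)
          + c_2 r_2 \left( gbest - x_i^{t} \right), \qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1}
```

Here w is the inertia weight, c1 and c2 are acceleration coefficients, and r1, r2 are random numbers uniformly drawn from [0, 1]; pbest_i is particle i's best-known position and gbest the swarm's. The paper's exact coefficient choices are not stated.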

Testing of VLSI circuits is performed using automatic test equipment (ATE). The ATE applies an input stimulus to the circuit under test (CUT) and measures the output response. If the ATE observes a response different from the expected response, the CUT fails that test. The three components of VLSI testing, input patterns, CUT, and stored response, are shown in Fig. 1. In VLSI circuits, the stimuli are called test patterns or test vectors. Testing typically consists of:

Set of test stimuli

Circuit Under Test (CUT)

Analysing output responses

If incorrect (fail), CUT assumed to be faulty

If correct (pass), CUT assumed to be fault-free

ATPG (automatic test pattern generation) is a method used to find an input (test vector) sequence that, when applied to a digital circuit, enables the tester to distinguish between the correct circuit behavior and the faulty behavior caused by defects. The efficiency of this method is measured by fault coverage, computational time, and the length of the test set.

Kanad Basu and Prabhat Mishra [4] developed a bitmask selection technique for test data compression that creates maximum matching patterns, along with a dictionary selection method that takes bitmask-based compression into account. Subsets of primary input vectors of limited size were used in [5] during testing for target faults. Chia-Yi Lin, Hsiu-Chuan Lin, and Hung-Ming Chen [2] proposed a technique for sequential circuit testing based on selective test pattern compression; it reduces shift-in power considerably by skipping switching signals passing through long scan chains. Irith Pomeranz and Sudhakar M. Reddy [1] proposed a static Test Data Volume Reduction (TDVR) algorithm in which compacted test vectors are obtained by fault-simulating each fault and removing redundant test vectors, a process that is complex for large circuits.

Fig 1 VLSI circuit testing (block diagram: input test stimuli -> circuit under test -> output response analysis -> pass/fail)

The genetic algorithm, proposed by Goldberg [13], is especially suited to solving large-scale combinatorial optimization problems. GA has been successfully applied in different areas of VLSI design, especially in testing, such as test pattern generation [5].

Genetic Approach for Test Pattern Generation

Test generation using deterministic, fault-oriented algorithms is highly complex and time consuming, so new approaches are needed to reduce execution time. In [7], GA was used for fault-simulation-based test generation of sequential circuits, with conditional execution of test vectors under a two-phase scheme. Several other GA approaches to test generation have been proposed. In [8], [9], fitness evaluation and population scoring are low-cost and based only on the fault coverage of each test vector. The disadvantage of this technique is that, when a dropping fault simulation is used, the generated vectors experimentally stop detecting the remaining faults after about ten generations. The method yields a better final test set but is very expensive. A new operator is used in [8]: after each generation, the best vector in the population is put into the final test set and then rescored with a new, decreasing fitness.

Genetic Algorithm Introduction

The most popular technique in evolutionary computation is the genetic algorithm. Holland proposed GA as a heuristic method based on "survival of the fittest", and it has proved a useful tool for search and optimization problems. The space of all feasible solutions (the set of solutions among which the desired solution resides) is called the search space; every point in the search space represents one possible solution, and each possible solution can be "marked" by its fitness value, depending on the problem definition. A genetic algorithm searches for the best solution among the many possible solutions, each represented by one point in the search space.

Basics of Genetic Algorithm

GA starts by generating an initial population of chromosomes, generally at random. The algorithm then loops over an iterative process to evolve the population [11]. Each iteration consists of the following four steps:

Selection: The first step selects individuals for reproduction. Selection is based on the relative fitness of the individuals, so the best ones are more often chosen for reproduction.

Reproduction: In the second step, offspring are produced from the selected individuals. New chromosomes for the next generation are produced by crossover, mutation, or both.

Evaluation: In this step the fitness of the new individuals (chromosomes, or offspring) is evaluated.

Replacement: In this last step, individuals from the old population are replaced by the new ones.

The algorithm is stopped when the population converges toward the optimal solution.
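The four steps above can be sketched as a generic GA driver. This is an illustrative skeleton, not the paper's exact procedure: the operator functions (`select`, `crossover`, `mutate`) and the fitness function are supplied by the caller.

```python
import random

def genetic_algorithm(population, fitness, select, crossover, mutate,
                      generations=50):
    """Generic GA loop: evaluation, selection, reproduction (crossover
    plus mutation), and replacement, repeated for a fixed number of
    generations. Returns the fittest individual in the final population."""
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in population]   # evaluation
        next_gen = []
        while len(next_gen) < len(population):
            p1, p2 = select(scored), select(scored)            # selection
            next_gen.append(mutate(crossover(p1, p2)))         # reproduction
        population = next_gen                                  # replacement
    return max(population, key=fitness)
```

As a sanity check, this loop solves the toy OneMax problem (maximize the number of 1-bits in a bitstring) with tournament selection, single-point crossover, and single-bit mutation.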

Genetic Algorithm for Test Pattern Generation

In this paper, a genetic algorithm is used for test pattern generation. The test vectors (test patterns) form the population, i.e., the set of candidate solutions for the GA, and the algorithm searches for an optimal solution with high fault coverage. A random population of n test vectors (chromosomes) is first generated, and the fitness of each individual is evaluated based on its fault coverage. By repeating selection, crossover, mutation, and replacement, a new population of chromosomes is created for the next generation. The compacted (optimal) test vectors generated by this algorithm detected all the faults injected into the VLSI circuits. Results show that the test set size is reduced without any degradation in fault coverage for the benchmark circuits.

The flowchart of the genetic algorithm used in this paper is shown in Fig. 2.

The steps involved in the algorithm are

1. Initial population: A random population of n chromosomes (candidate solutions) is created. The population size for each generation is chosen to be large enough to ensure adequate diversity. Here the individuals of the population are the test vectors for each circuit, and the length of each chromosome equals the number of primary inputs of the CUT. There is always a trade-off between the convergence rate and the search space: the population size increases with the length of the test vector. Table 1 shows approximate population sizes for different test vector lengths.


Table 1. Approximate population size for different test vector lengths (columns: Test Vector Length | Population Size)

Fig 2 Flow chart of the genetic algorithm (generate initial population of test vectors -> evaluate the fitness of each individual by fault simulation with fault dropping -> if the optimization criteria are met, output the best individuals; otherwise generate a new population and repeat)

2. Fitness Function Evaluation: In this step the fitness function f(x) of each chromosome (test vector) in the population is evaluated. The fitness function measures the quality of a test vector: the vector that detects the largest number of faults is the best test vector, and when fault dropping is used, the vector that detects the largest number of new faults is the best. Based on this fitness value, the chromosomes (test vectors) are selected by the selection process for reproduction. The fitness function is problem specific; in this paper it is based on fault coverage and is given by

F(x) = (number of faults detected by the test vector) / (total number of faults)

The optimum solution is obtained for the maximum value of F(x).
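Scoring with fault dropping, as described above, can be sketched as follows. The `detects` mapping stands in for a real fault simulator and is purely illustrative; in practice each vector's detected-fault set would come from fault simulation of the CUT.

```python
def score_with_fault_dropping(test_vectors, all_faults, detects):
    """Score each test vector by the number of *new* faults it detects;
    once a fault is detected it is dropped from further simulation.
    Returns the per-vector scores and the set of still-undetected faults."""
    undetected = set(all_faults)
    scores = []
    for v in test_vectors:
        new = detects[v] & undetected   # faults this vector newly detects
        scores.append(len(new))
        undetected -= new               # fault dropping
    return scores, undetected
```

For example, if vector 'a' detects faults {1, 2}, 'b' detects {2, 3}, and 'c' detects {4, 5}, then with fault dropping 'b' scores only 1 (fault 2 was already dropped by 'a').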

3. Creation of New Population: A new population is created by repeating the following steps: selection, crossover, and mutation.

Selection: In the selection process, two parent chromosomes are selected from the population to produce offspring. Selection is based on fitness value: the better the fitness, the greater the chance of being selected. The selected chromosomes are then used in the crossover process.

There are different types of selection processes; in this paper a tournament selection scheme is used.

Tournament Selection: Tournament selection chooses each parent by picking a number of individuals at random (the tournament size) and taking the best individual of that set, i.e., the one with the highest fitness, as the winner. The winner of each tournament is inserted into the mating pool, and tournaments are repeated until the mating pool for generating new offspring is filled. Because it comprises tournament winners, the mating pool has a higher average fitness than the population. This method is efficient and leads to an optimal solution.
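One tournament of the scheme just described can be sketched in a few lines (an illustrative sketch; the paper does not specify its tournament size):

```python
import random

def tournament_select(population, fitness, tournament_size=3):
    """Draw `tournament_size` individuals at random and return the one
    with the highest fitness (the tournament winner)."""
    contenders = random.sample(population, tournament_size)
    return max(contenders, key=fitness)
```

Note that when the tournament size equals the population size, the winner is always the globally fittest individual; smaller tournaments apply gentler selection pressure.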

Crossover: Crossover takes two parents and produces offspring from them. With a given crossover probability, the parents chosen by the selection process are crossed over to produce new offspring (children); if no crossover is performed, the offspring are exact copies of the parents. After the selection process, the population consists of better individuals. The three steps of the crossover operator are:

i. The reproduction operator selects at random a pair of individual strings for mating.

ii. A cross site is selected at random along the string length.

iii. Finally, the position values are swapped between the two strings following the cross site.

In this paper single-point crossover is used, which converges much faster. In single-point crossover, the two mating chromosomes are each cut once at a corresponding point, and the sections after the cuts are exchanged to form new offspring. A cross site (crossover point) is selected randomly along the length of the parent chromosomes, and the bits after the cross site are exchanged. Single-point crossover is shown in Fig. 3.

Fig 3 Single point crossover
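In code, single-point crossover (an illustrative sketch, not the paper's implementation) amounts to cutting both parents at one random site and swapping the tails:

```python
import random

def single_point_crossover(parent1, parent2):
    """Cut both equal-length parents at one random site and exchange
    the tails, producing two children."""
    cut = random.randrange(1, len(parent1))  # never cut at the very ends
    return (parent1[:cut] + parent2[cut:],
            parent2[:cut] + parent1[cut:])
```

Every gene from the parents appears exactly once across the two children, which is why crossover recombines rather than invents genetic material.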

Mutation: After crossover, mutation is performed on the offspring obtained from the crossover. Mutation prevents the algorithm from getting trapped in a local minimum and is viewed as a background operator that maintains genetic diversity in the population, introducing new genetic structures by randomly modifying some of its building blocks. The form of mutation varies with the representation; for a binary representation, mutation inverts bits from '0' to '1' and from '1' to '0' with a small mutation probability. Mutation is shown in Fig. 4.

Fig 4 Mutation
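The binary mutation just described can be sketched as an independent bit-flip per position (the mutation probability 0.01 below is an illustrative default; the paper does not state its value):

```python
import random

def bit_flip_mutation(chromosome, p_mut=0.01):
    """Flip each bit independently with small probability p_mut."""
    return [bit ^ 1 if random.random() < p_mut else bit
            for bit in chromosome]
```

With p_mut = 0 the chromosome is returned unchanged; with p_mut = 1 every bit is inverted.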

Optimization Criteria: The condition for the algorithm to stop is called the optimization (stopping) criterion. In this paper the stopping criterion is the total number of faults detected, i.e., the sum of the faults detected by each test vector. Fault simulation is performed for each injected fault, and each test vector is then evaluated on the faults it detects.

Results and discussion

Combinational circuits were taken as the CUTs and simulated in Matlab to measure fault coverage. The ratio of the number of faults detected by a test vector to the total number of faults injected into the circuit gives that vector's fault coverage. Twenty faults were injected into the ISCAS'85 combinational benchmark circuit c17; since it is a 5-input circuit, there are 32 possible test vectors. Applying the genetic algorithm showed that only 5 of the 32 test vectors were needed to detect all the injected faults. The 1-bit adder, a 3-input circuit with 8 possible test vectors, was injected with 22 faults; the GA found that only 4 test vectors were enough to detect all of them.

Experimental results show that the test pattern set size is reduced by up to 58% in comparison with other methods such as the static TDVR algorithm, as shown in Table 2.

Fig 5 Circuit under test: benchmark circuit c17


Table 2. Result: test set compaction

Circuit | Number of Inputs and Patterns Applied | Fault Model and Number of Faults Introduced | Number of Test Vectors for 100% Fault Coverage | % Reduction of Number of Test Vectors
c17 | 5 inputs, 32 test vectors | stuck-at fault model, 20 faults | 5 |
1-bit adder | 3 inputs, 8 test vectors | stuck-at fault model, 22 faults | 4 |

A genetic algorithm has been used for test pattern generation, reducing the test set size without any degradation in fault coverage. Applied to combinational benchmark circuits such as c17 and the 1-bit full adder, it reduced the test pattern count by approximately 50% compared with other test compaction techniques such as TDVR, while the compacted test set achieved the same fault coverage as the original. This reduction in test vectors in turn reduces the test application time and hence the test power. In future work, the GA can be applied to sequential circuits to obtain reduced test sets.