# Parameter Optimization of Quantum Well Nanostructure

Carrier transport properties of nanodevices are controlled by the biasing field, the frequency of the applied field and system parameters such as lattice temperature, quantum well width, spacer width and carrier concentration. All these parameters are interrelated in such a way that it is very difficult to predict optimized system parameters for desired electrical characteristics using traditional mathematical optimization techniques. Evolutionary algorithms are stochastic methods that mimic natural biological evolution or the social behaviour of species, and they have been used for large-scale optimization problems in many applications. In this work, two evolutionary algorithms, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are applied to obtain optimized system parameters for the AlxGa1-xAs/GaAs quantum well nanostructure, which may be utilized during fabrication of better nanodevices. The results are compared in terms of convergence speed, processing time and quality of results. The PSO-based algorithm is found to converge faster than GA for almost the same quality of results, and its processing time is also shorter for the present application of parameter optimization for nanodevice modeling.

Keywords: Optimization, GA, PSO, quantum well, scattering mechanism, mobility, frequency response.

## 1. Introduction

Recent advances in crystal growth techniques like fine-line lithography, metal-organic chemical vapour deposition (MOCVD) and molecular beam epitaxy (MBE) have made possible the fabrication of low-dimensional semiconductor structures such as quantum wells, quantum wires and quantum dots [1-5]. A quantum well (QW) is formed when a thin layer of a lower-bandgap semiconductor is sandwiched between two layers of a higher-bandgap semiconductor [6-7]. In a quantum well structure, the electrical and optical properties of the semiconductor differ markedly from those of the bulk material due to quantum effects [8-9]. Owing to modulation doping in QW structures, carriers are separated from the ionized impurities, which increases carrier mobility through reduced ionized-impurity scattering. The carrier concentration in a QW is high, and Coulomb scattering is further reduced when the spacer layer is sufficiently thick [10]. Theoretical studies of the electrical characteristics are clearly vital to understand the physics of these devices.

Electrical characteristics of the carriers in a QW are controlled by system parameters such as lattice temperature, well width, spacer width, carrier concentration, the external dc biasing field and the frequency of the applied ac field. All these parameters are interrelated in such a way that it is very difficult to predict optimized parameter values for desired electrical characteristics [11-12] using traditional mathematical optimization techniques. Evolutionary algorithms that mimic natural biological evolution or the social behaviour of species have been developed for fast and robust solution of complex optimization problems. The genetic algorithm (GA) is a computationally simple but powerful algorithm based on the principle of the 'survival of the fittest' and the natural process of evolution through reproduction [13]. It has been shown, both theoretically and empirically, that GA provides robust search in complex spaces. As a result, GA is now finding widespread application in sociological, scientific and technological circles [14]. Despite its simplicity, GA may require a long processing time to reach a near-optimum solution.

PSO is an evolutionary computational-intelligence technique inspired by the social behaviour of bird flocking and fish schooling [15]. The PSO algorithm shares many common points with GA: both start with a randomly generated population, use fitness values to evaluate the population, search for optima by updating generations, and neither guarantees success [15]. Each solution in PSO is a 'bird', referred to as a 'particle', which is analogous to a chromosome in GA. As opposed to GA, PSO does not create new birds from parents; it utilizes a population of particles that fly through the problem hyperspace with given velocities. At each iteration, the velocity of each particle is stochastically adjusted according to the historical best position of the particle itself and the neighbourhood best position; both bests are derived according to a user-defined fitness function. The movement of each particle naturally evolves towards an optimal or near-optimal solution. The performance of PSO is not greatly affected by the size and nonlinearity of the problem, and it can converge to the optimal solution in many problems where most analytical methods fail to converge. It can therefore be effectively applied to different optimization problems. Moreover, PSO has some advantages over similar optimization techniques such as GA, namely the following [16].

- PSO is easier to implement and there are fewer parameters to adjust.
- In PSO, every particle remembers its own previous best value as well as the neighbourhood best; therefore, it has a more effective memory capability than GA.
- PSO is more efficient in maintaining the diversity of the swarm [17], since all the particles use the information related to the most successful particle in order to improve themselves, whereas in GA the worse solutions are discarded and only the good ones are saved, so the population evolves around a subset of the best individuals.

In the present work, both PSO and GA based optimization techniques are employed to determine the optimized system parameters for AlxGa1-xAs/GaAs quantum well nanostructure for nanodevice applications. The parameters to be optimized include lattice temperature, channel length, carrier concentration and spacer width to get the maximized values of mobility and cut-off frequency. Performance comparison of the two algorithms is then presented in terms of convergence speed, processing time and quality of results.

## 2. Analytical model of fitness function

A square QW of AlxGa1-xAs/GaAs with infinite barrier height is considered. Reduced ionized-impurity scattering and improved carrier concentration in the QW establish a strong electron-electron interaction, which favours a heated drifted Fermi-Dirac distribution function for the carriers, characterized by an electron temperature Te and a drifted crystal momentum pd. In the presence of an electric field applied parallel to the heterojunction interface, the carrier distribution function can be expressed as:

$$f(\vec{k}) = f_0(E) + \frac{\hbar k\, p_d \cos\gamma}{m^*}\left(-\frac{\partial f_0}{\partial E}\right) \qquad (1)$$

where f0(E) is the Fermi-Dirac distribution function for the carriers at the electron temperature Te, ħ is Planck's constant divided by 2π, k is the two-dimensional wave vector of the carriers with energy E, m* is the electronic effective mass and γ is the angle between the applied electric field and the two-dimensional wave vector.

An ac electric field of magnitude F1 and angular frequency ω superimposed on a moderate dc bias field F0 is assumed to act parallel to the heterojunction interface, so the overall field is given by:

$$F = F_0 + F_1 \sin\omega t \qquad (2)$$

As the electron temperature and the drift momentum depend on the field and the scattering processes, they will also have similar alternating components, generally differing in phase.

$$T_e = T_0 + T_{1r}\sin\omega t + T_{1i}\cos\omega t \qquad (3)$$

$$p_d = p_0 + p_{1r}\sin\omega t + p_{1i}\cos\omega t \qquad (4)$$

where T0 and p0 are the steady-state parts, T1r and p1r are the real parts, and T1i and p1i are the imaginary parts of the alternating components of Te and pd, respectively. The energy and momentum balance equations obeyed by the carriers can be written as:

$$\frac{d\langle E\rangle}{dt} = \frac{e F p_d}{m^*} - \left\langle \frac{dE}{dt} \right\rangle_{s} \qquad (5)$$

and

$$\frac{dp_d}{dt} = eF - \left\langle \frac{dp}{dt} \right\rangle_{s} \qquad (6)$$

where ⟨dE/dt⟩s and ⟨dp/dt⟩s represent the average energy and momentum loss rates due to scattering, ⟨E⟩ is the average energy of a carrier and e is the carrier charge. In the present model, the effects of delta doping are included in the energy- and momentum-loss calculations to give more accurate results. We insert (3) and (4) in (5) and (6), retain terms up to linear order in the alternating components, and equate the steady parts and the coefficients of sin ωt and cos ωt on the two sides of the resulting equations, following the procedure adopted in Ref. 6. For a given electric field F0, we solve for p0 and T0. The dc mobility μdc and the ac mobility μac are then expressed as:

$$\mu_{dc} = \frac{p_0}{m^* F_0} \qquad (7)$$

$$\mu_{ac} = \frac{\sqrt{p_{1r}^2 + p_{1i}^2}}{m^* F_1} \qquad (8)$$

The phase lag φ, by which the resulting alternating current lags behind the applied field, is expressed as:

$$\phi = \tan^{-1}\left(\frac{p_{1i}}{p_{1r}}\right) \qquad (9)$$

Equations (7), (8) and (9) are used as the fitness functions for dc mobility, ac mobility and phase angle, respectively. Detailed derivations of the fitness functions are available in Ref. 7 and are deliberately not included in the present analysis for brevity.

## 3. GA Based Optimization

In GA, a solution to a given optimization problem is represented in the form of a string, called a 'chromosome', consisting of a set of elements, called 'genes', that hold a set of values for the optimization variables [18]. For a suitable binary representation, the four parameters (carrier concentration (N2D), quantum well width (LZ), lattice temperature (TL) and spacer width (LS)) are coded into a single finite-length string of twenty-three bits, as shown in Fig. 1.

Fig. 1. A chromosome with 23 bits.
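As an illustration of this encoding, the sketch below maps the four parameters onto one 23-bit string using the parameter ranges given in Section 5. The per-parameter bit widths (6 + 6 + 6 + 5 = 23) are an assumption made for illustration; the paper does not specify how the 23 bits are divided among the parameters.

```python
# Hypothetical sketch of the 23-bit chromosome encoding; the bit split
# per parameter is an assumption, not taken from the paper.

RANGES = {                      # (low, high, bits) for each parameter
    "N2D": (6e15, 10e15, 6),    # carrier concentration, /m^2
    "LZ":  (8e-9, 12e-9, 6),    # quantum well width, m
    "LS":  (10e-9, 50e-9, 6),   # spacer width, m
    "TL":  (77.0, 300.0, 5),    # lattice temperature, K
}

def encode(params):
    """Map real-valued parameters onto one 23-bit string."""
    bits = ""
    for name, (lo, hi, n) in RANGES.items():
        frac = (params[name] - lo) / (hi - lo)   # normalize to [0, 1]
        code = round(frac * (2 ** n - 1))        # quantize to n bits
        bits += format(code, f"0{n}b")
    return bits

def decode(bits):
    """Inverse mapping: 23-bit string back to parameter values."""
    params, pos = {}, 0
    for name, (lo, hi, n) in RANGES.items():
        code = int(bits[pos:pos + n], 2)
        params[name] = lo + (hi - lo) * code / (2 ** n - 1)
        pos += n
    return params
```

The quantization step of each parameter is fixed by its bit width, so a finer grid for any parameter would require reallocating bits within the 23-bit budget.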

To start with, a random population of chromosomes is generated. The fitness of each chromosome is determined by evaluating it against the fitness function, and the average is computed, which is taken as the starting average fitness value. Strings/chromosomes are then selected from the generation according to their fitness value; strings whose fitness value is less than the average fitness value are rejected and do not pass to the next generation. Subsequent generations are developed by selecting pairs of parent strings from the present genetic pool to produce offspring strings in the next generation, an operation called "crossover". For the crossover operation, an integer position t along the string is selected randomly between 1 and P − 1, where P is the maximum string length. Two new strings are created by swapping all binary bits between positions t + 1 and P inclusive. As an example, two consecutive strings Sk and Sk+1 are shown in Fig. 2. A random number is chosen between 1 and 22 (23 − 1), since P = 23 here. The result of the crossover, which produces two new strings S'k and S'k+1, is indicated in Fig. 2.
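A minimal sketch of this one-point crossover, assuming chromosomes are stored as plain bit strings; the 1-based positions t + 1 … P in the text correspond to the Python slice `[t:]`.

```python
import random

def crossover(s1, s2, rng=random):
    """One-point crossover: swap all bits after a random cut t in [1, P-1]."""
    P = len(s1)                    # P = 23 for this problem
    t = rng.randint(1, P - 1)      # random cut position
    # Bits at 1-based positions t+1 .. P are exchanged between the parents.
    return s1[:t] + s2[t:], s2[:t] + s1[t:]
```

Because the cut never falls outside [1, P − 1], each child always inherits at least one bit from each parent.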

Fig. 2. Crossover operation between two consecutive strings.

Gradually, generation by generation, the algorithm progresses towards the optimum solution. When the improvement in the average fitness value stays in the range 0.00-0.05% for at least ten consecutive generations, the program is terminated. The mutation operator plays a secondary role in the simple GA, and mutation rates are similarly small in natural populations. Mutation is a rare process that resembles a sudden change to an offspring; it is realized by randomly selecting one chromosome from the population and arbitrarily changing some of its information. Its benefit is that it randomly introduces new genetic material into the evolutionary process, thereby avoiding stagnation around local minima [19]. The four parameters that affect the performance of GA are population size, number of generations, crossover rate and mutation rate. A larger population size and a large number of generations increase the likelihood of obtaining a global optimum solution, but substantially increase processing time.
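The mutation operator and the stopping rule described above might be sketched as follows. The per-bit flip style of mutation and the relative-improvement test are illustrative assumptions, chosen to be consistent with the 9% mutation rate of Table 1 and the 0.00-0.05% / ten-generation criterion in the text.

```python
import random

def mutate(bits, rate=0.09, rng=random):
    """Flip each bit independently with the mutation rate (9% in Table 1)."""
    flip = {"0": "1", "1": "0"}
    return "".join(flip[b] if rng.random() < rate else b for b in bits)

def converged(avg_fitness_history, window=10, tol=0.0005):
    """True when average fitness improves by at most 0.05% over ten
    consecutive generations, mirroring the stopping rule above."""
    if len(avg_fitness_history) < window + 1:
        return False
    recent = avg_fitness_history[-(window + 1):]
    return all(abs(b - a) <= tol * max(abs(a), 1e-12)
               for a, b in zip(recent, recent[1:]))
```

The termination check operates on the history of per-generation average fitness values, so it can be evaluated once per generation at negligible cost.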

## 4. PSO Based Optimization

The PSO algorithm begins by initializing a group of random particles (solutions) and then searches for optima by updating generations. The fitness values of all particles are evaluated by the fitness function to be optimized, and an iterative process to improve these candidate solutions is set in motion. With the progress of 'iteration', which is synonymous with generation in GA, every particle updates its velocity and position and moves through the problem or solution space. In each iteration, the position of a particle is updated by following two "best" values. The first is the best solution (fitness value) that the particular particle has achieved so far, called "pbest"; the second is the best solution obtained by any particle in the entire population, known as the global best or "gbest" [20]. After finding the pbest and gbest values, the particle updates its velocity and position using the following two equations:

$$v_i^{k+1} = w\,v_i^{k} + c_1\,\mathrm{rand}()\,\big(p_i - x_i^{k}\big) + c_2\,\mathrm{rand}()\,\big(p_g - x_i^{k}\big) \qquad (10)$$

$$x_i^{k+1} = x_i^{k} + v_i^{k+1} \qquad (11)$$

where vi is the particle velocity, xi is the current position of the particle, pi and pg are the pbest and gbest positions, w is called the inertia factor, rand() is a random number between 0 and 1, and c1 and c2 are learning factors.

The following procedure can be used for implementing the PSO algorithm [21].

1. Initialize the swarm by assigning a random position in the problem hyperspace to each particle.
2. Evaluate the fitness function for each particle.
3. For each individual particle, compare the particle's fitness value with its pbest. If the current value is better than the pbest value, set this value as the pbest and the current particle's position, xi, as pi.
4. Identify the particle that has the best fitness value. The value of its fitness function is identified as gbest and its position as pg.
5. Update the velocities and positions of all the particles using equations (10) and (11).
6. Repeat steps 2-5 until a stopping criterion is met (e.g., maximum number of iterations or a sufficiently good fitness value).
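The procedure above can be sketched compactly as follows. This is a minimal illustration, not the authors' code: it uses the Table 1 settings (c1 = c2 = 1.49, w between 0.9 and 0.1), assumes a linearly decreasing inertia weight (the paper does not state the inertia schedule), and clamps positions to the search bounds (also an assumption).

```python
import random

def pso(fitness, bounds, n_particles=100, n_iter=200,
        w_max=0.9, w_min=0.1, c1=1.49, c2=1.49, rng=random):
    """Maximize `fitness` over box-bounded parameters with a basic PSO."""
    dim = len(bounds)
    # Step 1: random initial positions (velocities start at zero).
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_f = [fitness(p) for p in x]             # Step 2: evaluate fitness
    g = max(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = list(pbest[g]), pbest_f[g]   # Step 4: global best

    for it in range(n_iter):                      # Step 6: iterate
        # Assumed linearly decreasing inertia weight between w_max and w_min.
        w = w_max - (w_max - w_min) * it / n_iter
        for i in range(n_particles):
            for d in range(dim):
                # Eq. (10): inertia + cognitive + social components.
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                # Eq. (11): move the particle, clamped to the search box.
                lo, hi = bounds[d]
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            f = fitness(x[i])
            if f > pbest_f[i]:                    # Step 3: update pbest
                pbest[i], pbest_f[i] = list(x[i]), f
                if f > gbest_f:                   # Step 4: update gbest
                    gbest, gbest_f = list(x[i]), f
    return gbest, gbest_f
```

For instance, maximizing −(x − 3)² over [0, 10] drives the swarm towards x = 3.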

## 5. Results & Discussions

The GA and PSO algorithms have been coded in MATLAB 7.5, and all experiments were conducted on a desktop PC with a 2.20 GHz AMD Athlon processor and 2 GB RAM. The parameters used for the algorithms are given in Table 1; they were chosen based on the considerations presented in Refs. 18 and 20.

| Parameters | GA | PSO |
| --- | --- | --- |
| Population size | 100 | 100 |
| Max. generations/iterations | Varies | Varies |
| Selection type | Random | NA |
| Crossover rate | 80% | NA |
| Mutation rate | 9% | NA |
| wmax, wmin | NA | 0.9, 0.1 |
| c1, c2 | NA | 1.49, 1.49 |

Table 1. Parameters used for the GA and PSO algorithms.

The material parameters for the Al0.3Ga0.7As/GaAs QW are taken from Ref. 7. The ranges of the QW parameters used for optimization are based on theoretical assumptions and physical phenomena, and are as follows:

- 2-D carrier concentration (N2D): 6×10¹⁵/m² to 10×10¹⁵/m²
- Quantum well width (LZ): 8 nm to 12 nm
- Spacer width (LS): 10 nm to 50 nm
- Lattice temperature (TL): 77 K to 300 K

Since the parameter optimization is to be carried out during fabrication of nanodevices (i.e., a real-time application), an algorithm with high average performance is the best option [22]. Therefore, the GA and PSO algorithms are compared based on the Mean Best Fitness (MBF) measure, i.e. the average fitness value obtained over 300 runs for ac mobility and 400 runs for dc mobility and cut-off frequency.
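The MBF measure amounts to averaging, over independent runs, the best fitness value that each run returns. In the sketch below, `optimize` is a hypothetical callable standing in for one complete run of either algorithm.

```python
def mean_best_fitness(optimize, n_runs):
    """Mean Best Fitness: run a stochastic optimizer n_runs times and
    average the best fitness value each run returns."""
    return sum(optimize() for _ in range(n_runs)) / n_runs
```

Because each run is independent, the same routine serves for the 300-run (ac mobility) and 400-run (dc mobility, cut-off frequency) comparisons.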

Fig. 3. Plot of average dc mobility with iterations (PSO)/generations (GA).

Fig. 4. Plot of ac mobility with iterations (PSO)/generations (GA).

Using Eqs. (7) and (8) as the fitness functions, the GA and PSO algorithms are applied to obtain the optimized values of dc and ac mobility. The simulation of the search space is depicted in Figs. 3 and 4. The PSO-based algorithm is found to converge faster than the GA-based algorithm. For dc mobility optimization, PSO took 230 iterations and 81.23 seconds to converge, whereas GA took 350 generations and 124.41 seconds. For ac mobility optimization, PSO required 165 iterations and 62.03 seconds, whereas GA required 292 generations and 98.11 seconds.

The GA and PSO algorithms are then applied using Eq. (9) as the fitness function to obtain the optimized value of the cut-off frequency. The simulation of the search space is depicted in Fig. 5.

Fig. 5. Plot of cut-off frequency with iterations (PSO)/generations (GA).

As in the case of mobility optimization, the PSO-based algorithm is found to converge faster than the GA-based algorithm. The results obtained using GA and PSO are summarized in Table 2.

| Scheme | Convergence | Processing time (in AMPT) | MBF | N2D (×10¹⁵/m²) | LZ (nm) | LS (nm) | TL (K) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **dc mobility optimization** | | | | | | | |
| GA | 350 generations | 124.4 | 1.821 | 10 | 12 | 34 | 80 |
| PSO | 230 iterations | 81.23 | 1.818 | 9.8 | 11.8 | 35.6 | 78.8 |
| **ac mobility optimization** | | | | | | | |
| GA | 292 generations | 98.11 | 0.818 | 10 | 11 | 31 | 279.5 |
| PSO | 165 iterations | 62.03 | 0.820 | 9.9 | 10.9 | 31 | 278 |
| **Cut-off frequency optimization** | | | | | | | |
| GA | 305 generations | 103.08 | 353 | 6 | 8 | 23 | 296 |
| PSO | 262 iterations | 93.03 | 350 | 6.3 | 8.5 | 21 | 293.8 |

Table 2. Performance comparison of GA and PSO. AMPT: processing time for a single run of the analytical model with given parameters.

The performance of the algorithms was compared using three criteria: (1) convergence speed, (2) processing time to reach the optimum value and (3) quality of results. The processing time, and not the number of iteration/generation cycles, was used to measure the speed of each algorithm, because the number of generations differs from one algorithm to another. To make the processing-time comparison more relevant and independent of the processing system, the processing times of the two proposed schemes are also compared in terms of the analytical model processing time (AMPT).
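The AMPT normalization described above amounts to a simple ratio; `ampt_s`, the wall-clock time of one analytical-model run, is a measured quantity whose value is not given in the paper.

```python
def to_ampt(wall_clock_s, ampt_s):
    """Express a wall-clock time as a multiple of the analytical-model
    processing time (AMPT), making timings hardware-independent."""
    return wall_clock_s / ampt_s
```

Two algorithms timed on different machines can then be compared fairly, provided each machine's AMPT is measured on that same machine.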

Both algorithms perform well at finding the optimal solution, so in terms of quality of solution there is little to distinguish GA from PSO. However, when the number of generations or iterations is taken into account, there are differences in the number needed to obtain the optimal solution. The PSO-based algorithm converges faster than the GA-based algorithm, and its processing time is also smaller for the present application of parameter optimization for nanodevice modeling.

## 6. Conclusion

The application of soft-computing tools, especially PSO, for parameter optimization of quantum well structures is new and quite useful for finding optimum combinations of parameters for a better quantum well nanostructure. Under identical software and hardware environments, PSO and GA were applied for parameter optimization of the AlxGa1-xAs/GaAs QW nanostructure, and the performance of the two schemes was compared in terms of convergence speed, processing time and quality of results. The particle swarm optimization based algorithm is found to converge faster than GA for almost the same quality of results. PSO is also attractive because it has fewer parameters to adjust, making it much simpler to implement. Through this computational study, the mobility (ac and dc) and cut-off frequency are optimized with respect to the parameters of the quantum well nanostructure, which will provide valuable information for technologists involved in the fabrication of QW nanodevices. The successful implementation of such soft-computing tools for parameter optimization of a QW reveals that these schemes can also be applied to parameter optimization of other nanostructures.