Different Methods Of Protein Structure Prediction Biology Essay


This chapter explores related work on protein structure prediction by ab initio modeling; it also provides a brief survey of applications that use the harmony search algorithm.

The components of ab initio modeling are a conformational search algorithm, an energy function, and a selection method. This chapter classifies the various ab initio algorithms based on these components.

A successful ab initio method for protein structure prediction depends on a powerful conformational search method that can locate the global minimum of a given energy function.

This fact has opened the gate for non-deterministic search techniques, such as simulated annealing, genetic algorithms, Monte Carlo, tabu search, and ant colony optimization, to become the most successful techniques for solving this problem.

A main technical difficulty of Monte Carlo simulations is that the energy landscape of protein conformational space is quite rough: it contains many energy barriers that may trap MC simulation procedures. Different conformational search methods have been developed to overcome these problems, and they will be discussed in detail in this section. The section illustrates the key ideas of the conformational search methods used in various ab initio protein structure prediction methods. Until now, no single search method has outperformed all others in every case; a method may outperform the others only in some cases.

Molecular Dynamics (MD) is a powerful tool to investigate equilibrium and transport properties of many-body systems.

One of the major shortcomings of this method is its long simulation time; the incremental time scale is usually on the order of femtoseconds, while the fastest folding time of a small protein in nature is in the millisecond range.

MD simulations are often carried out for structure refinement, since the conformational changes are assumed to be small, especially when a low-resolution model is available.

The first application of Molecular Dynamics to proteins is the study of McCammon et al. (1977), who investigated the dynamics of a folded globular protein with two limitations in their model: the approximate nature of the energy function and the neglect of solvent.

A study by Wen et al. (2004) presents an observed folding pathway for a 23-residue protein called bba1; the results were obtained by enhancing the sampling efficiency of molecular dynamics in ab initio folding simulations.

Another remarkable approach is the work of Liwo et al. (2005), who implemented an MD simulation with the physics-based force field UNRES. Their results proved that the approach can carry out simulations of protein folding in real time, which makes it possible to explore folding pathways and derive the distribution of folding times.

Later, Ding et al. (2008) developed an all-atom Discrete Molecular Dynamics protein model that can perform folding simulations of six small proteins with distinct native structures. This model indicates the importance of environment-dependent hydrogen bond interactions in modeling protein folding.

Recently, Voelz et al. (2010) have simulated several folding trajectories of a 39-residue protein called NTL9, which has a folding time of ~1.5 milliseconds. They have generated ensembles of trajectories out to ~40 microseconds using distributed molecular dynamics simulations in implicit solvent on GPU processors.

Simulated Annealing (SA), introduced by Kirkpatrick et al. (1983), is a stochastic metaheuristic algorithm for global optimization that has been applied widely and effectively to several problems.

It is general enough to be applied to any optimization problem.

Simulated annealing uses the Metropolis Monte Carlo algorithm to generate a series of conformational states following the canonical Boltzmann energy distribution for a given temperature; it starts at a high temperature, followed by subsequent simulations that slowly decrease the temperature.
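The scheme above can be sketched in a few lines. This is a generic sketch with a geometric cooling schedule, not the protein-specific implementation; the `energy` and `perturb` callables are placeholders for a real energy function and move set.

```python
import math
import random

def simulated_annealing(energy, perturb, x0, t_start=10.0, t_end=1e-3,
                        cooling=0.95, steps_per_t=100):
    """Minimize `energy` with Metropolis acceptance and geometric cooling."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            cand = perturb(x)
            e_cand = energy(cand)
            # Metropolis criterion: always accept downhill moves, and accept
            # uphill moves with Boltzmann probability exp(-dE / T).
            if e_cand <= e or random.random() < math.exp(-(e_cand - e) / t):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = x, e
        t *= cooling  # slowly decrease the temperature
    return best_x, best_e
```

At high temperature the chain crosses barriers freely; as the temperature drops, the Boltzmann factor suppresses uphill moves and the walk settles into a low-energy basin.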

Monte Carlo with minimization (MCM) (Li and Scheraga, 1987) has been successfully applied to the conformational search of ROSETTA's high-resolution energy function to overcome the multiple-minima problem.

In MCM, one performs MC moves between local energy minima: each perturbed protein structure is energy-minimized and compared with the previously accepted local minimum to decide whether to update the current conformation.
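The MCM loop described above can be sketched as follows. The `quench` function here is a toy 1D downhill walk standing in for real energy minimization, and all names are illustrative rather than taken from Li and Scheraga's implementation.

```python
import math
import random

def monte_carlo_minimization(energy, perturb, local_minimize, x0,
                             temperature=1.0, n_moves=200):
    """MCM: each trial move is relaxed to its nearest local minimum before
    the Metropolis test, so the walk proceeds between minima, not raw states."""
    x = local_minimize(x0)
    e = energy(x)
    best_x, best_e = x, e
    for _ in range(n_moves):
        trial = local_minimize(perturb(x))     # quench the perturbed structure
        e_trial = energy(trial)
        # compare with the previously accepted local minimum (Metropolis rule)
        if e_trial <= e or random.random() < math.exp(-(e_trial - e) / temperature):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

def quench(f, x, step=0.01):
    """Crude 1D local minimizer: walk downhill until no neighbor is lower."""
    while True:
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            return x
```

Because every trial state is quenched first, the Metropolis test compares minima with minima, which flattens the effective landscape the chain has to cross.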

These generalized-ensemble techniques have been referred to by many names, such as multicanonical ensemble and entropic ensemble (Lee et al., 2009).

The basic idea of these techniques is to accelerate transitions among states separated by energy barriers by modifying the transition probability so that the final energy distribution of the sampling becomes more or less flat rather than bell-shaped.
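The flattening idea can be sketched in the spirit of Wang-Landau sampling, where the acceptance rule uses a running density-of-states estimate; this is an illustration of the flat-histogram principle, not the exact multicanonical or entropic-ensemble recipes of the cited works.

```python
import math
import random

def flat_histogram_sample(energy, neighbor, x0, f_init=1.0, f_final=0.01,
                          sweeps_per_stage=2000):
    """Flat-histogram sampling: acceptance uses a running estimate of the
    density of states g(E), so rarely visited energies are favored and the
    energy histogram of the sampling becomes roughly flat."""
    log_g = {}                    # running estimate of ln g(E)
    x, e = x0, energy(x0)
    f = f_init                    # modification factor for ln g
    while f > f_final:
        for _ in range(sweeps_per_stage):
            cand = neighbor(x)
            e_cand = energy(cand)
            # accept with probability min(1, g(E_old) / g(E_new))
            d = log_g.get(e, 0.0) - log_g.get(e_cand, 0.0)
            if d >= 0 or random.random() < math.exp(d):
                x, e = cand, e_cand
            log_g[e] = log_g.get(e, 0.0) + f   # penalize the visited energy
        f /= 2.0                  # refine the estimate
    return log_g
```

Energies that have already been visited accumulate weight in `log_g` and become less likely to be revisited, which is exactly what pushes the sampled energy distribution toward flatness.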

Parallel hyperbolic sampling (PHS) (Zhang, Kihara and Skolnick, 2004) further extended the replica exchange method (REM) by dynamically deforming the energy using an inverse hyperbolic sine function, which allows the simulation to cross low-energy barriers in the protein more quickly.

Recently, Barth et al. (2009) have used a membrane protein structure method that applies a stochastic Monte Carlo minimization protocol in the refinement of coarse-grained models.

Unger and Moult (1993b) have suggested the use of genetic algorithms for protein folding simulations; they proved the schemata theorem in the context of protein structure, observing that the genetic algorithm gives more attention to favorable local structures while unfavorable local structures are rapidly abandoned.

Later, Konig and Dandekar (1999) improved this method by investigating a new search strategy in combination with the simple genetic algorithm on a two-dimensional lattice model. Their proposed strategy, called systematic crossover, prevents the population from becoming too homogeneous. Konig and Dandekar showed that the new strategy, combined with the simple genetic algorithm, significantly increases search effectiveness compared with the method of Unger and Moult (1993b).
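The systematic-crossover idea can be sketched as follows: instead of one random cut point, every cut point is tried and the fittest offspring is kept. The fixed-length strings stand in for chain-growth directions of a lattice conformation, and the fitness function in the usage example is a toy, not the lattice energy used in the cited work.

```python
import random

def systematic_crossover(parent_a, parent_b, fitness):
    """Try every crossover point and keep the fittest offspring, instead of
    a single random cut point as in the simple GA."""
    best, best_fit = parent_a, fitness(parent_a)
    for cut in range(1, len(parent_a)):
        for child in (parent_a[:cut] + parent_b[cut:],
                      parent_b[:cut] + parent_a[cut:]):
            f = fitness(child)
            if f > best_fit:
                best, best_fit = child, f
    return best

def genetic_search(fitness, alphabet, length, pop_size=40, generations=100,
                   p_mut=0.05):
    """Elitist GA over fixed-length strings of moves from `alphabet`."""
    pop = [[random.choice(alphabet) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:pop_size // 2]              # keep the fitter half
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)
            child = systematic_crossover(a, b, fitness)
            nxt.append([random.choice(alphabet) if random.random() < p_mut
                        else gene for gene in child])
        pop = nxt
    return max(pop, key=fitness)
```

Because crossover is greedy over all cut points, favorable local sub-structures spread quickly while the per-gene mutation keeps the population from becoming fully homogeneous.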

Torres et al. (2007) have proposed one of the successful genetic algorithms; it has useful features such as heuristic secondary structure information to initialize the genetic algorithm and an enhanced 3D spatial representation. They have used hash tables to increase the efficiency of search and update operations.

In general, their model is a good predictor compared with the results of CASP 7, but it still needs some effort to improve the quality of the energy function and the spatial representation.

It is important to highlight that the use of hash tables has introduced an excellent computational technique to model amino acid spatial occupancy, because the number of collisions has been reduced to zero and insertion, deletion, and search have been very efficient. Recently, Hoque et al. (2009) have presented ab initio protein structure prediction as a conformational search problem in a low-resolution model using the genetic algorithm. These researchers have shown that a nondeterministic approach, such as the genetic algorithm, is relatively promising for conformational search. However, GA often fails to provide reasonable outcomes, especially for longer sequences, due to the complex nature of the protein structure prediction problem.
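The spatial-occupancy hash table mentioned above can be sketched with a plain dictionary keyed by lattice coordinates; the function names are illustrative, not taken from Torres et al.

```python
def place_residue(occupied, residue_id, coord):
    """Record a residue at a 3D lattice coordinate, rejecting steric
    collisions. A dictionary (hash table) gives constant-time insertion,
    deletion and lookup, which makes collision checks cheap."""
    if coord in occupied:          # cell already taken: collision
        return False
    occupied[coord] = residue_id
    return True

def remove_residue(occupied, coord):
    """Free a lattice cell, e.g. when undoing a rejected move."""
    occupied.pop(coord, None)
```

Checking whether a candidate conformation is self-avoiding then costs one dictionary lookup per residue instead of a pairwise scan over the whole chain.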


3.2 Energy Functions

Based on whether they use statistics of existing protein 3D structures, energy functions can be classified into two groups: physics-based energy functions and knowledge-based energy functions.

In a physics-based ab initio method, interactions among atoms are based on quantum mechanical theory with only a few fundamental parameters, such as the electron charge and the Planck constant; all atoms are described by their atom types, where only the number of electrons is relevant (Weiner et al., 1984).

Methods that use all-atom physics-based force fields include AMBER (Weiner et al., 1984; Cornell et al., 1995; Duan and Kollman, 1998), CHARMM (Brooks et al., 1983; Neria et al., 1996; MacKerell et al., 1998), OPLS (Jorgensen and Tirado-Rives, 1988; Jorgensen et al., 1996), and GROMOS96 (van Gunsteren et al., 1996).

The major difference among them lies in the selection of atom types and the interaction parameters.

For protein folding, these classical force fields have often been coupled with molecular dynamics simulations. The work of Duan and Kollman (1998) is considered the first landmark in MD-based ab initio protein folding. They simulated the villin headpiece in explicit solvent for four months on parallel supercomputers, starting from a fully unfolded extended state. Although the folding resolution is not high, the best of their final models is within 4.5 Å of the native state.

Later, this small protein was folded to 1.7 Å with a total simulation time of 300 μs (Zagrovic et al., 2002) using a worldwide-distributed computer system called Folding@home.

While all-atom physics-based MD simulations have not been particularly successful in structure prediction, fast search methods (such as Monte Carlo simulations and genetic algorithms) have proved to be promising.

One example is the project of Liwo and colleagues (Liwo et al., 1999; Liwo et al., 2005; Oldziej et al., 2005), who have developed a physics-based protein structure prediction method that combines the coarse-grained UNRES potential with the conformational space annealing method of global optimization.

This effectively reduces the number of atoms, enabling the method to handle large polypeptide chains (> 100 residues).

The UNRES energy function is probably the most accurate physics-based ab initio method available; it has been systematically applied to many Critical Assessment of Techniques for Protein Structure Prediction (CASP) targets since 1998.

A novel hierarchical approach, ASTRO-FOLD (Klepeis and Floudas, 2003; Klepeis et al., 2005), is another example of physics-based modeling approaches.

The Root-Mean-Square Deviation (RMSD) of the predicted model is 4.94 Å over all 102 residues.
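RMSD values such as the 4.94 Å quoted above are the quadratic mean of per-atom displacements between two superposed structures; a minimal sketch, assuming the structures have already been optimally superposed (e.g. with the Kabsch algorithm):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of (x, y, z)
    atom coordinates, assumed already optimally superposed."""
    assert len(coords_a) == len(coords_b) and coords_a
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```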

Some developments of ROSETTA (Bradley et al., 2005; Das et al., 2007) have used a physics-based atomic potential for Monte Carlo structure refinement after performing low-resolution fragment assembly in the first stage.

A novel approach has been proposed by Taylor et al. (2008). It generates thousands of models based on an idealized representation of structure, given the secondary structure assignments and the physical connection constraints of the secondary structure elements. These researchers have successfully folded a set of five small βα proteins of 100-150 residues in length, with RMSD values around 5 Å from the native structure for all models. Recently, a multi-protein blind test has been reported by Shell et al. (2009) to predict native protein structures based on an all-atom physics-based force field alone. This approach has been introduced for predicting the structures of membrane proteins for which little prior knowledge of native structures exists.

Knowledge-based energy functions have been widely used for protein structure prediction over the last 20 years (Feng, 2010). Bowie and Eisenberg (1994) have applied a successful and remarkable method that produces protein models by assembling small fragments taken from the PDB library. Although their method has proved to be successful, it has been limited to small helical proteins. Simons et al. (1997) have developed another successful algorithm based on a similar idea, called ROSETTA.

ROSETTA has shown good performance on the free modeling targets in CASP experiments and has made the fragment assembly approach popular in the field. Later, Bradley et al. (2005) and Das et al. (2007) improved ROSETTA: in a first round, these researchers generate models in a reduced form, with conformations specified by heavy backbone and Cβ atoms; in a second round, they build a set of models by refining the low-resolution models from the first round with an all-atom refinement procedure that uses an all-atom physics-based energy function, including van der Waals interactions and an orientation-dependent hydrogen-bonding potential.

After the success of the ROSETTA algorithm, many researchers have developed their own energy functions using the same idea. For example, the energy terms of Simfold, applied by Fujitsuka et al. (2006), and Profesy, applied by Lee et al. (2004), include van der Waals interactions, hydrophobic interactions, backbone dihedral angle potentials, a backbone hydrogen-bonding potential, pairwise contact energies, and beta-strand pairing. Another successful free modeling approach, namely TASSER, has been developed by Zhang and Skolnick (2004a); this approach uses a knowledge-based energy to construct 3D models.

The energy terms used include information about predicted secondary structure, backbone hydrogen bonds, consensus predicted side chain contacts, short-range correlations, and hydrophobic interactions.

In this model, the researchers have used both threading, to search for possible folds first, and ab initio modeling, to reassemble full-length models and build the unaligned regions.

Chunk-TASSER is a new development of TASSER by Zhou and Skolnick (2007); this approach first divides the target sequence into chunks, each containing three consecutive secondary structure elements (helices and strands). Moreover, Wu et al. (2007) have developed another variant of TASSER, namely I-TASSER, which uses iterative Monte Carlo simulations to refine TASSER cluster centroids. I-TASSER has built models with correct topology (3-5 Å) for seven cases with sequences up to 155 residues long. Recently, a comparative study conducted by Helles (2008) on 18 ab initio prediction algorithms found that I-TASSER is the best method in terms of modeling accuracy and CPU cost per target.

Another open problem in protein structure prediction is the ability to select the most appropriate models, i.e. those closer to the native structure than to the templates used in their construction.

Model Quality Assessment Programs (MQAPs) have been developed to perform this task (Fischer, 2006).

This section focuses on energy-based model selection methods and discusses three of them: physics-based energy functions, knowledge-based energy functions, and sequence-structure compatibility methods.

There is another popular class of Model Quality Assessment Programs called consensus-based methods; such a method, also called a meta-predictor approach (Wu et al., 2007), uses the similarity of a model to other models taken from predictions generated by different algorithms (Wallner and Elofsson, 2007).
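A minimal sketch of the consensus idea: score each model by its average structural distance to all the other candidate models and rank accordingly. Plain RMSD is used here as the similarity measure purely for illustration (real meta-predictors typically use more robust scores), and the function names are hypothetical.

```python
import math

def pairwise_rmsd(a, b):
    """RMSD between two equal-length coordinate lists (assumed superposed)."""
    sq = sum((p - q) ** 2 for pa, pb in zip(a, b) for p, q in zip(pa, pb))
    return math.sqrt(sq / len(a))

def consensus_rank(models):
    """Rank models by average structural distance to all other models: the
    model closest to the ensemble consensus comes first."""
    def avg_dist(i):
        return sum(pairwise_rmsd(models[i], models[j])
                   for j in range(len(models)) if j != i) / (len(models) - 1)
    return sorted(range(len(models)), key=avg_dist)
```

The intuition is that independent prediction algorithms tend to agree near the native structure, so the model most similar to the rest of the ensemble is a good bet.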

To develop an all-atom physics-based energy function, some researchers have used existing solvation potential methods to discriminate the native structure from decoys generated by threading onto other protein structures. For example, CHARMM and EEF1 have been exploited by Neria et al. (1996) and Lazaridis & Karplus (1999b), respectively, who found that the energy of the native state is lower than those of the decoys in most cases. Later, Petrey and Honig (2000) used CHARMM and a continuum treatment of the solvent; Dominy & Brooks (2002) and Feig & Brooks III (2002) used CHARMM plus GB solvation; Felts et al. (2002) used OPLS plus GB; Lee and Duan (2004) used AMBER plus GB; and, finally, Hsieh and Luo (2004) used AMBER plus a Poisson-Boltzmann solvation potential on a number of structure decoy sets, including the Skolnick decoy set (Kihara et al., 2001; Skolnick et al., 2003) and the CASP decoy set (Moult et al., 2001).

All the previously mentioned researchers have obtained similar results, i.e. the native structures have lower energies than the decoys in their potentials.

Recently, Wroblewska and Skolnick (2007) have shown that the AMBER plus GB potential can only discriminate the native structure from roughly minimized TASSER decoys. Their results partially explain the inconsistency between the widely reported decoy discrimination ability of physics-based potentials and the less successful folding results.

A pairwise residue-distance-based potential using the statistics of known PDB structures has been developed by Sippl (1990); it has led to many other proposals for different knowledge-based potentials, such as atomic interaction potentials, solvation potentials, hydrogen bond potentials, and torsion angle potentials.
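Such statistical potentials are commonly derived via the inverse Boltzmann relation, E(r) = -kT ln(P_obs(r) / P_ref(r)), which turns distance-bin counts from known structures into energies. A minimal sketch; the add-one smoothing is an assumption of this example, not part of Sippl's method.

```python
import math

def statistical_potential(observed_counts, reference_counts, kT=1.0):
    """Convert per-distance-bin counts from known structures into energies
    via the inverse Boltzmann relation E = -kT * ln(P_obs / P_ref)."""
    n_obs = sum(observed_counts)
    n_ref = sum(reference_counts)
    energies = []
    for o, r in zip(observed_counts, reference_counts):
        p_obs = (o + 1) / (n_obs + len(observed_counts))   # add-one smoothing
        p_ref = (r + 1) / (n_ref + len(reference_counts))  # avoids log(0)
        energies.append(-kT * math.log(p_obs / p_ref))
    return energies
```

Distance bins that are over-represented in native structures relative to the reference state get negative (favorable) energies, and under-represented bins get positive ones.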

Later, several atomic potentials with various reference states have been proposed, for example by Lu & Skolnick (2001), Zhou & Zhou (2002), Shen & Sali (2006), Wang et al. (2004), and Tosatto (2005), with the claim that native structures can be distinguished from decoy structures.

In coarse-grained potentials, each residue is represented by a single atom or a few atoms; examples include Cα-based potentials (Melo et al., 2002), Cβ-based potentials (Hendlich et al., 1990), side chain centre-based potentials (Bryant and Lawrence, 1993; Kocher et al., 1994; Thomas and Dill, 1996; Zhang and Kim, 2000; Zhang, Liu, Zhou and Zhou, 2004; Skolnick et al., 1997), and side chain and Cα-based potentials (Berrera et al., 2003). Based on the Critical Assessment of Fully Automated Structure Prediction (CAFASP) 4 MQAP experiment in 2004, the best-performing energy functions are Victor/FRST (Tosatto, 2005) and MODCHECK (Pettitt et al., 2005); the first incorporates an all-atom pairwise interaction potential, a solvation potential, and a hydrogen bond potential, while the second includes a Cβ atom interaction potential and a solvation potential (Fischer, 2006).

Later, in CASP7-MQAP, Wallner and Elofsson (2007) used a new model based on structure consensus, combining Pcons and ProQ, and obtained excellent results.

In the third type of MQAP, the best models are selected on the basis of the compatibility of the target sequence with the model structures, instead of purely on the basis of energy functions.

The earliest successful example is that of Luthy and Bowie (1992), who evaluated structures using threading scores. This method was later improved in Verify3D (Eisenberg et al., 1997) by using local threading scores in a 21-residue window. Another method has been proposed by Colovos and Yeates (1993) to differentiate between correctly and incorrectly determined regions of protein structures based on characteristic atomic interactions; this method uses a quadratic error function to describe the non-covalently bonded interactions, where near-native structures have fewer errors than other decoys.

GenThreader (Jones, 1999) is an efficient method that uses neural networks to classify native and non-native structures.

Another neural-network-based method, called ProQ, has been developed by Wallner and Elofsson (2003) to predict the quality of a protein model by extracting structural features.

The inputs of ProQ include atom and residue contacts, solvent-accessible area, protein shape, the similarity between the predicted and model secondary structure, and the structural alignment score between decoys and templates.

Later, a consensus MQAP called ModFold has been developed by McGuffin (2008b), combining scores obtained from ProQ (Wallner and Elofsson, 2003), MODCHECK (Pettitt et al., 2005), and ModSSEA (McGuffin, 2007). The researcher has shown that ModFold outperforms the previous individual MQAPs.

A famous structure clustering method named SPICKER has been developed by Zhang and Skolnick (2004b); it has been benchmarked on 1,489 representative proteins, each with up to 280,000 structure decoys.

The best of the top five models has been ranked in the top 1.4% among all decoys. For 78% of the 1,489 proteins, the RMSD difference between the best of the top five models and the most native-like decoy structure has been less than 1 Å. In ROSETTA (Bradley et al., 2005), structures are clustered to select low-resolution models before these models are further refined by all-atom simulations to obtain the final models.

In the TASSER and I-TASSER approaches, Zhang, Liu, Zhou & Zhou (2004) and Wu et al. (2007) have used SPICKER to cluster the decoy models generated by Monte Carlo and to produce cluster centroids as final models. In the approach of Oldziej et al. (2005), structure clustering is used to select the lowest-energy structures among the clustered structures.
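The core of such clustering-based selection can be sketched very simply: pick the decoy with the most structural neighbors within a distance cutoff, i.e. the center of the largest cluster. This is only the selection principle, not SPICKER's actual algorithm; `dist` and `cutoff` are placeholders for a structural distance (e.g. RMSD) and its threshold.

```python
def largest_cluster_center(decoys, dist, cutoff):
    """Return the decoy with the most neighbors within `cutoff`: the center
    of the largest structural cluster, taken as the final model."""
    counts = [sum(1 for j, other in enumerate(decoys)
                  if j != i and dist(d, other) <= cutoff)
              for i, d in enumerate(decoys)]
    return decoys[counts.index(max(counts))]
```

The rationale is the same as for consensus MQAPs: the most densely populated region of conformational space in a simulation tends to correspond to the lowest free-energy basin.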

This section provides an overview of different applications of the Harmony Search Algorithm (HSA); it is divided into three subsections: the first reviews applications that use the classical HSA, the second discusses applications that adapt the classical HSA, and the third discusses applications that hybridize HSA with other search or optimization algorithms.

Many researchers have applied the harmony search algorithm in many fields; it has proved successful in many optimization problems, including routing problems, puzzles, web page clustering, water management, and others.
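The classical HSA these applications build on can be sketched as follows, with its standard parameters: harmony memory size (hms), harmony memory considering rate (hmcr), pitch adjusting rate (par), and bandwidth (bw). This is a minimal continuous-variable sketch, not any of the cited implementations.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iterations=5000):
    """Classical HS: improvise a new harmony from memory (rate hmcr), with
    optional pitch adjustment (rate par, bandwidth bw) or a random value,
    then replace the worst stored harmony if the new one is better."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                  # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:               # pitch adjustment
                    v = min(hi, max(lo, v + random.uniform(-bw, bw)))
            else:                                       # random selection
                v = random.uniform(lo, hi)
            new.append(v)
        s = objective(new)
        worst = scores.index(max(scores))
        if s < scores[worst]:                           # update harmony memory
            memory[worst], scores[worst] = new, s
    best = scores.index(min(scores))
    return memory[best], scores[best]
```

Each decision variable of a new harmony is drawn independently, so good values found for different variables in different harmonies can be recombined, while the bandwidth term provides local refinement.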

For the timetabling problem, a harmony search algorithm has been introduced (Al-Betar et al., 2008); the results proved that the proposed harmony search is capable of providing a viable solution compared with some previous methods, such as tabu search, graph-based hyper-heuristics, the max-min ant system, random restart local search, hybrid evolutionary approaches, and fuzzy multiple heuristic ordering.

Both algorithms have been able to find the global optimum solution, but HSA has proved to be faster and less costly than GA.


The researchers have claimed that their scheme has better precision and recall than other approaches, such as the genetic algorithm.

…the analysis of real and random networks has shown that HSA produces good solutions.

The clustering problem has been modeled as an optimization problem by Forsati, Mahdavi, Kangavari and Safarkhani (2008) and by Alia et al. (2009), who have applied HSA to solve it.

The results have shown that HSA can obtain higher-quality solutions in comparison with other state-of-the-art algorithms in the domain of dynamic clustering.

Comparing its results with other previously reported results has shown the effectiveness of this algorithm.

The researchers have claimed that the HS methodology could be used almost anywhere another heuristic method has been applied, often with a better degree of effectiveness.

The experimental results have shown that HS can find the global optimal solution.

The results of this study have shown that HSA obtains better solutions than SQP.

Different names have been given to modified versions of HSA, such as improved harmony search, adaptive harmony search, and modified harmony search.

The first research proposing a modification of the classical harmony search was introduced by the inventor of the harmony search algorithm, who presented an improved harmony search algorithm for solving water network design (Geem, 2006).

An adaptive harmony search algorithm that solves the Photon Density Wave detection problem has been presented by Dong et al. (2008).

The results have shown that the adaptive HS is not only an effective approach to solving the inverse problem in this field but is even more effective than the classical HS.

First, the modified algorithm generates a number of new harmonies during each iteration step rather than one new harmony, as in the classical HS.

This new selection scheme has proved to be more efficient than the classical harmony search algorithm, especially for problems with more than 25 control variables.

The simulations have indicated that the proposed algorithm is a powerful search and controller design optimization tool for the synchronization of chaotic systems, compared with the classical HS and the global-best HS.

By comparing the results obtained by the proposed algorithm with other existing relevant approaches, the improved harmony search algorithm has demonstrated its robustness and efficiency over the other reported methods.

Moreover, two improved harmony search algorithms have been proposed to solve some engineering optimization problems (Jaberipour and Khorram, 2010); the main difference between the algorithms proposed in this study and the harmony search algorithm lies in the way the bandwidth (bw) is adjusted.

The efficiency of the proposed algorithm has been investigated using a number of steel frameworks, and the results have been compared with those of the standard algorithm as well as those of other metaheuristic search techniques.

Many researchers have enhanced the harmony search algorithm by combining it with another algorithm or concept.

Another hybridized harmony search algorithm, combining the harmony search algorithm and sequential quadratic programming (SQP), has been presented by Fesanghary et al. (2008) to solve engineering optimization problems with continuous design variables.

The empirical study has indicated that the HHSA obtains good results in terms of both solution quality and the number of fitness function evaluations.

Another successful hybridized algorithm, called heuristic particle swarm ant colony optimization (HPSACO) (Kaveh and Talatahari, 2009), combines a particle swarm optimizer, an ant colony strategy, and harmony search to develop a heuristic optimization method for the optimum design of trusses.

Testing the HPSACO algorithm on some design examples has indicated that it is significantly better than other PSO-based algorithms.

Simulated annealing has also been incorporated within HSA to decrease the PAR and bw parameters throughout the optimization process (Taherinejad, 2009).

Another study (Lee and Yoon, 2009) has proposed combining neural networks with harmony search to obtain a useful decision-making tool for concrete mix design.

Furthermore, in the field of classification, an approach combining the harmony search algorithm and Linear Discriminant Analysis (LDA) has been introduced (Moeinzadeh et al., 2009) to perform a pre-processing step before classification.

Experimental results on a data set of 16,140 query-document pairs have indicated that the Harmony-Tabu algorithm produces better solutions for document retrieval than the original HSA and the tabu search algorithm.