Historically, venoms have been differentiated from poisons by the route of entry into a recipient organism: venoms are injected or introduced into a wound produced by the delivering organism, whereas poisons are ingested, inhaled or absorbed (accidentally or intentionally) by the recipient organism. The term venom is typically applied to simple or complex secretions (usually containing multiple toxins) produced in a specialized gland, which cause deleterious effects and/or death when injected into a recipient organism (e.g. 4). A toxin, on the other hand, is a biologically produced, unique molecular entity that can damage or kill an organism through its action on specific tissues (e.g. 5). Unfortunately, even in the scientific literature, one still occasionally encounters the description of a venom as a "neurotoxin" or a "hemotoxin", particularly in reference to the venoms of front-fanged snakes (families Atractaspididae, Elapidae and Viperidae). The term "hemotoxin" is really a misnomer, because there are no venom toxins that specifically target the blood. Though the dominant pharmacological effect of a venom may be described superficially as "neurotoxic" or "tissue-damaging", no snake venom described to date contains only a single molecular or pharmacologically active component. Toxinologists, herpetologists and others should therefore refrain from using such obfuscating language, because these errors become propagated by the lay press and could lead to inappropriate management of human envenomation. For example, in the United States, the general public considers rattlesnakes to produce "hemotoxic" venom; however, venom of the Mojave rattlesnake, Crotalus scutulatus (as well as several other species), often contains Mojave toxin, a potent homolog of the presynaptic neurotoxin crotoxin, and bites by this species can rapidly become life-threatening.
Antivenin was first prepared in 1894, the result of several investigations carried out simultaneously in different parts of the world. The first successful scientific immunization of an animal by repeated injections of animal venom was reported by Fornara in 1877, who protected a dog after several inoculations of small amounts of toad skin secretion. Later, in Michigan, Sewall described a similar experiment in 1887: he protected pigeons against the equivalent of six lethal doses (LD) of rattlesnake venom after treating them with gradually increasing doses of venom. Subsequently, in France in 1892, Kaufmann reproduced these experiments on a dog inoculated with Vipera aspis venom. It was thus demonstrated that animals can be protected against a toxic substance by inoculation of several sub-lethal doses of the same substance. The development of diphtheria and tetanus antitoxins by Behring and Kitasato in the early 1890s was based on a similar principle, but they further showed that protection could be transmitted to another animal that had never before received the substance. The way to antitoxin, or antivenin, therapy was opened.
Discovery of antivenin was claimed on the same day (February 10, 1894) by the team of Phisalix and that of Calmette (Brygoo, 1985; Calmette, 1894; Phisalix and Bertrand, 1894). Calmette in Paris, Fraser in Edinburgh, workers in São Paulo, Brazil, and McFarland in Philadelphia began preparation of antivenins against various species of venomous snakes. The main difficulties in the preparation and use of antivenin appeared soon after the beginning of serotherapy. Some are still not completely resolved:
Inactivation of venom before inoculation of the animal
Purification of antivenin
Evaluation of antivenin potency
Adverse reactions to antivenin
Today, in addition to the epidemiological, biochemical and immunological considerations, one must add commercial and economic points of view. Serotherapy is currently the only specific treatment for snake envenomation; however, these accidents often occur in regions where antivenin, if available at all, is difficult to administer and expensive relative to local standards of living.
Snake bites are medical emergencies that can threaten human life. Venomous bites can often be recognized by a pair of fang puncture marks rather than the rows of teeth marks left by non-venomous bites. Snake venom is one of the most remarkable and unique adaptations in the animal kingdom. Venoms are essentially toxic modified saliva: complex mixtures of chemicals, many of them enzymes. Snakes with predominantly neurotoxic venom include cobras, mambas, sea snakes, kraits and coral snakes, while snakes with predominantly hemotoxic venom include rattlesnakes, copperheads and cottonmouths (Blanchard, 2001).
Worldwide, about 30,000 to 40,000 people die annually of snake bites. Of these, about 25,000 die in India, mostly in rural areas, about 10,000 in the United States, and the rest in other countries. Under the Wild Life Protection Act, 1972, all snakes in India are protected (with the venomous ones at the top of the list of protected species), and the sale of snake skins has been banned since 1976. Snake venom is nevertheless urgently needed to produce the antivenom required to treat potentially fatal snakebites (Dravidamani et al., 2008).
[2.1] Antivenom Beginnings
In the 1890s, Albert Calmette, a protégé of Louis Pasteur, found himself in a troubling situation. He was living in what is now Vietnam, and his village near Saigon had just suffered a serious flood. The water was not the worst of it: the flooding pushed monocled cobras into the village, where they bit at least 40 people and killed four. After this experience, Calmette began work on a cure modelled on the then-new science of vaccination. He eventually caught snakes, "milked" them of their venom and injected it into horses to raise antibodies. By drawing the horses' blood for a serum, he was able to create an antivenom that worked on humans. Snake venoms are among the best pharmacologically characterized natural toxins, chiefly because of their deleterious effects on humans. While these complex, protein-rich mixtures have been extensively separated and fractionated for over half a century, our understanding of the evolution of venomous snakes has relied on comparative morphology (Vidal, 2002; Jackson, 2003) and molecular genetics (Kraus and Brown, 1998; Slowinski and Lawson, 2002). Unfortunately, genetic and morphological analyses alone cannot provide much evidence regarding the evolution of venom components, and offer no insight into the evolutionary and biological utility of snake venoms. This study has two primary functions:
(1) To review and summarize the existing body of toxicological literature regarding the enzyme activities of snake venoms, and
(2) To encourage applied researchers to consider the natural functions and selective forces that have shaped snake venoms over evolutionary time.
These findings should be of particular interest to applied toxicological researchers who deal with these intriguing mixtures exclusively at the pharmacological level. While the primary biological utility of snake venoms is not well understood from an evolutionary perspective, this has not prevented naturalists from speculating about venoms' biological utility for over one hundred years (Mitchell, 1868; Shortt, 1870). Most hypotheses regarding the function of snake venoms have focused on three adaptive advantages: prey capture, defense, and digestion. Whether because of difficulties associated with experimental design or the obvious connection between a snake bite and death, very few researchers have attempted to investigate the evolutionary utility of snake venoms (Thomas and Pough, 1979; Daltry et al., 1997; Andrade and Abe, 1999; McCue, 2002).
[2.2] Evolution of venomous snakes
Venomous snakes are a polyphyletic group of Colubroidea that includes all family members of Elapidae and Viperidae, and some of the members of the families Atractaspidae and Colubridae. Because of the difficulties in definitively identifying which snakes belonging to the families Atractaspidae and Colubridae are venomous (Vidal, 2002), and because venomous atractaspid and colubrid snakes are so poorly represented in the toxicological literature (Rodriguez-Robles, 1994; Weinstein and Kardong, 1994), these groups are not further addressed here.
This review focuses on the venom characteristics of the three most widely researched venomous subfamilies: the Elapinae (Family Elapidae) and the Viperinae and Crotalinae (Family Viperidae). These lineages are believed to have originated in the Miocene, but remain sparsely represented in the fossil record (Nilson and Andren, 1997; Rage, 1997). Like their fossil record, scientific discussions concerning the evolutionary and selective forces responsible for shaping their venoms are scant. Several studies have shown that the composition of snake venom is genetically controlled, and thus subject to evolution via natural selection like any heritable trait (Jimenez-Porras, 1964; Aird et al., 1989). Therefore, it should be possible to make evolutionary inferences based on the current patterns of venom phenotypes. This paper examines patterns in venom protein content, toxicity, and yield, and compares specific enzyme activities among over one hundred venomous snake species from three subfamilies. The purpose of this investigation is to uncover patterns in the chemical activities and composition of venoms. Such patterns can then be used to address the long-standing hypotheses about the biological function and evolutionary radiation of snake venoms.
[2.3] Comparing chemical activities of snake venoms
Snake venoms are complex mixtures composed chiefly of varied enzymatic and non-enzymatic toxins. Although a single snake venom sample may contain dozens of enzymatic toxins, these enzymes are generally grouped into a few classes by toxinologists. The most commonly quantified classes of snake venom enzymes include phospholipase A2 (PLA2), phosphodiesterase, phosphomonoesterase, L-amino acid oxidase, specific endopeptidases, and nonspecific endopeptidases (Iwanaga and Suzuki, 1979). The specific activities of each of these can be measured using different substrates. Most comparative studies of snake venoms do not quantify enzyme concentration directly, but rather measure venoms' specific activities on various molecular substrates. Because the enzyme composition of particular venom fractions can vary widely (Boumrah et al., 1993; Komori et al., 1995), measurements of the specific activity of whole venoms are employed in these analyses. Furthermore, many venom components are believed to work synergistically with each other, or with components of prey tissue (Teng et al., 1984; Tan and Armugam, 1990), and thus fractionated venoms offer a less complete collection of potential synergies. As a result, this study references only toxicological studies that use either fresh venoms or freshly reconstituted whole venoms. Reconstituted venoms are most commonly used in laboratory research and are well known to be pharmacologically equivalent to fresh venoms (Minton and Weinstein, 1986; Hayes et al., 1995). Several studies have demonstrated that venom from conspecific snakes can vary ontogenetically (Bonilla et al., 1973; Meier and Freyvogel, 1980; Meier, 1986; Andrade and Abe, 1999), seasonally (Gregory-Dwyer et al., 1986), interdemically (Aird, 1985; Minton and Weinstein, 1986; Wilkinson et al., 1991; Rodrigues et al., 1998), and with physical condition (Klauber, 1997).
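To make the measure concrete, specific activity normalizes an assay reading by time and by the mass of whole venom protein. The function and assay numbers below are a hypothetical sketch for illustration, not a protocol from any of the studies cited above.

```python
def specific_activity(umol_substrate, minutes, mg_protein):
    """Specific activity of a whole venom on a given substrate, expressed
    as μmol of substrate converted per minute per mg of venom protein."""
    return umol_substrate / minutes / mg_protein

# Hypothetical PLA2 assay: 12 μmol of phospholipid hydrolysed in 5 min
# by 0.2 mg of whole venom protein.
print(round(specific_activity(12.0, 5.0, 0.2), 3))  # 12.0 μmol·min⁻¹·mg⁻¹
```

Because the quantity is normalized per milligram of whole venom, values are comparable across species even when absolute venom yields differ widely.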
Because of the numerous sources of qualitative and quantitative variation among venoms, this study draws from a broad range of primary sources to explore patterns in the toxicological and pharmacological properties of a diverse collection of snake venoms.
[2.4] Mechanism of Action on Human
Cobra venom cardiotoxins and bee venom melittin share a number of pharmacological properties in intact tissues, including hemolysis, cytolysis, contractures of muscle, membrane depolarization, and activation of tissue phospholipase C and, to a far lesser extent, of an arachidonic acid-associated phospholipase A2. The toxins have also been demonstrated to open the Ca2+ release channel (ryanodine receptor) and to alter the activity of the (Ca2+ + Mg2+)-ATPase in isolated sarcoplasmic reticulum preparations derived from cardiac or skeletal muscle. However, a relationship between these actions in isolated organelles and contracture induction has not yet been established. The toxins also bind to and, in some cases, alter the function of a number of other proteins in disrupted tissues. The most difficult tasks in understanding the mechanism of action of these toxins have been dissociating the primary from the secondary effects, and distinguishing between effects that occur only in disrupted tissues and those that occur in intact tissue.
[2.5] Symptoms of Venom on Humans
Necrosis (muscle damage)
Internal organ breakdown
Destruction of blood cells, i.e. WBCs and RBCs [hemolysis]
Bleeding disorders [disrupted blood clotting]
Drooping of the eyelids [ptosis]
Double vision [diplopia]
[2.7] Treatment with Antivenom
Antivenom acts to neutralize the venom of the cobra and causes the venom to be released from the receptor site. The receptor sites that were previously blocked by venom are then free to interact with acetylcholine molecules, and normal respiration resumes. The spent antivenom and the neutralized venom are then excreted from the body.
Venom composition (and its corresponding toxicity) can vary among cobras of the same species, and even from the same litter; it can also vary for an individual cobra during its lifetime. All of this makes each cobra bite truly unique. To ensure correct treatment, antibodies specific to each form of cobra venom must be developed. The correct antibodies may be produced by injecting horses with a small amount of cobra venom and then collecting the antibodies produced by the horses' immune systems. Of course, large samples of cobra venom must be collected for this process, and many snake farms around the world make significant amounts of money by harvesting the deadly toxin.
[2.8] Phospholipase A2 (PLA2)
PLA2 enzymes are esterolytic enzymes which hydrolyse glycerophospholipids at the sn-2 position of the glycerol backbone, releasing lysophospholipids and fatty acids. Snake venoms are rich sources of PLA2 enzymes: several hundred have been isolated, purified and characterized, and the amino acid sequences of over 280 have been determined (a database is available at http://sdmc.lit.org.sg/Templar/DB/snaketoxin_PLA2/index.html). They are proteins of approximately 13 kDa, containing 116-124 amino acid residues and six or seven disulphide bonds, and are rarely glycosylated. So far, the three-dimensional structures of more than 30 PLA2 enzymes have been determined. The structural data indicate that snake venom PLA2 enzymes share strong structural similarity with mammalian pancreatic as well as secretory PLA2 enzymes. They have a core of three α-helices, a distinctive backbone loop that binds catalytically important calcium ions, and a β-wing that consists of a single loop of antiparallel β-sheet. The C-terminal segment forms a semicircular 'banister', particularly in viperid and crotalid PLA2 enzymes, around the Ca2+-binding loop. In addition, they have a similar catalytic function in hydrolysing phospholipids at the sn-2 position. However, in contrast with mammalian PLA2 enzymes, many snake venom PLA2 enzymes are toxic and induce a wide spectrum of pharmacological effects, including neurotoxic, cardiotoxic, myotoxic, hemolytic, convulsive, anticoagulant, antiplatelet, oedema-inducing and tissue-damaging effects. Thus PLA2 enzymes also form a family of snake venom toxins which share a common structural fold but exhibit multiple functions. These factors make their structure-function relationships and mechanisms of action intriguing, and pose exciting challenges to scientists.
Some snake venom PLA2 enzymes inhibit blood coagulation. Boffa and colleagues studied the anticoagulant properties of a number of PLA2 enzymes and classified them as strongly, weakly or non-anticoagulant. Strongly anticoagulant PLA2 enzymes inhibit blood coagulation at concentrations below 2 μg/ml; weakly anticoagulant PLA2 enzymes show effects between 3 and 10 μg/ml; and a number of venom PLA2 enzymes do not prolong clotting times significantly even at 15 μg/ml. Thus the anticoagulant activity of different PLA2 enzymes varies significantly. Evans et al. purified three anticoagulant proteins (CM-I, CM-II and CM-IV) from Naja nigricollis (black-necked spitting cobra) venom and showed them to be PLA2 enzymes. CM-IV is at least 100-fold more potent an anticoagulant than CM-I and CM-II; on the basis of their anticoagulant properties, they were classified as strongly (CM-IV) and weakly (CM-I, CM-II) anticoagulant PLA2 enzymes, respectively. Since phospholipids play a crucial role in the formation of several coagulation complexes, intuitively one might anticipate that destruction of the phospholipid surface would be the primary mechanism accounting for the anticoagulant effects of PLA2 enzymes. However, strongly anticoagulant PLA2 enzymes also affect blood coagulation by mechanisms that are independent of phospholipid hydrolysis.
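Boffa's three-way classification amounts to comparing the lowest effective concentration against two cut-offs. The sketch below simply restates the thresholds from the text as a function; the boundary behaviour between 2 and 3 μg/ml is not specified in the original scheme and is an assumption here.

```python
def classify_anticoagulant(effective_conc_ug_per_ml):
    """Classify a venom PLA2 by the lowest concentration (μg/ml) at which
    it measurably prolongs clotting time, per the Boffa scheme above.
    Assumption: concentrations between 2 and 3 μg/ml fall in the 'weak' band."""
    if effective_conc_ug_per_ml < 2:
        return "strongly anticoagulant"
    elif effective_conc_ug_per_ml <= 10:
        return "weakly anticoagulant"
    else:
        return "non-anticoagulant"

# Illustrative concentrations only (the cited studies report the classes,
# not these exact numbers):
print(classify_anticoagulant(1))   # strongly anticoagulant (cf. CM-IV)
print(classify_anticoagulant(8))   # weakly anticoagulant (cf. CM-I, CM-II)
print(classify_anticoagulant(15))  # non-anticoagulant
```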
To explain the functional specificity and mechanism of induction of various pharmacological effects, the target model was proposed. Accordingly, the susceptibility of a tissue to a particular PLA2 enzyme is due to the presence of specific 'target sites' on the surface of target cells or tissues. These target sites are recognized by specific 'pharmacological sites' on the PLA2 molecule that are complementary to 'target sites' in terms of charges, hydrophobicity and van der Waals contact surfaces. Proteins (or glycoproteins) could act as specific target sites for PLA2 enzymes. The affinity between PLA2 and its target protein is in the low nanomolar range, whereas the binding between PLA2 and phospholipids is in the high micromolar range. Such a four to six orders of magnitude difference in affinity between the protein-protein interaction and the protein-phospholipid interaction explains why the interaction of PLA2 and its target protein governs the pharmacological specificity.
Target proteins such as membrane-bound receptors/acceptors have been identified through studies using radiolabelled PLA2 enzymes and specific binding assays, as well as photoaffinity labelling techniques. Anticoagulant PLA2 enzymes, on the other hand, target one or more soluble proteins, or their complexes, in the coagulation cascade. Furthermore, these enzymes may interact with the active, but not the zymogen, form of the coagulation factor. Therefore, different strategies have been used to identify the soluble target proteins in order to understand the mechanism of the anticoagulant effects of PLA2 enzymes.
Figure: The PLA2 cleavage site (the sn-2 position of the glycerophospholipid backbone)
[2.9] PLA2 as Target
PLA2 disrupts biological membranes and can lead to permanent damage or even lysis (splitting or breaking of cells). The body secretes its own versions of PLA2 (pancreatic [group I] and non-pancreatic [group II]) that have entirely different functions: human PLA2s aid in digestion, cell contraction, cell proliferation and the destruction of pathogens (disease-producing organisms). Venom PLA2 is classified as group III and has a structure similar to groups I and II only when bound to a receptor. The various physiological effects of PLA2 are determined by the type of receptor to which it binds. Receptors include N-receptors (neurological; bind group III) and M-receptors (muscular; bind only groups I and II). Venom PLA2 may act pre- or post-synaptically at the neuromuscular junction by binding to acetylcholine receptors (N-receptors). The binding of PLA2 to acetylcholine receptors blocks the binding of acetylcholine, which causes flaccid (limp) paralysis. Receptor binding affects different muscles in a variety of ways, suggesting differences in binding affinity among muscle types. Respiratory failure often accompanies the paralysis, likely because of a high affinity of PLA2 for receptors at the phrenic nerve-diaphragm endplate.
A large number of different types of plasma membrane receptors, including many that act via heterotrimeric GTP-binding proteins or tyrosine kinases, have been demonstrated to induce activation of PLA2. This enzyme cleaves the sn-2 fatty acyl bond of phospholipids, producing a free fatty acid and a lysophospholipid. Arachidonic acid (AA), one such fatty acid, is the precursor of a large family of compounds known as the eicosanoids (named for their twenty-carbon precursor), which includes the cyclooxygenase-derived prostaglandins and the lipoxygenase-derived leukotrienes. The eicosanoids possess a wide spectrum of biological activities.
[2.10] Ursolic Acid as Inhibitor
Ursolic acid is a pentacyclic triterpenoid present in many fruits and herbs, such as apples, bilberries, cranberries, elder flower, peppermint, lavender, oregano, thyme, hawthorn and prunes, which is why it is also used in cosmetics. It has medicinal actions both topically and internally. Ursolic acid can serve as a starting material for the synthesis of more potent bioactive derivatives, such as anti-tumor agents. It is capable of inhibiting various types of cancer cells by inhibiting the STAT3 activation pathway, and of inhibiting human fibrosarcoma cells by reducing the expression of matrix metalloproteinase-9 through action at the glucocorticoid receptor. It may also decrease the proliferation of cancer cells and induce apoptosis. Ursolic acid and its native compositions are used in pharmacology (one can find more than 1,500 sources in the scientific literature), predominantly as a component of preventive medicine for various diseases, including lymphocytic leukemia and neoplastic tumors, and as a modifier of protein synthesis.
Ursolic acid was found to be a weak aromatase inhibitor (IC50 = 32 μM).
Other names for ursolic acid include 3-β-hydroxy-urs-12-en-28-oic acid, urson, prunol, and malol.
Figure-2.1: Structure of Ursolic Acid
[2.11] Molecular Docking
Drug discovery has often evolved from serendipitous and fortuitous findings. For example, Alexander Fleming's discovery of penicillin in 1928 brought a revolution in drug discovery that contributed tremendously to human longevity. Such discoveries may also be pursued through systematic random experimentation, in which combinatorial libraries are synthesized and screened for potent activities; however, this approach is time-consuming, labor-intensive and costly. A more attractive solution is rational drug design using computer-aided tools such as molecular modeling, molecular docking simulation and virtual screening, with the purpose of identifying promising candidates prior to synthesis.
Docking and de novo design are the major computational approaches toward understanding and affecting receptor-ligand interactions. Molecular docking is a key tool in structural molecular biology and computer-assisted drug design. Today, the goal of molecular docking in modern drug design and discovery is to help in understanding the drug-receptor interaction. It has been shown in the literature that these computational techniques can strongly support and aid the design of novel, more potent inhibitors by revealing the mechanism of the drug-receptor interaction. Rosenfeld et al. (1994) offered the following strategies for flexible docking and design: (a) Monte Carlo/molecular dynamics docking, (b) in-site combinatorial search, (c) ligand build-up, and (d) site mapping and fragment assembly. Significant advances in computer-based ligand-receptor docking techniques and related rational drug design tools have helped to generate lead compounds for target proteins (Lybrand, 1995).
AutoDock, a C program, predicts the bound conformations of a small, flexible ligand to a macromolecular target of known structure. It combines simulated annealing for conformation searching with a rapid grid-based method of energy evaluation (Goodsell et al., 1996). In general, there are two key components of molecular docking (Leach and Gillet, 2003): (a) accurate pose prediction, i.e. the binding conformation of the ligand inside the binding site of the target protein, and (b) accurate binding free energy prediction, which is later used to rank-order the docking poses. The docking algorithm usually carries out the first part (predicting the binding conformation), and the scoring function associated with the docking program carries out the second part, i.e. the binding free energy calculation.
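The two-stage view above (pose search, then scoring) can be caricatured in a few lines. This is a deliberately toy sketch, not AutoDock's actual algorithm: poses are reduced to 3-D points, and the "scoring function" is just distance to a hypothetical binding-site centre, standing in for a real grid-based energy function.

```python
import math
import random

def toy_score(pose, site_centre):
    """Stand-in scoring function: lower is better. A real scoring function
    (e.g. AutoDock's grid-based energy) sums van der Waals, hydrogen-bond,
    electrostatic and desolvation terms; here we score by distance only."""
    return math.dist(pose, site_centre)

def dock(site_centre, n_poses=1000, seed=0):
    """Stage 1 (search): generate random candidate poses in a 10 Å box.
    Stage 2 (scoring): rank the poses and return the best-scoring one."""
    rng = random.Random(seed)
    poses = [tuple(rng.uniform(-5.0, 5.0) for _ in range(3))
             for _ in range(n_poses)]
    return min(poses, key=lambda p: toy_score(p, site_centre))

site = (1.0, 1.0, 1.0)   # hypothetical binding-site centre
best = dock(site)
print(round(toy_score(best, site), 2))  # small residual distance
```

The separation matters in practice: the search stage only has to propose plausible geometries, while all chemical knowledge lives in the scoring function, so the two can be improved independently.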
Docking algorithms usually perform pose prediction. Identifying the molecular features responsible for molecular recognition, and hence for pose prediction, is very complex and often difficult to understand, even more so when simulated on a computer (Kitchen et al., 2004).
After pose prediction by the docking algorithm, the next step in the docking process is activity prediction, also termed scoring. The docking score is produced by the scoring function associated with the particular docking software. Scoring functions are designed to estimate biological activity by evaluating the interaction between the compound and the protein target.
[2.11.1] Docking Algorithm
Depending on the flexibility of the protein and the ligand, docking algorithms can be divided into three types:
Rigid docking: Protein and ligand are considered to be rigid.
Semi-flexible docking: Protein is fixed and ligand is flexible.
Flexible docking: Both protein and ligand are flexible.
Based on the principle of conformation generation, the search methods are categorized into stochastic, systematic and deterministic methods.
The two most popular stochastic methods are the genetic algorithm (GA) and the Monte Carlo (MC) algorithm (Clark and Ajay, 1995; Jones et al., 1995; Oshiro et al., 1995). The Monte Carlo method is capable of generating ensembles of conformations that are statistically consistent at room temperature. While generating the pool of random conformations, at each iteration either the internal conformation of the ligand is changed (by rotating around a bond) or the entire ligand is subjected to rotation or translation within the active site of the protein. An energy function evaluates the newly formed conformation and accepts it only if its energy is lower than that of the previous step or, if it is higher, falls within the range defined by the Boltzmann factor (Miteva, 2008). LigandFit, for example, relies on a Monte Carlo algorithm. A GA starts with a population of ligand conformations at random orientations and translations. In a genetic algorithm, each chromosome in the population encodes one ligand conformation along with its orientation in the binding site of the protein. ('Chromosome' in this context refers to the position, orientation and conformation of the ligand.) In the next step, the scoring function evaluates the fitness of each individual in the population, and the less fit individuals are killed off (not passed on to the next generation). Pairs of surviving individuals are mated, producing children whose new chromosomes are derived from the parents by mutation and recombination. The GA differs from Monte Carlo methods in performing a number of runs and selecting the structures with the highest scores. GOLD (Verdonk et al., 2003), AutoDock (Morris et al., 1999) and DARWIN (Taylor et al., 2000) are some of the docking programs that rely on genetic algorithms.
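The Monte Carlo acceptance rule described above is short enough to state in code. The sketch below is a generic Metropolis minimizer on a toy one-dimensional "energy surface" (a single torsion-like variable), not any particular docking program's implementation; the energy function, step size and temperature are invented for illustration.

```python
import math
import random

def metropolis_accept(e_new, e_old, temperature, rng):
    """Metropolis criterion: always accept a lower-energy conformation;
    accept a higher-energy one with probability exp(-ΔE / T), the
    Boltzmann factor (kT folded into 'temperature')."""
    if e_new <= e_old:
        return True
    return rng.random() < math.exp(-(e_new - e_old) / temperature)

def monte_carlo_search(energy, perturb, x0, steps=5000, temperature=1.0, seed=0):
    """Generic MC search: perturb the current conformation, keep it if the
    Metropolis test passes, and remember the best conformation seen."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(steps):
        x_new = perturb(x, rng)
        e_new = energy(x_new)
        if metropolis_accept(e_new, e, temperature, rng):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy 1-D energy surface with its minimum at x = 2.0, standing in for a
# ligand torsion angle; the perturbation is a small random step.
energy = lambda x: (x - 2.0) ** 2
perturb = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, e_best = monte_carlo_search(energy, perturb, x0=-4.0)
print(round(x_best, 2), round(e_best, 4))  # x_best lands near 2.0
```

The uphill-acceptance term is what lets the chain escape local minima, which is the whole point of using MC rather than simple downhill search on a rugged docking energy landscape.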
With the increasing availability of structural information on proteins and nucleic acids, molecular docking is considered a leading method for drug design and discovery. Computer-aided drug design (CADD) has facilitated the discovery of new lead compounds and their three-dimensional structural optimization. The main directions in CADD depend on the availability of an experimentally determined three-dimensional (3D) structure of the protein molecule: structure-based drug design methods are used wherever the 3D structure of the protein is known, while indirect, ligand-based drug design methods are used otherwise. The structural information so obtained can be invaluable in the generation of novel molecules, or in the redesign of existing molecules that do not have optimal activity.
Therefore, computational approaches that 'dock' small molecules into the binding cavity of a macromolecular target and 'score' their potential complementarity to the binding site are widely used in potent hit identification and lead optimization.
[2.12] Quantitative Structure Activity Relationships
QSAR makes it possible to predict the activity of a given compound as a function of its molecular substituents. QSAR has great potential for modeling and designing novel compounds with robust properties. QSAR has its origin in the field of toxicology, where Cross in 1863 proposed that a relationship existed between the toxicity of primary aliphatic alcohols and their water solubility (Cross, 1863). Shortly after, Richet (Richet, 1893), Meyer and Overton (Overton, 1901) separately discovered a linear correlation between lipophilicity (e.g. the oil-water partition coefficient) and biological effects (e.g. narcotic effects and toxicity). In 1956, Taft proposed an approach for separating the polar, steric and resonance effects of substituents in aliphatic compounds (Taft, 1956). These contributions by Hammett and Taft formed the mechanistic basis for the development of QSAR by other investigators such as Hansch and Fujita (Hansch and Fujita, 1964). An excellent account of the development of QSAR is given by Hansch and Leo (Hansch and Leo, 1995).
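The early observations cited above, a linear correlation between lipophilicity and biological effect, are exactly what an ordinary least-squares line captures. The data in this sketch are invented to lie on a perfect Hansch-type line; only the fitting arithmetic is real.

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b, the form of the classic
    Hansch relationship log(1/C) = a*logP + b, where logP measures
    lipophilicity and C is the effective concentration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical homologous series: activity rises linearly with
# lipophilicity, as Meyer and Overton observed for narcotic effects.
logP     = [0.5, 1.0, 1.5, 2.0, 2.5]
activity = [1.2, 1.7, 2.2, 2.7, 3.2]   # log(1/C), invented values
a, b = fit_line(logP, activity)
print(round(a, 2), round(b, 2))        # slope 1.0, intercept 0.7
```

Real data sets are noisy and usually need extra descriptor terms (electronic, steric), which is precisely the extension Hansch and Fujita introduced.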
Classical QSAR often correlates the biological activities of drugs with physicochemical properties which encode certain structural features (Hansch and Leo, 1995; Ramsden, 1994; Kubinyi, 1993; Kubinyi, 1995; Van de Waterbeemd, 1996). In addition to lipophilicity, polarizability, electronic properties and steric parameters are also frequently used to describe substituents of different sizes. Cramer and Milne were the first to attempt to compare molecules by aligning them in space and mapping their molecular fields to a three-dimensional grid (Kim, 2007). In order to correlate the field values with the biological activities, Svante Wold in 1986 developed the use of partial least squares (PLS) analysis instead of principal component analysis. Many different approaches to QSAR have been developed over the years. The rapid increase in three-dimensional (3D) structural information on bioorganic molecules, coupled with the development of fast methods for 3D structure alignment (e.g. the active analogue approach), has led to the development of 3D structural descriptors and associated 3D QSAR methods. The most popular 3D QSAR methods are comparative molecular field analysis (CoMFA) (Cramer et al., 1988) and comparative molecular similarity indices analysis (CoMSIA) (Klebe et al., 1994). The CoMFA method involves generation of a common three-dimensional lattice around a set of molecules and calculation of the steric and electrostatic interaction energies at the lattice points. The interaction energies become numerically very large when a lattice point lies very close to an atom, and special care must be taken to avoid the problems that arise because of this. The CoMSIA method avoids these problems by using a similarity function represented as a Gaussian. The information around the molecules is converted into numerical data using the partial least squares (PLS) method, which reduces the dimensionality of the data by generating components.
However, a major disadvantage is that PLS attempts to fit a linear relationship among all the points in the data set. Further, the PLS method offers little scope for improvement of the result. Several reports have observed that the predictive ability of the PLS method is rather poor because of this linear fit across the available points. In the CoMSIA method, molecular similarity indices are evaluated and used instead of molecular fields, followed by PLS analysis.
Recent trends in 2D/3D QSAR have focused on the development of procedures that allow selection of optimal variables from the available pool of descriptors of chemical structure, i.e., those that are most meaningful and statistically significant in terms of correlation with biological activity.
This is accomplished by combining one of the stochastic search methods, such as simulated annealing, genetic algorithms or evolutionary algorithms, with a correlation method such as MLR, PLSR or artificial neural networks (Sutter et al., 1995; Rogers and Hopfinger, 1994; Kubinyi, 1994; Luke, 1994; So and Karplus, 1996). Since the effectiveness and convergence of these algorithms are greatly affected by the choice of fitness function, several such functions have been used to improve their performance (Kubinyi, 1994).
Since these techniques involve optimization of many parameters, the resulting analysis is relatively slow compared to single regression methods. Variable selection methods have also been adopted for optimal region selection in 3D QSAR methods and have been shown to provide improved QSAR models compared to the original CoMFA technique. For example, GOLPE (Baroni et al., 1993) was developed using chemometric principles, and q2-GRS was developed on the basis of independent analysis of small areas (or regions) of near-molecule space to address the issue of optimal region selection in CoMFA (Cho and Tropsha, 1995). These considerations provide an impetus for the development of fast, generally nonlinear, variable selection methods for performing molecular field analysis.
[2.13] Development of 3D QSAR Model
[2.13.1] kNN-MFA Methodology for building QSAR models
The kNN technique is a conceptually simple approach to pattern recognition problems. In this method, an unknown pattern is classified according to the majority of the class memberships of its k nearest neighbors in the training set, nearness being measured by an appropriate distance metric (e.g. a molecular similarity measure calculated using field interactions of molecular structures). The standard kNN method is implemented as follows: (1) calculate the distances between an unknown object (u) and all the objects in the training set; (2) select the k objects from the training set most similar to object u, according to the calculated distances; (3) classify object u with the group to which the majority of the k objects belong. An optimal k value is selected by optimization through classification of a test set of samples or by leave-one-out cross-validation. The variables and optimal k values are chosen using the different variable selection methods described below.
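As an illustration, the three steps above can be sketched in a few lines of Python; this is a toy example in which plain Euclidean distance stands in for the molecular similarity measure:

```python
import math

def knn_predict(x_query, train_X, train_y, k):
    """Classify a query point by majority vote of its k nearest
    training-set neighbours, using Euclidean distance."""
    dists = []
    for xi, yi in zip(train_X, train_y):
        d = math.dist(x_query, xi)  # step 1: distance to each training object
        dists.append((d, yi))
    dists.sort(key=lambda t: t[0])           # step 2: k most similar objects
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)  # step 3: majority class
```

For a training set of two well-separated clusters, a query near either cluster is assigned that cluster's class.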
[2.13.2] kNN-MFA with Stepwise (SW) Variable Selection
This method employs a stepwise variable selection procedure combined with kNN to optimize (1) the number of nearest neighbors (k) and (2) the selection of variables from the original pool. It begins with a trial model containing a single independent variable and adds independent variables one step at a time, examining the fit of the model at each step (using a weighted kNN cross-validation procedure). The method continues until no significant variable remains outside the model.
[2.13.3] kNN-MFA with Simulated Annealing
Simulated annealing (SA) is the simulation of a physical process, 'annealing', which involves heating a system to a high temperature and then gradually cooling it down to a preset temperature (e.g., room temperature). During this process, the system samples possible configurations distributed according to the Boltzmann distribution, so that at equilibrium low-energy states are the most populated. The SA kNN-MFA method employs the kNN classification principle combined with an SA variable selection procedure. For each predefined number of variables (Vn), it uses SA as the optimization tool to select (i) the number of nearest neighbors (k) used to estimate the activity of each molecule and (ii) the variables from the original pool of molecular descriptors that are used to calculate similarities between molecules (i.e., distances in Vn-dimensional descriptor space). The implementation of SA kNN-MFA reported by Zheng and Tropsha (Zheng and Tropsha, 2000) can be summarized as follows. (1) Generate a trial solution to the underlying optimization problem; i.e., build a kNN-MFA model based on a random selection of descriptors. (2) Calculate the value of the fitness function, which characterizes the quality of the trial solution to the underlying problem, i.e., the q2 value for the kNN-MFA model. (3) Perturb the trial solution to obtain a new solution; i.e., change a fraction of the current trial solution's descriptors to other randomly selected descriptors and build a new kNN-MFA model for the new trial solution. (4) Calculate the value of the fitness function (q2new) for the new trial solution. (5) Apply the optimization criteria: if q2curr < q2new, the new solution is accepted and used to replace the current trial solution; if q2curr > q2new, the new solution is accepted only if the Metropolis criterion is satisfied; i.e.
rnd < e^[-(q2curr - q2new)/T]
where rnd is a random number uniformly distributed between 0 and 1, and T is a parameter analogous to the temperature in the Boltzmann distribution. (6) Steps 3-5 are repeated until the termination condition is satisfied. The temperature-lowering scheme and the termination condition used in this work have been adapted from Sun et al. Thus, when a new solution is accepted, or when a preset number of successive steps of generating trial solutions (20 steps) does not lead to a better result, the temperature is lowered by 10% (the default initial temperature is 1000 K). The calculations are terminated when either the current simulation temperature reaches 10^-6 K or the ratio between the current temperature and the temperature corresponding to the best solution found equals 10^-6.
[2.13.4] kNN-MFA with Genetic Algorithm
Genetic algorithms (GAs), first described by Holland (Holland, 1975), mimic natural evolution and selection. In biological systems, the genetic information that determines the individuality of an organism is stored in chromosomes. Chromosomes are replicated and passed on to the next generation, with selection criteria depending on fitness. Genetic information can, however, be altered through genetic operations such as mutation and crossover. In GAs, each "chromosome" is a set of genes that constitutes a candidate solution to the discrimination problem. A population of "chromosomes" is used. The passage of each "chromosome" to the next generation is determined by its relative fitness, i.e., the closeness of its properties to those desired. Random combinations and/or changes of the transmitted "chromosomes" produce variation in the next generation of "offspring". The better the fitness (correspondence with the desired properties), the greater the chance of that chromosome being selected for transmission. Optimal or near-optimal solutions are obtained through evolution over many generations. There are four major components of a GA: chromosome generation, fitness assessment, selection, and mutation. This method employs a stochastic variable selection procedure, combined with kNN, to optimize (i) the number of nearest neighbors (k) and (ii) the selection of variables from the original pool, as described for simulated annealing. The implementation of GA-based kNN-MFA involves the following steps:
(1) Generate the initial population of chromosomes (candidate solutions) by randomly selecting genes (descriptors) from the pool of available genes.
(2) Calculate pairwise Euclidean distances for all pairs of molecules with respect to each chromosome.
(3) Calculate the fitness of each chromosome using a weighted kNN cross-validation procedure.
(4) Select chromosomes for the mating pool by roulette-wheel selection.
(5) Apply uniform crossover and mutation operations on the mating-pool chromosomes to create a new population of offspring.
(6) Calculate fitness of each offspring using a weighted kNN cross-validation procedure.
(7) Replace the least fit chromosomes in the initial population with the best offspring.
(8) Repeat steps 2-7 until the convergence criterion is met or the maximum number of generations is reached.
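Steps (1)-(8) can be illustrated with a toy GA in which the fitness of a bit-string chromosome is simply its number of 1-bits (a stand-in for a model's q2); the population size, mutation rate and elitism scheme below are illustrative choices, not those of the original kNN-MFA implementation:

```python
import random

def genetic_algorithm(n_genes=16, pop_size=20, generations=40, p_mut=0.02):
    """Toy GA: evolve bit-string 'chromosomes' toward all-ones using
    roulette-wheel selection, uniform crossover, mutation and elitism."""
    random.seed(1)                                    # reproducible demo
    fitness = lambda c: sum(c)                        # stand-in for q2
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]                  # step 1: initial population
    for _ in range(generations):
        # step 4: roulette wheel — parents picked proportionally to fitness
        weights = [fitness(c) + 1 for c in pop]       # +1 avoids zero weight
        parents = random.choices(pop, weights=weights, k=pop_size)
        # step 5: uniform crossover plus point mutation
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            for child in ([a[i] if random.random() < 0.5 else b[i]
                           for i in range(n_genes)],
                          [b[i] if random.random() < 0.5 else a[i]
                           for i in range(n_genes)]):
                child = [g ^ 1 if random.random() < p_mut else g for g in child]
                offspring.append(child)
        # step 7: elitism — keep the single best chromosome from the old population
        elite = max(pop, key=fitness)
        offspring[0] = elite
        pop = offspring
    return max(pop, key=fitness)
```

Elitism guarantees that the best fitness never decreases from one generation to the next.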
[2.13.5] Linear Regression Methods
Regression methods are used to build a QSAR model in the form of a mathematical equation. This equation explains the variation of one or more dependent variables (usually activity) in terms of the independent variables (descriptors). The QSAR model can then be used to predict activities for new molecules, for example when screening a large set of molecules whose activities are not known.
[2.13.6] Multiple Regression Methods
Multiple regression is the standard method for multivariate data analysis. It is also called ordinary least squares (OLS) regression. This method estimates the values of the regression coefficients by applying the least squares curve-fitting method. To obtain reliable results, the dataset should typically contain at least five times as many data points (molecules) as independent variables (descriptors). The regression equation takes the form

Y = b1x1 + b2x2 + ... + bnxn + c

where Y is the dependent variable, the b's are the regression coefficients for the corresponding x's (independent variables), and c is the regression constant or intercept.
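The coefficients of the equation above can be recovered by solving the normal equations (A'A)b = A'y; the following sketch uses plain Gaussian elimination rather than a linear-algebra library, and assumes noise-free toy data:

```python
def mlr_fit(X, y):
    """Ordinary least squares: solve (A'A) b = A'y, where A is the
    descriptor matrix with a trailing column of ones for the intercept c."""
    A = [list(row) + [1.0] for row in X]        # append intercept column
    m, n = len(A), len(A[0])
    # build the normal equations
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[r][i] * y[r] for r in range(m)) for i in range(n)]
    # solve by Gaussian elimination with partial pivoting
    M = [AtA[i] + [Aty[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    b = [0.0] * n
    for i in range(n - 1, -1, -1):
        b[i] = (M[i][n] - sum(M[i][j] * b[j]
                              for j in range(i + 1, n))) / M[i][i]
    return b  # [b1, ..., bn, c]
```

Fitting data generated exactly as y = 2x1 + 3x2 + 1 recovers those coefficients.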
[2.13.7] Stepwise Multiple Regressions (SMR)
It is an approach for selecting a subset of variables when the number of independent variables (descriptors) is much larger than the number of data points (molecules). SMR is a way of computing an OLS regression in stages; it examines the impact of each variable on the model step by step. Each variable is added to the equation and a new regression is performed. A variable that does not contribute appreciably to the variance explained is not retained. As a result, SMR generates a single multiple regression equation.
[2.13.8] Principal Component Regression (PCR) method
Multiple linear regression (MLR) is unstable when the X variables are correlated. This gives a good example of why we need to examine the structure within data sets rather than using them blindly. Principal components analysis provides a method for finding structure in such data sets. Put simply, it rotates the data onto a new set of axes such that the first few axes reflect most of the variation within the data. By plotting the data on these axes, we can spot major underlying structures. The value of each point, when projected onto a given axis, is called its principal component score. Principal components analysis selects the new axes in decreasing order of the variance within the data, and the axes are mutually perpendicular; hence the principal components are uncorrelated. Some components may be nearly constant, but these will be among the last selected. The problem noted with MLR was that correlated variables cause instability. The remedy, then, is to calculate the principal components, discard those that appear to contribute only noise (or constants), and apply MLR to the remainder.
This process gives the modeling method known as principal components regression (PCR). Rather than forming a single model, as with MLR, a model can be formed using 1, 2, ... components, and a decision can be made as to how many components are optimal. If the original variables contained collinearity, then some of the components will contribute only noise; so long as these are dropped, the resulting model will be stable.
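The procedure just described can be sketched for the simplest two-descriptor case, where the principal axis of the centered data can be found in closed form (real implementations diagonalize the full covariance matrix); the data in the usage note are deliberately collinear, the situation where plain MLR is unstable:

```python
import math

def pcr_fit_predict(X, y):
    """PCR sketch for two correlated descriptors: rotate onto the first
    principal component, drop the second (noise), regress y on the PC1
    scores, and return the fitted values."""
    n = len(X)
    mx = [sum(col) / n for col in zip(*X)]
    my = sum(y) / n
    xc = [[a - m for a, m in zip(row, mx)] for row in X]   # center X
    # 2x2 scatter-matrix terms
    sxx = sum(r[0] * r[0] for r in xc)
    syy = sum(r[1] * r[1] for r in xc)
    sxy = sum(r[0] * r[1] for r in xc)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # angle of the PC1 axis
    u = (math.cos(theta), math.sin(theta))         # first principal component
    t = [r[0] * u[0] + r[1] * u[1] for r in xc]    # PC1 scores
    # simple regression of y on the scores
    slope = sum(ti * (yi - my) for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return [my + slope * ti for ti in t]
```

On exactly collinear data (x2 = 2*x1, y = 3*x1) the one-component model reproduces y, while the MLR normal equations would be singular.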
[2.13.9] Partial Least Squares Regression (PLSR) method:-
Partial least squares regression is an extension of the multiple linear regression model. In its simplest form, a linear model specifies the (linear) relationship between a dependent (response) variable Y and a set of predictor variables, the X's, so that

Y = b0 + b1X1 + b2X2 + ... + bpXp

In this equation, b0 is the regression coefficient for the intercept and the bi values are the regression coefficients (for variables 1 through p) computed from the data.
For example, one could estimate (i.e., predict) a person's weight as a function of the person's height and gender. Linear regression could be used to estimate the respective regression coefficients from a sample of data in which height, weight and gender were recorded. For many data analysis problems, estimates of the linear relationships between variables are adequate to describe the observed data and to make reasonable predictions for new observations.
The multiple linear regression model has been extended in a number of ways to address more sophisticated data analysis problems. It serves as the basis for a number of multivariate methods such as discriminant analysis (the prediction of group membership from the levels of continuous predictor variables), principal components regression (the prediction of responses on the dependent variables from factors underlying the levels of the predictor variables) and canonical correlation (the prediction of factors underlying responses on the dependent variables from factors underlying the levels of the predictor variables). These multivariate methods have two important properties in common: they impose restrictions such that (i) the factors underlying the Y and X variables are extracted from the Y'Y and X'X matrices, respectively, and never from cross-product matrices involving both the Y and X variables, and (ii) the number of prediction functions can never exceed the minimum of the number of Y variables and X variables. Partial least squares regression extends multiple linear regression without imposing the restrictions employed by discriminant analysis, principal components regression and canonical correlation. In partial least squares regression, the prediction functions are represented by factors extracted from the Y'XX'Y matrix. The number of such prediction functions that can be extracted will typically exceed the maximum of the number of Y and X variables.
In short, partial least squares regression is probably the least restrictive of the various multivariate extensions of the multiple linear regression model. This flexibility allows it to be used in situations where there are fewer observations than predictor variables. Furthermore, partial least squares regression can be used as an exploratory analysis tool to select suitable predictor variables and to identify outliers before classical linear regression. Partial least squares regression has been used in various disciplines, such as chemistry, economics, medicine, physiology and pharmaceutical science, where predictive linear modeling, especially with a large number of predictors, is necessary. In chemometrics especially, partial least squares regression has become a standard tool for modeling linear relations between multivariate measurements.
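The contrast drawn above — PLS extracting factors from a cross-product that involves both X and Y — can be made concrete with a minimal one-component PLS1 (NIPALS-style) sketch; the single-component choice and the toy data in the usage note are illustrative assumptions:

```python
def pls1_fit_predict(X, y):
    """One-component PLS1 sketch: the latent factor is extracted from the
    X'y cross-product, so it is biased toward directions of X that covary
    with y — unlike PCR, which rotates X without looking at y."""
    n = len(X)
    mx = [sum(c) / n for c in zip(*X)]
    my = sum(y) / n
    Xc = [[a - m for a, m in zip(row, mx)] for row in X]   # center X
    yc = [v - my for v in y]                               # center y
    # weight vector w proportional to X'y, normalised
    w = [sum(Xc[r][j] * yc[r] for r in range(n)) for j in range(len(mx))]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores t = Xc w, then regress y on the scores
    t = [sum(xr * wv for xr, wv in zip(row, w)) for row in Xc]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return [my + b * ti for ti in t]
```

Like the PCR sketch, a single PLS component reproduces y exactly on collinear data, which is why PLS copes with fewer observations than predictors.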
[2.13.10] Neural Network
Neural networks involve designing a system to learn from data in a manner emulating the learning pattern of the brain (Eberhart and Dobbins, 1990; Van Ooyen and Nienhuis, 1992; Rich and Knight, 1991; Hassoun, 1995). Neural networks are typically used when there are a large number of observations and when the problem is not understood well enough to write a procedural program or expert system. An artificial neural network consists of a number of 'neurons' or 'hidden units' that receive data from the outside, process the data and output a signal. A neuron is essentially a regression equation with a nonlinear output. When more than one of these neurons is used, nonlinear models can be fitted. Such networks have been shown to work well for modeling a number of different problems, including QSAR. Neural networks are known for their ability to model a wide set of functions without knowing the model a priori. A back-propagation network receives a set of inputs (the descriptors of a molecule), which are multiplied by each neuron's weights. These products are summed for each neuron and a nonlinear transfer function is applied; a bias term has the effect of shifting the transfer function to the left or right. The transformed sums are then multiplied by the output weights, where they are summed a final time, transformed and interpreted. Since a back-propagation network is a supervised method, the desired output (the activity of a molecule) must be known for each input vector so that an error (the difference between the desired output and the network's predicted output) can be calculated. These errors are propagated backwards through the network (hence the name), adjusting the weights so that the next time the network sees the same input pattern, it comes closer to the desired output. The patterns are shown many times, until the network either learns the relationship or determines that there is none.
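The forward pass, error calculation and backward weight adjustment described above can be sketched for a single sigmoid neuron (a deliberately minimal network; real QSAR applications use a hidden layer of many such units, but the update rule is the same in spirit):

```python
import math

def backprop_step(x, target, w, bias, lr=0.5):
    """One forward/backward pass for a single sigmoid neuron: compute the
    output and squared error, then return gradient-descent updates."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + bias
    out = 1.0 / (1.0 + math.exp(-z))            # sigmoid transfer function
    err = 0.5 * (target - out) ** 2             # error before the update
    delta = (out - target) * out * (1.0 - out)  # dE/dz, propagated backwards
    w_new = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b_new = bias - lr * delta
    return w_new, b_new, err
```

Repeated presentations of the same pattern drive the error down, step by step, just as the text describes.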
[2.14] Validation of the 3D-QSAR model
According to Tropsha et al., a constructed QSAR model is considered to have high predictive power only if q2 > 0.5 for the internal validation set and r2 > 0.6 for the external test set (Tropsha et al., 2003).
Generally, a QSAR model is validated to test its internal stability and predictive ability by internal validation, external validation and a randomization test:
[2.14.1] Internal Validation or Cross-Validation using weighted k-nearest Neighbor
Internal validation was carried out using the standard leave-one-out (LOO) procedure, which can be summarized as follows:
(1) A molecule in the training set was eliminated, and its biological activity was predicted as the weighted average activity of the k most similar molecules (eq.1). The similarities were evaluated as the inverse of Euclidean distances between molecules (eq.2) using only the subset of descriptors corresponding to the current trial solution.
(2) Step 1 was repeated until every molecule in the training set had been eliminated and its activity predicted once.
(3) The cross-validated q2 value was calculated using eq. 3, q2 = 1 - Σ(yi - ŷi)²/Σ(yi - ymean)², where yi and ŷi are the actual and predicted activities of the ith molecule, respectively, and ymean is the average activity of all molecules in the training set. Both summations are over all molecules in the training set. Since the calculation of the pairwise molecular similarities, and hence the predictions, was based upon the current trial solution, the q2 obtained is indicative of the predictive power of the current kNN-MFA model.
(4) Steps 1-3 were repeated for k = 2, 3, 4, etc. Formally, the upper limit of k is the total number of molecules in the data set; however, the best value has been empirically found to lie between 1 and 5. The k value that led to the highest q2 value was chosen for the current kNN-MFA model.
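Steps 1-3 can be sketched as follows. Since eqs. 1-2 are not reproduced in this excerpt, inverse-Euclidean-distance weighting and the usual q2 = 1 - PRESS/SS formula are assumed:

```python
import math

def loo_q2(X, y, k=1):
    """Leave-one-out cross-validated q2 for a distance-weighted kNN model:
    each molecule is held out and predicted from its k nearest neighbours,
    weighted by inverse Euclidean distance (an assumed weighting scheme)."""
    n = len(X)
    preds = []
    for i in range(n):
        # step 1: eliminate molecule i and rank the remaining molecules
        dists = sorted((math.dist(X[i], X[j]), y[j])
                       for j in range(n) if j != i)
        nearest = dists[:k]
        wts = [1.0 / (d + 1e-12) for d, _ in nearest]  # inverse-distance weights
        preds.append(sum(w * yj for w, (_, yj) in zip(wts, nearest)) / sum(wts))
    # step 3: q2 = 1 - PRESS / total sum of squares
    ymean = sum(y) / n
    press = sum((yi - pi) ** 2 for yi, pi in zip(y, preds))
    ss = sum((yi - ymean) ** 2 for yi in y)
    return 1.0 - press / ss
```

For a toy set in which molecules come in near-identical pairs with equal activities, the k = 1 LOO predictions are exact and q2 = 1.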
[2.14.2] External Validation
The following procedure was applied for external validation.
(1) Predict the biological activity of each molecule in the test set as the weighted average activity of the k most similar molecules in the training set (eq. 1). The similarities were evaluated as the inverse of the Euclidean distances between molecules (eq. 2), calculated using the descriptors determined by the current model.
(2) Step 1 was repeated for every molecule in the test set.
(3) The predicted r2 (pred_r2) value was calculated using eq. 4, pred_r2 = 1 - Σ(yi - ŷi)²/Σ(yi - ymean)², where yi and ŷi are the actual and predicted activities of the ith molecule in the test set, respectively, and ymean is the average activity of all molecules in the training set. Both summations are over all molecules in the test set. The pred_r2 value is indicative of the predictive power of the current kNN-MFA model for the external test set.
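The same machinery gives pred_r2 for an external test set; note that ymean is the training-set mean, as stated above (inverse-distance weighting is again an assumed stand-in for eqs. 1-2):

```python
import math

def pred_r2(train_X, train_y, test_X, test_y, k=1):
    """External-validation pred_r2: predict each test molecule from its k
    nearest training-set neighbours (inverse-distance weighted) and compare
    the residuals against the deviation from the training-set mean."""
    preds = []
    for xq in test_X:
        dists = sorted((math.dist(xq, xt), yt)
                       for xt, yt in zip(train_X, train_y))
        nearest = dists[:k]
        wts = [1.0 / (d + 1e-12) for d, _ in nearest]
        preds.append(sum(w * yt for w, (_, yt) in zip(wts, nearest)) / sum(wts))
    ymean = sum(train_y) / len(train_y)          # training-set mean, per eq. 4
    num = sum((yt - p) ** 2 for yt, p in zip(test_y, preds))
    den = sum((yt - ymean) ** 2 for yt in test_y)
    return 1.0 - num / den
```

When each test molecule sits next to a training molecule of identical activity, the predictions are exact and pred_r2 = 1.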
[2.14.3] Randomization Test
To evaluate the statistical significance of the QSAR model for an actual data set, we employed one-tailed hypothesis testing (Zheng and Tropsha, 2000; Gilbert and Saunders, 1976). The robustness of the QSAR models for the experimental training sets was examined by comparing these models to those derived for random data sets. Random sets were generated by rearranging the biological activities of the training set molecules. Statistical models were derived using the various randomly rearranged activities (random sets) with the selected descriptors, and the corresponding q2 values were calculated. The significance of the models thus obtained was assessed on the basis of a calculated Z score (Zheng and Tropsha, 2000; Gilbert and Saunders, 1976). The Z score is calculated by the following formula:

Z = (h - µ) / σ
where h is the q2 value calculated for the experimental dataset, µ is the average q2, and σ is the standard deviation calculated over the various iterations using models built from the different random datasets. The probability (α) of significance of the randomization test is derived by comparing the Z score with the critical Z score value, as reported in the reference (Shen et al., 2003), if the Z score is less than 4.0; otherwise it is calculated by the formula given in the literature. For example, a Z score greater than 3.10 indicates a probability (α) of less than 0.001 that the QSAR model constructed for the real dataset is random. The randomization test suggests that all the developed models have a probability of less than 1% of having been generated by chance.
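The Z score formula above is straightforward to compute; the random-set q2 values in the usage example are invented purely for illustration:

```python
import statistics

def z_score(h, random_q2s):
    """Z score of the real model's q2 (h) against the distribution of q2
    values obtained from models built on randomly scrambled activities."""
    mu = statistics.mean(random_q2s)
    sigma = statistics.pstdev(random_q2s)   # population standard deviation
    return (h - mu) / sigma
```

A real-model q2 far above the scrambled-activity q2 distribution yields a Z score well past the 3.10 threshold quoted above, i.e. the model is very unlikely to be a chance result.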
The tremendous advancement in the computational field has solved this problem to a major extent. An attempt has therefore been made to use computational methodologies such as molecular docking and QSAR studies to suggest new potent molecules, with all positive attributes, as antivenom candidates. We report here the development of a three-dimensional QSAR model using a new method (kNN-MFA) that adopts the k-nearest neighbor principle for generating relationships between molecular fields and the experimentally reported activity.