Pharmacology in the post-genomic era



On April 14, 2003 the International Human Genome Consortium announced the successful completion of the Human Genome Project. Together with the development of many automated techniques for rapidly obtaining large quantities of genomic data, this milestone resulted in a major paradigm shift in biological research. The traditional hypothesis-driven research paradigm is giving way to the new paradigm of information- or data-driven hypothesis formulation and research. The field of pharmacology is also undergoing this fundamental transformation, as witnessed by the development of a number of new disciplines (Chanda and Caldwell, 2003; Choudhuri, 2009; Weinshilboum and Wang, 2004).

2.1 Pharmacogenomics

Pharmacogenomics studies the genetic basis of interindividual and interracial variation in response to therapeutic agents. It is the whole-genome application of pharmacogenetics, which examines single gene mutations and their effects on drug response (Ghosh et al., 2010; Gomase et al., 2008; Yan, 2005). Pharmacogenetic effects can lead to differences in pharmacokinetic (PK) parameters, which determine the concentration of a drug reaching its target. Hence, variations in genes encoding drug-metabolizing enzymes or drug transport molecules can underlie aberrant responses to substrate drugs. However, even in the absence of substantial variability in drug concentrations at the target site, drugs can produce highly variable effects. This so-called pharmacodynamic (PD) variability, which involves the drug target itself, tends to be drug- or disease-specific, in contrast to pharmacokinetic variability, which often extends across many drugs and disease processes (Roden et al., 2006; Wang, 2010).

The conceptual basis for pharmacogenomics can be traced to Archibald Garrod's work at the beginning of the twentieth century. In his book The Inborn Errors of Metabolism, the English physiologist proposed that genetic factors might underlie the interindividual variability in drug response (Guchelaar, 2010; Roden et al., 2006). However, the first experimental validations of the effect of DNA variations on drug response were reported only in the late 1950s. These early examples of pharmacogenetics grew out of clinical observations of adverse drug reactions (ADRs) and were each shown to be caused by a specific genetic enzyme variant affecting the drug's metabolism (Guchelaar, 2010; Gurwitz and Motulsky, 2007; Wang, 2010). For instance, in 1957 Kalow described the varied response of patients to succinylcholine, a muscle relaxant given as an adjuvant to general anaesthesia. An inherited deficiency in pseudocholinesterase activity was proven to result in prolonged muscular paralysis in approximately 0.03 percent of the population (Kalow, 2002; Kirk et al., 2008). In the same year Motulsky described the sensitivity of some American soldiers of African descent to the antimalarial drug primaquine. The hemolytic anemia observed in this subgroup of primaquine users could be ascribed to a deficiency of the enzyme glucose-6-phosphate dehydrogenase (G6PD), which is important in red blood cell metabolism (Guchelaar, 2010; Kalow, 2002; Kirk et al., 2008; Wang, 2010). Genetic variation in the enzymatic acetylation of the antituberculosis drug isoniazid is another classical example of pharmacogenetics. Neurological side effects of isoniazid in some people were described as early as 1954 and some years later were shown to result from genetic variability of N-acetyltransferase (Kalow, 2002; Wang, 2010).
In 1959 the term pharmacogenetics was coined by Friedrich Vogel to describe the examination of inherited differences in the response to drugs (Gurwitz and Motulsky, 2007; Kalow, 2002; Kirk et al., 2008). The novel discipline was subsequently covered in Kalow's 1962 book Pharmacogenetics - Heredity and the Response to Drugs (Gurwitz and Motulsky, 2007; Kalow, 2002).

The development of pharmacogenetics occurred in parallel with rapid changes in genomic science, most importantly the completion of the Human Genome Project. The convergence of both areas of biomedical research resulted in the evolution of pharmacogenetics into pharmacogenomics, one of the first clinical applications of the postgenomic era. Pharmacogenomics promises personalized medicine rather than the established 'one size fits all' approach to drugs and dosages (Guchelaar, 2010; Weinshilboum and Wang, 2004). The U.S. Food and Drug Administration (FDA) has already approved a number of pharmacogenomic tests, and pharmacogenomic information is contained in the labels of approximately ten percent of its approved drugs, many of which are anti-tumor agents (Sanoudou, 2010). The approved tests detect variations in genes encoding enzymes involved in drug metabolism, such as the cytochrome P450 enzymes CYP2C19 and CYP2D6 or UDP-glucuronosyltransferase, or in drug targets, for example Vitamin K epoxide reductase complex subunit 1 (VKORC1), which is the drug target for coumarins such as warfarin (Guchelaar, 2010; Swen et al., 2007). Furthermore, in 2005 the FDA approved BiDil, the first and, to date, only race-specific pharmaceutical. BiDil is a drug for the treatment of heart failure and is only approved for use in patients of African ancestry (Wang, 2010).

The ultimate promise of pharmacogenomics is that knowledge of a patient's DNA sequence could reduce 'trial and error' treatment, leading to more efficient and safer drug therapy (Guchelaar, 2010; Weinshilboum and Wang, 2004). Application of pharmacogenomic knowledge to improve drug safety is one of the most important areas of interest to the pharmaceutical industry. Occasionally, safety problems for an approved drug arise when the drug is applied to a large patient population, possibly many years after it has been granted market access. If it is impossible to select in advance the patients that may be sensitive to the drug, there is often no other solution than withdrawing it from the market. Here, pharmacogenomics may play an important role in identifying patients vulnerable to serious side effects, consequently allowing industry to keep valuable products on the market. Moreover, genome-based stratification of test populations might decrease the risk of unexpected toxicities for new drugs in development (Raaijmakers et al., 2010; Surendiran, 2008). In addition, knowledge of a patient's DNA makes it possible to replace the current practice of basing dosages on weight and age with dosages based on that person's genetics. This will not only decrease the likelihood of adverse drug reactions (ADRs) due to overdose, but will also maximize the therapy's value (Gomase et al., 2008). The overall efficacy of a drug can be optimized by proactively defining responders and non-responders and restricting treatment to the responders, thereby overcoming dilution of a drug's effect by the part of the population that fails to respond (Raaijmakers et al., 2010). Since the use of genetic information for patient stratification during clinical trials can reduce sample size, product failure rates, the incidence of ADRs and post-marketing withdrawal, pharmacogenomics can save the pharmaceutical industry substantial costs.
Furthermore, by reducing the need to treat ADRs and the number of ineffective medications patients must try before finding an effective therapy, healthcare systems can also take advantage of the cost-effectiveness of the pharmacogenomics approach (Gomase et al., 2008; Ohashi and Tanaka, 2008; Sanoudou, 2010). Finally, a last benefit of pharmacogenomics worth mentioning is the use of the information generated from genome-wide scans for the identification of potential new drug targets (Raaijmakers et al., 2010).

Although a lot of progress has been made in pharmacogenomics research, much remains unknown about how genetic differences impact individual drug responses. Besides genetic variation, additional factors such as the translational processes that yield functional proteins also play a role. Sample generation, reproducibility across different studies and the complex statistics involved have also become challenges, as has the establishment of a well-structured database with sufficient security. Finally, strengthening public confidence that genomic information will be used only for the benefit of the individual patient, and not for purposes of discrimination, remains a key issue for the further development of personalized medicine (Ohashi and Tanaka, 2010; Raaijmakers et al., 2010; Weinshilboum and Wang, 2004).

2.2 Toxicogenomics

Next to the development of pharmacogenomics, the availability of genome-scale DNA sequence information has also led to the emergence of a subdiscipline derived from the combination of toxicology and genomics. This new branch of science, termed toxicogenomics, was first introduced in 1999 (Nuwaysir et al.) and aims to study the complex interactions between the structure and activity of the genome on the one hand and adverse biological effects caused by exogenous agents on the other (Gatzidou et al., 2007; Khor et al., 2006).

Toxicogenomic technologies can be applied in two broad and overlapping classes: predictive toxicology, and mechanistic or investigative toxicology. Predictive toxicology refers to the field of toxicology focusing on the identification of potentially toxic compounds. An early and reliable prediction of a drug candidate's toxicity remains one of the major challenges in drug development. It is believed that toxicogenomics could contribute greatly to the prediction of the toxic liabilities of compounds and hence prevent the animal-, cost- and time-intensive execution of pre-clinical or even clinical trials with inadequate compounds. In this regard, global gene transcription profiling by means of DNA microarrays is of particular importance. It can be assumed that compounds which induce toxicity through similar mechanisms will elicit characteristic gene expression patterns. By grouping the gene expression profiles of different classes of well-described model compounds, a database of molecular signatures or fingerprints related to specific organ toxicities can be generated. Integration of this database with supervised algorithms such as support vector machines (SVMs) has proven highly beneficial for predicting the toxicity of candidate drugs. In addition to the prediction of toxicity and the corresponding classification of compounds, toxicogenomics can also be employed to provide valuable insight into the underlying mechanisms of different organ toxicities. The application of toxicogenomics for these so-called mechanistic or investigative purposes can also play an important role in the risk assessment of drugs, in particular those whose toxicity is not associated with well-established biomarkers (Gatzidou et al., 2007; Khor et al., 2006; Lühe et al., 2005; Suter et al., 2004).
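The signature-matching idea described above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration, not a real classifier: the gene list, expression values and toxicity classes are invented, and a simple nearest-signature rule stands in for the supervised algorithms (such as SVMs) mentioned in the text.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two expression vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical reference signatures: mean log-ratio expression of four
# marker genes, derived from well-described model compounds of each class.
signatures = {
    "hepatotoxic": [2.1, -1.3, 0.8, 1.9],
    "nephrotoxic": [-0.5, 1.8, -1.2, 0.4],
}

def predict_toxicity(profile):
    """Assign a candidate compound to the most similar reference signature."""
    return max(signatures, key=lambda cls: cosine(profile, signatures[cls]))

# A new compound whose expression profile resembles the hepatotoxic class.
candidate = [1.8, -1.0, 0.5, 1.5]
print(predict_toxicity(candidate))
```

In practice such databases contain thousands of genes and many compound classes per organ toxicity, and the matching step is replaced by a trained supervised model, but the underlying logic of comparing a new profile against class fingerprints is the same.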

Although toxicogenomics has emerged as a new and exciting technology that could potentially revolutionize drug discovery and development, there are still many caveats and challenges which must be resolved before the incorporation of genomics in toxicology applications can reach its full potential (Khor et al., 2006; Van Hummelen and Sasaki, 2010). Nowadays major efforts aim at the development of public databases to facilitate sharing and use of the flood of data, and at mathematical and biological tools to mine these databases and extract biological knowledge (National Research Council (US) Committee on Applications of Toxicogenomic Technologies to Predictive Toxicology, 2007). The two largest public reference databases for microarrays, ArrayExpress and Gene Expression Omnibus (GEO), are rich resources for toxicogenomics data, but are largely inadequate for toxicology applications due to the lack of a toxicology-specific controlled vocabulary and chemical indexing. The Comparative Toxicogenomics Database (CTD) is a better reference for toxicology experiments, but does not store raw genomics data (Van Hummelen and Sasaki, 2010). Most of the major pharmaceutical companies have therefore started to build internal toxicogenomics initiatives which are, unfortunately, not accessible to the public (Khor et al., 2006).

2.3 Chemogenomics

The Human Genome Project has led to the identification of approximately 20,000-25,000 human genes and has hence made available many potential new targets for drug intervention (Bredel and Jacoby, 2004; Harris and Stevens, 2006). As a result, the systematic identification of small molecules which interact with these potential targets and modulate their biological function is a key challenge for the 21st century (Jacoby, 2006). Pharmacological screening long relied on selecting novel drug candidates through large-scale serendipity and on optimizing ligand properties towards a single macromolecular target, but has now moved forward (Maréchal, 2008; Rognan, 2007). The integration of recent progress in high-throughput genomics and chemistry has considerably influenced current target and drug discovery approaches and has given rise to a new research field, known as chemogenomics (Bredel and Jacoby, 2004).

This emerging interdisciplinary approach to drug discovery combines traditional ligand-based methods with biological information across drug target families and lies at the interface of chemistry, structural biology, genetics and bioinformatics. The ultimate goal in chemogenomics is to fully match target and ligand space, or in other words identify molecular recognition between all possible ligands and all possible drug targets with which these molecules interact (Harris and Stevens, 2006; Rognan, 2007; Strömbergsson and Kleywegt, 2009).

Any chemogenomic-based approach makes use of the underlying assumptions that compounds sharing some chemical similarity should also share targets, and that targets which exhibit similar biochemical and pharmacological characteristics should have similar ligands. Recognizing and reusing such similarities in early-stage drug discovery can have large benefits in terms of efficiency (Harris and Stevens, 2006; Rognan, 2007). Hence the pool of compounds for screening a receptor of interest as a drug target will be composed of all known drugs and ligands of similar receptors, as well as compounds similar to these ligands. Also, receptors are no longer viewed as individual entities: compounds are now profiled against a set of related proteins instead of tested against single targets. Finally, large structure-activity databases have been established, which can be mined to derive insights into common properties among ligands linked to common features of the receptors to which they bind. These insights can then be used for the rational compilation of screening sets and the knowledge-based synthesis of chemical libraries (Klabunde, 2007).

Several advanced in silico methods to predict compound-protein relations have already been developed and have successfully yielded starting points for drug discovery projects. These virtual screening approaches can be classified as ligand-based, target-based and target-ligand approaches. In ligand-based chemogenomics, target families are classified without taking into account similarities of assumed ligand-binding sites. Target profiles for given compounds are predicted based on a comparison with known ligands, provided that the targets to be predicted are sufficiently well described by these ligands. A screening procedure using either QSAR (quantitative structure-activity relationships), machine learning or pharmacophore searches is applied for the comparison. The target-based chemogenomic approaches compare and classify receptors based on their ligand-binding sites, using sequence motifs or 3D structural information. This comparison can then be used to predict the most likely ligands for targets similar to the reference targets used in the classification. Whereas the identification of similar targets and of similar ligands represents a two-step process for the target-based and ligand-based approaches, target-ligand chemogenomic approaches attempt to predict ligands for a target of interest in a single step. For this purpose, matrices of biological activity data for a set of compounds profiled against a set of targets are used (Jacoby, 2006; Klabunde, 2007; Rognan, 2007).
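The ligand-based idea can be illustrated with a toy example. The sketch below assumes hypothetical targets and binary fingerprints represented as sets of "on" bits (real applications would use chemical fingerprints such as Morgan/ECFP descriptors); it ranks targets by the Tanimoto similarity between a query compound and each target's known ligands.

```python
def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two binary fingerprints (sets of on-bits)."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

# Hypothetical knowledge base: known ligands per target, as toy fingerprints.
known_ligands = {
    "target_A": [{1, 3, 5, 8}, {1, 3, 6, 8}],
    "target_B": [{2, 4, 7, 9}, {2, 4, 7, 10}],
}

def rank_targets(query_fp):
    """Rank targets by the similarity of their most similar known ligand."""
    scores = {t: max(tanimoto(query_fp, fp) for fp in fps)
              for t, fps in known_ligands.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A query compound sharing most of its bits with target_A's ligands.
print(rank_targets({1, 3, 5, 6}))
```

This max-similarity ranking is the simplest form of ligand-based prediction; the QSAR, machine learning and pharmacophore methods named in the text replace this similarity step with more elaborate models, but the comparison against known ligands remains the core.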

Chemogenomics approaches can be used for different purposes in disease research. First, chemogenomic data can allow for the identification of proteins as novel drug targets. Second, chemogenomics approaches are applied to discover, in a high-throughput fashion, new chemical ligand candidates for molecular targets of interest. Finally, chemogenomic data can also offer fundamental biological insights. Grouping genes according to their chemogenomic profiles can result in the identification of clusters enriched for certain functional categories as defined by the Gene Ontology (GO) database. If a gene of previously unknown function is found in such a cluster, it is likely that it has a function similar to the other genes in that cluster (Bredel and Jacoby, 2004; Jacoby, 2006; Wuster and Babu, 2008).
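This "guilt by association" reasoning can be sketched as follows. All gene names, profiles and the correlation threshold below are hypothetical; each profile is a gene's sensitivity score across the same panel of five compounds, and an unannotated gene inherits candidate functions from annotated genes whose profiles correlate strongly with its own.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two chemogenomic profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical profiles: sensitivity of each gene's deletion/knockdown
# across the same panel of five compounds.
profiles = {
    "gene_known_dna_repair": [0.9, 0.1, 0.8, 0.2, 0.7],
    "gene_unannotated":      [0.8, 0.2, 0.9, 0.1, 0.6],
    "gene_membrane":         [0.1, 0.9, 0.2, 0.8, 0.1],
}

def guilt_by_association(unknown, threshold=0.8):
    """Annotated genes whose profile correlates strongly with the unknown gene."""
    query = profiles[unknown]
    return [g for g, p in profiles.items()
            if g != unknown and pearson(query, p) >= threshold]

print(guilt_by_association("gene_unannotated"))
```

In a real analysis the profiles would span hundreds of compounds, the grouping would be done by proper clustering, and the resulting clusters would be tested for enrichment of GO functional categories rather than annotated by a single correlated neighbor.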

Chemogenomic high-throughput screening often generates an enormous number of hits, including a substantial portion of false positives. Therefore, one of the major challenges for the future is the development of more sophisticated computational tools to mine the large datasets, identify those hits with the greatest potential as leads, and weed out false-positive hits. Chemogenomics-orientated drug discovery programs also face many data management challenges. Efforts have already been made to organize data relevant for chemogenomics in public databases, such as the ChemBank, PubChem and ChEBI databases. Furthermore, the development of standardized representations for small molecules, such as InChI and SMILES, facilitates data integration and comparison (Bredel and Jacoby, 2004; Wuster and Babu, 2008).