In developing as well as developed countries, hypertension is a significant public health challenge. The prevalence of hypertension as reported in one study varied around the world, with rural India reported to have the lowest prevalence (3.4% and 6.8% in men and women respectively) and Poland the highest (68.9% and 72.5% in men and women respectively) (Kearney, Whelton, Reynolds, Whelton, & He, 2004). In Malaysia, according to the Third National Health and Morbidity Survey (NHMS III), the overall prevalence of hypertension among subjects aged 15 years and above was 27.8% (Rampal, Rampal, Azhar, & Rahman, 2008).
Hypertension, as a chronic non-communicable disease (NCD), is a medical condition that commits almost everyone affected to lifelong medication. The same goes for other NCDs such as autoimmune diseases, heart disease, stroke, cancers, asthma, diabetes, chronic kidney disease, osteoporosis, Alzheimer's disease and many more. As with all diseases that are insidious, slow to develop and of long duration, all of them involve chronic care management.
In economic terms, hypertension imposes a staggering burden, as health care costs have been swelling worldwide over the years. Prescribing medication is the most common type of health care intervention, and the impact of drug costs on overall health care costs has long been a concern. In 2010, hypertension was estimated to cost the United States $93.5 billion (Heidenreich et al., 2011). Costs of medications are often cited as a significant cost driver in the management of this condition, and they increase proportionately as the condition worsens. According to one study, medications alone accounted for 53.7% of total costs in the prehypertension group, 55.1% in the stage 1 hypertension group and 72.4% in the stage 2 hypertension group (Alefan, Ibrahim, Razak, & Ayub, 2009).
One of the key approaches for curtailing the cost of medication, and thus lessening its impact on total health care costs, has been the generic substitution of innovator drugs. This approach has been effective: about 9 billion dollars, or 11% of total prescription costs on average, could be saved through the use of generic drugs (Haas, Phillips, Gerstenberger, & Seger, 2005). At the same time, more than 60% of prescriptions filled in the US are for generic drugs, at less than 13% of the cost (Shrank, Cox, Fischer, Mehta, & Choudhry, 2009).
However, as generic drugs are increasingly manufactured and prescribed, physicians as well as the public have begun to raise concerns and debate emerging issues regarding the quality, safety and efficacy of generics. To ensure the quality, safety and efficacy of marketed generics, every phase of their production must be monitored and regulated by authorized and independent bodies. Since generic drugs are used and prescribed interchangeably with innovator (reference) drugs, it is important to establish that the safety and efficacy of generics are on par with those of their innovator counterparts.
Thus, it is vital to consistently evaluate generic drugs in terms of their pharmaceutical quality and in vivo performance. Evaluation of 'interchangeability' between a generic and the reference drug is done by a study of 'in vivo equivalence' or, as it is widely labelled, 'bioequivalence' (BE).
Bioequivalence between two drugs containing the same active substance is demonstrated 'if they are pharmaceutically equivalent or pharmaceutical alternatives and their bioavailabilities after administration in the same molar dose lie within acceptable predefined limits. These limits are set to ensure comparable in vivo performance, i.e. similarity in terms of safety and efficacy' (EMEA, 2010). Bioavailability (BA) is defined as 'the fraction of the dose administered that reaches the systemic circulation as a surrogate of the site of action and the rate at which this process occurs' (Morais & Lobato, 2010). Bioavailability, in essence, is a measurement of the rate and extent to which a drug's active ingredient becomes available in the systemic circulation.
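As an illustration of 'rate and extent', the two quantities are usually summarised from the plasma concentration-time profile by Cmax (rate) and AUC (extent), the latter commonly computed with the linear trapezoidal rule. A minimal sketch in Python, with hypothetical data and function names of our own choosing:

```python
# Non-compartmental estimates of peak exposure (Cmax, Tmax) and total
# exposure (AUC from first to last sample, linear trapezoidal rule).
# Illustrative sketch only; real analyses use validated PK software.

def auc_trapezoidal(times, concs):
    """AUC from the first to the last sampling time (linear trapezoid)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

def peak_exposure(times, concs):
    """Return (Cmax, Tmax): the maximum observed concentration and its time."""
    cmax = max(concs)
    return cmax, times[concs.index(cmax)]

profile_t = [0.0, 1.0, 2.0, 4.0]   # hours (hypothetical data)
profile_c = [0.0, 10.0, 6.0, 2.0]  # ng/mL (hypothetical data)

print(auc_trapezoidal(profile_t, profile_c))  # 21.0
print(peak_exposure(profile_t, profile_c))    # (10.0, 1.0)
```

In a BE study these metrics are computed per subject for both the test and the reference product before any statistical comparison.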
The idea of BE and the assumptions and concepts underlying it have been discussed, developed and agreed upon by the pharmaceutical industry and national regulatory authorities worldwide for more than 30 years. As a result of BE being applied to both new and generic drugs, many high-quality, low-cost generic drugs have been made widely available. Almost all drugs on the market nowadays have, at different manufacturing stages, undergone bioequivalence studies.
It is now recognized that, in order to be approved, generic drugs require the demonstration of bioequivalence to the innovator drug. Although less well known, most innovator products need bioequivalence testing too. At various development stages, new drugs are subjected to pharmacokinetic dose-proportionality studies, as well as interaction studies, all of which apply bioequivalence theory. Since the innovator formulation to be marketed often differs from the formulation used in the safety and efficacy trials, bioequivalence of the marketed formulation to the clinical trial formulation needs to be demonstrated. Thus, many marketed innovator drugs are actually 'generic copies' of the clinical trial formulation (Schall & Endrenyi, 2010).
The investigation of bioequivalence is generally done through a clinical trial in healthy volunteers. If two drugs are bioequivalent, they are taken to be therapeutically equivalent. This assumption, i.e. that bioequivalence conveys therapeutic equivalence, is the 'fundamental bioequivalence assumption' (Chow & Liu, 2008). The most important question to be addressed is whether bioequivalence truly equals therapeutic equivalence; this will be answered later in this review. The evaluation of BE is a complex issue, however, and progress has been made in recent years in developing simpler yet effective approaches to the assessment of BE.
History Of Bioequivalence
Throughout the last four decades, bioavailability and bioequivalence concepts have increasingly been applied to generic drugs and, though this is not widely known, to new innovator drugs as well. During this period, regulatory requirements and guidelines for the approval of generic drug products have been devised by governing authorities. Subsequently, BA and BE have been recognized as the foundations for innovator and generic drug authorization worldwide and have been used to cut the cost of development of innovator drugs (Midha & McKay, 2009). The scientific community, the pharmaceutical industry and regulatory authorities are continuing their efforts to understand and formulate more efficient as well as scientifically valid approaches for the evaluation of BE of various dosage formulations.
In the late 1960s, the concepts of BA and BE were brought to attention owing to fears that a generic drug might not be as bioavailable as the innovator drug. The concern was based on clinical observations plus the newfound capability to quantify minute quantities of drug in biological fluids. Public concern and ongoing discussion about bioequivalence started with reports of digoxin intoxications. At the time, generic digoxin formulations were increasingly prescribed in the United States, and a change in the manufacturing process of a company in Great Britain led to an unintentional increase in the bioavailability of one brand of digoxin tablets (Schulz & Steinijans, 1992).
It became clear that drugs that are pharmaceutically equivalent, that is, drugs that contain the same active ingredients in the same dose, are not necessarily bioequivalent. This started a period of four decades of vigorous research and development in BA and BE. It also initiated the development of the present regulatory requirements for authorization of generic drug products (Chow & Liu, 2008).
Over the last 40 years, the theory of bioequivalence and the methodologies for its evaluation were formulated in various stages. The United States Food and Drug Administration (FDA) started to pay attention to the bioavailability of new drugs in the early 1970s. It was during this time that the Office of Technology Assessment (OTA) formed a drug bioequivalence study panel in order to understand the relationships between chemical and therapeutic equivalence of drug products. The FDA formulated rules and guidelines for bioequivalence studies based on the suggestions made by this panel (Chow & Liu, 2008).
In the early 1980s, the FDA became interested in addressing the proper statistical methods for evaluating BE. The FDA considered several approaches to the analysis and statistical handling of the data, encompassing the power approach (testing of the bioequivalence hypotheses) (Anderson & Hauck, 1983), the confidence interval approach, the 75/75 rule (Schuirmann, 1987), and the Bayesian approach (Rodda & Davis, 1980).
In 1984, the Drug Price Competition and Patent Term Restoration Act of 1984 was passed by the United States Congress, giving the FDA the responsibility to approve or disapprove generic drug products on the basis of BA and BE studies. Consequently, the FDA commenced several actions for the review and subsequent approval of generic drug submissions. From 1984 to 1992, a series of BA/BE guidelines were published by the FDA (Chow & Liu, 2008). These guidelines assisted the pharmaceutical industry in conducting BA/BE studies.
In 1986, as more and more generic drugs were marketed, there were concerns about the efficacy and safety of generic drugs that had been licensed under existing approaches for evaluating BE (Chow & Liu, 2008). To address these concerns, the FDA conducted a hearing on the BE of oral dosage forms. Subsequently, a task force was formed to scrutinize the existing methodologies and statistical approaches applied to assess BE.
Around 1990, the new concept of individual bioequivalence was formulated by two US biostatisticians, Anderson and Hauck, and generated vigorous enquiry and discussion (Anderson & Hauck, 1990). They stated that the customary way of assessing bioequivalence confirmed merely that the bioavailability of two drug products was similar on average, and questioned whether average bioequivalence of test and reference drugs indicates interchangeability of these drugs in individual patients (Anderson & Hauck, 1990).
Subsequently, several statistical methods for individual bioequivalence were published. In 2001, the FDA adopted the individual bioequivalence concept in its guidance (FDA, 2001). However, the guidance was met with skepticism and hesitance. Stakeholders questioned whether the added criteria really delivered additional value compared with the regular and already established concept of average bioequivalence (Hauschke, 2007). More important, and rather enlightening, was the fact that no proof was available of any clinical catastrophe with a formulation proven bioequivalent under the average bioequivalence concept (Barrett et al., 2000).
This brings us back to the critical question mentioned above: whether bioequivalence implies therapeutic equivalence. A literature review done by Gould in response to this problem concluded that no evidence could be found of therapeutic misfortune with properly manufactured products proven bioequivalent (Gould, 2000). Thus, the fundamental bioequivalence assumption has survived vigorous criticism and is here to stay. It seems that the concept of average bioequivalence works and has served and satisfied many parties. Two years later, the individual bioequivalence concept was omitted from a subsequent guidance and replaced with conventional average bioequivalence (FDA, 2003).
In Malaysia, BE studies were first highlighted at the 92nd Meeting of the Drug Control Authority, where the committee decided to include bioequivalence (BE) studies as one of the requirements for certain categories of oral immediate-release drugs. This was prompted by some therapeutic misfortunes in the past involving digoxin, phenytoin and primidone, to name a few, which attest to the need for BE studies (Ministry Of Health, 2000).
In September 1999, the Working Committee for BE Studies was formed. The Committee comprised representatives from the University of Malaya (UM), Universiti Sains Malaysia (USM), Universiti Kebangsaan Malaysia (UKM), International Medical University (IMU), National Pharmaceutical Control Bureau (NPCB) and the pharmaceutical industry. Its objective was to formulate an action plan for implementing BE studies in Malaysia. In 2000, the committee published the 'Malaysian Guidelines for the Conduct of Bioavailability and Bioequivalence Studies' (Ministry Of Health, 2000).
Since then, remarkable progress has been made by academia, industry, and regulatory authorities in this area. Currently, BE studies of drug products have been largely standardized, a result of discussion and agreement reached among stakeholders at frequently held international scientific meetings, conferences, and workshops.
Evaluation Of Bioequivalence
The evaluation of the BE of different drugs is based on the basic assumption that, when administered at an equal molar dose of the therapeutic ingredient under identical experimental settings, two drugs are equivalent when both the rate and extent of absorption of the test drug and of the reference drug do not show a significant difference. Essentially, to ensure equivalence, the key pharmacokinetic parameters which reflect the rate and extent of absorption of both the test and reference drug must fall within a predetermined confidence interval.
The regulatory authorities state that a drug is therapeutically equivalent to the innovator drug if it is pharmaceutically equivalent and bioequivalent. Pharmaceutical equivalents are drugs that have the identical active ingredient, dosage form, strength and route of administration. Therapeutically equivalent drugs can be prescribed and used interchangeably. Therefore, BE studies can be considered substitutes for comparative clinical trials in assessing the therapeutic equivalence of two drugs. Bioequivalence studies are normally done using the following endpoints:
Pharmacokinetic endpoint
Pharmacodynamic endpoint
Clinical endpoint
In vitro endpoint
The pharmacokinetic endpoint is preferred for drugs whose level in an easily accessible biological fluid can be quantified and is correlated with the clinical effect, as is the case for most drugs. If this method is not possible, the other endpoints can be used as substitutes. For some drugs, several BE studies with different endpoints may be needed.
General Considerations for Bioequivalence Studies
The aim of developing a proper study design and managing the conduct of the study is to obtain the highest quality samples. This is important since any subsequent analysis will be faulty and potentially worthless if the study design is inappropriate. Thus, a great deal of attention should be paid to properly calculating the required sample size to ensure sufficient statistical power. It is also important to select suitable inclusion and exclusion criteria and to comply with them when enrolling subjects. Satisfactory control of experimental conditions and good clinical practice must be strictly adhered to. The appropriate overall design (simple two-period crossover, replicate design or parallel design) must be planned and stated clearly in the study protocol.
The method of bioanalysis must be fully validated. The application of validated methodology guarantees that the appropriate analyte is quantified in order to prove bioequivalence. Usually, the parent drug is the analyte of choice. However, quantifying a metabolite, or the parent and metabolite(s), may be needed in the following cases (Midha, Rawson, & Hubbard, 2004):
(a) when the parent drug is rapidly and extensively metabolized
(b) the metabolite is more highly correlated to therapeutic efficacy than the parent, or
(c) both the parent and metabolite(s) are responsible for therapeutic effect.
Many scientific and regulatory guidelines are available and updated through the collaborative efforts of industry, academia, and regulatory bodies.
Bioequivalence Metrics and Data Treatment
The most common data treatment involves analysis of variance (ANOVA). Geometric mean ratios of log-transformed data are examined to test the hypothesis that the 90% confidence intervals for the extent of exposure (total exposure: the area under the plasma concentration-time curve from predose to the last measurable time point, AUC0-t, and extrapolated to infinity, AUC0-inf) and the maximum concentration (peak exposure: Cmax, the single point estimate of the maximum observed concentration on the plasma concentration-time curve) fall within the accepted limits of 80-125%.
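The confidence-interval criterion just described can be sketched as follows. This is a simplified paired analysis of log-transformed data that ignores the period and sequence effects a full crossover ANOVA would model; the t critical value is assumed to come from tables:

```python
import math

# 90% confidence interval for the geometric mean ratio (test/reference)
# of AUC or Cmax from log-transformed data, using a simple paired
# analysis. A full crossover ANOVA would also model period and sequence
# effects; this sketch only illustrates the CI criterion itself.

def gmr_90ci(test, ref, t_crit):
    """test, ref: per-subject parameter values; t_crit: t value for a
    90% CI with n-1 degrees of freedom (taken from tables)."""
    d = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    se = math.sqrt(var / n)
    lo, hi = math.exp(mean - t_crit * se), math.exp(mean + t_crit * se)
    return math.exp(mean), lo, hi

# Hypothetical AUC values for 5 subjects; t_crit = 2.132 for 4 df.
gmr, lo, hi = gmr_90ci([2.1, 3.2, 4.0, 5.3, 6.1],
                       [2.0, 3.0, 4.2, 5.0, 6.0], t_crit=2.132)
bioequivalent = (lo >= 0.80) and (hi <= 1.25)
```

The decision rule is simply whether the whole interval (lo, hi) lies inside 0.80-1.25.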
In two published reports of FDA-archived studies, the mean values for the key parameters AUC and Cmax did not differ by more than 4% (11,12). More recently, other data treatments have become popular, especially with specialized dosage forms, with drugs that are highly variable, with drugs having a long terminal half-life, and/or with drugs whose time to Cmax is considered important (e.g., certain pain medications). These other data treatments include partial area measurements and exposure metrics such as Cmax/AUC. In all of these cases, the goal has been to err on the side of protecting the consumer, while at times increasing risk to the manufacturer.
Hence, over the last 10-15 years, considerable debate has occurred about the fundamental scientific rationale used to establish bioequivalence for some of these 'special' cases. Although at times this debate may have seemed overly specific to singular drugs or drug products, it has resulted in excellent fundamental research into the broader issues surrounding BE, extending our understanding of therapeutic equivalence. This debate has also offered a platform for global interaction on the issues surrounding bioequivalence and the greater issues associated with harmonization of drug equivalence approaches on a global scale, including the choice of comparator (reference/brand) and windows of acceptance.
Relevant Statistical Considerations
Study Power
The conduct of a study that can truly attest to the bioequivalence of two drug products requires some prior knowledge of the performance of the products in the human body, so that an appropriate number of test subjects can be enrolled and provide adequate power (i.e., at least 80%) to conclude that the two products are indeed bioequivalent. Note that bioequivalence testing reverses the usual roles of the hypotheses: the null hypothesis is that the products differ by more than the acceptance limits, and it is rejection of this null hypothesis that leads to the conclusion of bioequivalence. The two criteria considered most important to understand are the inherent variability of the drug and the geometric mean ratio between the test and reference product. Both of these parameters can be determined through the conduct of a pilot study of 6-12 subjects. It should be borne in mind that such determinations will likely overestimate the number of subjects actually required for the final pivotal study. Such overestimation is usually more tolerable than the counter possibility of undersizing the study, for obvious reasons.
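As a rough illustration of how variability and the expected geometric mean ratio drive the required sample size, the following normal-approximation sketch can be used; real study planning iterates with the t distribution, so treat the result as a ballpark figure only:

```python
import math
from statistics import NormalDist

# Approximate total sample size for a 2x2 crossover bioequivalence study
# (80-125% limits on the log scale) via a normal approximation. Inputs:
# within-subject CV, expected GMR, significance level and target power.

def be_sample_size(cv_w, gmr=0.95, alpha=0.05, power=0.80):
    s_w = math.sqrt(math.log(cv_w ** 2 + 1.0))   # within-subject SD (log scale)
    delta = math.log(1.25) - abs(math.log(gmr))  # distance to the nearer limit
    z_a = NormalDist().inv_cdf(1 - alpha)
    # With gmr == 1 the CI can fail at either limit, hence beta/2.
    z_b = NormalDist().inv_cdf(1 - (1 - power) / (2 if gmr == 1 else 1))
    n = 2.0 * s_w ** 2 * (z_a + z_b) ** 2 / delta ** 2
    n = math.ceil(n)
    return n + (n % 2)  # round up to an even total for a 2-sequence design

print(be_sample_size(cv_w=0.25))  # 26
```

A t-based calculation would give a slightly larger number, which is one reason pilot-study estimates tend to be conservative.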
75/75 Rule
Considerable debate has ensued over the past 20 years related to statistical testing and issues of bioequivalence. Some of this debate is still ongoing and will be expanded upon later in this article when we discuss highly variable drugs. The original consideration, namely how similar the relative bioavailability of two formulations needs to be before there is cause for concern, was based on relative medical risk assessment.
In this regard, the biomedical community felt that unless there was greater than a 20-25% change in the biological system, it would not pose a significant clinical risk that would invalidate the use of one therapeutic strategy versus another (13). This formed the basis for the 75/75 rule, which stated that two formulations are equivalent if and only if at least 75% of the individuals tested had ratios (of the various pharmacokinetic parameters obtained from the individual results) between the 75% and 125% limits, and the study conducted had the statistical power to detect a 20% difference between the two formulations. This statistical treatment of the data was really the first application in which individual bioequivalence was being tested. The 75/75 rule lost most of its appeal when it was noted that the test and reference formulations each have their own variability; a confidence interval approach was therefore more appropriate, so that some consideration could be given to the differential variability between the test and reference products. Application of the 75/75 rule provides a greater probability of acceptance when the coefficient of variation for the reference product is smaller than that for the test product.
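The 75/75 rule as just described can be expressed directly; a sketch with hypothetical individual ratios:

```python
# The 75/75 rule: the formulations pass if at least 75% of individual
# test/reference ratios fall within the 75-125% limits. Illustrative
# only; the rule has largely been superseded by CI-based approaches.

def passes_75_75(individual_ratios):
    within = [r for r in individual_ratios if 0.75 <= r <= 1.25]
    return len(within) / len(individual_ratios) >= 0.75

# Hypothetical per-subject ratios; one subject (1.31) falls outside.
print(passes_75_75([0.82, 0.95, 1.04, 1.18, 1.31, 0.99, 1.10, 0.88]))  # True
```

Note the rule says nothing about the variability of either formulation, which is precisely the weakness described above.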
90% Confidence Interval
In July 1992, the guidance on "Statistical Procedures for Bioequivalence Studies Using a Standard Two-treatment Crossover Design" was released by the FDA. In this document, the recommended statistical approach was based on average bioequivalence (ABE), wherein the average values of the pharmacokinetic parameters are determined for the test and reference products and compared using a 90% confidence interval for the ratio of the averages, via the two one-sided tests procedure (6). To establish bioequivalence, the calculated confidence interval should fall within a BE limit of 80-125% using logarithm-transformed data (adopted since the concentration parameters Cmax and AUC may or may not be normally distributed). Again, this limit was based on the clinical judgment that a test product with bioavailability measures outside this range should be denied market access, since it would pose an unacceptable risk to the consumer. The concept of using a confidence interval approach was based on the fact that if the ratios of the two parameters of clinical interest (AUC, Cmax, etc.) are to be compared, each with its own variability that may or may not be randomly distributed, then such a comparison can only truly be made through a confidence interval approach.
SOME SPECIAL DRUG CLASSES, DOSAGE FORMS
Narrow Therapeutic Index Drugs
Scientists and regulatory agencies throughout the world have certainly recognized that the primary rules to establish bioequivalence for the majority of drug products will not work (or at least have significant shortcomings) with some special drug classes or dosage forms. Drugs which carry considerable consumer risk, such as narrow therapeutic index or critical dose drugs, are clear examples where the fundamental biomedical basis used to establish the cornerstone of bioequivalence decision making, '... that unless there was greater than a 20-25% change in the biological system it would really not pose a significant risk ...', may not hold. Perhaps tighter restrictions on these drugs would aid in the establishment of truly bioequivalent drug products within this class. For example, recent discussion around the equivalency of antiepileptic drugs (AEDs) has created a resurgence of interest in examining individual bioequivalence (IBE) as a measure of product switchability, since IBE does consider the within-subject variability of both the test and reference products using the replicate study design (14). The lack of concordance between ABE and IBE with such data requires further research into developing appropriate approaches such as IBE.
The traditional 2-formulation, 2-period, 2-sequence crossover design is used in conventional average bioequivalence (ABE). This approach gives no information about the within-subject variances associated with the test and reference products. For drug products that demonstrate low within-subject variability and low between-subject variance, the traditional ABE approach works relatively well. However, when one is concerned with drug product performance within a given subject (drug interchangeability), such as that encountered with AEDs, a more relevant test criterion is individual bioequivalence (IBE), based on the concept of a distance ratio (i.e., the ratio of the difference between test and reference to the difference between reference and reference). This concept includes consideration of the variance associated with each formulation. It was quickly shown, however, that when the variance is equal for both formulations, there is no additional risk in accepting an inferior test formulation (15). However, when the variance for the reference formulation is smaller than for the test formulation, this risk is increased, as a larger difference between the test and the reference product would still show BE. In response to this, a scaled ABE (sABE) approach was presented that provides an objective means of evaluating the variance of the reference formulation and a convenient method of examining the ABE between two formulations with widely disparate variances (16). This approach to evaluating BE is appropriate and is becoming increasingly acceptable for those special drugs that are inherently highly variable.
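The aggregate IBE criterion can be sketched as follows, based on the form published in the FDA's 2001 guidance (since withdrawn); the variance components and the numeric values shown here are purely illustrative and would in practice be estimated from a replicate-design study:

```python
import math

# Sketch of the aggregate individual bioequivalence (IBE) criterion.
# It combines the mean log difference, the subject-by-formulation
# interaction variance, and the within-subject variances of test and
# reference, scaled by the reference (or a regulatory) variance.

def ibe_criterion(delta, var_d, var_wt, var_wr, sigma_w0=0.2):
    """delta: mean log difference (test - reference); var_d: subject-by-
    formulation interaction variance; var_wt/var_wr: within-subject
    variances. Reference-scaled when var_wr > sigma_w0**2."""
    numerator = delta ** 2 + var_d + var_wt - var_wr
    return numerator / max(var_wr, sigma_w0 ** 2)

theta_i = (math.log(1.25) ** 2 + 0.05) / 0.2 ** 2   # regulatory bound, ~2.49
value = ibe_criterion(delta=0.05, var_d=0.01, var_wt=0.06, var_wr=0.05)
print(value <= theta_i)  # True
```

The point of the criterion is visible in the numerator: a test product is penalised both for a mean shift and for being more variable within subjects than the reference.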
Narrow therapeutic index drugs and narrowing the bioequivalence acceptance range
The mirror image of highly variable drugs with a wide therapeutic index is narrow therapeutic-index drugs, whose variability typically is low. If it is reasonable to widen the bioequivalence acceptance range for highly variable drugs with a wide therapeutic index, it seems equally reasonable to narrow the bioequivalence acceptance range for drugs with low variability and a narrow therapeutic index. Such a narrowing of the bioequivalence acceptance range for narrow therapeutic-index drugs could increase assurance, particularly of the safety of generics in this drug class, without imposing an undue burden, financial or otherwise, on the sponsors of bioequivalence studies for such drugs. Indeed, the new European bioequivalence guideline (21) envisages that 'in specific cases of products with narrow therapeutic index, the acceptance interval for AUC should be tightened to 90-111.11%'.
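The two acceptance ranges mentioned here can be captured in a small helper; note that 111.11% is simply 1/0.90, which makes the narrowed interval symmetric on the log scale, just as 125% is 1/0.80:

```python
# Acceptance limits for the 90% CI of the geometric mean ratio: the
# conventional 80-125% range for most drugs, and the tightened
# 90-111.11% interval for narrow therapeutic index drugs envisaged by
# the European guideline. Illustrative sketch.

def acceptance_limits(narrow_therapeutic_index=False):
    if narrow_therapeutic_index:
        return 0.90, 1 / 0.90   # 90.00-111.11%, symmetric on the log scale
    return 0.80, 1.25           # conventional 80.00-125.00%

lo, hi = acceptance_limits(narrow_therapeutic_index=True)
print(round(hi, 4))  # 1.1111
```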
Highly Variable Drugs and Drug Products
This class of drugs and drug products illustrates the influence of the intrinsic variability of a drug and drug product on the assessment of bioequivalence. The question arises of how best to assess the bioequivalence of such drug products. Very large numbers of volunteers are often required to achieve sufficient power to document bioequivalence, which increases the producer's risk in terms of cost and involves unnecessary human testing without a concomitant decrease in the consumer's risk.
In fact, the high variability of this class of drugs, together with their inherently safe use within the population, is generally explained by the fact that such drugs tend to have an extremely wide therapeutic window. As a consequence, the consumer risk is very low, and it has been proposed that such drugs should have a wider window of acceptance in order to allow conventional approaches to bioequivalence testing to remain applicable. However, as consensus on such approaches could not be achieved for various reasons, these concepts were initially abandoned; discussions consistently ended with regulatory opinion rejecting the idea of simply widening the window of acceptance, while now agreeing that an approach based on scaling of ABE may be more reasonable, since it relies on the performance of the marketed formulation to help set the acceptance window. This approach also stipulates that the point estimate of Cmax must fall within a geometric mean ratio between the test and reference product of 0.8-1.25 (17).
Highly variable drugs and widening the bioequivalence acceptance range, or scaling
What about the concerns that the producers of drugs might have with the bioequivalence concept? It is well known that the conventional approach of average bioequivalence, with an 80-125% acceptance range, can make it very difficult to show bioequivalence for drug products with highly variable bioavailability (so-called 'highly variable drugs' or drug products). For such drugs, sample sizes of 100 subjects and higher can be required to demonstrate bioequivalence. 'A feature of the difficulties involving the determination of bioequivalence of highly variable drugs is that, under typical conditions, a drug product may not be found bioequivalent to itself' (17). This is clearly unsatisfactory, in particular to producers of drug products who face inordinate costs when conducting bioequivalence studies for highly variable drugs. A potential solution to the problem of highly variable drugs is suggested by the observation that most highly variable drugs have a wide therapeutic index. If such a drug indeed has a wide therapeutic index, it should be clinically acceptable to widen the bioequivalence acceptance range for it. Various ways of appropriately widening the acceptance range for highly variable/wide therapeutic-index drugs have recently been discussed and investigated (17,18). The approach that currently seems to be favoured by FDA scientists and researchers in the field is that of scaled average bioequivalence (17,18). Without going into the methodological and statistical details, the scaled average bioequivalence concept effectively widens the conventional acceptance range for bioequivalence, namely 80-125%, proportionally to the within-subject standard deviation of the bioavailability of the reference product. Therefore, the more variable the bioavailability of the reference drug product, the wider the effective acceptance range for bioequivalence. Interestingly, the basic concept of scaling the bioequivalence criterion had already been proposed early in the development of characteristics for individual bioequivalence (8,19,20), so that the scaled average bioequivalence concept can be viewed as another by-product of the research into individual bioequivalence. While there still are some problems with the scaled average bioequivalence concept (17), at present it seems to be the most promising and practical approach for handling the problem of highly variable drugs in bioequivalence.
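A sketch of the EMA-style widening: when the within-subject CV of the reference product exceeds 30%, the Cmax acceptance limits scale with the within-subject standard deviation on the log scale, with the widening capped at the values corresponding to a CV of 50%:

```python
import math

# EMA-style scaled acceptance limits for Cmax of highly variable drugs:
# for CVwR > 30% the limits are exp(+/- 0.760 * s_wR), capped at
# 69.84-143.19% (i.e., no further widening beyond CVwR = 50%).
# Illustrative sketch of the scaling idea only.

def scaled_limits(cv_wr):
    if cv_wr <= 0.30:
        return 0.80, 1.25                       # conventional range
    s_wr = math.sqrt(math.log(cv_wr ** 2 + 1))  # within-subject SD, log scale
    lo = max(math.exp(-0.760 * s_wr), 0.6984)
    hi = min(math.exp(0.760 * s_wr), 1.4319)
    return lo, hi

print(scaled_limits(0.25))  # (0.8, 1.25)
print(scaled_limits(0.40))  # roughly (0.75, 1.34)
print(scaled_limits(0.60))  # capped at (0.6984, 1.4319)
```

This makes the trade-off explicit: the more variable the reference product, the wider the effective acceptance range, up to the cap.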
Other Drug Products
Some examples of other special formulations that have been given scientific and regulatory consideration are those involving chiral drugs, drugs that are present as endogenous entities, drugs exhibiting polymorphic metabolism, poorly absorbed or nonabsorbed drugs, antiepileptic drugs, and cytotoxic drugs. Each of these presents unique difficulties when it comes to establishing the study design, the proper analyte or marker for measurement, and the statistical testing for truly establishing bioequivalence and hence therapeutic equivalence. Specific interest groups involving international scientists have debated and discussed many of these issues. Most notably, FIP formed a scientific interest group on bioavailability and bioequivalence in 1994 to serve as a global platform for such discussion. Readers are directed to the Bio-International Conferences held to discuss and resolve complex issues in BA/BE in 1989, 1992, 1994, 1996, 1999, 2001, 2003, 2005, and 2008. In addition, regulatory science workshops (see http://www.fip.nl/www/index.php?page=pharmacy_sciences&pharmacy_sciences=sciences_bioavail) have been hosted by FIP in cooperation with AAPS, aimed at providing solid knowledge and competences to participants from the global community in relation to BA/BE.
In addition to the special drug classes mentioned above, we also note that certain dosage forms present unique challenges for the establishment of bioequivalence. Some of these include drugs administered transdermally, respiratory drug products, pulsatile delivery systems, complex intravenous systems, drugs delivered by noninvasive routes, and biotechnology drug products that require the introduction of the concept of biosimilarity as opposed to bioequivalence.
There are other issues that need resolution when pharmacokinetic endpoints are used in BE studies. Some of these issues are:
Bioequivalence of a very poorly formulated brand product: when redosing from the same batch of the drug product does not demonstrate bioequivalence.
Similarity between plasma concentration versus time profiles: single peak versus multiple peaks, and immediate-release plus modified-release combination formulations.
Should TMAX for the generic and brand products be the same or similar, particularly when the drug is intended for chronic use over a considerable length of time?
Should a food-effect bioequivalence study be mandatory for all drug products to assess potential dose dumping or reduced bioavailability?
What is the appropriate study population (healthy subjects or a patient population), and what are the effects of gender, age, and ethnicity as they relate to BE studies with food?
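Underlying all of these questions is the standard (unscaled) average bioequivalence test: two one-sided tests at the 5% level, equivalently a 90% confidence interval for the mean test-minus-reference difference of the log-transformed pharmacokinetic parameter, which must fall entirely within ln(0.80) to ln(1.25). The sketch below is a simplified paired-difference version; the subject data and the t critical value are illustrative assumptions, not taken from any guideline or study:

```python
import math
from statistics import mean, stdev

def average_be(log_ratio_diffs, t_crit):
    """90% CI check for average bioequivalence (TOST / CI form).

    log_ratio_diffs: within-subject log(AUC_T) - log(AUC_R) differences
    (a simplification of the usual crossover ANOVA). BE is concluded
    if the whole 90% CI lies within ln(0.80)..ln(1.25). t_crit is the
    upper 5% point of the t distribution with n-1 degrees of freedom,
    supplied by the caller (e.g. from scipy.stats.t.ppf or a table).
    """
    n = len(log_ratio_diffs)
    m = mean(log_ratio_diffs)
    se = stdev(log_ratio_diffs) / math.sqrt(n)
    lo, hi = m - t_crit * se, m + t_crit * se
    gmr = math.exp(m)                      # geometric mean ratio T/R
    ok = math.log(0.80) <= lo and hi <= math.log(1.25)
    return gmr, (math.exp(lo), math.exp(hi)), ok

# Hypothetical differences for 12 subjects; t(0.95, df=11) ~ 1.796.
diffs = [0.05, -0.02, 0.10, 0.04, -0.06, 0.08,
         0.01, 0.03, -0.04, 0.07, 0.02, 0.00]
gmr, ci, ok = average_be(diffs, t_crit=1.796)
```

Because the decision rests on the confidence interval rather than a point estimate, a product whose geometric mean ratio is close to 1 can still fail if its variability is high, which is precisely why highly variable drugs motivated the scaled approach discussed earlier.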
Harmonizing the Approaches to Bioequivalence Assessment
Harmonization can be understood as a collaborative effort by different regions to reach mutual agreement and eliminate differences and inconsistencies by adopting the same uniform standards or systems. In BE assessment, this can be done by developing one uniform guideline to be implemented across regions. Among the major benefits of harmonization is that drugs can be manufactured and marketed in different parts of the world while meeting the same standards of quality, safety, and efficacy. Other benefits include preventing the repetition of clinical trials and animal testing while optimizing safety and effectiveness, strengthening the regulatory assessment process for new drug applications, and minimizing the cost of drug development (ICH, 2012). Therefore, by reaching a consensus among the regulatory bodies of different regions, all parties, users as well as producers, stand to gain.
Substantial advances toward harmonization have been made at the regional level over the years. Regulatory authorities of different countries have restructured their own regulatory requirements while collaborating with each other to harmonize them. Some organizations, such as the International Conference on Harmonisation (ICH), a conglomerate of regulatory authorities from the USA, Europe, and Japan, have made remarkable efforts toward harmonization. Conceived in 1990, ICH's mission is to make recommendations toward harmonisation in the implementation of technical guidelines and requirements for pharmaceutical product registration, thereby avoiding duplication of clinical trials and experimentation in the development of new human medicines (ICH, 2012). Other national and international organizations involved in this effort include:
World Health Organization
Global Harmonization Task Force
European Agency for the Evaluation of Medicinal Products (EMEA)
Therapeutic Goods Administration (Australia)
Association of Southeast Asian Nations Consultative Committee for Standards and Quality
Generic drug usage has been one of the strategies to alleviate the burden of healthcare costs. Owing to reported cases of adverse effects caused by 'bio-inequivalent' generic drugs, the concept of BE emerged and has been implemented by the pharmaceutical industry and regulatory authorities for over 30 years. BE has become one of the prerequisites for approval, since it ensures interchangeability or therapeutic equivalence. Since then, thousands of generic drugs have been manufactured in accordance with BE requirements. The fundamental bioequivalence assumption, i.e. that bioequivalence infers therapeutic equivalence, has survived much scrutiny and criticism: no evidence of therapeutic inequivalence has been found for properly manufactured drugs with proven bioequivalence. All the involved parties, namely regulatory authorities, pharmaceutical industries, and academia, are continuing their efforts to devise more efficient and scientifically valid bioequivalence testing methods.