Study On Antimalarial Drug Resistance Biology Essay


usually recommended but within the limits of tolerance of the subject. The definition was slightly modified decades later to reflect the fact that the drug must actually reach the parasite or infected erythrocyte and act there for the duration required for normal drug action. Drug resistance has been attributed to the practice of repeatedly taking sub-curative doses of an antimalarial drug, which eliminates only the most sensitive parasites in the blood and allows resistant parasites to propagate. Drugs with long half-lives, such as CQ, have an extended elimination period, during which malaria parasites are exposed to sub-therapeutic drug concentrations in the blood. The problem of sub-curative dosing is compounded by the availability of cheap drugs in many countries, which can undermine adherence to regimens (because of side-effects) and favour the emergence of drug resistance. A small number of parasites always survive treatment, but host immunity can usually clear the residual infection. Factors that decrease the effectiveness of the immune system, however, increase the survival of these parasites and so contribute to resistance. In certain malarious areas (e.g. Southeast Asia), parasites repeatedly cycle through non-immune populations, causing significant morbidity and intensifying resistance. In addition, a synergistic effect has been reported between P. falciparum and certain Anopheles spp. that can confer a biological advantage on resistant parasites (Bloland, 2001).

The genetic events that lead to resistance to an antimalarial drug are usually spontaneous mutations or changes in the copy number of genes related to the parasite's drug target (White, 2004). These events confer reduced sensitivity to a particular drug or to a whole drug class. Over time, resistance becomes stable in the population and can persist even after drug pressure is removed. Among the species causing human malaria, drug resistance has been reported and characterized most extensively in P. falciparum, although resistance to antimalarials has also been documented in P. malariae and P. vivax. In P. falciparum, resistance has been observed to all currently used antimalarials, including the artemisinin derivatives, and the geographical distributions and rates of spread have varied considerably (Fig. 1.4). P. vivax has developed resistance rapidly to SP in many areas, while resistance to CQ is confined largely to Indonesia, Papua New Guinea, Timor-Leste and other parts of Oceania. There are also reports of CQ resistance from Brazil, Peru, India, and Africa (Fig. 1.4). However, P. vivax remains sensitive to CQ in most of South-East Asia, the Indian subcontinent, the Korean peninsula, the Middle East, north-east Africa, and most of South and Central America (World Health Organization, 2010).

In response to the increasing burden of malaria caused by P. falciparum resistance to the standard antimalarial medicines, the World Health Organization (WHO) recommended the use of combination therapies, ideally those containing artemisinin derivatives, in countries where P. falciparum malaria is resistant to the conventional antimalarial medicines chloroquine, SP, and amodiaquine. Unfortunately, even the artemisinin derivatives, the only drugs that had remained fully effective against P. falciparum until very recently, seem to be losing their efficacy along the border between Cambodia and Thailand (Lim et al., 2009; Noedl et al., 2009).

Assessment of drug resistance monitoring

Drug surveillance is necessary to ensure correct management of clinical cases and early detection of changing patterns of resistance, so that national treatment policies remain effective (W.H.O, 2003). Three approaches have been used to evaluate the efficacy of an antimalarial drug: clinical in vivo studies (also known as therapeutic efficacy testing), in vitro susceptibility testing, and, more recently, molecular markers. In discussing these different approaches, it is fundamental to distinguish intrinsic parasite resistance from decreased clinical efficacy. The term resistance refers to the failure of a drug to prevent parasite growth in culture, at defined drug concentrations, in the absence of the host immune response. Alterations in efficacy are detected through clinical in vivo studies, in which intrinsic parasite susceptibility is one of several factors that determine the outcome (Laufer, 2009).

In vivo measures of drug resistance

The therapeutic efficacy test remains the "gold standard" method for detecting drug resistance (W.H.O, 2003). These tests reveal the actual biological nature of the treatment response, which involves a complex interaction between the drugs, the parasites, and the host response (i.e. the therapeutic response of currently circulating parasites infecting the population in which the drug will be used), whereas in vitro tests measure only the interaction between the parasites and the drugs (Talisuna et al., 2004). In vivo tests involve treating symptomatic P. falciparum-infected patients with a standard dose of an antimalarial drug and following clinical and parasitological outcomes over a fixed period. The WHO developed a scheme for estimating the degree of antimalarial drug resistance, which involves studying patient parasitemia over 28 days. The in vivo response to drugs was originally defined by WHO in terms of parasite clearance: sensitive (S) and three degrees of resistance (RI, RII, RIII). Blood smears were taken on days 2, 7 and 28 after initiation of antimalarial treatment to grade the resistance as RI-RIII. A sensitive (S) response was a reduction of initial parasitemia by ≥75% on day 2, with smears negative for malaria parasites from day 7 to day 28 (the end of follow-up, which could be longer for drugs with longer half-lives, such as mefloquine). An RI response was initial clearance of parasitemia, with a negative smear on day 7 but recrudescence on day 8 or later. An RII response was initial clearance or substantial reduction of parasitemia (<25% of the initial count on day 2), but with persistence or recrudescence of parasitemia during days 4-7 after treatment. An RIII response was no significant reduction of parasitemia throughout the 28 days after treatment.
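The original S/RI-RIII grading is essentially a small set of rules over the day-2 parasitemia reduction and the subsequent smear results, and can be sketched in code. This is an illustrative simplification, not a clinical tool: the inputs (a day-2 reduction percentage, a day-7 smear result, and a recrudescence day) are assumed summaries of the follow-up data, and borderline clinical judgements are not modelled.

```python
# Illustrative sketch of the original WHO parasite-clearance grading
# (S, RI, RII, RIII). The three summary inputs are simplifying
# assumptions, not part of the WHO scheme itself.

def grade_response(day2_reduction_pct, day7_smear_negative, recrudescence_day):
    """Grade a treatment response on the original WHO S/RI-RIII scale.

    day2_reduction_pct  -- % reduction of initial parasitemia by day 2
    day7_smear_negative -- True if the day-7 blood smear is negative
    recrudescence_day   -- day parasitemia reappeared, or None if it never did
    """
    if day2_reduction_pct < 75 and not day7_smear_negative:
        # No significant reduction of parasitemia: highest-grade resistance
        return "RIII"
    if day7_smear_negative:
        if recrudescence_day is None:
            return "S"    # cleared and stayed negative through day 28
        if recrudescence_day >= 8:
            return "RI"   # initial clearance, recrudescence on day 8 or later
    # Substantial early reduction, but parasitemia persists or
    # recrudesces during days 4-7
    return "RII"

print(grade_response(90, True, None))   # sensitive
print(grade_response(95, True, 14))     # RI
print(grade_response(80, False, 6))     # RII
print(grade_response(20, False, None))  # RIII
```

The rules are checked from most to least severe, mirroring the order in which the grades exclude one another.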
This classification scheme remains valid for areas with low or no malaria transmission, but it is difficult to apply in areas with intense transmission, because new infections can be mistaken for recrudescence (which can also occur after 28 days). Other drawbacks of this method included the fact that RII was too broad a category, the practical difficulty of following a patient for 28 days, and the intermittent nature of parasitemia in the blood of infected patients. The WHO therefore introduced a modified protocol in 1996, based on clinical outcome, aimed at a practical assessment of therapeutic responses in areas with intense transmission, where parasitemia in the absence of clinical signs or symptoms is common (Roll Back Malaria Partnership and World Health Organization, 2001; Wongsrichanalai et al., 2002). The modified classification established the categories of early treatment failure (ETF: aggravation or persistence of clinical symptoms in the presence of parasitemia during the first 3 days of follow-up), late treatment failure (LTF: reappearance of symptoms in the presence of parasitemia during days 4-14 of follow-up), and adequate clinical response (ACR: absence of parasitemia on day 14 irrespective of fever, or absence of clinical symptoms irrespective of parasitemia, in patients not meeting the ETF or LTF criteria). The WHO has continued to update the therapeutic efficacy protocol for high-transmission areas and to validate the protocol for low-to-moderate transmission areas on the basis of feedback from countries and scientific recommendations.
Recently, the WHO modified the existing protocol so that the same definitions of treatment response apply at all levels of malaria transmission, with slight adjustment of patient inclusion criteria; rescue treatment is administered to patients with parasitological treatment failure at all levels of transmission; 28 or 42 days of follow-up are required as a standard, depending on the medicine tested; and genotyping by PCR is required to distinguish recrudescence from re-infection. The 28-day follow-up is recommended as the minimum standard, allowing national malaria control programmes to capture most failures with most medicines; for mefloquine and piperaquine, the minimum follow-up should be 42 days. There are now fixed definitions of treatment response used in all areas of malaria transmission. ETF has been redefined as: danger signs or severe malaria on day 1, 2 or 3 in the presence of parasitemia; parasitemia on day 2 higher than on day 0, irrespective of axillary temperature; parasitemia on day 3 with axillary temperature ≥ 37.5 °C; or parasitemia on day 3 ≥ 25% of the count on day 0. Late clinical failure (LCF) is defined as: severe malaria in the presence of parasitemia on any day between day 4 and day 28 (or day 42) in patients who did not previously meet any of the ETF criteria; or parasitemia on any day between day 4 and day 28 (or day 42) with axillary temperature ≥ 37.5 °C in patients who did not previously meet any of the ETF criteria. Late parasitological failure (LPF) is defined as the presence of parasitemia on any day between day 7 and day 28 (or day 42) with axillary temperature < 37.5 °C in patients who did not previously meet any of the criteria of ETF or LCF.
Adequate clinical and parasitological response (ACPR) is defined as the absence of parasitemia on day 28 (or day 42), irrespective of axillary temperature, in patients who did not previously meet any of the criteria of ETF, LCF or LPF. These tests provide decision-makers with a simple, readily comprehensible indicator of the efficacy of an antimalarial drug, with reduced requirements for equipment and supplies (W.H.O, 2003).
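The current ETF/LCF/LPF/ACPR definitions can likewise be expressed as a rule cascade over the daily follow-up observations. The sketch below assumes simplified daily records (parasite count, axillary temperature, severe-malaria flag per day); the data structure, the default values for unobserved days, and the 28-day follow-up constant are illustrative assumptions, not part of the WHO protocol.

```python
# Illustrative sketch of the current WHO treatment-outcome classification
# (ETF, LCF, LPF, ACPR). Observations are simplified daily records; the
# dict layout and defaults are assumptions made for this example.

FOLLOW_UP = 28  # 42 for long half-life drugs such as mefloquine or piperaquine

def classify(obs):
    """obs: dict mapping day -> (parasite_count, axillary_temp_C, severe)."""
    day0_count = obs[0][0]

    # Early treatment failure: days 1-3
    for day in (1, 2, 3):
        count, temp, severe = obs.get(day, (0, 36.5, False))
        if count > 0:
            if severe:
                return "ETF"  # danger signs / severe malaria with parasitemia
            if day == 2 and count > day0_count:
                return "ETF"  # parasitemia rising on day 2
            if day == 3 and (temp >= 37.5 or count >= 0.25 * day0_count):
                return "ETF"  # febrile or >= 25% of day-0 count on day 3

    # Late failures: day 4 to the end of follow-up
    for day in range(4, FOLLOW_UP + 1):
        count, temp, severe = obs.get(day, (0, 36.5, False))
        if count > 0 and (severe or temp >= 37.5):
            return "LCF"      # late clinical failure
        if count > 0 and day >= 7:
            return "LPF"      # late parasitological failure (afebrile)

    return "ACPR"  # adequate clinical and parasitological response

obs = {0: (40000, 38.2, False), 2: (1000, 37.0, False), 14: (500, 38.0, False)}
print(classify(obs))  # febrile parasitemia on day 14 -> LCF
```

Because later categories apply only to patients who did not previously meet an earlier one, evaluating the rules in ETF → LCF → LPF order reproduces the exclusion built into the definitions.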

In vitro tests

The in vivo method has allowed the thresholds of treatment failure that are crucial for adjusting antimalarial drug policies to be determined, but it is not sufficient on its own to confirm drug resistance (W.H.O, 2003). To support the evidence of a failing antimalarial, an in vitro test can be used, providing a more accurate measure of drug sensitivity under controlled experimental conditions that remove variables such as patient immune status, re-infection and pharmacokinetics. In vitro tests allow a more objective approach to parasite resistance, since the parasite is placed in direct contact with incremental drug concentrations. Several tests can be carried out with the same sample, and several drugs can be studied at the same time, including drugs that are still at the experimental stage (W.H.O, 2003). Several in vitro tests exist, which differ with respect to the measured effect and the duration of exposure to the test compound. These include microscopic examination of Giemsa-stained blood films in the WHO Mark III test (inhibition of maturation or replication), the radioisotopic test (incorporation of hypoxanthine), and enzyme-linked immunosorbent assays with antibodies directed against Plasmodium lactate dehydrogenase or histidine-rich protein II (Olliaro, 2005). The importance of these tests has become evident with the increasing use of combination therapy, since they can be used to monitor susceptibility to each drug in a combination; it is often impossible to perform in vivo tests for each component, owing to ethical problems, the non-availability of the drugs as monotherapies, and the need to study a large number of patients (Vestergaard and Ringwald, 2007). Although this method is useful, its application is limited: in vitro methods require trained personnel with access to a laboratory capable of culturing malaria parasites.
Even when such facilities are available, it is often difficult to establish cultures, and not all primary parasite isolates adapt to in vitro culture conditions (LeRoux et al., 2009). Moreover, in part because these tests remove host factors, the correlation between the results of in vitro and in vivo tests is not always reliable and is not well understood. In vitro drug sensitivity data may provide early evidence of increasing drug tolerance before parasitological or clinical resistance appears; conversely, the test may give misleading indications if the alterations in sensitivity are so small that they do not result in parasitological or clinical resistance (Hastings et al., 2007). These limitations of in vivo and in vitro methods have led to the search for genetic markers of resistance.
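As a rough illustration of how the dose-response readout from such in vitro tests is reduced to a summary statistic, the sketch below estimates an IC50 (the drug concentration inhibiting parasite growth by 50% relative to a drug-free control) by linear interpolation on a log-concentration scale. The concentrations and growth values are invented for illustration; real analyses typically fit a sigmoidal dose-response model rather than interpolating between points.

```python
# Minimal sketch of IC50 estimation from in vitro dose-response data
# (growth measured by, e.g., HRP2 ELISA or hypoxanthine incorporation).
# All numbers below are hypothetical.

import math

def ic50(concentrations_nM, growth_pct):
    """Interpolate the 50% growth-inhibition point on a log scale."""
    points = list(zip(concentrations_nM, growth_pct))
    for (c1, g1), (c2, g2) in zip(points, points[1:]):
        if g1 >= 50 >= g2:  # bracket the 50% crossing
            frac = (g1 - 50) / (g1 - g2)
            log_c = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_c
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response: growth as % of drug-free control
conc = [10, 25, 50, 100, 200, 400]  # nM
growth = [95, 85, 60, 40, 15, 5]    # %
print(f"IC50 = {ic50(conc, growth):.1f} nM")
```

A rising IC50 across successive surveys of field isolates is the kind of early warning signal of creeping tolerance that the text describes, even before in vivo failures become apparent.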