Predicting Effects of Environmental Contaminants
1.1. Debunking some chemical myths…
In October 2008, the Royal Society of Chemistry announced it was offering £1 million to the first member of the public who could produce a 100% chemical-free material. This attempt to reclaim the word ‘chemical' from the advertising and marketing industries, which use it as a synonym for poison, was a reaction to a decision of the Advertising Standards Authority to defend an advert perpetuating the myth that natural products are chemical free (Edwards 2008). Indeed, no material, regardless of its origin, is chemical free. A related common misconception is that chemicals made by nature are intrinsically good and, conversely, those manufactured by man are bad (Ottoboni 1991). There are many examples of toxic compounds produced by algae or other micro-organisms, venomous animals and plants, and even examples of environmental harm resulting from the presence of relatively benign natural compounds either in unexpected places or in unexpected quantities. It is therefore of prime importance to define what is meant by ‘chemical' when referring to chemical hazards in this chapter and the rest of this book. The correct term to describe a chemical compound an organism may be exposed to, whether of natural or synthetic origin, is xenobiotic, i.e. a substance foreign to an organism (the term has also been used for transplants). A xenobiotic can be defined as a chemical which is found in an organism but which is not normally produced by, or expected to be present in, it. It can also cover substances present in much higher concentrations than is usual.
A grasp of some of the fundamental principles of the scientific disciplines that underlie the characterisation of effects associated with exposure to a xenobiotic is required in order to understand the potential consequences of the presence of pollutants in the environment and critically appraise the scientific evidence. This chapter will attempt to briefly summarise some important concepts of basic toxicology and environmental epidemiology relevant in this context.
1.2. Concepts of Fundamental Toxicology
Toxicology is the science of poisons. A poison is commonly defined as ‘any substance that can cause an adverse effect as a result of a physicochemical interaction with living tissue' (Duffus 2006). The use of poisons is as old as the human race, as a method of hunting or warfare as well as murder, suicide or execution. The evolution of this scientific discipline cannot be separated from the evolution of pharmacology, the science of cures. Theophrastus Phillippus Aureolus Bombastus von Hohenheim, more commonly known as Paracelsus (1493-1541), a physician contemporary of Copernicus, Martin Luther and da Vinci, is widely considered the father of toxicology. He challenged the ancient concepts of medicine based on the balance of the four humours (blood, phlegm, yellow and black bile) associated with the four elements, and believed illness occurred when an organ failed and poisons accumulated. This use of chemistry and chemical analogies was particularly offensive to the medical establishment of his time. He is famously credited with the maxim that ‘all things are poison, and nothing is without poison; only the dose makes a thing not a poison', which still underlies present-day toxicology.
In other words, all substances are potential poisons since all can cause injury or death following excessive exposure. Conversely, this statement implies that all chemicals can be used safely if handled with appropriate precautions and exposure is kept below a defined limit, at which risk is considered tolerable (Duffus 2006). The concepts both of tolerable risk and adverse effect illustrate the value judgements embedded in an otherwise scientific discipline relying on observable, measurable empirical evidence. What is considered abnormal or undesirable is dictated by society rather than science. Any change from the normal state is not necessarily an adverse effect even if statistically significant. An effect may be considered harmful if it causes damage, irreversible change or increased susceptibility to other stresses, including infectious disease. The stage of development or state of health of the organism may also have an influence on the degree of harm.
1.2.1. Routes of exposure
Toxicity will vary depending on the route of exposure. There are three routes via which exposure to environmental contaminants may occur:
- Ingestion
- Inhalation
- Skin absorption
Direct injection may also be used in environmental toxicity testing. Toxic and pharmaceutical agents generally produce the most rapid response and greatest effect when given intravenously, directly into the bloodstream. For environmental exposure routes, a descending order of effectiveness would be inhalation, ingestion and skin absorption.
Oral toxicity is most relevant for substances that might be ingested with food or drinks. Whilst it could be argued that this is generally under an individual's control, there are complex issues regarding information both about the occurrence of substances in food or water and the current state-of-knowledge about associated harmful effects.
Gases, vapours and dusts or other airborne particles are inhaled involuntarily (with the infamous exception of smoking). The inhalation of solid particles depends upon their size and shape. In general, the smaller the particle, the further into the respiratory tract it can go. A large proportion of airborne particles breathed through the mouth or cleared by the cilia of the lungs can enter the gut.
Dermal exposure generally requires direct and prolonged contact with the skin. The skin acts as a very effective barrier against many external toxicants, but because of its great surface area (1.5-2 m²), some of the many diverse substances it comes into contact with may still elicit topical or systemic effects (Williams and Roberts 2000). Although dermal exposure is most often relevant in occupational settings, it may nonetheless be pertinent in relation to bathing waters (ingestion also being an important route of exposure in this context). Voluntary dermal exposure related to the use of cosmetics raises the same questions regarding the adequate communication of current knowledge about potential effects as those related to food.
1.2.2. Duration of exposure
The toxic response will also depend on the duration and frequency of exposure. A single dose of a chemical may produce severe effects, whilst the same total dose given at several intervals may have little if any effect. An example would be to compare the effects of drinking four beers in one evening with those of drinking four beers over four days. Exposure duration is generally divided into four broad categories: acute, sub-acute, sub-chronic and chronic. Acute exposure to a chemical usually refers to a single exposure event or repeated exposures over a duration of less than 24 hours. Sub-acute exposure refers to repeated exposures for 1 month or less, sub-chronic exposure to continuous or repeated exposures for 1 to 3 months, or approximately 10% of an experimental species' lifetime, and chronic exposure to exposures for more than 3 months, usually 6 months to 2 years in rodents (Eaton and Klaassen 2001). Chronic exposure studies are designed to assess the cumulative toxicity of chemicals with potential lifetime exposure in humans. In real exposure situations, it is generally very difficult to ascertain with any certainty the frequency and duration of exposure, but the same terms are used.
For acute effects, the time component of the dose is not important, as a high dose is responsible for these effects. However, although acute exposure to agents that are rapidly absorbed is likely to induce immediate toxic effects, this does not rule out the possibility of delayed effects that are not necessarily similar to those associated with chronic exposure, e.g. the latency between exposure to a carcinogenic substance and the onset of certain cancers. It is also worth mentioning that the effect of exposure to a toxic agent may be entirely dependent on the timing of exposure: long-term effects resulting from exposure during a critically sensitive stage of development may differ widely from those seen if an adult organism is exposed to the same substance. Acute effects are almost always the result of accidents; otherwise, they may result from criminal poisoning or self-poisoning (suicide). Conversely, whilst chronic exposure to a toxic agent is generally associated with long-term low-level chronic effects, this does not preclude the possibility of some immediate (acute) effects after each administration. These concepts are closely related to the mechanisms of metabolic degradation and excretion of ingested substances and are best illustrated by Figure 1.1.
Figure 1.1. Line A: chemical with very slow elimination. Line B: chemical with a rate of elimination equal to the frequency of dosing. Line C: rate of elimination faster than the dosing frequency. The blue-shaded area represents the concentration at the target site necessary to elicit a toxic response.
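The three elimination regimes described above can be sketched with a minimal one-compartment model, assuming first-order (exponential) elimination and repeated bolus dosing; the doses, half-lives and dosing intervals below are illustrative values, not data from the text:

```python
import math

def concentration(t_hours, dose=1.0, half_life_h=6.0, interval_h=8.0):
    """Body burden at time t under repeated bolus dosing with
    first-order elimination (one-compartment sketch)."""
    k = math.log(2) / half_life_h             # elimination rate constant
    n_doses = int(t_hours // interval_h) + 1  # doses given so far
    # Each past dose decays independently; the burdens superpose.
    return sum(dose * math.exp(-k * (t_hours - i * interval_h))
               for i in range(n_doses))

# Slow elimination (line A): burden accumulates dose after dose
slow = [concentration(t, half_life_h=48.0) for t in range(0, 97, 8)]
# Fast elimination (line C): burden stays near the single-dose level
fast = [concentration(t, half_life_h=2.0) for t in range(0, 97, 8)]
assert slow[-1] > fast[-1]
```

With a 48-hour half-life, the burden after four days of 8-hourly dosing is several times the single-dose level, whereas with a 2-hour half-life it barely exceeds it: the same dose regimen is harmless or toxic depending on the elimination rate.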
1.2.3. Mechanisms of toxicity
The interaction of a foreign compound with a biological system is two-fold: there is the effect of the organism on the compound (toxicokinetics) and the effect of the compound on the organism (toxicodynamics).
Toxicokinetics relate to the delivery of the compound to its site of action, including absorption (transfer from the site of administration into the general circulation), distribution (via the general circulation into and out of the tissues), and elimination (from the general circulation by metabolism or excretion). The target tissue refers to the tissue where a toxicant exerts its effect, and is not necessarily where the concentration of a toxic substance is highest. Many halogenated compounds such as polychlorinated biphenyls (PCBs) or flame retardants such as polybrominated diphenyl ethers (PBDEs) are known to bioaccumulate in body fat stores. Whether such sequestration processes are actually protective to the individual organism, i.e. by lowering the concentration of the toxicant at the site of action, is not clear (O'Flaherty 2000). In an ecological context, however, such bioaccumulation may serve as an indirect route of exposure for organisms at higher trophic levels, thereby potentially contributing to biomagnification through the food chain.
Absorption of any compound that has not been injected directly intravenously will entail transfer across membrane barriers before it reaches the systemic circulation, and the efficiency of absorption processes is highly dependent on the route of exposure.
It is also important to note that distribution and elimination, although often considered separately, take place simultaneously. Elimination itself comprises two kinds of processes, excretion and biotransformation, which also take place simultaneously. Elimination and distribution are not independent of each other, as effective elimination of a compound will prevent its distribution in peripheral tissues, whilst, conversely, wide distribution of a compound will impede its excretion (O'Flaherty 2000). Kinetic models attempt to predict the concentration of a toxicant at the target site from the administered dose. Although the ultimate toxicant, i.e. the chemical species that induces structural or functional alterations resulting in toxicity, is often the compound administered (the parent compound), it can also be a metabolite of the parent compound generated by biotransformation processes, i.e. toxication rather than detoxication (Timbrell 2000; Gregus and Klaassen 2001). The liver and kidneys are the most important excretory organs for non-volatile substances, whilst the lungs are active in the excretion of volatile compounds and gases. Other routes of excretion include the skin, hair, sweat, nails and milk. Milk may be a major route of excretion for lipophilic chemicals due to its high fat content (O'Flaherty 2000).
Toxicodynamics is the study of the toxic response at the site of action, including the reactions with and binding to cell constituents, and the biochemical and physiological consequences of these actions. Such consequences may therefore be manifested and observed at the molecular or cellular level, at the target organ, or in the whole organism. Therefore, although toxic responses have a biochemical basis, the study of toxic response is generally subdivided either according to the organ on which toxicity is observed, including hepatotoxicity (liver), nephrotoxicity (kidney), neurotoxicity (nervous system) and pulmonotoxicity (lung), or according to the type of toxic response, including teratogenicity (abnormalities of physiological development), immunotoxicity (immune system impairment), mutagenicity (damage to genetic material) and carcinogenicity (cancer causation or promotion). The choice of the toxicity endpoint to observe in experimental toxicity testing is therefore of critical importance. In recent years, rapid advances in the biochemical sciences and technology have resulted in the development of bioassay techniques that can contribute invaluable information regarding toxicity mechanisms at the cellular and molecular level. However, the extrapolation of such information to predict effects in an intact organism for the purpose of risk assessment is still in its infancy (Gundert-Remy et al. 2005).
1.2.4. Dose-response relationships
The theory of dose-response relationships is based on the assumptions that the activity of a substance is not an inherent quality but depends on the dose an organism is exposed to, i.e. all substances are inactive below a certain threshold and active over that threshold, and that dose-response relationships are monotonic, i.e. the response rises with the dose. Toxicity may be detected either as an all-or-nothing phenomenon, such as the death of the organism, or as a graded response, such as the hypertrophy of a specific organ. The dose-response relationship involves correlating the severity of the response with exposure (the dose). Dose-response relationships for all-or-nothing (quantal) responses are typically S-shaped, reflecting the fact that the sensitivity of individuals in a population generally exhibits a normal or Gaussian distribution. Biological variation in susceptibility, with fewer individuals being either hypersusceptible or resistant at the two ends of the curve and the majority responding between these extremes, gives rise to a bell-shaped normal frequency distribution. When plotted as a cumulative frequency distribution, a sigmoid dose-response curve is observed (Figure 1.2).
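The link between a Gaussian distribution of individual susceptibility and the sigmoid cumulative curve can be illustrated with a short sketch, assuming individual log10 tolerances are normally distributed (the median dose and spread below are hypothetical, for illustration only):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def quantal_response(dose, median_dose=10.0, sigma=0.5):
    """Fraction of a population responding at a given dose when
    individual log10 tolerances are Gaussian (hypothetical values)."""
    z = (math.log10(dose) - math.log10(median_dose)) / sigma
    return normal_cdf(z)

# Plotting quantal_response against log dose traces the sigmoid:
# few respond well below the median, half at the median, nearly all above.
assert abs(quantal_response(10.0) - 0.5) < 1e-9
assert quantal_response(1.0) < 0.05 and quantal_response(100.0) > 0.95
```

The bell-shaped frequency distribution of tolerances and the S-shaped cumulative curve are thus two views of the same population variability.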
Studying dose response, and developing dose response models, is central to determining "safe" and "hazardous" levels.
The simplest measure of toxicity is lethality, and determination of the median lethal dose, the LD50, is usually the first toxicological test performed with new substances. The LD50 is the dose at which a substance is expected to cause the death of half of the experimental animals, and it is derived statistically from dose-response curves (Eaton and Klaassen 2001). LD50 values are the standard for comparison of acute toxicity between chemical compounds and between species. Some values are given in Table 1.1. It is important to note that the higher the LD50, the less toxic the compound.
Similarly, the EC50, the median effective concentration, is the quantity of the chemical that is estimated to produce a defined effect in 50% of the organisms. However, median doses alone are not very informative, as they convey no information on the shape of the dose-response curve. This is best illustrated by Figure 1.3. While toxicant A appears more toxic than toxicant B on the basis of its lower LD50, toxicant B starts affecting organisms at lower doses (lower threshold), while the steeper slope of the dose-response curve for toxicant A means that once individuals exceed the threshold dose, the increase in response occurs over much smaller increments in dose.
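This caveat can be made concrete with two hypothetical toxicants modelled by a log-logistic quantal curve; the LD50 and slope values below are invented for illustration:

```python
def mortality(dose, ld50, slope):
    """Log-logistic quantal mortality curve (hypothetical parameters)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Toxicant A: lower LD50 but steep curve; toxicant B: higher LD50, shallow curve
resp_a = lambda d: mortality(d, ld50=10.0, slope=8.0)
resp_b = lambda d: mortality(d, ld50=20.0, slope=1.5)

# By LD50 alone, A is "more toxic": half the population dies at 10 vs 20 ...
assert abs(resp_a(10.0) - 0.5) < 1e-12 and abs(resp_b(20.0) - 0.5) < 1e-12
# ... yet at a low dose, B already affects far more organisms than A.
assert resp_b(2.0) > resp_a(2.0)
```

The two assertions capture exactly the situation in the text: the median dose ranks A as more toxic, while the curve shape makes B the greater hazard at low doses.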
Low dose responses
The classical paradigm for extrapolating dose-response relationships at low doses is based on the concept of a threshold for non-carcinogens, whereas no threshold is assumed for carcinogenic responses and a linear relationship is hypothesised (Figures 1.4 and 1.5).
The NOAEL (No Observed Adverse Effect Level) is the exposure level at which there is no statistically or biologically significant increase in the frequency or severity of adverse effects between an exposed population and its appropriate control. The NOAEL for the most sensitive test species and the most sensitive indicator of toxicity is usually employed for regulatory purposes. The LOAEL (Lowest Observed Adverse Effect Level) is the lowest exposure level at which there is a statistically or biologically significant increase in the frequency or severity of adverse effects between an exposed population and its appropriate control. The main criticism of the NOAEL and LOAEL is that they are dependent on study design, i.e. the dose groups selected and the number of individuals in each group. Statistical methods of deriving the concentration that produces a specific effect (ECx), or a benchmark dose (BMD), the statistical lower confidence limit on the dose that produces a defined response (the benchmark response or BMR), are increasingly preferred.
Understanding the risk that environmental contaminants pose to human health requires the extrapolation of limited data from animal experimental studies to the low doses typically encountered in the environment. Such extrapolation of dose-response relationships at low doses is the source of much controversy. Recent statistical analyses of very large populations exposed to ambient concentrations of environmental pollutants have, however, not observed thresholds for cancer or non-cancer outcomes (White et al. 2009). The actions of chemical agents are triggered by complex molecular and cellular events that may lead to cancer and non-cancer outcomes in an organism, and these processes may be linear or non-linear at an individual level. A thorough understanding of the critical steps in a toxic process may help refine current assumptions about thresholds (Boobis et al. 2009). The dose-response curve, however, describes the response, or variation in sensitivity, of a population. Biological and statistical attributes such as population variability, additivity to pre-existing conditions, or diseases induced at background exposure will tend to smooth and linearise the dose-response relationship, obscuring individual thresholds.
Dose-response relationships for substances that are essential for normal physiological function and survival are actually U-shaped. At very low doses, adverse effects are observed due to a deficiency. As the dose of such an essential nutrient is increased, the adverse effect is no longer detected and the organism can function normally in a state of homeostasis. Abnormally high doses however, can give rise to a toxic response. This response may be qualitatively different and the toxic endpoint measured at very low and very high doses is not necessarily the same.
There is evidence that nonessential substances may also exert an effect at very low doses (Figure 1.6). Some authors have argued that hormesis ought to be the default assumption in the risk assessment of toxic substances (Calabrese and Baldwin 2003). Whether such low-dose effects should be considered stimulatory or beneficial is controversial. Further, the implications of the concept of hormesis for the risk management of combinations of the wide variety of environmental contaminants present at low doses, to which individuals of variable sensitivity may be exposed, are at best unclear.
1.2.5. Chemical interactions
In regulatory hazard assessment, chemical hazards are typically considered on a compound-by-compound basis, the possibility of chemical interactions being accounted for by the use of safety or uncertainty factors. Mixture effects still represent a challenge for the risk management of chemicals in the environment, as the presence of one chemical may alter the response to another. The simplest interaction is additivity: the effect of two or more chemicals acting together is equivalent to the sum of the effects of each chemical in the mixture acting independently. Synergism is more complex and describes a situation where the presence of both chemicals causes an effect that is greater than the sum of their effects when acting alone. In potentiation, a substance that does not produce specific toxicity on its own increases the toxicity of another substance when both are present. Antagonism, whereby one chemical reduces the harm caused by a toxicant, is the principle upon which antidotes are based (James et al. 2000; Duffus 2006). Mathematical illustrations and examples of known chemical interactions are given in Table 1.2.
Table 1.2. Mathematical representations of chemical interactions (reproduced from James et al., 2000)
- Additivity - hypothetical mathematical illustration: 2 + 3 = 5
- Synergism - 2 + 3 = 20; example: cigarette smoking + asbestos
- Potentiation - 2 + 0 = 10; example: alcohol + carbon tetrachloride
- Antagonism - 6 + 6 = 8, or 5 + (-5) = 0, or 10 + 0 = 2; examples: toluene + benzene, caffeine + alcohol, dimercaprol + mercury
There are four main ways in which chemicals may interact (James et al. 2000);
1. Functional: both chemicals have an effect on the same physiological function.
2. Chemical: a chemical reaction between the two compounds affects the toxicity of one or both compounds.
3. Dispositional: the absorption, metabolism, distribution or excretion of one substance is increased or decreased by the presence of the other.
4. Receptor-mediated: when two chemicals have differing affinity and activity for the same receptor, competition for the receptor will modify the overall effect.
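In mixture toxicology, two reference models make these notions quantitative: concentration (dose) addition for similarly acting chemicals, and independent action (response addition) for dissimilarly acting ones. The sketch below illustrates both; it is a generic textbook formulation with hypothetical parameters, not drawn from Table 1.2:

```python
def independent_action(p_a, p_b):
    """Combined response of two dissimilarly acting chemicals: the
    probability that at least one of them produces the effect."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

def dose_addition_response(dose_a, ec50_a, dose_b, ec50_b, slope=2.0):
    """Concentration addition for similarly acting chemicals: doses are
    rescaled to toxic units (fractions of each EC50) and summed before
    applying a common (hypothetical log-logistic) dose-response curve."""
    toxic_units = dose_a / ec50_a + dose_b / ec50_b
    return 1.0 / (1.0 + toxic_units ** (-slope))

# Two chemicals, each at half its own EC50, jointly give a 50% response
# under concentration addition -- the formal meaning of additivity.
assert abs(dose_addition_response(5.0, 10.0, 10.0, 20.0) - 0.5) < 1e-9
```

Departures from these reference predictions, upwards or downwards, are what is diagnosed as synergism or antagonism in mixture studies.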
1.2.6. Relevance of animal models
A further complication in the extrapolation of the results of toxicological experimental studies to humans, or indeed other untested species, is related to the anatomical, physiological and biochemical differences between species. This paradoxically requires some previous knowledge of the mechanism of toxicity of a chemical and comparative physiology of different test species. When adverse effects are detected in screening tests, these should be interpreted with the relevance of the animal model chosen in mind. For the derivation of safe levels, safety or uncertainty factors are again usually applied to account for the uncertainty surrounding inter-species differences (James et al. 2000; Sullivan 2006).
1.2.7. A few words about doses
When discussing dose-response, it is also important to understand which dose is being referred to, and to differentiate between concentrations measured in environmental media and the concentration that will elicit an adverse effect at the target organ or tissue. The exposure dose in a toxicological testing setting is generally known, or can readily be derived or measured from concentrations in media and average consumption (of food or water, for example) (Figure 1.7). Whilst toxicokinetics helps to develop an understanding of the relationship between the internal dose and a known exposure dose, relating concentrations in environmental media to the actual exposure dose, often via multiple pathways, is the realm of exposure assessment.
1.2.8. Other hazard characterisation criteria
Before continuing further, it is important to clarify the difference between hazard and risk. Hazard is defined as the potential to produce harm, it is therefore an inherent qualitative attribute of a given chemical substance. Risk on the other hand is a quantitative measure of the magnitude of the hazard and the probability of it being realised. Hazard assessment is therefore the first step of risk assessment, followed by exposure assessment and finally risk characterization. Toxicity is not the sole criterion evaluated for hazard characterisation purposes.
Some chemicals have been found in the tissues of animals in the Arctic, for example, where these substances of concern have never been used or produced. The realisation that some pollutants were able to travel long distances across national borders because of their persistence, and to bioaccumulate through the food web, led to the consideration of such inherent properties of organic compounds alongside their toxicity for the purpose of hazard characterisation.
Persistence is the result of resistance to environmental degradation mechanisms such as hydrolysis, photodegradation and biodegradation. Hydrolysis only occurs in the presence of water, photodegradation in the presence of UV light, and biodegradation is primarily carried out by micro-organisms. Degradation is related to water solubility, itself inversely related to lipid solubility; therefore, persistence tends to be correlated with lipid solubility (Francis 1994). The persistence of inorganic substances has proven more difficult to define, as they cannot be degraded to carbon dioxide and water.
Chemicals may accumulate in environmental compartments and constitute environmental sinks that could be re-mobilised and lead to effects. Further, a substance may accumulate in one species without adverse effects yet be toxic to its predator(s). Bioconcentration refers to accumulation of a chemical from its surrounding environment rather than specifically through food uptake. Conversely, biomagnification refers to uptake from food without consideration of uptake through the body surface. Bioaccumulation integrates both paths, surrounding medium and food. Ecological magnification refers to an increase in concentration through the food web from lower to higher trophic levels. Again, accumulation of organic compounds generally involves transfer from a hydrophilic to a hydrophobic phase and correlates well with the n-octanol/water partition coefficient (Herrchen 2006).
The persistence and bioaccumulation of a substance are evaluated by standardised OECD tests. Criteria for the identification of persistent, bioaccumulative and toxic substances (PBT), and very persistent and very bioaccumulative substances (vPvB), as defined in Annex XIII of the European Regulation on the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) (European Union 2006), are given in Table 1.3. To be classified as a PBT or vPvB substance, a given compound must fulfil all the relevant criteria.
Table 1.3. REACH criteria for identifying PBT and vPvB chemicals

PBT criteria
Persistent (P), any of:
- Half-life > 60 days in marine water
- Half-life > 40 days in fresh or estuarine water
- Half-life > 180 days in marine sediment
- Half-life > 120 days in fresh or estuarine sediment
- Half-life > 120 days in soil
Bioaccumulative (B):
- Bioconcentration factor (BCF) > 2000
Toxic (T), any of:
- Chronic no-observed effect concentration (NOEC) < 0.01 mg/l
- Substance is classified as carcinogenic (category 1 or 2), mutagenic (category 1 or 2), or toxic for reproduction (category 1, 2 or 3)
- Other evidence of endocrine disrupting effects

vPvB criteria
Very persistent (vP), any of:
- Half-life > 60 days in marine, fresh or estuarine water
- Half-life > 180 days in marine, fresh or estuarine sediment
- Half-life > 180 days in soil
Very bioaccumulative (vB):
- Bioconcentration factor (BCF) > 5000
1.3. Some notions of Environmental Epidemiology
A complementary, observational approach to the study of scientific evidence of associations between environment and disease is epidemiology. Epidemiology can be defined as “the study of how often diseases occur and why, based on the measurement of disease outcome in a study sample in relation to a population at risk” (Coggon et al. 2003). Environmental epidemiology refers to the study of patterns of disease and health related to exposures that are exogenous and involuntary. Such exposures generally occur via the air, water, diet or soil, and include physical, chemical and biological agents. The extent to which environmental epidemiology is considered to include social, political, cultural, and engineering or architectural factors affecting human contact with such agents varies among authors. In some contexts, the environment can refer to all non-genetic factors, although dietary habits are generally excluded, despite the fact that some deficiency diseases are environmentally determined and that nutritional status may modify the impact of an environmental exposure (Steenland and Savitz 1997; Hertz-Picciotto 1998).
Most environmental epidemiology is concerned with endemics, in other words acute or chronic disease occurring at relatively low frequency in the general population, due partly to a common and often unsuspected exposure, rather than with epidemics, or acute outbreaks of disease affecting a limited population shortly after the introduction of an unusual known or unknown agent. Measuring such low-level exposure of the general public may be difficult if not impossible, particularly when seeking historical estimates of exposure to predict future disease. Estimating very small changes in the incidence of common diseases with multifactorial etiologies, resulting from multiple low-level exposures, is particularly difficult, because greater variability may often be expected for other reasons, and environmental epidemiology has to rely on natural experiments that, unlike controlled experiments, are subject to confounding by other, often unknown, risk factors. However, such study may still be of importance from a public health perspective, as small effects in a large population can have large attributable risks if the disease is common (Steenland and Savitz 1997; Coggon et al. 2003).
What is a case?
The definition of a case generally requires a dichotomy, i.e. for a given condition, people can be divided into two discrete classes, the affected and the non-affected. It increasingly appears that diseases exist in a continuum of severity within a population rather than as an all-or-nothing phenomenon. For practical reasons, a cut-off point to divide the diagnostic continuum into ‘cases' and ‘non-cases' is therefore required. This can be done on a statistical, clinical, prognostic or operational basis. On a statistical basis, the ‘norm' is often defined as within two standard deviations of the age-specific mean, thereby arbitrarily fixing the frequency of abnormal values at around 5% in every population; moreover, it should be noted that what is usual is not necessarily good. A clinical case may be defined by the level of a variable above which symptoms and complications have been found to become more frequent. On a prognostic basis, some clinical findings may carry an adverse prognosis yet be symptomless. When none of the other approaches is satisfactory, an operational threshold will need to be defined, e.g. based on a threshold for treatment (Coggon et al. 2003).
Incidence, prevalence and mortality
The incidence of a disease is the rate at which new cases occur in a population during a specified period.
The prevalence of a disease is the proportion of the population that are cases at a given point in time. This measure is appropriate only in relatively stable conditions and is unsuitable for acute disorders. Even in a chronic disease, the manifestations are often intermittent and a point prevalence will tend to underestimate the frequency of the condition. A better measure when possible is the period prevalence defined as the proportion of a population that are cases at any time within a stated period.
Prevalence = incidence x average duration (in a steady state)
In studies of etiology, incidence is the most appropriate measure of disease frequency, as different prevalences result from differences in survival and recovery as well as incidence.
Mortality is the incidence of death from a disease (Coggon et al. 2003).
Interrelation of incidence, prevalence and mortality
Each incident case enters a prevalence pool and remains there until either recovery or death.
A chronic condition will be characterised by both low recovery and low death rates, so even a low incidence will produce a high prevalence (Coggon et al. 2003).
Crude and specific rates
A crude incidence, prevalence or mortality is one that relates to results for a population taken as a whole, without subdivisions or refinement. To compare populations or samples, it may be helpful to break down results for the whole population to give rates specific for age and sex (Coggon et al. 2003).
Measures of association
Several measures are commonly used to summarise association between exposure and disease.
Attributable risk is most relevant when making decisions for individuals and corresponds to the difference between the disease rate in exposed persons and that in unexposed persons. The population attributable risk is the difference between the rate of disease in a population and the rate that would apply if all of the population were unexposed. It can be used to estimate the potential impact of control measures in a population.
Population attributable risk = attributable risk x prevalence of exposure to risk factor
The attributable proportion is the proportion of disease that would be eliminated in a population if its disease rate were reduced to that of unexposed persons. It is used to compare the potential impact of different public health strategies.
The relative risk is the ratio of the disease rate in exposed persons to that in people who are unexposed.
Attributable risk = rate of disease in unexposed persons x (relative risk - 1)
Relative risk is less relevant to risk management but is nevertheless the measure of association most commonly used because it can be estimated by a wider range of study designs. Additionally, where two risk factors for a disease act in concert, their relative risks have often been observed empirically to come close to multiplying.
The odds ratio is defined as the odds of disease in exposed persons divided by the odds of disease in unexposed persons (Coggon et al. 2003).
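The measures above can be made concrete with a small worked example. All counts below are hypothetical, and the 20% prevalence of exposure is an assumed figure:

```python
# Hypothetical cohort: 1000 exposed and 1000 unexposed subjects.
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

rate_exposed = exposed_cases / exposed_total        # 0.03
rate_unexposed = unexposed_cases / unexposed_total  # 0.01

attributable_risk = rate_exposed - rate_unexposed   # 0.02
relative_risk = rate_exposed / rate_unexposed       # 3.0

# Consistent with the identity above:
# attributable risk = rate in unexposed × (relative risk − 1) = 0.01 × 2 = 0.02

# Odds of disease = cases / non-cases within each group.
odds_exposed = exposed_cases / (exposed_total - exposed_cases)
odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)
odds_ratio = odds_exposed / odds_unexposed          # ≈ 3.06

# Population attributable risk, assuming 20% of the population is exposed.
population_attributable_risk = attributable_risk * 0.2  # 0.004
```

Note that the odds ratio (≈ 3.06) is close to the relative risk (3.0) because the disease here is rare; this rare-disease approximation is why case-control studies, which can only estimate odds ratios, still inform us about relative risk.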
Environmental epidemiological studies are observational, not experimental, and compare people who differ in various ways, known and unknown. If such differences happen to determine risk of disease independently of the exposure under investigation, they are said to confound its association with the disease, limiting the extent to which observed associations can be interpreted as causal. Confounding may equally give rise to spurious associations or obscure the effects of a true cause (Coggon et al. 2003). A confounding factor can be defined as a variable which is both a risk factor for the disease of interest, even in the absence of exposure (either causal or in association with other causal factors), and associated with the exposure without being a direct consequence of it (Rushton 2000).
In environmental epidemiology, nutritional status is a potential confounder or effect modifier of environment/disease associations. Exposure to environmental agents is also frequently determined by social factors: where one lives, works, socialises, or buys food; indeed, some argue that the socio-economic context is integral to most environmental epidemiology problems (Hertz-Picciotto 1998).
Standardisation is usually used to adjust for age and sex, although it can be applied to account for other confounders. Other methods, such as mathematical modelling techniques like logistic regression, are also readily available. They should be used with caution, however, as the mathematical assumptions in the model may not always reflect the realities of biology (Coggon et al. 2003).
Direct standardisation is suitable only for large studies and entails the comparison of weighted averages of age and sex specific disease rates, the weights being equal to the proportion of people in each age and sex group in a reference population.
In most surveys the indirect method yields more stable risk estimates. Indirect standardisation requires a suitable reference population for which the class specific rates are known for comparison with the rates obtained for the study sample (Coggon et al. 2003).
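A minimal sketch of indirect standardisation follows, with invented age bands, counts and reference rates; the ratio computed is the familiar standardised mortality (or morbidity) ratio:

```python
# Indirect standardisation: apply the reference population's age-specific
# rates to the study sample's age structure to obtain the expected number
# of cases, then compare with the number actually observed.
# All figures below are hypothetical.

# age band: (persons in study sample, reference rate per person-year)
strata = {
    "under 40": (500, 0.001),
    "40-59":    (300, 0.005),
    "60+":      (200, 0.020),
}
observed_cases = 9

expected_cases = sum(n * rate for n, rate in strata.values())  # 6.0
standardised_ratio = observed_cases / expected_cases           # 1.5
# A ratio of 1.5 means 50% more cases were observed than the reference
# rates would predict for a sample with this age structure.
```

Because only the total observed count and the stratum sizes of the study sample are needed, this method remains stable even when stratum-specific case counts in the sample are too small to yield reliable rates of their own.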
1.3.2. Measurement error and bias
Bias is a systematic tendency to underestimate or overestimate a parameter of interest because of a deficiency in the design or execution of a study. In epidemiology, bias results in a difference between the estimated association between exposure and disease and the true association. Three general types of bias can be identified: selection bias, information bias, and confounding bias. Information bias arises from errors in measuring exposure or disease, and the information is wrong to the extent that the relationship between the two can no longer be correctly estimated. Selection bias occurs when the subjects studied are not representative of the target population about which conclusions are to be drawn. It generally arises because of the way subjects are recruited or the way cases are defined (Bertollini et al. 1996; Coggon et al. 2003).
Errors in exposure assessment or disease diagnosis can be an important source of bias in epidemiological studies, and it is therefore important to assess the quality of measurements. Errors may be differential (different for cases and controls) or nondifferential. Nondifferential errors are more likely to occur than differential errors and have until recently been assumed to tend to diminish risk estimates and dilute exposure-response gradients (Steenland and Savitz 1997). Nondifferential misclassification is related to both the precision and the magnitude of the differences in exposure or diagnosis within the population. If these differences are substantial, even a fairly imprecise measurement would not lead to much misclassification. A systematic investigation of the relative precision of the measurement of the exposure variable should ideally precede any study in environmental epidemiology (Bertollini et al. 1996; Coggon et al. 2003).
The validity of a measurement refers to the agreement between this measure and the truth. A lack of validity (systematic error) is potentially a more serious problem than random error, because random error mainly reduces the power of a study to detect a relationship between exposure and disease, whereas systematic error cannot be offset simply by enlarging the study. When a technique or test is used to dichotomise subjects, its validity may be analysed by comparison with results from a standard reference test. Such an analysis will yield four important statistics: sensitivity, specificity, systematic error and predictive value. It should be noted that both systematic error and predictive value depend on the relative frequency of true positives and true negatives in the study sample (i.e. the prevalence of the disease or exposure being measured) (Bertollini et al. 1996; Coggon et al. 2003).
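These statistics follow directly from a two-by-two comparison of test results against the reference standard. A sketch with hypothetical counts (the 10% prevalence here is an assumption, and the predictive value shifts if it changes):

```python
# Comparing a dichotomising test against a reference standard.
# Hypothetical counts: 100 truly affected, 900 truly unaffected subjects.
true_pos, false_neg = 80, 20    # among the 100 affected
false_pos, true_neg = 45, 855   # among the 900 unaffected

sensitivity = true_pos / (true_pos + false_neg)   # 0.80
specificity = true_neg / (true_neg + false_pos)   # 0.95

# Predictive value depends on the prevalence in the study sample (10% here):
positive_predictive_value = true_pos / (true_pos + false_pos)  # 0.64

# Overall misclassification, one simple summary of systematic error:
misclassified = (false_pos + false_neg) / 1000    # 0.065
```

With the same sensitivity and specificity but a prevalence of 1%, the positive predictive value would fall sharply, which is the point made above about dependence on the relative frequency of true positives and true negatives.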
When there is no satisfactory standard against which to assess the validity of a measurement technique, then examining the repeatability of measurements within and between observers can offer useful information. Whilst consistent findings do not necessarily imply that a technique is valid, poor repeatability does indicate either poor validity or that the measured parameter varies over time. When measured repeatedly in the same subject, physiological or other variables tend to show a roughly normal distribution around the subject's mean. Misinterpretation can be avoided by repeat examinations to establish an adequate baseline, or by including a control group. Conversely, conditions and timing of an investigation may systematically bias subjects' response and studies should be designed to control for this.
The repeatability of measurements of continuous variables can be summarised by the standard deviation of replicate measurements or by their coefficient of variation. Within-observer variation is considered to be largely random, whilst between-observer variation adds a systematic component due to individual differences in techniques and criteria to the random element. This problem can be circumvented by using a single observer or alternatively, allocating subjects to observers randomly. Subsequent analysis of results by observers should highlight any problem and may permit statistical correction for bias (Coggon et al. 2003).
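These two summary statistics can be sketched briefly; the replicate values below are invented for illustration:

```python
# Repeatability of replicate measurements of a continuous variable,
# summarised by the standard deviation and the coefficient of variation.
from statistics import mean, stdev

replicates = [4.9, 5.1, 5.0, 5.2, 4.8]   # hypothetical repeat measurements

m = mean(replicates)          # 5.0
sd = stdev(replicates)        # sample standard deviation, ≈ 0.158
cv_percent = 100 * sd / m     # coefficient of variation, ≈ 3.2%
```

The coefficient of variation is useful for comparing repeatability across variables measured on different scales, since it expresses the scatter of replicates relative to their mean.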
1.3.3. Exposure assessment
The quality of exposure measurement underpins the validity of an environmental epidemiology study. Assessing exposure on an ever/never basis is often inadequate because the certainty of exposure may be low and a large range of exposure levels with potentially non-homogenous risks is grouped together. Ordinal categories provide the opportunity to assess dose-response relations, whilst where possible, quantified measures also allow researchers to assess comparability across studies and can provide the basis for regulatory decision making. Instruments for exposure assessment include (Hertz-Picciotto 1998):
- interviews, questionnaires, and structured diaries,
- measurements in the macro-environment, either conducted directly or obtained from historical records,
- concentrations in the personal micro-environment,
- biomarkers of physiological effect in human tissues or metabolic products.
All questionnaire and interview techniques rely on human knowledge and memory, and hence are subject to error and recall bias. Cases tend to report exposure more accurately than controls, which biases risk estimates upwards and could lead to false positive results. There are techniques that can be applied to detect this bias, such as including individuals with a disease unrelated to the exposure of interest, probing subjects about their understanding of the relationship between the disease and exposure under study, or attempting to corroborate information given by a sample of the cases and controls through records, interviews, or environmental or biological monitoring. Interviews, whether face-to-face or by telephone, may also elicit underreporting of many phenomena, depending on the perceived ‘desirability' of the activity being reported. Self-administered questionnaires or diaries can avoid interviewer influences but typically have lower response rates and do not permit the collection of complex information (Bertollini et al. 1996; Hertz-Picciotto 1998).
A distinction has been made between exposure measured in the external environment, at the point of contact between the subject and the environment, or in human tissue or sera. Measurements in external media yield an ecologic measure and are useful when group differences outweigh inter-individual differences. Macro-environment measures are also more relevant to the exposure context than to individual pollutants. Sometimes, the duration of contact (or potential contact) can be used as a surrogate quantitative measure, the implicit assumption being that duration correlates with cumulative exposure. When external measurements are available, they can be combined with duration and timing of residence and activity-pattern information to assign quantitative exposure estimates for individuals. Moreover, many pollutants are so dispersed in the environment that they can reach the body through a variety of environmental pathways (Bertollini et al. 1996; Hertz-Picciotto 1998).
The realisation that human exposure to pollutants in micro-environments may differ greatly from those in the general environment was a major advance in environmental epidemiology. It has led to the parallel development of instrumentation suitable for micro-environmental and personal monitoring and of sophisticated exposure models. Nonetheless, these estimates of individual absorbed doses still do not account for inter-individual differences due to breathing rate, age, sex, medical conditions, and so on (Bertollini et al. 1996; Hertz-Picciotto 1998).
The pertinent dose at the target tissue depends on toxicokinetics: metabolic rates and pathways that could either produce the active compound or detoxify it, as well as storage, retention times, and elimination. Measuring and modelling integrated exposure to such substances are difficult at best, and when available, the measurement of biomarkers of internal doses will be the preferred approach. Whilst biomarkers can account for individual differences in pharmacokinetics, they do not, however, inform us about which environmental sources and pathways dominate exposure, and in some situations they can be poor indicators of past exposure (Bertollini et al. 1996; Hertz-Picciotto 1998).
To study diseases with long latency periods such as cancer or those resulting from long-term chronic insults, exposures or residences at times in the past are more appropriate. Unfortunately, reconstruction of past exposures is often fraught with problems of recall, incomplete measurements of external media, or inaccurate records that can no longer be validated, and retrospective environmental exposure assessment techniques are still in their infancy (Bertollini et al. 1996; Hertz-Picciotto 1998).
1.3.4. Types of studies
Ecological studies
In ecological studies, the unit of observation is the group, a population or community, rather than the individual. The relation between disease rates and exposures is examined in each of a series of populations. Often the information about disease and exposure is abstracted from published statistics, such as those published by the World Health Organisation (WHO) on a country by country basis. The populations compared may be defined in various ways (Steenland and Savitz 1997; Coggon et al. 2003):
- Geographically. However care is needed in the interpretation of results due to potential confounding effects and differences in ascertainment of disease or exposure.
- Time trends or time series. Like geographical studies, analysis of secular trends may be biased by differences in the ascertainment of disease. However, validating secular changes is more difficult as it depends on observations made and often scantily recorded many years ago.
- Migrant studies offer a way of discriminating genetic from environmental causes of geographical variation in disease, and may also indicate the age at which an environmental cause exerts its effect. However, migrants may themselves be unrepresentative of the population they leave, and their health may have been affected by the process of migration.
- By occupation or social class. Statistics on disease incidence and mortality may be readily available for socio-economic or occupational groups. However, occupational data may not include data on those who left this employment whether on health grounds or not, and socio-economic groups may have different access to healthcare.
Longitudinal or cohort studies
In a longitudinal study subjects are identified and then followed over time with continuous or repeated monitoring of risk factors and known or suspected causes of disease, and of subsequent morbidity or mortality. In the simplest design a sample or cohort of subjects exposed to a risk factor is identified along with a sample of unexposed controls. By comparing the incidence rates in the two groups, attributable and relative risks can be estimated. Recall bias is entirely avoided in cohort studies in which exposure is evaluated before diagnosis. Allowance can be made for suspected confounding factors either by matching the controls to the exposed subjects so that they have a similar pattern of exposure to the confounder, or by measuring exposure to the confounder in each group and adjusting for any difference in the statistical analysis. One of the main limitations of this method is that when it is applied to the study of chronic diseases, a large number of people must be followed up for long periods before sufficient cases accrue to give statistically meaningful results. When feasible, the follow-up can be carried out retrospectively, as long as the selection of exposed people is not influenced by factors related to their subsequent morbidity. It can also be legitimate to use the recorded disease rates in the national or regional population for control purposes, when exposure to the hazard in the general population is negligible (Bertollini et al. 1996; Coggon et al. 2003).
Case-control studies
In a case-control study patients who have developed a disease are identified and their past exposure to suspected aetiological factors is compared with that of controls or referents who do not have the disease. This allows the estimation of odds ratios but not of attributable risks. Allowance is made for confounding factors by measuring them and making appropriate adjustments in the analysis. This adjustment may be rendered more efficient by matching cases and controls for exposure to confounders, either on an individual basis or in groups. Unlike in a cohort study, however, matching does not on its own eliminate confounding, and statistical adjustment is still required (Coggon et al. 2003).
Selection of cases and controls
In general, selecting incident rather than prevalent cases is preferred. The exposure of controls to risk factors and confounders should be representative of the population of interest, within the constraints of any matching criteria, and assessment of their exposure should be comparable with that of the cases. It often proves impossible to satisfy both aims. The exposure of controls selected from the general population is likely to be representative of those at risk of becoming cases, but assessment of their exposure may not be comparable with that of cases owing to recall bias, and such studies will tend to overestimate risk. The exposure assessment of patients with other diseases can be comparable; however, their exposure may be unrepresentative, and such studies will tend to underestimate risk if the risk factor under investigation is involved in other pathologies. It is therefore safer to adopt a range of control diagnoses rather than a single disease group. Interpretation can also be helped by having two sets of controls with different possible sources of bias. Selecting equal numbers of cases and controls generally makes a study most efficient, but the number of cases available can be limited by the rarity of the disease of interest. In this circumstance, statistical confidence can be increased by taking more than one control per case. There is, however, a law of diminishing returns, and it is usually not worth going beyond a ratio of four or five controls to one case (Coggon et al. 2003).
Many case-control studies ascertain exposure from personal recall, using either a self-administered questionnaire or an interview. Exposure can sometimes be established from existing records such as General Practice notes. Occasionally, long term biological markers of exposure can be exploited, but they are only useful if not altered by the subsequent disease process (Coggon et al. 2003).
Cross sectional studies
A cross sectional study measures the prevalence of health outcomes or determinants of health, or both, in a population at a point in time or over a short period. The measure of risk obtained is disease prevalence rather than incidence. Such information can be used to explore etiology; however, associations must be interpreted with caution. Bias may arise because of selection into or out of the study population, giving rise to effects similar to the healthy worker effect encountered in occupational epidemiology. A cross sectional design may also make it difficult to establish what is cause and what is effect. Because of these difficulties, cross sectional studies of etiology are best suited to non-fatal degenerative diseases with no clear point of onset and to the pre-symptomatic phases of more serious disorders (Rushton 2000; Coggon et al. 2003).
1.3.5. Critical appraisal of epidemiological reports
A well designed study should state precisely formulated, written objectives and the null hypothesis to be tested. This should in turn demonstrate the appropriateness of the study design for testing that hypothesis. Ideally, a literature search of relevant background publications should be carried out in order to explore the biological plausibility of the hypothesis (Elwood 1998; Rushton 2000; Coggon et al. 2003).
In order to be able to appraise the selection of subjects, each study should first describe the target population the study participants are meant to represent. The selection of study participants affects not only how widely the results can be applied but, more importantly, their validity. The internal validity of a study relates to how well a difference between the two groups being compared can be attributed to the effects of exposure rather than to chance or confounding. In contrast, the external validity of a study refers to how well the results of the study can be applied to the general population. Whilst both are desirable, design considerations that help increase the internal validity of a study may decrease its external validity. However, the external validity of a study is only useful if its internal validity is acceptable. The selection criteria should therefore be appraised by considering the effects of potential selection bias on the hypothesis being tested, and the external and internal validity of the study population. The selection process itself should be effectively random (Elwood 1998; Rushton 2000; Coggon et al. 2003).
The sample size should allow the primary purpose of the study, formulated in precise statistical terms, to be achieved, and its adequacy should be assessed. If it is of particular interest that certain subgroups are relatively overrepresented, a stratified random sample can be chosen by dividing the study population into strata and then drawing a separate random sample from each. Two-stage sampling may be adequate when the study population is large and widely scattered, but there is some loss of statistical efficiency, especially if only a few units are selected at the first stage (Rushton 2000; Coggon et al. 2003).
To be able to appraise a study, a clear description of how the main variables were measured should be given. The choice of method needs to allow a representative sample of adequate size to be examined in a standardised and sufficiently valid way. Ideally, observers should be allocated to subjects in a random manner to minimise bias due to observer differences. Importantly, methods and observers should allow rigorous standardisation (Rushton 2000; Coggon et al. 2003).
Virtually all epidemiological studies are subject to bias, and it is important to allow for the probable impact of biases when drawing conclusions. In a well reported study, this question would already have been addressed by the authors themselves, who may even have collected data to help quantify bias (Coggon et al. 2003).
Selection bias, information bias and confounding have all been discussed in some detail in previous sections, but it is worth mentioning the importance of accurately reporting response rates, as selection bias can also result if participants differ from non-participants. The likely bias resulting from incomplete response can be assessed in several ways: subjects who respond with and without a reminder can be compared; a small random sample can be drawn from the non-responders, particularly vigorous efforts made to collect some of the information originally sought, and the findings compared with those of the earlier responders; differences based on available information about the study population, such as age, sex and residence, can give an indication of the possibility of bias; and making extreme assumptions about the non-responders can help to put boundaries on the uncertainty arising from non-response (Elwood 1998).
Even after biases have been taken into account, study samples may be unrepresentative just by chance. An indication of the potential for such chance effects is provided by statistical analysis and hypothesis testing. There are two kinds of errors that one seeks to minimise. A type I error is the mistake of concluding that a phenomenon or association exists when in truth it does not; by convention, the rate of such errors is usually set at 5%. A result is therefore called statistically significant when there is a less than 5% probability of observing such an association in the experiment when no association actually exists. A type II error, failing to detect an association that actually does exist, is, also by convention, often set at 20%, although in practice this is often determined by limitations of sample size (Armitage and Berry 1994). It is important to note that failure to reject the null hypothesis (i.e. no association) does not equate with its acceptance, but only provides reasonable confidence that if any association exists, it would be smaller than an effect size determined by the power of the study. The issues surrounding power and effect size should normally be addressed at the design stage of a study, although this is rarely reported (Rushton 2000).
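The interplay of type I error, type II error, effect size and sample size can be sketched with the usual normal approximation for comparing two proportions. The rates and group size below are hypothetical, and the approximation is deliberately rough:

```python
# Approximate power of a study comparing disease rates in two groups,
# using the normal approximation for a difference of two proportions.
from math import sqrt, erf

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

p_unexposed, p_exposed = 0.01, 0.03   # hypothetical disease rates
n_per_group = 1000                    # subjects in each group
z_alpha = 1.96                        # two-sided 5% type I error rate

se = sqrt(p_unexposed * (1 - p_unexposed) / n_per_group
          + p_exposed * (1 - p_exposed) / n_per_group)
power = normal_cdf((p_exposed - p_unexposed) / se - z_alpha)
# power ≈ 0.89 here, so the type II error rate is 1 − power ≈ 0.11
```

Halving the group size or the rate difference pushes the power well below the conventional 80%, illustrating why chronic-disease cohorts need large numbers followed for long periods before statistically meaningful results accrue.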
Confounding versus causality
If an association is found and not explained by bias or chance, the possibility of unrecognised residual confounding still remains. Assessment of whether an observed association is causal depends in part on the biological plausibility of the relation. Certain characteristics of the association, such as an exposure-response gradient, may encourage causal interpretation, although in theory it may still arise from confounding. Also important is the magnitude of the association as measured by the relative risk or odds ratio. The evaluation of possible pathogenic mechanisms and the importance attached to exposure-response relations and evidence of latency are also a matter of judgement (Coggon et al. 2003).
1.3.6. Future directions
Some progress has been made in the area of exposure assessment, but more work is needed in integrating biological indicators into exposure assessment, and much remains to be done with respect to timing of exposures as they relate to induction and latency issues.
An obstacle to analysis of multiple exposures is the near impossibility of separating induction periods, dose-response, and interactive effects from one another. These multiple exposures include not only the traditional chemical and physical agents, but should be extended to social factors as potential effect modifiers.
An emerging issue for environmental epidemiologists is that of variation in susceptibility. This concept is not new: it constitutes the element of the ‘host' in an old paradigm of epidemiology that divided causes of disease into environment, host and agent. It has, however, taken on a new dimension with the current technology that permits identification of genes implicated in many diseases. The study of gene-environment interactions as a means of identifying susceptible subgroups can lead to studies with a higher degree of specificity and precision in estimating effects of exposures.
1.4. Scientific Evidence and the Precautionary Principle
1.4.1. Association between environment and disease
Scientific evidence on associations between exogenous agents and health effects is derived from epidemiological and toxicological studies. As discussed previously, both types of methods have respective advantages and disadvantages and much scientific uncertainty and controversy stems from the relative weights attributed to different types of evidence. Environmental epidemiology requires the estimation of often very small changes in the incidence of common diseases with multifactorial etiologies following low level multiple exposures. For ethical reasons, it is necessarily observational and natural experiments are subject to confounding and to other, often unknown, risk factors (Steenland and Savitz 1997; Coggon et al. 2003). Some progress has been made in the development of specific biomarkers, but this is still hindered by issues surrounding the timing of exposures as they relate to induction and latency. Toxicology on the other hand allows the direct study of the relationship between the quantity of chemical to which an organism is exposed and the nature and degree of consequent harmful effect. Controlled conditions however limit the interpretation of toxicity data, as they generally differ considerably from those prevailing in the natural environment.
Since 1965, evaluations of the association between environment and disease have often been based on the nine ‘Bradford Hill criteria' (Hill 1965).
Results from cohort, cross-sectional or case-control studies of not only environmental but also accidental, occupational, nutritional or pharmacological exposure, as well as toxicological studies, can inform all the Bradford Hill tenets of association between environment and disease. Such studies often include some measure of the strength of the association under investigation and its statistical significance. Geographical studies and migrant studies provide some insights into the consistency of observations. Consistency of observations between studies, with chemicals exhibiting similar properties, or between species should also be considered. Whilst specificity provides evidence of a specific environment-disease association, the lack of it, or association with multiple endpoints, does not constitute proof against a potential association. Time trend analyses relate directly to the temporality aspect of a putative association, i.e. whether trends in environmental release of the chemical agents of interest precede similar trends in the incidence of disease. This is also particularly relevant in the context of the application of the Precautionary Principle, as the observation of intergenerational effects in laboratory animals (Newbold et al. 1998; Newbold et al. 2000) may raise concerns of ‘threats of irreversible damage'. Occasionally, studies are designed to investigate the existence of a biological gradient or dose-response relationship. Plausibility is related to the state of mechanistic knowledge underlying a putative association, while coherence can be related to what is known of the etiology of the disease. Experimental evidence can be derived both from toxicological studies and from natural epidemiological experiments following occupational or accidental exposure. Finally, analogy, where an association has been shown for analogous exposures and outcomes, should also be considered.
1.4.2. Precautionary Principle
A common rationale for the Precautionary Principle is that increasing industrialisation and the accompanying pace of technological development and widespread use of an ever increasing number of chemicals exceed the time needed to adequately test those chemicals and collect sufficient data to form a clear consensus among scientists (Burger 2003).
The Precautionary Principle became European law in 1992 when the Maastricht Treaty modified Article 130r of the treaty establishing the European Economic Community, and in just over a decade it has also been included in several international environmental agreements (Marchant 2003). The Precautionary Principle is nonetheless still controversial and lacks a definitive formulation. This is best illustrated by the important differences between two well-known definitions of the Precautionary Principle, namely the Rio Declaration produced in 1992 by the United Nations Conference on Environment and Development and the Wingspread Statement formulated by proponents of the PP in 1998 (Marchant 2003). One interpretation of the Precautionary Pr