There are around 100,000 different substances registered on the EU market; the chemical industry is Europe's third largest manufacturing industry, generating large economic profits and employing millions of people, directly or in jobs dependent on it (European Commission, 2001).
On the other hand, some chemicals have in the recent past caused serious damage to human health and the environment. The most infamous example is probably the widespread use of DDT, which Rachel Carson, in her book Silent Spring, published in 1962, claimed causes reproductive disorders in birds and cancer in humans. Other well-known examples are asbestos, which causes lung cancer and mesothelioma, and benzene, which leads to leukaemia.
Knowledge about the hazards related to the use of these substances only became available after they had been produced in large quantities: they were thus banned or subjected to other controls only after the damage was done.
EU chemicals policy "must ensure a high level of protection of human health and the environment as enshrined in the Treaty both for the present generation and future generations while also ensuring the efficient functioning of the internal market and the competitiveness of the chemical industry" (European Commission, 2001). This policy is based upon the Precautionary Principle: "Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation" (United Nations).
Specific legislation exists for certain sectors and areas, and other legislation is under development or in the process of being updated. For example, industrial chemicals are regulated under the REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) regulation on chemicals and their safe use (EC 1907/2006). The REACH regulation places the burden of proof on industry, which has to collect or generate data on the risks related to the use of both existing and new chemicals. These data are expected to help close the current information gap on existing chemicals (European Commission, 2001). Other sectoral chemicals legislation includes the Framework Directive on the Sustainable Use of Pesticides (2009/128/EC), the Regulation concerning the Placing on the Market and Use of Biocidal Products (Regulation (EU) No 528/2012, which will repeal and replace Directive 98/8/EC) and the Regulation on Authorization of Plant Protection Products (EC 1107/2009, which has replaced Council Directives 79/117/EEC and 91/414/EEC).
1.1 Risk assessment of agrochemical products
Specifically, the purpose of Regulation EC 1107/2009 is to "ensure a high level of protection of both human and animal health and the environment, and at the same time to safeguard the competitiveness of Community agriculture". For this reason, the regulation should ensure that industry demonstrates that substances or products produced or placed on the market do not have any harmful effect on human or animal health or any unacceptable effects on the environment.
The risk assessment process, in relation to both human health and the environment, is comprised of an assessment of both effects and exposure. According to the Regulation EC 1488/94 (European Commission, 1994), effects assessment comprises the identification of "the adverse effects which a substance has an inherent capacity to cause" (hazard identification), and the assessment of the dose-response relationship. Exposure assessment is instead defined as the estimation of the concentrations to which human populations or environmental compartments are or may be exposed. The risk is then characterised by quantifying the likelihood that adverse effects occur due to actual or predicted exposure to the substance of concern. Characterization of risk is based on the comparison between exposure and toxicological parameters. The concentration to which organisms are exposed can be estimated with predictive models (PEC: Predicted Environmental Concentration), or derived from monitoring data. Concentration of a chemical that does not cause negative effects on ecosystems is usually estimated through extrapolation from laboratory data of acute or chronic exposure, obtained by applying standard methodologies.
Risk assessment procedures commonly follow a tiered testing strategy (Fig. 1.1). Initial risk assessments represent worst-case scenarios: both the ecotoxicological tests and the exposure assessments they are based upon are very simple and very conservative assumptions are made in the assessment factors used. If low risk is indicated at the first tier, usually no further testing is necessary. However, if a chemical fails the initial risk assessment, additional refinements of effects and exposure are often required: the aim of this tiered approach is in fact to focus testing efforts on chemicals that are more likely to cause adverse impacts on human health or the environment. However, there are substantial uncertainties in translating the test responses used in risk assessment to effects of concern in complex ecological systems (Calow and Forbes, 2003).
Two approaches are traditionally followed to extrapolate these estimated measures of toxicity to the ''real world''. The first consists in taking the lowest concentration that caused an effect in the tests conducted and dividing it by a fixed factor (a so-called assessment or safety factor) to obtain a predicted no-effect concentration (PNEC). The alternative is the Species Sensitivity Distribution (SSD) method, which requires calculating a distribution of the sensitivity of species in an ecosystem from laboratory toxicity data on a few representative test species, and estimating from this distribution the maximum toxicant concentration that is protective for most (usually 95%) of the species in the ecosystem (Calow et al., 1997).
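The SSD approach can be sketched in a few lines. The example below fits a log-normal distribution to a handful of hypothetical LC50 values and estimates the HC5, the concentration expected to be hazardous to no more than 5% of species; the toxicity values are invented for illustration only:

```python
# Minimal species sensitivity distribution (SSD) sketch: fit a log-normal
# distribution to hypothetical LC50 values and estimate the HC5, i.e. the
# concentration protective for 95% of species.
import math
import statistics

lc50 = [1.2, 3.5, 0.8, 12.0, 5.6, 2.1, 7.4]  # mg/L, hypothetical species

logs = [math.log10(c) for c in lc50]
mu = statistics.mean(logs)
sigma = statistics.stdev(logs)  # sample standard deviation

z_5th = -1.6449                 # 5th percentile of the standard normal
hc5 = 10 ** (mu + z_5th * sigma)

print(f"HC5 = {hc5:.2f} mg/L")
```

In practice the fit would use more species, confidence limits, and possibly other distributions, but the principle of reading a low percentile off the fitted sensitivity distribution is the same.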
Fig. 1.1. General procedure for environmental risk assessment (Adapted from TGD part 2, chapter 3, page 174).
1.1.1 Terrestrial risk assessment: current practice
The principles of risk assessment described in the previous section are operationalized into different schemes aimed at characterizing the risk for all the groups of non-target organisms that might be affected by the chemical of concern. For practical purposes, each of these schemes, such as those for birds, aquatic organisms, bees and other arthropods, etc., is described in a guidance document, which is applied every time a risk assessment is carried out. As the focus of this thesis is on soil invertebrates, a short description of the relevant risk assessment scheme currently applied is given below.
The guidance document for terrestrial risk assessment (Santé des Consommateurs (SANCO), 2002) states that "General adverse effects on the terrestrial environment" include, among others, "Effects on soil, above-ground and foliar invertebrates, which represent food for other organisms, and cover essential roles as pollinators, detritivores, saprophages, pest controller, etc". The risk assessment scheme for soil invertebrates is, however, rather minimal. Where contamination of the soil is possible, an acute effects test on earthworms is required, while the requirement for a test on sublethal effects (e.g. reproduction) on earthworms depends on the exposure pattern to the active substance (continued or repeated exposure). This test is only required when specified triggers for persistence of the active substance and the number of applications are exceeded. Where the assessment of chronic risk for earthworms gives a TERlt (long-term toxicity - exposure ratio: ratio between the NOEC (No-Observed Effect Concentration) from the reproduction test and the PEC (Predicted Environmental Concentration)) of less than 5, an earthworm field study is required.
A Collembola reproduction test or a test on gamasid mites is required only "where contamination of soil is possible" and DT90f (time it takes until 90 % of the initial amount or concentration has disappeared, estimated in a field study) "is between 100 and 365 days" and the standard hazard quotient for arthropods (Typhlodromus and Aphidius sp.) "is higher than 2" (Santé des Consommateurs (SANCO), 2002). A hazard quotient is defined as the ratio between exposure and toxicity: the higher the figure, the greater the risk. The Collembola test is used as a potential waiver (Santé des Consommateurs (SANCO), 2002) for the litter-bag test, a study used to assess effects, especially of persistent compounds, on the breakdown of litter material by the soil organism community (Organisation for Economic Co-operation and Development, 2006). According to the guidance document (Santé des Consommateurs (SANCO), 2002), "if the litter-bag test is triggered anyway by other criteria (effect on soil micro-organisms > 25 % or TERlt for earthworms < 5) then this test could be omitted".
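The trigger logic described in this section can be made explicit in code. The sketch below is a simplified reading of the SANCO (2002) triggers quoted above, not an implementation of the guidance itself (e.g. the treatment of boundary values is an assumption):

```python
# Hedged sketch of the decision rules described in the text for soil
# invertebrate testing; thresholds follow the SANCO (2002) triggers
# quoted above, inclusive bounds are an assumption.

def collembola_test_required(soil_contamination_possible: bool,
                             dt90_field_days: float,
                             arthropod_hazard_quotient: float) -> bool:
    """All three guidance triggers must hold for the test to be required."""
    return (soil_contamination_possible
            and 100 <= dt90_field_days <= 365
            and arthropod_hazard_quotient > 2)

def earthworm_field_study_required(noec: float, pec: float) -> bool:
    """TERlt = NOEC / PEC; a field study is triggered when TERlt < 5."""
    return noec / pec < 5

print(collembola_test_required(True, 180, 3.1))   # all triggers met
print(earthworm_field_study_required(1.0, 0.5))   # TERlt = 2.0, below 5
```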
1.1.2 The problem of extrapolating effects from the individual to the population level
A widespread concern among stakeholders involved in environmental risk assessment (ERA) is that most of the testing procedures used to characterize the risk posed by plant protection products focus on toxic effects at the level of individuals, while the protection goals of the EU regulation are, with the exception of birds and mammals, at the population level. The pesticide legislation states that plant protection products may not "have any long-term repercussions for the abundance and diversity of non-target species" (European Commission, 2011). Therefore, extrapolating from the individual to the population level is often seen as one of the major challenges in ERA (Forbes et al., 2001; Forbes and Calow, 2002).
Consensus is growing among stakeholders over the fact that ecological modelling is a useful tool for ERA. Ecological effect models could help to improve extrapolation of toxic effect from the individual to the population or community level (Forbes et al., 2008). They also represent a means to incorporate ecological complexities that are disregarded in current risk assessment schemes and that could influence estimates of risk under realistic field conditions: for instance, extrapolation of effects between different exposure profiles (Hommen et al., 2010).
There is increasing evidence that the assessment factor method is not consistent in the level of protection it ensures and can lead to both over- and underprotective risk assessments (Forbes et al., 2008). For instance, the default assessment factor used to extrapolate from acute to chronic toxicity is 10. Re-analysing previously published data (Roex et al., 2000), Forbes and Calow (2002) found that the acute-to-chronic ratio (ACR) was on average 9.1, but ranged between 0.79 and 5,495, which means that in many cases the standard safety factor was either underprotective or overprotective. Furthermore, Hanson and Stark (2012) found that the uncertainty of risk estimates derived from simple matrix models was reduced by more than 88% and by 76% when compared to acute and chronic individual-level data, respectively.
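The inconsistency of a single default factor is easy to demonstrate numerically. The ACR values below are hypothetical, chosen only to span a range similar in spirit to the one reported by Forbes and Calow (2002):

```python
# Numerical illustration of the point above: with acute-to-chronic ratios
# (ACRs) spanning several orders of magnitude, a single default factor of
# 10 is overprotective for some compounds and underprotective for others.
# The ACR values are hypothetical.

acrs = [0.79, 2.5, 9.1, 45.0, 5495.0]
default_factor = 10

for acr in acrs:
    verdict = "underprotective" if acr > default_factor else "adequate or overprotective"
    print(f"ACR = {acr:>8.2f}: default factor of {default_factor} is {verdict}")
```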
Based on this growing evidence, a number of initiatives have been taken in recent years to discuss the inclusion of ecological models as a refining option for the risk assessment of chemicals. Some of these initiatives include:
Pellston Workshop (2003): the workshop dealt with issues of population-level ERA. Its outcomes highlight how current risk assessments lack genuine estimates of the effects of chemicals at the population level, which can lead to poor environmental management decisions (Barnthouse et al., 2008). Both empirical and modelling methods were discussed, as well as how the standard ecological risk assessment framework can be adapted to specifically address populations.
LEMTOX (2007): the workshop brought together stakeholders from academia, regulatory authorities and industry to discuss the role of ecological modelling in ERA of pesticides. Participants agreed on the benefits of using ecological effect models in ERA of pesticides, in terms of exploring the importance of ecological complexities that cannot be tested empirically. They also stressed the need for guidance on Good Modelling Practice, as well as for case studies that explore the added value of ecological models for risk assessment (Forbes et al., 2009).
RUC09 (2009): the workshop focused on addressing the issue of which actions should be taken to implement population modelling into ERA, after pointing out several reasons why population modelling should play an important role in bridging the gap between the protection goals and what is actually measured (Forbes et al., 2011). Unlike the other two initiatives mentioned above, this workshop did not focus on pesticides, but rather on other groups of substances, such as industrial chemicals.
MeMoRisk (2008 - ongoing): a SETAC-Europe Advisory Group on "Mechanistic effect models for ecological risk assessment of chemicals". The advisory group was established as a platform to bring together all stakeholders involved in the regulatory process of ERA (Preuss et al., 2009): the purpose is to take concerted action towards standardisation of ecological modelling approaches for ERA of chemicals, a need strongly emphasized by recent reviews (Pastorok et al., 2003; Grimm et al., 2009). In order to achieve this goal, a number of actions have been promoted by the advisory group (see e.g. CREAM and MODELINK).
CREAM (2009-2013): a European project on mechanistic effect models for ecological risk assessment of chemicals. CREAM is an EU-funded Marie Curie Initial Training Network, an outcome of the LEMTOX workshop, in which both specific models and general guidance for good modelling practice are being developed (Grimm et al., 2009).
MODELINK (2012 - 2013): a series of two SETAC Europe workshops initiated by the MeMoRisk advisory group, focused on the issue of linking ecotoxicological tests to protection goals. The purpose of the workshops is to provide recommendations on how to use ecological effect models to create this link, as well as to define criteria for deciding when to use ecological models in ERA schemes and for choosing among model types.
1.2 Brief review of ecological models for ERA
Exposure models are routinely used in risk assessment to predict the fate of the compound of concern in different environmental compartments, and therefore estimate the concentrations to which organisms in nature may be exposed. On the effect side of risk assessment, models are much less utilised. Aside from statistical models to derive, for instance, dose-response curves and determine concentrations that do not cause adverse effects on the exposed individuals, no other models are regularly used. However, as stated in the previous section, the potential of ecological models for improving risk assessment of chemicals, in particular for plant protection products, is increasingly recognized.
Mechanistic effect models are ecological models that represent key processes necessary to link toxic effects at different levels of biological organization, for instance, from sub-individual to individual and population levels. Mechanistic effect models have been successfully used in a number of ecological applications. Some examples are predictions of recovery time (Van et al., 2007), effects of multiple stressors (Ashauer et al., 2007a), interaction of toxicant effects with life history (Stark and Banken, 1999; Stark et al., 2004), density dependence (Forbes et al., 2001; Forbes et al., 2003) and landscape structure (Topping et al., 2003; Thorbek and Topping, 2005; Topping et al., 2005).
Within the broad spectrum of existing ecological models, three major types can be identified in the context of chemical risk assessment: differential and difference equations, matrix models, and individual- or agent-based simulation models.
Differential and difference equations models
Two main categories of ecological models for risk assessment of chemicals lie within this type: Toxicokinetic-toxicodynamic models (TKTD), and Dynamic Energy Budget (DEB) models.
TKTD models simulate "the time-course of processes leading to toxic effects on organisms" (Jager et al., 2011). Toxicokinetics convert an external concentration of a toxicant to an internal concentration over time through the processes of uptake and elimination, while toxicodynamics quantitatively link the internal concentration to the effect at the level of the individual organism over time (Jager et al., 2011). TKTD models have been used successfully to extrapolate toxic effects between different exposure scenarios (Ashauer et al., 2007b), and to explain effects of mixtures over time (Jager et al., 2010).
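The toxicokinetic half of such a model can be sketched as a one-compartment system, dC_int/dt = k_in · C_ext(t) − k_out · C_int, integrated here with a simple Euler scheme. The rate constants and the pulsed exposure profile are hypothetical, and this is a generic illustration rather than any published TKTD implementation:

```python
# Minimal one-compartment toxicokinetic sketch: the internal concentration
# follows dC_int/dt = k_in * C_ext(t) - k_out * C_int, integrated with a
# simple Euler scheme. All parameter values are hypothetical.

def internal_concentration(c_ext, k_in, k_out, dt=0.1):
    """Return the internal concentration time series for an external
    exposure profile c_ext (one value per time step)."""
    c_int = 0.0
    series = []
    for c in c_ext:
        c_int += dt * (k_in * c - k_out * c_int)  # Euler step
        series.append(c_int)
    return series

# Two exposure pulses separated by a recovery phase.
profile = [1.0] * 50 + [0.0] * 100 + [1.0] * 50
result = internal_concentration(profile, k_in=0.5, k_out=0.2)
print(f"peak internal concentration: {max(result):.3f}")
```

The same body-residue time course, coupled to a damage or hazard model, is what allows TKTD approaches to extrapolate between the different exposure profiles mentioned above.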
The dynamic energy budget (DEB) theory for metabolic organisation specifies quantitatively the processes of uptake of food by organisms and its use for the purposes of maintenance, growth, maturation and reproduction (Kooijman, 2000). In the standard DEB model, individuals are considered equal, feed on a single food source and have three life-stages: embryo, juvenile and adult (Kooijman, 2000). The basic DEB theory has been extended to also include effects of chemical compounds (DEBtox). Effects at the individual level are expressed in terms of uptake, elimination and (metabolic) transformation of the compounds (Kooijman et al., 2009), and are linked to the energy budget through toxicokinetics relationships. Effects at the population level are instead evaluated from those at the individual level, by considering populations as a set of interacting individuals (Kooijman et al., 2009): effects of a toxicant on the energy allocation of single individuals are linked to the consequences for the populations.
Matrix models
Demographic models describe individuals in terms of their contribution to recruitment and their survivorship. A convenient and widely used mathematical formulation of age- or stage-structured demographic models is based on linear algebra: the use of matrices in fact provides the advantage of a relatively simple representation of the underlying biological phenomena, and an equally simple analysis of the model (Charles et al., 2009).
The complexity of matrix population models varies widely. Such models can incorporate, if necessary, density-dependence and demographic and environmental stochasticity (Caswell, 2001).
They can also incorporate a spatial dimension, which is useful to model spatially fragmented populations, in what are called metapopulation models (Hanski and Gilpin, 1991). Metapopulations are systems of local populations connected by dispersing individuals. Most individuals are born and die within a local population (Hanski and Gilpin, 1991); individual variability within local populations is generally ignored in metapopulation models.
Projection matrix models can incorporate effects of toxicants on all vital rates, allowing their influence on population dynamics to be evaluated (Forbes and Calow, 2002), and can therefore be a relevant tool for ecotoxicology and environmental risk assessment.
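The link between toxicant effects on vital rates and population-level outcomes can be illustrated with a minimal two-stage matrix model; all vital rates and effect sizes below are hypothetical, and the asymptotic growth rate is estimated by straightforward iteration rather than with a linear-algebra library:

```python
# Illustrative stage-structured matrix model: project a population with
# and without a hypothetical toxicant that reduces fecundity and juvenile
# survival, and compare the resulting asymptotic growth rates (lambda).

def project(matrix, n, steps=200):
    """Iterate n(t+1) = A n(t); return the final per-step growth rate,
    which converges to the dominant eigenvalue of A."""
    growth = 1.0
    for _ in range(steps):
        prev_total = sum(n)
        n = [sum(a * x for a, x in zip(row, n)) for row in matrix]
        growth = sum(n) / prev_total
    return growth

# Stages: juvenile, adult. Hypothetical vital rates.
control = [[0.0, 4.0],   # adult fecundity
           [0.3, 0.5]]   # juvenile survival, adult survival

# Toxicant assumed to halve fecundity and cut juvenile survival by 30%.
exposed = [[0.0, 2.0],
           [0.21, 0.5]]

lam_c = project(control, [10.0, 10.0])
lam_e = project(exposed, [10.0, 10.0])
print(f"lambda (control) = {lam_c:.3f}, lambda (exposed) = {lam_e:.3f}")
```

In this invented example the control population grows (lambda > 1) while the exposed one declines (lambda < 1), even though no single vital rate is reduced to zero: exactly the kind of population-level conclusion that individual-level endpoints alone do not provide.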
Individual- and agent-based models
According to the definition given by Grimm (1999), individual-based models (IBMs) are "simulation models that treat individuals as unique and discrete entities which have at least one property in addition to age that changes during their life cycle, e.g. weight, rank in a social hierarchy, etc.".
IBMs are particularly well-suited to studying systems that are heterogeneous in both space and time, as they allow single individuals to be modelled, each characterised by its own state variable values. Individuals interact with, and can adapt to, their surrounding environments, and with IBMs it is possible to investigate how different conditions affect their life history and behaviour. IBMs allow researchers to study how system-level properties emerge from the adaptive behaviour of individuals (Railsback, 2001; Strand et al., 2002) as well as how, conversely, the system affects individuals.
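A deliberately minimal sketch makes the structure of an IBM concrete: each individual is a discrete entity with its own state (here age and weight), consistent with Grimm's (1999) definition quoted above. All rates and rules are invented for illustration:

```python
# Minimal individual-based model sketch: individuals are discrete objects
# with their own state; population dynamics emerge from individual-level
# survival, growth and reproduction rules. Parameters are hypothetical.
import random

random.seed(42)

class Individual:
    def __init__(self):
        self.age = 0
        self.weight = 1.0

    def step(self):
        self.age += 1
        self.weight *= 1.1                    # individual growth
        survives = random.random() > 0.1      # 10% mortality per step
        offspring = ([Individual()]
                     if self.age > 3 and random.random() < 0.4
                     else [])                 # adults reproduce stochastically
        return survives, offspring

population = [Individual() for _ in range(20)]
for t in range(20):
    next_gen = []
    for ind in population:
        survives, offspring = ind.step()
        if survives:
            next_gen.append(ind)
        next_gen.extend(offspring)
    population = next_gen

print(f"population size after 20 steps: {len(population)}")
```

Spatially explicit IBMs of the kind used later in this thesis add a landscape grid and movement rules on top of this skeleton, so that exposure, too, becomes an individual-level property.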
Current use of IBMs for ecological risk assessment is limited: in their literature review, Schmolke et al. (2010b) found that only 13% of the models reviewed were IBMs, but their potential is increasingly recognized (Topping et al., 2009).
1.3 Good modelling practice and guidance documents on ecological models
While the potential benefits of using ecological models in risk assessment are clearly recognized, their actual use is not yet established. One of the main reasons for this is that current modelling practices are not transparent and consistent enough (Grimm et al., 2010a). Ecological models are developed for different purposes, which often leads to a great variety of model types and modelling styles (Schmolke et al., 2010a). This in turn generates confusion and distrust in users who are not familiar with the modelling process, and discourages them from using the models in a legal context.
In their review, Schmolke et al. (2010b) identified the main areas of concern about current modelling practice as unknown sensitivities and uncertainties of model predictions, unclear sources of parameterization, and lack of thorough model analysis.
Therefore, to ensure the suitability of ecological models for the risk assessment of chemicals, "good modelling practice" (GMP) is needed. The elements to address are nothing new, as they have already been described (e.g., Jakeman et al., 2006): they can be summarised as model development, analysis, evaluation, documentation, and communication. What is really necessary to establish GMP is sufficient involvement of decision makers and stakeholders in the modelling process and some incentives for modellers to follow it (Schmolke et al., 2010a; Thorbek et al., 2009).
Instead of inventing a completely new format, Schmolke et al. (2010a) propose, as a solution, looking at other initiatives which have proven to work well. The example they suggest is a bottom-up process that has been tried recently for documenting individual- and agent-based models: the ODD protocol proposed by Grimm et al. (2006), which provides an overview, design concepts and details about the model.
This approach is becoming more and more popular among individual- and agent-based modellers: ODD has already been used in more than 50 publications (Grimm et al., 2010b). Reviewing the uses of ODD to date, the protocol's authors observed that using a standard structure to describe models increased the understanding of model descriptions, because readers knew what information about a model was provided where and in what order. Furthermore, it has promoted rigorous model formulation, as modellers started using ODD as a hierarchical checklist when formulating models (Grimm et al., 2010b).
Fig 1.2. Structure of a TRACE document (adapted from Schmolke et al. (2010a)).
Schmolke et al. (2010a) therefore suggest following the same kind of bottom-up process to establish good modelling practice through a more or less self-organizing process: for this purpose they introduce a standardized documentation of ecological models, the so-called framework for transparent and comprehensive ecological modelling (TRACE). Fig. 1.2 summarizes the TRACE documentation structure: the sequence of the elements corresponds to the sequence of tasks in the iterative modelling cycle (Schmolke et al., 2010a).
1.4 Aim of the thesis
The aim of the present thesis is to develop, test and use, following the principles of good modelling practice, a combination of metapopulation modelling and individual-based modelling to predict the impacts of spatial heterogeneity in soil contaminant levels on the population dynamics of the collembolan Folsomia candida. In order to develop models that better suit the needs of environmental risk assessment, I also participated in a study that aimed at clarifying how ecological models are perceived by stakeholders involved in ERA of chemicals and what should be done to get them accepted in ERA procedures.
In Chapter 2 I contributed to a study of the perspectives of three stakeholder groups on population modelling in ERA of pesticides, by analysing the responses from 43 in-depth, semi-structured interviews conducted with stakeholders from regulatory authorities, industry, and academia across Europe. Participants for this study were recruited using the key informant approach: they were first identified as key stakeholders in the field and then sampled by means of purposive sampling, where each stakeholder identified as important by others was interviewed and asked to suggest another potential participant for the study.
In Chapter 3 I present the spatially explicit individual-based population model I developed to investigate the effects of heterogeneous soil contamination on F. candida. In the model, individuals are assumed to sense and avoid contaminated habitat with a certain probability that depends on contamination level. Avoidance of contaminated areas thus influences the individuals' movement and feeding, their exposure, and in turn all other biological processes underlying population dynamics. A large part of the chapter is dedicated to describing how the model has been developed, parameterized, tested and evaluated according to the pattern-oriented modelling theory.
The same model is used in Chapter 4 to explore how different patterns of microscale habitat fragmentation, caused by the presence of a persistent pollutant in soil, interact with disturbance events, both natural (e.g. drought) and anthropogenic (e.g. pesticide applications), to affect the population dynamics of F. candida. Copper sulphate was used to simulate the loss and fragmentation of habitat caused by a persistent contaminant. A midpoint displacement algorithm was implemented in the IBM to generate fractal landscapes with varying degrees of spatial autocorrelation and percentages of contaminated habitat. Other submodels introduced in the IBM for this study include procedures simulating the effects of a drought period and of disturbance events on survival and/or reproduction.
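The midpoint displacement idea can be sketched briefly, here in one dimension for compactness: midpoints are repeatedly inserted and displaced by random offsets whose amplitude shrinks by 2^−H at each level, where the Hurst exponent H (0 < H < 1) controls spatial autocorrelation, and thresholding the resulting surface yields a chosen fraction of contaminated habitat. This is a generic illustration of the technique, not the submodel used in Chapter 4; all parameter values are invented:

```python
# Sketch of midpoint displacement for generating a fractal contamination
# profile; higher Hurst exponents give smoother, more autocorrelated
# surfaces. One-dimensional for brevity; parameters are illustrative.
import random

def midpoint_displacement(levels, hurst, seed=1):
    rng = random.Random(seed)
    profile = [0.0, 0.0]
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for a, b in zip(profile, profile[1:]):
            refined.append(a)
            refined.append((a + b) / 2 + rng.uniform(-amplitude, amplitude))
        refined.append(profile[-1])
        profile = refined
        amplitude *= 2 ** -hurst   # smaller displacements at finer scales
    return profile

def contaminate(profile, fraction):
    """Mark roughly the highest `fraction` of cells as contaminated."""
    cutoff = sorted(profile, reverse=True)[int(len(profile) * fraction)]
    return [v > cutoff for v in profile]

surface = midpoint_displacement(levels=8, hurst=0.7)
landscape = contaminate(surface, fraction=0.3)
print(f"{len(surface)} cells, {sum(landscape)} contaminated")
```

A two-dimensional landscape is obtained with the same logic applied to a grid (the diamond-square variant), varying the Hurst exponent and the threshold to cross spatial autocorrelation with the percentage of contaminated habitat.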
In Chapter 5 I take the individual-based model described in the previous chapters and contrast it with a simpler, more standardized approach based on the generic metapopulation matrix model RAMAS. With the two models I then explore the consequences of model aggregation for assessing population-level effects under different spatial distributions of a toxic chemical. With this comparison I try to shed light on the factors that should drive the choice of model type in ERA of chemicals.
Finally, in Chapter 6, I discuss the findings of my thesis, especially the simulation results of both models, in both a specific and a wider perspective, and consider how the models and their results can be used to inform risk assessment.