
Quantitative Techniques in Evaluating Environmental Policy Effectiveness

Evaluating the effectiveness of environmental policy is confronted with a multitude of complexities due to the novel, uncertain and contextually dependent dynamics of the environment. Policymakers and practitioners alike depend upon reliable information on policy effectiveness in order to evaluate and understand the impacts of policies and their ability to achieve desired outcomes (Pawson and Tilley, 1997). Furthermore, the evaluation of environmental policy is a highly contested and heterogeneous process with no standard means of enquiry and investigation, complicated by both exogenous and endogenous variables and influences (Amaratunga et al., 2002).

These complexities are exemplified within the academic debate, with authors such as Ferraro (2009) and Greenstone and Gayer (2009) arguing for greater evaluation through evidence-based means: establishing solid counterfactuals so that evaluations can be grounded in developed baselines showing what would have happened had there been no intervention. Others, such as Mickwitz (2003) and Gysen et al. (2006), argue instead for a wider contextualization and consideration of both tangible and intangible dynamics, regarding such baselines as questionable when formed through simplistic interpretations of confounding factors. Furthermore, Sanderson (2002) explores the limitations of both approaches given the dynamic context in which environmental policy finds itself. What is not contentious is the call for greater emphasis on evaluating effectiveness, and the need for a more holistic understanding of the real impacts of policy and where improvements can be made (Andam et al., 2008).

After defining what effectiveness means within environmental policy evaluation and establishing its importance, this essay explores the role that evidence-based quantitative analysis plays in evaluation techniques. There follows a critical evaluation of the strengths of implementing such methodologies, and of whether they are really the potential panacea that Ferraro (2009) and Andam et al. (2008) posit. This is followed by a discussion of the importance of establishing the counterfactual and of the complications that confounding factors pose for the baseline, a fundamental necessity of any quantitative analysis (Ferraro, 2009). The discussion is supplemented with two examples identifying the strengths and weaknesses of quantitative methodologies, and concluded by strengthening the debate surrounding the incorporation of more holistic methodologies (Mickwitz, 2003) and their policy implications.

The evaluation of effectiveness

Environmental policies are enacted in order to prevent or minimize the deterioration of the many services and components of the natural environment (Lundqvist, 1996: 16). To ensure that these policies achieve their purposes, evaluation plays a critical role in determining the impacts and extent of the policy intervention. To provide a descriptive indicator of a policy's operational effectiveness, the establishment of a historical baseline enables evaluators to compare the scale of the policy's intervention against a counterfactual scenario of no intervention (Ferraro, 2009). But as indicated in the introduction, effectiveness evaluations are littered with complexity and uncertainty (Parson, 1997: 2), complicating the accuracy of any baselines and the comparability they afford (EEA, 2001).

Defining and Evaluating Effectiveness

The literature is saturated with definitions of effectiveness, with most alluding to the EEA (2001) report which termed the evaluation of effectiveness 'the process of judging whether and how far the observed effects of a policy measure up to its explicit objectives' (EEA, 2001: 19). Essentially, effectiveness is the degree of correlation of policy outcomes to intentions (Mickwitz, 2003: 426) and how successfully the policy solved its designated task (Zaelke et al., 2005). Effectiveness itself is subjective in that it can refer to various types of outcome: institutional effectiveness, target-group effectiveness, impact effectiveness, societal effectiveness and side-effectiveness (Gysen et al., 2006: 99).

The rationale behind the evaluation of effectiveness centers upon determining and understanding the actual and potential impacts of intervention policies (Palmer, 2010). As emphasized by Vedung (1997: 3), evaluation is a retrospective assessment of the merit, worth and value of the outcomes of government interventions, which, when analysed, plays an invaluable role in future policy situations. Once the assessment criteria are determined, the evaluation provides a means to utilize methodological tools in ascertaining the performance of a policy (Palmer, 2010). Subsequently, this enables the improvement of policies and a wider contribution to the generation of knowledge surrounding public policy action and options (Sanderson, 2002: 3).

Thus, evaluations of effectiveness are about finding value-based answers as to how policies are performing in relation to specified targets (Ferraro and Pattanayak, 2006). The most effective means of comparison is generating a counterfactual perspective, which compares the outcomes of the policy with those that would have occurred had the policy not been implemented (Cameron, 2006).
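To make this logic concrete, the following is a minimal sketch, in Python and with invented figures, of the arithmetic behind a counterfactual comparison: an observed post-intervention outcome series is set against a modelled no-policy baseline, and the policy effect is read off as the difference between the two.

    # Hypothetical outcome series: 'observed' is measured after the policy
    # intervention; 'counterfactual' is a modelled baseline of what would
    # have happened without it. All numbers are invented for illustration.
    observed = [102, 98, 95, 93, 90]            # e.g. emissions after intervention
    counterfactual = [102, 104, 107, 109, 112]  # modelled no-policy trajectory

    # Per-period policy effect: observed outcome minus the counterfactual.
    effects = [o - c for o, c in zip(observed, counterfactual)]
    average_effect = sum(effects) / len(effects)

    print(f"Per-period effects: {effects}")
    print(f"Average policy effect: {average_effect:.1f}")

Trivial as the arithmetic is, the sketch makes the essay's central difficulty visible: every figure in the counterfactual series must be modelled rather than observed, so the estimated effect is only as credible as the baseline behind it.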

Counterfactuals establish the what-ifs: the alternative histories that would potentially have unfolded had the present policy condition remained unchanged (McCloskey, 1987). Various scholars have argued that the attempt to identify counterfactual outcomes within the social sciences is quixotic (Ferraro, 2009), largely due to the multiplicity of actors and contexts, whilst others have pushed fervently for greater inclusion of counterfactual comparisons within all forms of evaluation (Pfaff and Robalino, 2009). Under both premises, the confounding element in establishing a counterfactual scenario is the ability of policy evaluators to establish a baseline and to interpret potential confounding factors in order to understand causal linkages.

Conversely, within the context of environmental policy, establishing causality using counterfactual scenarios is fraught with fundamental difficulties. Due to the temporal and spatial complexities of environmental issues, such as the extraneous effects surrounding climate change, understanding, certainty and any notion of causality are highly questionable (Mickwitz, 2003). Additionally, complications arise concerning generalization and dissemination, as what works in one area may not be transferable, given the differing impacts of confounding factors and endogenous variations (Palmer, 2010). There is also a scarcity of longitudinal data and established monitoring, owing to the immaturity of environmental policy evaluation (Gysen et al., 2006), which limits the empirical availability of datasets and the establishment of best practices and knowledge. Nevertheless, as published by the EEA (2005), these limitations are demanding but not insurmountable, and the complexity of evaluation is manageable with the correct methodological makeup (Scott, 2007).

Through establishing a counterfactual scenario, evaluators are able to exemplify the causality of the policy's impact and to compare its effects with those had there been no policy in place, providing an explicit indication of effectiveness. Yet in order to establish a counterfactual scenario, data and information need to be collected, reported and analysed (EEA, 2001). The ability to establish the counterfactual and baseline, and to measure confounding factors, is surrounded by notable methodological debate (Ferraro, 2009; Cameron, 2006). Quantitative, qualitative and mixed-method approaches have all been implemented in trying to establish counterfactual comparisons, with epistemological and methodological debate surrounding the most efficient and effective means of analysis (Amaratunga et al., 2002). The next segment explores a portion of this debate and critically evaluates the strengths and weaknesses of implementing a quantitative methodology in pursuit of the evaluation of effectiveness.

Quantitative methodologies in evaluating effectiveness

Quantitative methodologies are centered on the exploration, presentation, description and examination of relationships between data points (Saunders et al., 2009: 414). Generally grounded within positivist notions of objectivity, quantitative methodologies within environmental policy assessment typically focus on the generation of causality through the codification and interpretation of tangible factors, leading to their simplification and subsequent operationalisation and measurement (Saunders et al., 2009). The primary benefit of a strictly quantitative approach to environmental policy evaluation is its ability to abstain from framing within a politicized, interpersonal environment heavily contingent upon bias (Sanderson, 2002). Rather, quantitative methods often lay the foundation for a more systematic evaluation of policies through inferential, scientific procedures (Scott, 2007).

The establishment of counterfactual scenarios is typified by two primary quantitative tools: randomized experimental policy trials and quasi-experimental methods. In randomized experimental policy trials, randomly assigned participants are exposed to a policy or action (Greenstone and Gayer, 2009). These participants are compared with a statistically identical control group to determine the differences in outcomes between the baseline (control) and counterfactual (exposed) samples. Quasi-experimental methods employ the same rationale as randomized experimental trials but rely on nature or policy, for example, to act as independent variables (Greenstone and Gayer, 2009). Thus, policy participants are able to be 'matched' with non-participants in order to gauge the exogenous variation due to the policy intervention (Palmer, 2010).
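As a point of reference, the sketch below, with entirely hypothetical outcome data, shows the estimator implied by the randomized-trial logic: because assignment is random, the control-group mean stands in for the counterfactual outcome of the treated group, and the policy effect is estimated as the difference in group means.

    import random
    import statistics

    random.seed(42)

    # 200 hypothetical units, randomly split into policy-exposed and control.
    units = list(range(200))
    random.shuffle(units)
    treated, control = set(units[:100]), set(units[100:])

    # Invented outcomes: the policy is assumed to shift outcomes up by 5 units.
    outcome = {u: random.gauss(50, 10) + (5 if u in treated else 0) for u in units}

    treated_mean = statistics.mean(outcome[u] for u in treated)
    control_mean = statistics.mean(outcome[u] for u in control)

    # Random assignment lets the control mean proxy the counterfactual.
    print(f"Estimated policy effect: {treated_mean - control_mean:.2f}")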

Example 1: The effectiveness of the US Endangered Species Act (US ESA) – utilising matching techniques (Ferraro et al., 2007)

Ferraro et al. (2007) utilize comparative matching techniques in order to gain scientific insights into the effectiveness of the US ESA in influencing the decline of various species. By creating a counterfactual scenario through matching a group of protected species with a group of related, unprotected species across identical time frames, the researchers were able to create an effective baseline for comparability. Covariates of both groups were matched with species of similar taxonomies, with the counterfactual outcome being the average of the four nearest-neighbor matching samples (Ferraro et al., 2007). This ensured similarities between the groups in order to control for endogenous variations such as level of endangerment, biological characteristics, political influences, scientific knowledge and advocacy.

The study found that, after controlling for selection bias, no statistically significant difference emerged between how protected species fared and the counterfactual (Ferraro et al., 2007: 255-256). Interestingly, listed species fared worse if no funding was present during their protection period, which provides an interesting insight into the potential side-effects of species-protection policies.

Experimental designs as an evaluation tool enable the representation of the counterfactual by allowing evaluators to install enough variation in instrumental variables to permit an accurate comparison and measurement of policy impacts (Ferraro, 2009: 80). By matching two statistically identical samples, a counterfactual can be discerned and inferences drawn from the differences exhibited between the groups (Greenstone and Gayer, 2009). In this case, endangered species are matched with similar species, with the external environment held constant, to determine whether they would have fared differently had they not been on the endangered list.
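To illustrate the mechanics, the following is a simplified sketch of nearest-neighbor covariate matching in the spirit of the approach described above; it is not Ferraro et al.'s actual estimator, and all covariates and outcomes are invented. Each protected unit is matched to its four most similar unprotected units on a numeric covariate vector, and the counterfactual outcome is taken as the average of those four matches.

    import math

    def distance(a, b):
        """Euclidean distance between two covariate vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # (covariates, outcome) pairs: covariates might encode endangerment level,
    # biological traits, and so on; the outcome is a recovery score. Invented.
    protected = [([0.8, 0.3], 0.55), ([0.5, 0.7], 0.40)]
    unprotected = [([0.7, 0.4], 0.50), ([0.9, 0.2], 0.60), ([0.4, 0.6], 0.35),
                   ([0.6, 0.8], 0.45), ([0.5, 0.5], 0.42)]

    effects = []
    for covs, outcome in protected:
        # Four nearest unprotected neighbors by covariate distance.
        matches = sorted(unprotected, key=lambda u: distance(covs, u[0]))[:4]
        counterfactual = sum(m[1] for m in matches) / len(matches)
        effects.append(outcome - counterfactual)

    print(f"Average matching estimate of the protection effect: "
          f"{sum(effects) / len(effects):.3f}")

The weakness discussed below is visible even in this toy version: the estimate is only as good as the covariates chosen, and any relevant similarity the covariates fail to capture biases the counterfactual.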

Quantitative methodologies have the capacity to influence and strengthen evaluations of effectiveness considerably. Their ability to establish, with evidence-based rigor, the counterfactuals of policies provides evaluators with more inferential indicators of policy effectiveness. The ability to model reality through quantitative representation allows evaluators to emulate past events as baseline figures, whilst remaining impartial and objective because evidence-based research is replicable (Parson, 1997; Rao and Woodcock, 2003). Additionally, larger samples can be employed to make wider generalizations that would otherwise not be possible under more interactive methodologies (Rao and Woodcock, 2003).

Unfortunately, quantifying reality is often fraught with complications which expose various weaknesses inherent in adopting quantitative methodologies. Matching is often burdened with complications, as covariates are often assumption-based (Mickwitz, 2003). In the case of the endangered species above, assumptions are made that species are similar without considering whether the species held prior intrinsic or lexicographic values (Spash and Hanley, 1995), or whether, given the federal complexities of the USA, certain states exhibited significant variations in policy enforcement or monitoring.

Environmental policy evaluation is compounded by technical, ecological and humanistic difficulties. Any methodology, quantitative included, will encounter significant difficulties in offering a holistic means of evaluation when confronted with high levels of uncertainty and dynamism (Parson, 1997). Essentially, the major weaknesses surrounding quantification result from the complexities inherent in understanding endogenous features such as humans, ecosystems, institutions, and the temporal and spatial realities of the environment. These are fraught with asymmetric information, irrationality and generally high levels of contention and uncertainty.

Weaknesses

Firstly, the notion of confounding effects groups together many of these dynamics by capturing the wider influences facing a policy intervention that could potentially affect, or account for, the variations exhibited against the counterfactual (Gysen et al., 2006). Essentially, confounding effects are those that may be prevalent in the bigger picture and therefore correlate with both focus groups. Thus, if these effects are not observed and controlled for, any inferences drawn from the deemed causality of the policy's effectiveness must account for and understand the relationships between the confounding variable and the outcome. In environmental policy evaluation, confounding effects are particularly hard to observe and quantify without elements of estimation, largely due to high levels of uncertainty and complexity.

A major weakness of quantitative evaluation lies in its ability to quantify and adequately account for the environment's confounding effects. The biophysical and social spheres interact contemporaneously with environmental policy at all levels, with many of the confounding trends being unobservable and often laced with uncertainty and contention (Kleijn and Sutherland, 2003). The issue here is that many environmental effects are not quantifiable, let alone understood, using today's knowledge. The full utility of ecosystem services, the spatial complexities of climate change, the rationality of individual actors, societal belief systems and demographics all face large levels of uncertainty. As indicated by Wynne (1992), policy evaluation is plagued with indeterminacies: so little is known about the relevant parameters and the interacting covariates (Mickwitz, 2003: 418) that large amounts of 'guesstimation' result, limiting interpretability (Gysen et al., 2006).

Thus, the validity of any inferred causality in quantitative evaluations of the environment needs to be laced with caveats of uncertainty. Small errors in calculations, especially regarding climate change, may lead to large variations in baselines and render subsequent calculations redundant (Rabl and van der Zwaan, 2009). Therefore, if underlying complexities cannot be suitably quantified into the equation, how are solid baselines to be generated to allow comparability against the counterfactual, and what other effects do these confounding variables foretell?
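A toy calculation, again with invented numbers, makes the point about baseline sensitivity concrete: when the nominal policy effect is small relative to the baseline, even a modest relative error in the modelled baseline can swallow the estimate entirely.

    observed = 95.0               # measured outcome after intervention
    baseline = 100.0              # modelled counterfactual baseline
    # Nominal effect is baseline - observed = 5.0; now perturb the baseline.
    for error in (0.01, 0.03, 0.05):          # 1%, 3%, 5% baseline error
        low, high = baseline * (1 - error), baseline * (1 + error)
        print(f"{error:.0%} baseline error -> effect between "
              f"{low - observed:.1f} and {high - observed:.1f}")

At a 5% baseline error the estimated effect ranges from 0.0 to 10.0: the intervention could look anywhere from useless to twice as effective as the point estimate suggests.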

Complications on Baselines

The construction of a baseline is fundamentally dependent upon the effective measurement and validity of the confounding variables. Baselines rely on being representative of a counterfactual situation in order to contribute to the evaluation process. If the confounding effects cannot be quantified and interpreted within the model, then the baseline's function as a measure of pre-intervention conditions and behavior will be of questionable validity. The value of the baseline is that it enables evaluators to compare policy effectiveness against what would have happened without intervention (Pfaff and Robalino, 2009: 194). But because of the complications of establishing representative data, given large levels of uncertainty and measurement error (Ferraro, 2009: 79), the potential for a verifiable baseline is largely diminished, imposing a significant caveat on most forms of environmental evaluation: how can one control for, and later interpret, variables that cannot be completely understood?

Additionally, complications arise within the matching process in the form of side-effects, the unintended effects of the intervention (Gysen et al., 2006: 97). Various side-effects are measurable, such as the funding-related finding in the species-protection case (Example 1). Yet some are not easily quantifiable, nor can they be inferred as a direct result of the intervention. As exemplified in Mickwitz (2003), a major question arises as to how much change can be attributed to the intervention amongst other external forces. The example given is of decreasing water discharges and air emissions, and whether these can be solely attributed to the intervention or to wider factors such as consumer demand and technological development (Mickwitz, 2003: 429). Thus, these anticipated and unanticipated side-effects cannot be observed in isolation, even by intervention matching (Ferraro and Pattanayak, 2006: 0486); within any one policy system, various heterogeneous processes interact to complicate the isolation of policy effectiveness.

'For unanticipated side-effects we need to use several other methods, including: analysis of environmental policy literature on a specific issue; results of previous evaluations; ex ante estimations of possible costs and benefits; and finally, and crucial in our opinion, expert interviews with a variety of actors, who have been close to implementation of the policy.' (Gysen et al., 2006)

Society and Equity

Finally, notions of society and equity are often unaccounted for in quantitative evaluations. In order to establish any sufficient level of causality between intervention and effects, the often unobservable features of societal interaction, institutions and demographics need to be incorporated (Gysen et al., 2006). These are all fundamental to the evaluation of effectiveness (Mickwitz, 2003; EEA, 2001), bearing on questions of the policy's relevance and utility, and are often confronted through the evaluation of side-effectiveness and normative inferences. These inferences, associated with identity, perception, cultural norms and beliefs, cannot be quantified or compared across two time points (Rao and Woodcock, 2003). Researchers often use context-specific household surveys or random sampling to generate value judgments, but these incur the obvious weakness of representation, as such variables are difficult enough to quantify, let alone operationalise into usable indicators that provide holistic representation (Parson, 1997).

Weaknesses: Conclusion

As depicted above, the fundamental weaknesses confronting quantitative evaluations center on the quantification of environmental and humanistic/societal complexities. Policy evaluation is driven by contextual dependencies, with institutions, politics, societal incongruences and demographic variations all playing a role in the foundation and interpretability of the counterfactual. Quantitative analysis may hold significant descriptive ability in exemplifying outcomes such as those presented in the species example, but it possesses very little ability to interpret and consider wider impacts: the ultimate effects of policies (EEA, 2001).

For any evaluation to verifiably make more than inferential descriptions, greater incorporation of these dynamics will be needed (Parson, 1997). Unfortunately, given limited knowledge and understanding of these uncertainties, complete understanding of policy causality may be idealistic; but there are evaluation tools which may increase our understanding of causality and enable evaluators to interpret the effectiveness of policy interventions with greater rigor and validity.

Policy Advice and Conclusions

Critically evaluating the utilization of quantitative methodology in evaluating the effectiveness of environmental policies, this essay finds that quantification offers enlightening prospects in the form of greater conceptualizations of counterfactuals, but fails to adequately describe causality within its constructs. As is often the case within environmental policy, uncertainty and heterogeneous factors play a commanding role in complicating any attempt at a holistic appraisal. Fortunately, some of the weaknesses inhibiting the effectiveness of quantitative evaluations can be addressed by alternative methodologies, predominantly from the social sciences, which attempt to legitimize the findings and provide greater descriptive quality, affording policymakers a better understanding of causality and policy effectiveness.

Inclusion of greater interventions and participation

As indicated by the EEA (2001), only 12% of policy evaluations had utilized descriptive narratives alongside statistical findings. This is a striking statistic, as the omission considerably dilutes the inferences that can be made about process issues and the absolute impacts of policy intervention (EEA, 2001; Rao and Woodcock, 2003: 167). Viewed differently, the statistic offers significant scope for the improvement of effectiveness evaluations by supplementing quantitative requirements with a richer understanding of policies through qualitative narrative building and wider participation (EEA, 2001). The rationale behind these alternative approaches, set within the phenomenological schools of thought, centers on attempting to induce rather than deduce causation (Amaratunga et al., 2002).

Two potential options for operationalising these intentions are Multi-Criteria Analysis (MCA) and intervention theories. Firstly, MCAs are no panacea and cannot resolve all the uncertainties faced by evaluators, but they are able to provide further descriptive narrative to the formulation of causality (Mickwitz, 2003). MCA comes into its own when faced with complex decisions, allowing judgments to be made by supplementing evidence-based studies with normative interpretations and valuations of both quantifiable and non-quantifiable variables. This enables more of the complexities surrounding environmental decision-making to be factored into counterfactual comparisons, so that a policy can be evaluated from a more holistic standpoint.
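As a simple illustration, the weighted-sum sketch below shows one common form of MCA; the criteria, weights and scores are entirely hypothetical, and real MCAs range from this kind of linear scoring to far more elaborate outranking methods. The point is that quantitative evidence (for instance a matching estimate) is placed on a common scale alongside normative, expert-scored criteria.

    # Hypothetical criteria and weights; the weights sum to 1.0.
    criteria_weights = {
        "measured_effect": 0.4,     # evidence-based, e.g. from a matching study
        "equity_impact": 0.3,       # expert/stakeholder judgment
        "institutional_fit": 0.2,   # expert judgment
        "side_effects": 0.1,        # expert judgment (higher = fewer harms)
    }

    # Invented scores for two candidate policies, each on a 0-1 scale.
    policy_scores = {
        "policy_a": {"measured_effect": 0.7, "equity_impact": 0.4,
                     "institutional_fit": 0.6, "side_effects": 0.5},
        "policy_b": {"measured_effect": 0.5, "equity_impact": 0.8,
                     "institutional_fit": 0.7, "side_effects": 0.6},
    }

    for name, scores in policy_scores.items():
        total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
        print(f"{name}: weighted score = {total:.2f}")

The design choice worth noting is that the weights themselves are normative judgments: the method does not remove subjectivity, but makes it explicit and open to scrutiny.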

Alternatively, intervention theories allow a more participatory form of analysis, enabling evaluators to allocate greater resources towards those components of the evaluation where the interpretability of causation is weakest (Mickwitz, 2003). By focusing on areas of uncertainty throughout the policy's intervention, evaluators are able to draw on a greater stock of evidence to help expand upon the interrelations between variables. Typically this involves greater immersion in the linkages of the causal path by conducting more in-depth research, such as drawing on expert opinions (e.g. the IPCC) and public participation (Mickwitz, 2003: 425).

Thus, both of these methods draw on notions of triangulation (Scriven, 1991) by expanding the evaluation process to involve wider sources of knowledge and insight, whilst incorporating different methodological techniques to provide alternative objectivity and explanations. The complexities facing evaluations of environmental policy are typified by uncertainty and interpretability; the greater use of evaluation tools and alternative insights therefore provides evaluators with more descriptive power and, in turn, greater validity for their appraisals.

Conclusion


Findings

It is evident that quantitative evaluation methodologies provide significant scope for a greater understanding of policy effectiveness, but are hindered by variability and uncertainties that simply cannot be accounted for through quantification alone. Future research should therefore pursue the continued combination and dissemination of descriptive evaluation tools, in accordance with the EEA's (2001) plea, and encourage, or even enforce, more multi-dimensional evaluations of policy interventions. Only then will significant steps be made in establishing causality and reducing the uncertainties surrounding environmental policies. No ideal solution is on the immediate horizon, but such compromises will suffice for now.
