Evaluation Methods in Empirical Economics
Published: Tue, 07 Aug 2018
Evaluation methods in empirical economics fall into five broad categories; each provides an alternative approach to constructing the counterfactual. The choice among them depends on several criteria: (a) the nature of the program, i.e. whether it is local or national, small scale or universal; (b) the nature of the questions to be answered; and (c) the nature of the data available (Blundell and Dias 2000). Heckman et al. (1997, 1998a,b) showed that data quality is also a crucial ingredient in determining the appropriate estimation strategy.
Randomized assignment is often used in both large-scale and small-scale impact evaluations. It is a fair allocation rule, because the program manager ensures that every eligible person or unit has the same chance of receiving the program. When the number of observations is large, any characteristic (observed or unobserved) will be distributed evenly across the treatment and comparison groups if those groups are created through randomized assignment.
An evaluation is internally valid if it uses a valid comparison group; it is externally valid when the impact estimated in the evaluation sample can be generalized to the whole eligible population. Randomized assignment is appropriate when there is excess demand for a program or when a program must be phased in until it covers the entire eligible population (Gertler et al. 2011).
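The balancing property of randomized assignment can be illustrated with a small simulation sketch, in which all names and numbers are invented: with a large sample, even an unobserved characteristic ends up with nearly the same mean in both groups, so a simple difference in mean outcomes recovers the assumed program effect.

```python
import random
import statistics

# Hypothetical simulation of randomized assignment. With a large sample,
# even an unobserved characteristic ("ability") is balanced across the
# treatment and comparison groups, so the difference in mean outcomes
# recovers the assumed program effect.
random.seed(0)

N = 10_000
TRUE_EFFECT = 5.0  # assumed impact of the program on the outcome

ability = [random.gauss(50, 10) for _ in range(N)]   # unobserved trait
treated = [random.random() < 0.5 for _ in range(N)]  # fair 50/50 lottery

# Outcome depends on ability, treatment status, and noise.
outcome = [a + TRUE_EFFECT * t + random.gauss(0, 5)
           for a, t in zip(ability, treated)]

# Balance check: mean ability is nearly identical in the two groups.
bal_gap = (statistics.mean(a for a, t in zip(ability, treated) if t)
           - statistics.mean(a for a, t in zip(ability, treated) if not t))

# Impact estimate: simple difference in mean outcomes.
impact = (statistics.mean(y for y, t in zip(outcome, treated) if t)
          - statistics.mean(y for y, t in zip(outcome, treated) if not t))

print(f"ability gap: {bal_gap:.2f}, estimated impact: {impact:.2f}")
```

Note that the comparison works even though "ability" is never used in the estimation; randomization, not measurement, is what makes the groups comparable.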
In the Progresa program, which transferred cash to poor mothers in rural Mexico conditional on their children's school enrolment, Schultz (2004), drawing on the randomized assignment, found that the educational grants to poor rural mothers raised enrolment. In "Expanding Credit Access: Using Randomized Decisions to Estimate the Impacts", Karlan and Zinman (2008) used a randomized experiment to conclude that marginal loans produced significant net benefits for borrowers across a wide range of outcomes.
In assessing the effect of performance-based payment on the use and quality of maternal and child health services provided by health care facilities in Rwanda, Basinga et al. (2011) concluded that financial performance incentives (i.e. payment for performance) could improve the use and quality of those services.
Using a randomized design, Vermeersch (2002) found that school participation was 30% higher in twenty-five Kenyan schools where a free breakfast was introduced than in twenty-five comparison schools.
Kremer et al. (2002) evaluated a program in which a nongovernmental organization provided uniforms, textbooks, and classroom construction to seven schools randomly chosen from fourteen poorly performing schools in Kenya, and found that dropout rates were considerably lower in the treatment schools. Evaluating a twice-yearly school-based mass treatment program in Kenya, in which inexpensive de-worming drugs were provided in seventy-five randomly selected schools (intestinal worms being highly prevalent among children), Miguel and Kremer (2003a) found that absenteeism in treatment schools fell by 25%.
A randomized evaluation found that providing textbooks to schools in Kenya raised test scores by about 0.2 standard deviations, but only for students who had scored well (the top 20-40%) on the pre-test administered before the intervention; textbook provision did not affect the test scores of the bottom 60% of students (Glewwe et al. 2002). Seva Mandir, an NGO operating in Indian villages, introduced a program in its non-formal education centres in which a second teacher (preferably a woman) was assigned to twenty-one out of forty-two randomly selected centres. Banerjee et al. (2002) evaluated this program by monitoring the attendance of both teachers and children and found that centres were closed less often after its introduction (one-teacher centres were closed 44% of the time versus 39% for two-teacher centres), and that girls' participation also increased.
Banerjee et al. (2003) evaluated the impact of a remedial education program introduced by Pratham, an Indian NGO, in which young women hired from the community provided remedial education to children in government schools. They found that, on average, two years of the program increased students' test scores by 0.39 standard deviations, with the weakest children gaining the most. They also concluded that hiring remedial teachers from the community is ten times more cost-effective than hiring new teachers.
Glewwe et al. (2003) evaluated a program in which parent school committees gave gifts to teachers whose students performed well, and found that the test scores of participating students initially rose but fell back to the level of the comparison group by the end of the program.
Evaluating a Colombian program to expand secondary school coverage (Programa de Ampliación de Cobertura de la Educación Secundaria), in which vouchers for private schools were allocated by lottery because of the program's limited budget, Angrist et al. (2002) exploited the randomly assigned treatment and found that lottery winners were 15-20% more likely to attend private school, 10% more likely to complete the 8th grade, and scored on average 0.2 standard deviations higher on standardized tests.
The randomized promotion method is similar to randomized offering. Under this method, instead of randomly selecting the units to whom we offer the treatment, we randomly select the units to whom we promote it, leaving the program open to every unit. There are three types of units under randomized promotion: (1) Always: they enrol in the program whether or not it is promoted to them; (2) Enrol-if-promoted: they enrol only when the additional promotion is provided; (3) Never: they never enrol, with or without the promotion (Gertler et al. 2011). Both Gertler et al. (2008) and Newman et al. (2002) used the randomized promotion technique as an impact evaluation tool.
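Under standard assumptions (the promotion affects outcomes only through enrolment, and no unit enrols less because of it), the effect for the enrol-if-promoted group can be recovered by dividing the outcome gap between promoted and non-promoted units by the gap in their enrolment rates. A minimal simulation sketch of this ratio, with all numbers invented for illustration:

```python
import random
import statistics

# Hypothetical randomized-promotion (encouragement) sketch. Enrolment is
# open to everyone; only the promotion is randomized. Three unit types:
# "always" enrol regardless, "never" never enrol, and "if-promoted"
# enrol only when promoted. The Wald/IV ratio recovers the effect of
# enrolling for the "if-promoted" group.
random.seed(1)

N = 20_000
TRUE_EFFECT = 8.0  # assumed effect of enrolling on the outcome

types = random.choices(["always", "never", "if-promoted"], k=N)
promoted = [random.random() < 0.5 for _ in range(N)]
enrolled = [t == "always" or (t == "if-promoted" and p)
            for t, p in zip(types, promoted)]
outcome = [random.gauss(100, 10) + TRUE_EFFECT * e for e in enrolled]

y_gap = (statistics.mean(y for y, p in zip(outcome, promoted) if p)
         - statistics.mean(y for y, p in zip(outcome, promoted) if not p))
e_gap = (statistics.mean(e for e, p in zip(enrolled, promoted) if p)
         - statistics.mean(e for e, p in zip(enrolled, promoted) if not p))

wald = y_gap / e_gap  # outcome gap scaled by the enrolment gap
print(f"enrolment gap: {e_gap:.2f}, Wald estimate: {wald:.2f}")
```

The "always" and "never" units appear in equal proportions in both arms and cancel out of the numerator, which is why only the effect for the enrol-if-promoted group is identified.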
In impact evaluation, the regression discontinuity design is used for programs that have a continuous eligibility index with a clearly defined cut-off score determining who is eligible to participate (Gertler 2011). In assessing the labor market effects of the social assistance program funded through the Canadian Assistance Plan in Quebec, Canada, Lemieux and Milligan (2005) applied a regression discontinuity design to a sample restricted to men and found that access to more generous social assistance benefits reduced employment by about 4.5 percent.
To study the impact of a school fee reduction program on school enrolment in the city of Bogotá, Colombia, Barrera-Osorio et al. (2007) used a regression discontinuity design and found a positive impact on enrolment rates. The method was also used to evaluate a social safety net initiative in Jamaica. In 2001, the Government of Jamaica launched the Program of Advancement through Health and Education (PATH), under which grants were given to children in eligible poor households conditional on regular school attendance and health visits. Using a regression discontinuity design, Levy and Ohls (2007) found that PATH increased school attendance among children aged 6 to 17 by an average of 0.5 days per month. Likewise, Martinez (2004) and Filmer and Schady (2009) used the method to study program impacts.
The propensity score matching method pairs each program participant with a single nonparticipant, where the pairing is based on the similarity of their estimated probabilities of participating in the program (Smith and Todd 2001). In measuring the impact of a training program on trainees' earnings, Lalonde (1986) compared experimental and non-experimental results and concluded that non-experimental methods are subject to specification errors, cautioning researchers to be careful when implementing them.
Using the NSW data, Dehejia and Wahba (1998, 1999) concluded that matching approaches are generally more reliable than standard econometric estimators, finding that matching estimators were able to replicate the experimental NSW results. Smith and Todd (2005a), however, argued that PSM does not solve the selection problem that Lalonde studied.
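A minimal sketch of propensity score matching, with entirely hypothetical data: the participation probability is estimated from an observed covariate with a simple logit fit, each participant is then paired with the nonparticipant whose estimated probability is closest, and the matched outcome gaps are averaged. Comparing this with the naive participant-versus-nonparticipant difference shows how matching removes selection on the observed covariate:

```python
import math
import random
import statistics
from bisect import bisect_left

# Hypothetical propensity-score-matching sketch. Participation depends on
# an observed covariate x (selection on observables), so a naive comparison
# of participants and nonparticipants is biased. Matching each participant
# to the nonparticipant with the closest estimated participation
# probability removes that bias.
random.seed(3)

N = 2_000
TRUE_EFFECT = 4.0  # assumed program impact

x = [random.gauss(0, 1) for _ in range(N)]
participates = [random.random() < 1 / (1 + math.exp(-xi)) for xi in x]
outcome = [5 + 2 * xi + TRUE_EFFECT * d + random.gauss(0, 1)
           for xi, d in zip(x, participates)]

# Naive difference in means is contaminated by selection on x.
naive = (statistics.mean(y for y, d in zip(outcome, participates) if d)
         - statistics.mean(y for y, d in zip(outcome, participates) if not d))

# Estimate the propensity score P(D=1|x) with a logit fitted by
# gradient ascent on the log-likelihood.
a, b = 0.0, 0.0
for _ in range(500):
    grad_a = grad_b = 0.0
    for xi, d in zip(x, participates):
        p = 1 / (1 + math.exp(-(a + b * xi)))
        grad_a += d - p
        grad_b += (d - p) * xi
    a += 0.001 * grad_a
    b += 0.001 * grad_b

pscore = [1 / (1 + math.exp(-(a + b * xi))) for xi in x]

# Nearest-neighbour match on the estimated propensity score.
controls = sorted((p, y) for p, y, d in zip(pscore, outcome, participates)
                  if not d)
ctrl_p = [p for p, _ in controls]

def matched_outcome(p):
    """Outcome of the nonparticipant whose score is closest to p."""
    i = bisect_left(ctrl_p, p)
    best = min((j for j in (i - 1, i) if 0 <= j < len(controls)),
               key=lambda j: abs(ctrl_p[j] - p))
    return controls[best][1]

att = statistics.mean(y - matched_outcome(p)
                      for p, y, d in zip(pscore, outcome, participates) if d)
print(f"naive: {naive:.2f}, matched (ATT): {att:.2f}")
```

The sketch only corrects for selection on the observed covariate; as Smith and Todd point out, matching does nothing about selection on unobservables.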
Gertler et al. (2008) evaluated the impact of a maternal and child health insurance program in Argentina.
Newman et al. (2002) evaluated a program in which a social investment fund financed small-scale investments in education, health, and water infrastructure in Bolivia.
Martinez (2004) studied the effect of an old-age pension program on consumption.
Filmer and Schady (2009) studied the impact of scholarships on the school enrolment and test scores of poor students in Colombia.