Cost benefit analysis


Cost benefit analysis is a process of identifying, measuring and comparing the social benefits and costs of an investment project or program. A program is a series of projects undertaken over a period of time with a particular objective in view. The project or projects in question may be public projects – undertaken by the public sector – or private projects. Both types of projects need to be appraised to determine whether they represent an efficient use of resources. Projects that represent an efficient use of resources from a private viewpoint may involve costs and benefits to a wider range of individuals than their private owners. For example, a private project may pay taxes, provide employment for the otherwise unemployed, and generate pollution. These effects are termed social benefits and costs to distinguish them from the purely private benefits and costs of the project. Social benefit-cost analysis is therefore used to appraise private projects from a social viewpoint as well as to appraise public projects.


It should be noted that the technique of social benefit-cost analysis can also be used to analyse the effects of changes in public policies such as the tax/subsidy or regulatory regimes. However, a very broad range of issues can arise in this kind of analysis and, for ease of exposition, we adopt the narrower perspective of project analysis in this study.

Public projects are often thought of in terms of the provision of physical capital in the form of infrastructure such as bridges, highways and dams. However, there are other less obvious types of physical projects that augment environmental capital stocks and involve activities such as land reclamation, pollution control, fishery management and provision of parks. Other types of projects are those that involve investment in forms of human capital, such as health, education and skills, and in social capital, through drug-use and crime prevention and the reduction of unemployment. There are few, if any, activities of government that are not amenable to appraisal and evaluation by means of social benefit-cost analysis.

Investment involves diverting scarce resources – land, labour and capital – from the production of goods for current consumption to the production of capital goods which will contribute to increasing the flow of consumption goods available in the future. An investment project is a particular allocation of scarce resources in the present which will result in a flow of output in the future: for example, land, labour and capital could be allocated to the construction of a dam which will result in an increased output of electricity in the future (in reality there are likely to be additional outputs such as irrigation water, recreational opportunities and flood control, but we will assume these away for the purposes of the example). The cost of the project is measured as an opportunity cost – the value of the goods and services which would have been produced by the land, labour and capital inputs had they not been used to construct the dam. The benefit of the project is measured as the value of the extra electricity produced by the dam.
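To make the logic of the dam example concrete, the short sketch below compares the opportunity cost of the inputs with the discounted value of the future electricity output. All of the figures – the cost, the annual value of electricity, the discount rate and the project life – are hypothetical numbers invented purely for illustration, not estimates for any actual project.

# Illustrative only: hypothetical figures for the dam example.
def present_value(annual_amount, discount_rate, years):
    # Discount a constant annual flow of benefits back to the present.
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(1, years + 1))

opportunity_cost = 500.0          # value of output forgone by diverting land, labour and capital (assumed, $m)
annual_electricity_value = 60.0   # value of the extra electricity produced each year (assumed, $m)
discount_rate = 0.05              # assumed discount rate
project_life = 30                 # assumed operating life of the dam, in years

benefit = present_value(annual_electricity_value, discount_rate, project_life)
net_benefit = benefit - opportunity_cost
print(f"Present value of benefits: {benefit:.0f}; net benefit: {net_benefit:.0f}")

On these assumed figures the discounted value of the extra electricity exceeds the opportunity cost of the inputs, so the project would pass the test; with a higher discount rate or a lower electricity value it might not.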

The role of the benefit-cost analyst is to provide information to the decision-maker – the official who will appraise or evaluate the project. We use the word "appraise" in a prospective sense, referring to the process of deciding whether resources are to be allocated to the project or not. We use the word "evaluate" in a retrospective sense, referring to the process of reviewing the performance of a project or a programme. Since social benefit-cost analysis is mainly concerned with projects undertaken by the public sector, the decision-maker will usually be a senior public servant acting under the direction of a Minister. It is important to understand that benefit-cost analysis is intended to inform the existing decision-making process, not to supplant it. The role of the analyst is to supply relevant information about the level and distribution of the benefits and costs to the decision-maker, and potentially to contribute to informed public opinion and debate. The decision-maker will take the results of the analysis, together with other information, into account in coming to a decision. The analyst's role is to provide an objective appraisal or evaluation, and not to adopt an advocacy position either for or against the project.


An investment project makes a difference and the role of benefit-cost analysis is to measure that difference. Two as yet hypothetical states of the world are to be compared – the world with the project and the world without the project. The decision-maker can be thought of as standing at a node in a decision tree as illustrated in Figure 1.1. There are two alternatives: undertake the project or don't undertake the project (in reality there are many options, including a number of variants of the project in question, but for the purpose of the example we will assume that there are only two).

The world without the project is not the same as the world before the project; for example, in the absence of a road-building project traffic flows may continue to grow and delays to lengthen, so that the total cost of travel time without the project exceeds the cost before the project. The time saving attributable to the project is the difference between travel time with and without the project, which is larger than the difference between travel time before and after the project.
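A small numerical sketch may help to fix the distinction. The travel times below are invented solely to illustrate the point: because congestion would have worsened anyway, the saving measured against the world without the project is larger than a naive before-and-after comparison would suggest.

# Hypothetical travel times (minutes per trip) for the road example.
time_before = 30           # travel time today, before the project
time_without_project = 40  # projected future travel time if the road is not built (traffic grows)
time_with_project = 25     # projected future travel time if the road is built

saving_with_vs_without = time_without_project - time_with_project  # 15 minutes: the relevant measure
saving_before_vs_after = time_before - time_with_project           # 5 minutes: understates the benefit
print(saving_with_vs_without, saving_before_vs_after)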

Economy, Efficiency And Effectiveness

Economy may be defined as the terms under which an authority acquires human and material resources. An economical operation acquires those resources in the appropriate quality and quantity at the lowest cost.

Efficiency may be defined as the relationship between the goods and services produced and the resources used to produce them. An efficient operation produces the maximum output for a given set of resource inputs; or, it uses the minimum inputs for any given quantity and quality of service provided.
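As a rough illustration of this definition, efficiency can be expressed as a simple ratio of output to input, provided the quality of service is held constant. The figures below are hypothetical and are used only to show the comparison.

# Hypothetical comparison of two operations on an output-per-input basis.
operations = {
    "clinic_A": {"patients_treated": 1200, "staff_hours": 4000},
    "clinic_B": {"patients_treated": 900, "staff_hours": 3600},
}
for name, figures in operations.items():
    ratio = figures["patients_treated"] / figures["staff_hours"]
    print(f"{name}: {ratio:.2f} patients treated per staff hour")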

Effectiveness is the most difficult of the three concepts to measure, not only because of the problems involved in assessing the achievement of the goals of welfare delivery agencies, but also because the measurement of effectiveness invariably involves political issues (Radford 1991: 929). There have also been criticisms that too many Conservative government-inspired managerial initiatives since 1979 used effectiveness and efficiency as substitutes for economy, the three concepts in practice often being reduced to economy or cost cutting (Greenwood and Wilson: 12-13).

Evaluation Of Training And Development

Several writers resist stating a purpose for evaluation, adopting the view that the purpose depends on various factors (Thompson, 1978; Brinkerhoff, 1981; Salinger and Deming, 1982). Evaluation, according to Salinger and Deming (1982, 20), is the response to the question "What do you want to know about training?" Nor should its purpose be "self-serving"; rather, it should be designed in terms of someone doing something with the information (Brinkerhoff, 1981, 67).

Bramley and Newby (1984a) identify five main purposes of evaluation: feedback (linking learning outcomes to objectives, and providing a form of quality control), control (using evaluation to make links from training to organisational activities, and to consider cost effectiveness), research (determining relationships between learning, training, transfer to the job), intervention (in which the results of the evaluation influence the context in which it is occurring), and power games (manipulating evaluative data for organisational politics).

Burgoyne and Cooper (1975) and Snyder et al. (1980) discuss evaluation in terms of feedback and the resultant issue of control. A decision must be made about how and to whom evaluation feedback will be given. Evaluators are usually conversant with the purpose of the evaluation once they commence it, but this may be because they have a generalised view that the purpose of evaluation is to produce a certain set of data, or because they have determined what purpose the client wishes the evaluation to have. It is possible, however, that an evaluator may have no specific purpose. The identification of unanticipated side effects of the program may be an important evaluative purpose. Lange (1974) suggests it is often difficult to determine the purpose - there may be several; furthermore, the evaluator may not discover the real purpose until the end of the exercise.

Models And Techniques

As with definitions and purposes, there is great variety in the evaluation models and techniques proposed. In some cases it is very difficult to separate the techniques from the 'model' - the writers are actually presenting an evaluation approach using a specific technique rather than a model.

Nearly 50% of the literature discusses case study or anecdotal material in which models and techniques are referred to, but seldom provides detail useful to the reader wishing to implement these. More than 80% of these articles lacked evidence of background research and many failed to offer practical applications.


If the literature reviewed is a reliable guide, Kirkpatrick's four stage model of evaluation is the one most widely known and used by trainers. Perhaps this is because it is one of the few training-specific models, and is also easily understood. Nearly one third of the journal articles from all three countries made reference to his model, and of the eleven writers actually presenting a specific model of evaluation (as opposed to the development of an evaluation strategy), five have drawn inspiration from Kirkpatrick's work.

The objectives-driven model also surfaces in various forms in the literature, although the name of Tyler, with which it is associated, is rarely mentioned. This model of evaluation focuses on the extent to which training objectives have been met, and the common method of evaluating transfer of learning is by control groups. The desirability of setting measurable objectives, following a cost-effective plan to meet them, and evaluating to determine the degree to which they are met is a recurring theme in the HRD literature (Elkins, 1977; Freeman, 1978; Keenan, 1983; Del Gaizo, 1984; Larson, 1985).

The literature is cluttered with suggested evaluation techniques ranging from simple questionnaires to complex statistical procedures. Often the one technique is presented under several different names, such as pre & post testing, which is variously referred to as pre-then-post testing (Mezoff, 1981), the 3-Test Approach (Rae, 1983), and Time Series Analysis (Bakken and Bernstein, 1982). Similarly, Protocol Analysis (Mmobuosi, 1985) and the journal method of Caliguri (1984) are basically one and the same technique.

Much of the literature reviewed could be regarded as presenting "general techniques" and as such much of it is superficial. For example, in addressing the problem of evaluating the degree to which participants after training use the skills learned back on the job, one reads such statements as "Be sure the instrument [you design] is reliable and delivers consistent results", and "Measure only what is actually taught and measure all the skills taught". Sadly, such broad brush advice is all too common. Even some of the case study articles gave no insight into their methodology or techniques.

There are three categories of evaluation techniques covered in the literature. The first is the interview, which may be of the trainer, the trainee or the trainee's superior; it may be conducted before, during or after training, and may be structured or unstructured. The second is the questionnaire, which can be used to evaluate at several levels, either qualitatively or quantitatively, as self-assessment or as an objective measure. Finally, there are quantitative and statistical measures, including control groups and experimental and quasi-experimental designs. These are far less likely to be used.
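The control-group idea in the third category can be sketched very simply: compare the average pre-to-post change in test scores for a trained group with the change for an untrained control group, and attribute the difference to the training. The scores below are invented for illustration; a real design would also need to address sample size, assignment to groups and statistical significance.

# Minimal pre/post sketch with a control group, using hypothetical test scores.
def mean(values):
    return sum(values) / len(values)

trained_pre, trained_post = [52, 48, 61, 55, 50], [68, 63, 75, 70, 66]
control_pre, control_post = [51, 49, 60, 54, 52], [55, 52, 63, 57, 56]

gain_trained = mean(trained_post) - mean(trained_pre)   # change for those who attended training
gain_control = mean(control_post) - mean(control_pre)   # change for those who did not
estimated_effect = gain_trained - gain_control          # extra gain attributable to training
print(f"Estimated training effect: {estimated_effect:.1f} points")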

There appears to be no mid-point between reasonably subjective measures and scientifically controlled measurement available to the HRD evaluator. Evaluation linked to performance indicators is not common and as Goldstein observes, "The field is in danger of being swamped by questionnaire type items. The failure to develop methodologies for systematic observation of behaviour is a serious fault" (1980, 240).

There is an emerging awareness of the need to perform longitudinal evaluation to evaluate more than the immediate reactions or learning of trainees, although some of the suggested techniques lack objectivity, and data are therefore open to whatever interpretations best suit.


The literature reviewed for the 17-year period to 1986 suggests that there is widespread under-evaluation of training programs, and that what is being done is of uneven quality.

It is not difficult to sympathise with the practitioners who agree with the principle of evaluation but express concern about the practice of it. The literature contains a confusing array of concepts, terminologies, techniques and models. For instance, more than 80% of the literature reviewed makes no attempt to define or clarify the term evaluation, yet one in four writers proposes an evaluation model of some description. It was particularly surprising to find this failure to define evaluation in some otherwise quite well researched articles.

Associated with the issue of definition is that of determining the purpose. Many imply their definition when they outline the perceived purpose. If one is unclear as to purpose, the choice of appropriate strategy and methodology will be affected. Nearly one quarter of the articles neither present nor imply any specific purpose for evaluating training. A similar proportion display a superficial understanding of the more complex issues involved, and a paucity of realistic applications.

Woodington (1980) encapsulates these views by highlighting five distinct impressions which can be gained from an overview of training evaluation.

Firstly, many practitioners do not perceive the training program as an instructional system, nor do they fully understand what constitutes the evaluation of training. The nature and type of organisation exerts a subtle influence (possibly control?) over the scope and methods of evaluation, and the conduct of evaluation is also dependent on whether internal or external evaluators are used. Finally, he draws attention to the lack of personnel trained in evaluation methodology. The obvious constraint determining the type of evaluation chosen is the availability of resources. This includes time, money, and personnel, as well as the evaluator's own expertise. Possibly the latter is the major constraint. Lange (1974, 23) expresses similar concerns, stating, "Too many bad evaluations are being presented ... evaluation is a good concept based on solid theoretical thinking. But its practice is not well developed".

The definition and purpose of evaluation enable the evaluator to determine what strategy to adopt. Practitioners need to see evaluation in a broader context than merely a set of techniques to be applied. In a systems approach, evaluation is an integral part of the HRD function which in turn is part of the whole organisational process. This integrated approach contrasts with the more popular view of evaluation as something that is "performed" at certain points and on certain groups; the integrated approach means it is difficult to separate evaluation from needs assessment, course design, course presentation, and transfer of training.

It is not within the scope of this article to expand on this further, but the belief that training programs should be continually evaluated from the earliest design phase in order to modify and improve the product goes unrecognised by many trainers. This would account for the popularity of Kirkpatrick's model, which tends to promote retrospective evaluation rather than formative or summative.

Evaluation techniques are not well written up in the literature, and the use of experimental control groups, statistical analysis and similar methods may be concepts which exist only in academic journals according to Bramley and Newby (1984b, 18). The need for measurement of training effectiveness is often referred to, but there are few good examples of rigorous evaluation of training programs. One conclusion must be that practitioners do not know how to do much more than basic assessment. Much of what is labelled evaluation is basically an assessment of the actual training activity (Zenger and Hargis, 1982; Morris, 1984). The choice of techniques will depend on some combination of methodological and pragmatic questions, and there is a need to settle for 'sensible' evaluation - one cannot measure the impact of management training on the whole organisation but must make some compromises. Questionnaires, surveys and structured interviews should be carefully designed and field tested to ensure that worthwhile information is received.

The literature review confirms the belief of Morris (1984) that evaluation is regarded by most practitioners as desirable in principle, difficult in practice. It also highlights the lack of well written and documented articles for practitioners to learn from.