Outcomes And Impact Of Social Initiatives Commerce Essay


"Success depends on knowing what works." Bill Gates, Co-Chair, Bill & Melinda Gates Foundation

In previous chapters we have looked at how organizations can assess the needs of a target population, how the theory of change of an organization can be articulated and assessed, and how the processes that an organization implements can be evaluated. In this chapter we will get at the heart of social mission organization evaluation: determining if the organization (or program/initiative an organization is implementing) is reaching the outcomes and impact it has set out to achieve.

Outcome and Impact Evaluations and their Importance for Social Mission Organizations

Outcome and impact evaluations measure what their names imply: the results of an initiative. In this course, we have seen the importance of identifying stakeholder needs in a Needs Assessment, ensuring that the logic of an initiative makes sense through a Program Theory Assessment, and monitoring whether activities are being conducted with a high level of quality, and as planned, through a Process Evaluation. Now we turn to the central question of an intervention: what are the results, or effects, of a social initiative or program?

Assessing program outcomes and impact is crucial for social mission organizations to determine if they are serving the target population as planned. In addition, more than ever, key stakeholders and social investors both inside and outside the organization are requiring increased accountability and information about results. While the performance of companies is measured by the financial bottom line (although increasingly by a triple bottom line), social mission organizations must consider their "social bottom line," otherwise known as their social impact. The Small Enterprise Education and Promotion Network (SEEP) - a learning exchange platform for microfinance organizations - proposes in its microfinance impact assessment tool that, "In short, practitioners want to prove the value of their intervention, and they want to improve the performance of their program." Data from outcome and impact assessments provide key inputs to do just that.

The reality is, however, that when social mission organizations have budgets for monitoring and evaluation, the focus is typically on process monitoring (are we implementing our programs like we said we would?), particularly in regards to following the budget, managing staff, and tracking the activities conducted, rather than the outcomes and impact of the initiative. Process monitoring and evaluation is unquestionably important, ensuring that the program is run with a high degree of quality. But if a social mission organization never critically evaluates the positive or negative effects that an initiative has on a population, it cannot be sure that the initiative is fulfilling its mission. Indeed, it cannot be sure that it is not inadvertently causing harm to the communities where it operates.

Rowena Young, former director of the Skoll Centre for Social Entrepreneurship, in her article "For What It Is Worth: Social Value and the Future of Social Entrepreneurship" reflects on the value that social entrepreneurs attempt to create: "All entrepreneurs try to create value. Whatever their business, value is their stock in trade. For entrepreneurs in the commercial sector, there are well-established methods for determining how much value they make…By contrast, social entrepreneurs are said to create value which is social. Whatever it is, it benefits people whose urgent and reasonable needs are not being met by other means." Evaluating the outcomes and impact of a social intervention helps the organization to determine if, indeed, it is creating social value.

Students who have taken the course Introduction to Social Entrepreneurship will remember that Sawhill and Williamson emphasize that an organization should not only understand how the initiative affects its target population, but it also needs to align its mission, goals, and performance metrics - "link your metrics to your mission". They add that the "very act of aligning the mission, goals, and performance metrics of an organization can change it profoundly". This includes:

a narrowly defined mission

the development of microgoals that, if achieved, would imply success on a grander scale

investment in research to determine whether program activities actually promote the positive outcomes set out in the mission.

Both outcome assessments and impact assessments should be designed keeping in mind the organization's social mission, since they help to determine the results of the program activities. Impact assessments, however, have a broader scope and look at all of the program impacts: both the intended and unintended results of the initiative on the target population.

Social mission organizations may shy away from outcomes or impact assessments because, conducted rigorously, they require a high level of expertise and the use of statistics (especially in the case of impact evaluations). However, according to the International Initiative for Impact Evaluations (3IE), a movement to promote impact evaluations in international development, rigorous Impact Evaluations are defined as: "analyses that measure the net change in outcomes for a particular group of people that can be attributed to a specific program using the best methodology available, feasible and appropriate to the evaluation question that is being investigated and to the specific context." As we will explore later in this chapter, the level of academic and statistical rigor needed to conduct an impact assessment depends largely on the intended use of the results. If social mission organizations want to assess broadly whether they are moving toward their desired impact, there are methods of conducting basic outcomes and impact assessments that are appropriate to the organization's budget, time frame, context, and evaluation question. If funds for social mission organizations are scarce, then it is all the more necessary to know whether those funds are producing the desired impact.

Outcomes vs. Impact - What is the Difference?

You will recall from previous chapters that the intended outcomes and impacts of an initiative are outlined in the Program Theory or Theory of Change. In the chapter on Program Theory Assessment we discussed how Program Theory can be outlined in terms of the process by which goals are achieved: the actions that are taken (activities), the immediate results of those actions (outputs), the anticipated short-term results (outcomes), and the expected long-term results of those short-term outcomes (impact and goals). Outcome Evaluations and Impact Evaluations both measure the actual effects of an initiative on the target population.

Outcome assessments and impact assessments differ from each other, however, in important ways. For the sake of simplicity, you will remember that in discussing the Logical Framework and Theory of Change, we distinguished outcomes from impact in terms of long- and short-term results. This is still true; however, there is another, even more important, distinction between the two. Outcomes are the observable changes in the characteristics of a program's target population, in line with the program's intended effects. They measure specific changes in attitudes, behaviors, knowledge, skills, status, or level of functioning that result from an organization's activities. Impact takes outcomes an important step further, and measures the difference between the outcome of an organization's activities and what would have occurred if those activities had not taken place. Rossi, as well as other experts in evaluation, stresses that the measured effects in an impact assessment must be due to the program itself, rather than to other external influences. Impact is essentially the portion of the outcome changes over a program's life that can be attributed directly to the program. (As we will discuss further on, it is not easy to prove this causality.) In other words, when doing an impact assessment, the evaluator must take into consideration all of the variables that might affect the results seen in the target population, not just the possible effects of the program being evaluated.

Outcome evaluations answer the basic questions: Are the program activities leading to the desired results? Am I achieving what I set out to do? You can think of outcomes in terms of the results of program activities. As an example, let's take another look at the Ikatú initiative. The program activities include weekly workshops with the women's committees, assigning each participant a mentor partner, providing access to savings and credit, and setting up a business competition for the committees to compete among one another. The immediate outputs of those activities (i.e. meetings held, personal goals set, credit extended) prove that those activities took place, but they do not show any results of the activities. Outcomes show the more immediate results of the outputs and activities. In keeping with the example, an outcome of the weekly workshops might be that the women achieve one of the goals defined during the workshop, or that they adopt behaviors taught during the workshop (i.e. opening a savings account). Outcome evaluations are important for showing whether the desired results are taking place, but they do not necessarily prove causation - that the initiative caused the results.

Impacts refer to outcomes or changes that can be directly attributed to programs.

Impact evaluations answer a more complicated question that seems simple at the outset: What difference did the program make? An impact evaluation "analyzes and documents the extent to which changes in the well-being of the target population can be attributed to a particular program or policy." While outcome evaluations can show that changes occurred following the implementation of an initiative, and can compare those changes to the intended program effects, they cannot prove that the observed effects occurred as a direct result of the program. For example, what would have happened if the program had not been implemented? This is where assessing program impact becomes complex - but important. In order to show what would have occurred had no intervention taken place, we must consider changes in the target population in the absence of the program. This is called the "counterfactual", and it is the fundamental component of any impact evaluation. We will discuss more about the concept of the counterfactual later on in this chapter.
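To make the distinction concrete, impact can be thought of as the observed outcome minus the counterfactual outcome. The sketch below illustrates the arithmetic with entirely hypothetical figures (the income numbers and group names are invented for illustration, not drawn from any real evaluation):

```python
# Hypothetical sketch: impact as observed outcome minus the counterfactual.
# All figures are invented for illustration purposes.

# Average monthly income (USD) of participants after the program
observed_outcome = 120.0

# Average monthly income of a comparable non-participant group over the
# same period -- an estimate of the counterfactual
counterfactual = 95.0

# A simple before/after outcome measure would overstate the program's
# effect if incomes were rising anyway; impact is the difference that
# can be attributed to the program itself.
impact = observed_outcome - counterfactual
print(impact)  # 25.0
```

Estimating the counterfactual credibly - rather than simply assuming it, as this sketch does - is precisely the hard part that rigorous impact evaluation designs are built to address.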

Both outcome and impact assessments are particularly useful for social entrepreneurial organizations because they take into account organizational performance and program returns in a way that helps an organization evaluate program scalability.

Confusion Caused by the Term "Impact"

Determining impact may seem intuitive to many social entrepreneurs. Social entrepreneurs know their clients well and often work very closely with them, so they understand intuitively the changes that clients have made in their lives thanks to the implementation of the social initiative in question. But this impact is generally not quantified (and sometimes it is not even quantifiable). Case studies and outcomes alone cannot prove impact in a scientific way, because they cannot prove causation - there are too many other variables and factors at play that may be influencing changes in the reality of the target population. Can a social entrepreneur really be certain that the changes he or she observes are not due to factors outside the scope of the initiative? This is a critical question if the social entrepreneur is planning to take the initiative to scale, because if external factors caused the desired observed outcomes, the initiative may not be a success in other locations.

Unfortunately, the term "impact" is often used very broadly, and many social mission organizations and social investors misuse the term, causing confusion. The term "social impact" is increasingly being used, for example, in the fields of microfinance, social investment, and international development when referring to social outcomes. Students who took the course Introduction to Social Entrepreneurship will recall from chapter 3.1 a discussion on social performance measures such as Social Return on Investment (SROI), Blended Value, ACCION SOCIAL, and the Social Performance Indicators. You may recall that some of these measurement tools are incorrectly named "social impact measures" by those who designed them, instead of being referred to as social outcomes measurement tools.

Let's take a closer look, for example, at one of the most widely known and popular "social impact" tools, Social Return on Investment, or SROI. The Roberts Enterprise Development Fund (REDF), a San Francisco-based venture organization that invests in nonprofit-run businesses, or 'social enterprises', defines SROI as: "tracking social outcomes of ordinarily difficult to monetize measures of social value, such as increases in self-esteem and social support systems, or improvements in housing stability." SROI attempts to quantify the added socio-economic value social mission organizations create, showing a monetized return on a social investment. For example, if we consider what might be one of the social returns on investment of the San Francisco School, a financially self-sufficient school for youth from marginalized rural communities in Paraguay, .........

It is important to understand, however, that the monetized social outcomes and "return" measured by SROI are actually outcomes, not impact. From a program evaluation standpoint, SROI misuses the term "impact", and it would be more accurate, when using SROI, to refer to social returns or social outcomes rather than social impact.
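To see why SROI yields a monetized outcome figure rather than proof of impact, consider a stripped-down calculation. SROI is commonly expressed as the monetized social value created per unit of money invested; all of the figures and categories below are invented for illustration:

```python
# Hypothetical SROI sketch: monetized social value per dollar invested.
# Figures and value categories are invented for illustration.

investment = 100_000.0  # funds invested in the social enterprise

# Monetized social outcomes (illustrative assumptions): e.g. reduced
# public assistance payments, new taxes paid by employed participants,
# and savings to other social services
monetized_outcomes = [65_000.0, 40_000.0, 30_000.0]

sroi_ratio = sum(monetized_outcomes) / investment
print(sroi_ratio)  # 1.35 -> $1.35 of monetized social value per $1 invested
```

Note that nothing in this ratio establishes that the monetized outcomes would not have occurred without the investment - which is exactly the outcomes-versus-impact distinction discussed above.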

Social Performance

In the social sector we are also increasingly coming upon the term "social performance". It is not surprising that the term is defined differently by different organizations. For example, The Social Performance Task Force (SPTF) - an international group composed of investors, donors, microfinance institutions and networks, research agencies, and other stakeholders united in the goal of defining, measuring, and improving the social performance of microfinance institutions - defines social performance as a description of the processes that organizations implement in order to generate positive outcomes, as well as the outcomes generated. Some social performance tools also focus on activity outputs as well as processes and outcomes. Still other organizations, such as Microfinance Gateway, see social performance as a much broader term that includes impact as well:

Social performance encompasses the entire process by which impact is created. It includes analysis of an institution's declared objectives, the effectiveness of its systems and services in meeting these objectives, related outputs (such as reaching larger numbers of very poor households) and success in effecting positive changes in the lives of clients (impact).

Clearly, whenever a discussion of social outcomes, social impact, and social performance is undertaken, all parties involved in the discussion must agree upon some operational definitions of these terms in order to avoid a "Tower of Babel" discussion! The definitions proposed in this text may be a starting point for such a discussion.

For the purposes of this course, and for clarity, we will adopt a broader definition of the term "social performance" to include "the entire process by which impact is created", which to my mind encompasses the processes, outputs, outcomes, and impact of an initiative.

Designing Outcome Evaluations: Determining Change in the Target Population

Let's take a closer look now at outcomes evaluations.

STEP 1: Determine Which Outcomes To Measure

The first step in designing an outcomes evaluation is to determine which outcomes to measure. This is where the Theory of Change or Logical Framework we discussed earlier comes into play: evaluators should refer to the intended program outcomes (as defined by the organization in its Theory of Change or Logical Framework) to pinpoint the relevant outcomes to measure. As we have mentioned before, if the organization has not explicitly outlined the program theory of its initiative on paper, it is important to do so now, before embarking on an outcomes evaluation. In this case, sitting down with the decision makers, staff, and key stakeholders of the organization to articulate the Theory of Change of the initiative would be an important preliminary step. This framework helps guide the evaluation in terms of what the intended program outcomes and mission are - "aligning the metrics with the mission". (We will take another look at the Logical Framework a little later in this chapter.)

When designing the initiative, the social mission organization should have come to a consensus with key stakeholders on the definition of criteria that would describe successful outcomes. In other words, when we ask whether a social initiative's performance is "good enough", what do we mean by "good enough"? (To refresh your memory about developing criteria to describe outcomes, see Chapter 1 of this course: Formulating Evaluation Questions - The Heart of the Evaluation Process.)

The outcomes should be:

Concrete (i.e. completion of graduation requirements as outlined by the Ministry of Education)

Have observable indicators (i.e. in the case of the San Francisco School, courses taken and student grades, as observed in student records)

Specify the level of accomplishment considered "successful" (i.e. 85% of participating students have completed the requirements to graduate established by the Ministry of Education), and

Specify a time-frame (i.e. at month 24 of the initiative)

Note that the criterion used to define "successful" in this case is "85% of participating students have completed the requirements to graduate established by the Ministry of Education".
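A criterion defined this way lends itself to a simple, mechanical check once the data are collected. In the sketch below, the 85% threshold comes from the example above, while the student counts are hypothetical:

```python
# Check an outcome against its pre-defined success criterion.
# The threshold comes from the example in the text; the student
# counts are hypothetical.

SUCCESS_THRESHOLD = 0.85   # 85% of participating students must graduate

students_enrolled = 120
students_completed = 104   # met Ministry of Education graduation requirements

completion_rate = students_completed / students_enrolled
is_successful = completion_rate >= SUCCESS_THRESHOLD

print(round(completion_rate, 3), is_successful)  # 0.867 True
```

Defining the threshold before data collection, as this step recommends, prevents the success criterion from being adjusted after the results are known.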

To help guide you in determining which outcomes to measure (and formulate the evaluation questions) the following steps are also very useful, as mentioned in Chapter 1:

a. Identify the decision makers (often the evaluation sponsors) and key stakeholders who can benefit from the evaluation results.

b. Determine what kind of information these decision makers and stakeholders need or want (much of this may be available on the Logical Framework of the initiative, if one exists)

c. Determine how the decision makers will use the results of the evaluation (so you can develop questions that will help them use the information as they wish)

d. Evaluate and analyze, on your own, the initiative (what the original objective of the initiative was, and whether it is being accomplished or not), and use what you learn from your analysis to formulate appropriate and relevant evaluation questions.

The logical framework of the initiative (if it exists), will also help the evaluator carry out step "d" above.

One thing to keep in mind is that outcomes measurement is increasingly important for social mission organizations, which face not only the pressure of achieving their social mission but also external pressures: a competitive environment among nonprofits for funding, more stringent government regulations, and the demand for greater transparency and accountability on the part of the community, stakeholders, and social investors. As a result, outcomes measurement must address outcomes for the purposes of different stakeholders: financiers, donors, and the public, in addition to the mission-driven program managers and beneficiaries.

In determining the outcomes to be measured, therefore, it is helpful for the evaluator and stakeholders to group them according to four categories:

1) Program-Centered Outcomes

2) Participant-Centered Outcomes

3) Community-Centered Outcomes, and

4) Organization-Centered Outcomes

The following chart, adapted from the Standard Framework of Nonprofit Outcomes, can serve as a guide for developing, in coordination with the evaluation sponsors, the organization's decision makers, and key stakeholders, the outcomes to be measured during the evaluation. This is not an exhaustive list, nor do all of the outcomes need to be addressed during an evaluation.

Program-Centered Outcomes

Reach
- Outreach: % of target population enrolled; % of target population aware of service
- Participation rate: number of services requested per month
- Reputation: no. of favorable reviews and awards; no. of community partnerships; customer satisfaction rate
- Access: % of target population unable to access services or denied service

Participation
- Attendance/utilization: acceptance rate; % of participants enrolled in multiple program activities; attendance rate; average attendance rate at special events; number of members/subscriptions to newsletters; % of subscribers who are also donors
- Engagement: % of participants who continue with the program; participant dropout rate; % of participants who are considered active; referral rate from participants
- Graduation/Completion: % of participants who complete the program; % of participants who self-report needs met; average length of program participation; % of participants who move on to the next level (within the program or to another, outside program)

Satisfaction
- Quality: no. of favorable reviews and awards; participant and stakeholder satisfaction rate
- Fulfillment: % of participants reporting needs met; % of target population served; completion rate

Participant-Centered Outcomes

Knowledge
- Skills and concepts learned: % whose score on a skills test related to program activities improved; % who self-report that their knowledge and skills on the topics taught improved

Attitude
- % showing improvement as reported by a third party not involved in the organization (parent, teacher, coworker, other program participant, respected community member, or family member); % of self-reported improvement

Qualification

(need to fill in the rest)

When looking at the table above, if we use the Ikatú initiative as an example, a Program-Centered outcome that might be relevant to measure could be the attendance rate of the women's committees at the weekly capacity-building workshops; a Participant-Centered outcome might be the percentage of women who adopt the behaviors taught in the workshops (for example, opening a savings account) or who achieve the personal goals they set; a Community-Centered outcome might be the number of women who become active in additional community organizations and initiatives; and an Organization-Centered outcome might be the sustainability of the income generated by the group businesses the program supports.

Identifying outcome indicators

Comparing outcomes to baseline study

Testing the degree to which indicators have been reached

(The above 3 topics need to be developed)

STEP 2: Formulate the Outcomes Evaluation Questions

Once the evaluator, evaluation sponsors, and key stakeholders have reached a consensus on what outcomes to measure, the next step in designing an outcomes evaluation is formulating the evaluation questions. (You will recall from Chapter 1 that evaluators should not rely solely on input from the evaluation sponsors and key stakeholders when developing the evaluation questions. Just as important are the experience, objectivity, and external view of the evaluator. This is true for several reasons - for example, stakeholders may be so involved in the nuts and bolts of day-to-day operations of the program that they may not "see" things that an outsider can see.)

As discussed in Chapter 1, typical questions about Program Outcomes (Outcomes Assessment) include the following:

Are the outcome goals and objectives being achieved?

Do the services have beneficial effects on the recipients?

Do the services have adverse side effects on the recipients?

Are some recipients affected more by the services than others?

Is the problem or situation the services are intended to address made better?

At this point I strongly suggest that the reader closely review Chapter 1 of this course, Formulating Evaluation Questions - The Heart of the Evaluation Process, as it discusses in detail how to formulate good evaluation questions that are:

Relevant

Appropriate

Answerable

Clear

Concrete

Specific

Realistic

Convey performance standards and performance dimensions

As you formulate the evaluation questions, keep in mind Rossi's advice, discussed in Chapter 1, where he stresses how important it is that the questions have measurable performance dimensions:

For an evaluation question to be answerable, it must be possible to identify in advance some evidence or "observables" that can realistically be obtained and will be credible as the basis for an answer. This generally means developing questions that involve measurable performance dimensions...

Rossi, pp. 83-84

STEP 3: Determining How to Answer the Evaluation Questions

Once the evaluation questions have been carefully formulated and prioritized - using the steps described in this chapter, and particularly in Chapter 1 of this text - the evaluator is ready to move on to the next step: determining, in collaboration with the evaluation sponsors and the organization, how the evaluation questions will be answered. As we discussed in Chapter 1, there are various research methods available to the evaluator, including:

focus groups

one-on-one interviews

surveys

direct observation

self-reporting of beneficiaries

self-reporting of the organization being evaluated

budget implementation

financial documents

publicly available statistics and statements

control groups

In Chapter X we will discuss, in detail, some of these methods of data collection. In general, however, we can say that these data collection methods should be:

1. Systematic - the data should be obtained in a standard format.

2. Pre-tested - the methods and instruments used to collect data should be tried out to see if they generate the information you seek, before implementing them in the evaluation process.

These requisites are especially important for the first six research methods listed above.

The next steps in the outcomes evaluation process will be:

STEP 4: Gathering Data to Answer the Evaluation Questions

STEP 5: Analyzing and Interpreting the Data Collected to Answer the Questions

STEP 6: Reporting and Interpreting the Evaluation Results

STEP 7: Encourage the Organization to Use the Results to Adapt and Learn

Although Steps 4 through 7 will be treated in more detail in later chapters of this course, for the purposes of this chapter, we should highlight the following points to keep in mind:

(Use Margoluis pp. 156 to 178 to summarize important thoughts on gathering data.)

(Use Margoluis pp. 179 to 219 to summarize important thoughts on analyzing and interpreting data, and reporting the evaluation results.)

(Use Margoluis pp. 221 to 231 to summarize important thoughts on encouraging the organization to use the results to adapt and learn.)

Avoiding bias: Avoid sending field staff to interview the clients they work with, because familiarity can taint the responses. However, you may send field staff to introduce the clients to the surveyor so that participants feel comfortable. If field staff work in different locations, bias can be reduced by sending staff from one region to conduct surveys in another.

Designing an Outcomes Assessment for Ikatu

As an example, let's imagine that we are designing and implementing an outcomes assessment for the Ikatu Poverty Elimination Initiative in Paraguay. You will recall that the mission of the Ikatu initiative is to......

As we mentioned above, the preliminary steps below could be followed to design the evaluation:

Identify the decision makers (often the evaluation sponsors) and key stakeholders who can benefit from the evaluation results. (State who these might be for Ikatu)

Determine what kind of information these decision makers need or want (much of this may be available on the Logical Framework of the initiative, if one exists) (State what this might be for Ikatu)

Determine how the decision makers will use the results of the evaluation (so you can develop questions that will help them use the information as they wish) (State how the results would be used for Ikatu)

Evaluate and analyze, on your own, the initiative (what the original objective of the initiative was, and whether it is being accomplished or not), and use what you learn from your analysis to formulate appropriate and relevant evaluation questions.

Then we would carry out the following steps:

STEP 1: Determine Which Outcomes To Measure

STEP 2: Formulate the Outcomes Evaluation Questions

STEP 3: Determining How to Answer the Evaluation Questions

STEP 4: Gathering Data to Answer the Evaluation Questions

STEP 5: Analyzing and Interpreting the Data Collected to Answer the Questions

STEP 6: Reporting and Interpreting the Evaluation Results

STEP 7: Encourage the Organization to Use the Results to Adapt and Learn

Let's look at how we would carry out these steps, one by one.

STEP 1: Determine Which Outcomes To Measure

You will recall how we discussed above the importance of referring to the Logical Framework of an initiative to determine the intended program outcomes, and thereby pinpoint which outcomes would be relevant and appropriate to measure.

Below is a sample Logical Framework for the Ikatu Initiative which we can use to develop the outcomes we want to measure. Using this framework, the evaluator should identify desired outcomes, which need to be:

Concrete

Have observable indicators

Specify the level of accomplishment considered "successful"

Specify a time-frame (i.e. at month 24 of the initiative)

(KO, in the Logical Framework below, the level of accomplishment considered "successful" seems to be missing)

Example using Ikatu Logframe: developing indicators, identifying outcomes to measure, developing research questions; using the Objectively Verifiable Indicators as a starting point. Add some program oriented outcome measures.

Intervention Logic

Objectively Verifiable Indicators (OVI)

Means of Verification (MOV)

Assumptions

Goal

Women are empowered to take control over several factors that cause poverty and have improved health, income, education, self-esteem, and ability to organize the community for social change.

Goal OVI

-Women are active in additional community organizations and initiatives

- Women identify changes they have made that led to their personal accomplishments

Goal MoV

-Individual interviews with participants

-Loan officer reports of individual progress

Impact (long term)

- Women improve in their levels of poverty as determined by 50 poverty indicators

- The group business generates income and is sustainable

- Women increase their income levels

- Committee distributes earnings from group business to committee members

Purpose OVI

- Individual improvements in at least 3 of the 5 areas of poverty indicators

- Committee improvements in 3 of the 5 areas of poverty indicators

- Women are making changes to their business practices (keeping records of accounts, buying in bulk, selling different products)

- Women receive earnings from group business

Purpose MoV

- Poverty evaluation at 1 and 3 years is tested against the baseline survey

- Committee demonstrates their business at the regional meeting of committees

- Verification of the treasurer's records

Assumptions

- Behavior changes will lead to improvements in poverty levels

- Committee members will desire to work together over the course of a year

- Funds will be well-managed and kept safe from theft

Outcome (short term)

- Women adopt behaviors learned in the workshops

- Women achieve one of their individual goals

- Women's committee raises money through the group business

- Women establish or strengthen individual businesses

Outcome OVI

- Number of women who have adopted behaviors learned

- Number of women who have achieved stated goals

- Total amount of profit each committee has generated

Outcome MoV

- Surveys of participants during year end evaluation

- Verification of committee treasurer's records

Assumptions

- There is local demand for the committee's business

- Women can adopt new behaviors through training

Outputs (immediate)

- Women participate in the weekly workshops

- Women designate goals for improving their businesses, personal goals, and group goals

- Women design a plan for a group business

Output OVI

- Percent of participants at weekly meetings

- Goals statements and individual calendars with personal plans of action

- Poster outlining the group business plan

Output MoV

- Loan officer reports detailing meeting minutes

- Examples of goals statements, calendars, and business plans

Assumptions

- The women are entrepreneurial and desire to improve their incomes through improving their businesses.

Activities

- Weekly capacity building workshops with women's committees

- Formation of mentor partners with women

- Access to savings and loans in the microfinance program

- Business competitions among women's committees participating in the program.

Inputs

- Loan officer staff time

- Posters, games, and materials for the meetings

- Gasoline for transportation to communities

Budget

- Outline the costs of the materials, transportation, and staff time

Assumptions

- Environment, health, and infrastructure will not prevent women from participating in weekly workshops.

Needs

Women who participate in the microfinance program have not moved out of a multidimensional conception of poverty (income, heath, living situation, education, community participation, and self-esteem). Access to savings and loans is not enough to move them out of poverty.

Basic Preconditions

Committees are selected to enter the IKATU program based on group solidarity and disposition. Each committee carries out a group activity before entering the program to demonstrate its solidarity.

(Here you would walk the reader through the determination of the outcomes to measure, based on the Ikatu Logical Framework (logframe) and discussions with the evaluation sponsors and key stakeholders. Then walk the reader through how he or she might apply the following steps in the case of Ikatu, giving examples for each one: sample evaluation questions, a sample plan for answering them, examples of how the data might be gathered, and how the data might be analyzed, interpreted, and reported. Finally, show how the evaluator might encourage the Ikatu initiative to use the results to adapt and learn. All of these steps are discussed above.)

STEP 2: Formulate the Outcomes Evaluation Questions

STEP 3: Determining How to Answer the Evaluation Questions

STEP 4: Gathering Data to Answer the Evaluation Questions

STEP 5: Analyzing and Interpreting the Data Collected to Answer the Questions

STEP 6: Reporting and Interpreting the Evaluation Results

STEP 7: Encourage the Organization to Use the Results to Adapt and Learn

Some Final Considerations to Keep in Mind:

Challenges of Practitioner-Led Client Assessment (KO: these are considerations to keep in mind when it is an internal evaluation, or when the practitioner is helping the evaluator collect data, although the comments below on Focus and Attribution apply to any kind of evaluation.)

Focus: The assessment must focus on the most critical program components and desired impact upon its target population. The evaluator must work with key stakeholders to identify program priorities before conducting the evaluation.

Skills: Staff members conducting the evaluation should be thoroughly trained and practice administering the evaluation before conducting the evaluation in the field.

Objectivity: Four measures can help reduce subjectivity in the data collected: 1) well-trained and supervised staff, 2) having field staff conduct the evaluation in communities other than their own, so that they do not survey the clients with whom they directly work, 3) random sampling of sites and clients for the evaluation, and 4) reviewing and revising the data in a quality control process, both in the field and in the office, before data analysis.

Attribution: Instead of trying to prove causality through complex and expensive statistical studies, comparing groups who participated in the program with those who did not allows evaluators to draw credible associations between the program and the outcomes and impact perceived.

Avoiding bias: Avoid sending field staff to interview clients they work with, because it will taint the responses. However, you may send field staff to introduce the clients to the surveyor so that the participants feel comfortable. If field staff work in different locations, bias can be avoided by sending staff from one region to conduct surveys in another.
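The random sampling measure listed above (point 3 under Objectivity) can be sketched in code. This is a minimal illustration only, not part of any evaluation toolkit; the site and client names, and the `sample_for_survey` helper, are hypothetical:

```python
import random

def sample_for_survey(sites, clients_by_site, n_sites, n_clients, seed=1):
    """Randomly select sites, then randomly select clients within each
    chosen site, so that neither sites nor clients are hand-picked."""
    rng = random.Random(seed)  # fixed seed makes the sample reproducible
    chosen_sites = rng.sample(sites, n_sites)
    return {
        site: rng.sample(clients_by_site[site],
                         min(n_clients, len(clients_by_site[site])))
        for site in chosen_sites
    }

# Hypothetical program data: four regions, 20 clients each
sites = ["north", "south", "east", "west"]
clients = {s: [f"{s}_client_{i}" for i in range(20)] for s in sites}

plan = sample_for_survey(sites, clients, n_sites=2, n_clients=5)
for site, selected in plan.items():
    print(site, selected)
```

Fixing the random seed lets a supervisor reproduce and audit the sampling decision during the quality control review (point 4).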

Impact Evaluations: Determining Actual Cause and Effect in a Social Initiative

Up to this point we have been discussing how we can determine if the changes we seek have occurred in the target population of a social initiative. But a nagging question remains: how can we know if the interventions of the social initiative are what actually caused the change? This is an important question because observed changes may stem from factors entirely outside the program; without establishing causality, an organization cannot credibly claim that its intervention produced the results.

As we discussed earlier in this chapter, there is a crucial difference between Outcomes Evaluations and Impact Evaluations: an Outcomes Evaluation measures whether the desired changes have occurred in the target population, while an Impact Evaluation goes a step further and determines how much of that change can be attributed to the intervention itself, typically by comparing participants against a counterfactual, that is, what would have happened without the program.

Impact evaluations should not be conducted by those without training in evaluation methods: a poorly designed study does not truly measure impact and can produce misleading results.

(Now describe the steps in designing an Impact Assessment, just as we walked the reader through the steps of designing an Outcomes Assessment above. Then either develop an example, for instance with Ikatu, or refer readers to an actual impact assessment placed in an appendix.) Components of an impact evaluation: control group, randomization.

Creating the Counterfactual

"The hardest part of any evaluation is how to quantify the counterfactual. Any retrospective evaluation involves asking whether one could have achieved better results if one had done it some other way, and it is obviously very difficult to be sure of what would have been the outcome of an alternative strategy." -Montek Singh Ahluwalia, Deputy Chairman, Planning Commission, India

According to the Center for Global Development, the only limiting factor in designing an impact evaluation is when a counterfactual cannot be created.

Randomized Control Trials

Division of the treatment and control groups.

Randomized evaluations divide the target recipients (or villages) randomly into treatment and control groups before delivering the intervention, for example, before giving loans.

If the sample is large enough, the two groups will be statistically identical on both observable AND non-observable characteristics, so any difference in outcomes can be credibly attributed to the program.
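The random division described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the `assign_randomly` helper and the village IDs are invented for the example, not part of any published methodology:

```python
import random

def assign_randomly(units, seed=42):
    """Randomly split a list of units (clients, villages, committees)
    into treatment and control groups of equal size."""
    rng = random.Random(seed)  # seed recorded so the assignment is auditable
    shuffled = units[:]       # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical list of 10 village IDs
villages = [f"village_{i}" for i in range(10)]
treatment, control = assign_randomly(villages)

# Every village ends up in exactly one group
assert sorted(treatment + control) == sorted(villages)
print(len(treatment), len(control))  # 5 5
```

After the program runs, comparing the average outcome in the treatment group against the control group yields the impact estimate; randomization is what makes that comparison credible.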

Avoiding bias in Impact Evaluations

How to evaluate whether the study contains biases: points of bias to consider

The importance: an example of how excluding dropouts affects the impact assessment results of an MFI program (Microfinance Impact: Bias from Dropouts)
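The dropout bias mentioned above can be made concrete with a small, entirely synthetic simulation: if clients whose businesses fare poorly are more likely to drop out of the program, then an impact estimate computed only from clients who remain will be inflated. Every number below is made up purely for illustration:

```python
import random

def simulate_dropout_bias(n=1000, seed=0):
    """Synthetic illustration: clients with negative income changes are
    more likely to drop out, so averaging only over clients who stayed
    overstates the program's measured impact."""
    rng = random.Random(seed)
    clients = []
    for _ in range(n):
        income_change = rng.gauss(10, 30)  # synthetic outcome, mean 10
        # 60% of clients whose income fell drop out of the program
        dropped_out = income_change < 0 and rng.random() < 0.6
        clients.append((income_change, dropped_out))

    all_mean = sum(c[0] for c in clients) / len(clients)
    stayers = [c[0] for c in clients if not c[1]]
    stayer_mean = sum(stayers) / len(stayers)
    return all_mean, stayer_mean

all_mean, stayer_mean = simulate_dropout_bias()
print(f"all clients: {all_mean:.1f}, stayers only: {stayer_mean:.1f}")
assert stayer_mean > all_mean  # the dropout-excluding estimate is inflated
```

The practical lesson is the one the MFI example makes: an impact assessment must track and include program dropouts, or explicitly account for them, rather than surveying only current clients.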

Impact Evaluations as a source of current debate for social mission organizations, particularly for International Development organizations.

Standards for quality impact evaluations, such as those promoted by the International Initiative for Impact Evaluation (3ie)

The debate over measuring impact: Should social entrepreneurial organizations spend time and funding to prove impact, or simply focus on setting a sound Theory of Change and monitoring program processes? (Will We Ever Learn?)

Example: debate over the Millennium Villages

The debate over which impact indicators should be measured: financial versus social return (Mirror Mirror, from Skoll Foundation Website)


Conclusion: "So What" in terms of my organization

- Be aware of the difference between outcomes and impact evaluations, and when each is appropriate.

- Before we claim that our initiative is changing the world, let us establish causality.

- Let us see which outcomes we are achieving and which we are not meeting.

- Be aware of the different tools available.

- Questions to keep in mind.

A conclusion needs to be written, reminding the reader of the main points of the chapter, and making a statement about how potential evaluators, as well as social mission organization decision makers and staff, can use the assessment tools discussed in this chapter to help an organization reflect, learn, and adapt in order to improve its performance.
