
2. What Is Evaluation?


Evaluation offers a way to determine whether an initiative has been worthwhile: did it deliver what was intended and expected? Good evaluation, however, can also answer other important questions.

2.1       The Purpose of Evaluation

Evaluations of prevention programs fulfill a number of functions:

1. Measure the program's outcomes and impact

  • Did the program achieve its stated objectives?
  • Did it reach its intended audience?
  • Was the size of the outcome as expected?
  • Did the program have unexpected or unintended consequences?
  • Are outcomes consistent with those of similar programs?

2. Inform future program planning and design

  • What are the strengths and weaknesses of a given approach?
  • What implementation problems have emerged?
  • Are the measurement criteria appropriate and adequate?
  • Are confounding influences affecting outcomes (e.g., other interventions aimed at the same issue or target group)?
  • Have new ideas emerged, and can they be tested?

3. Provide important internal lessons for those conducting programs

For example, evaluations can offer feedback on whether the program's expenditure of financial and human resources was justifiable:

  • Were funds used properly?
  • Is there a return on investment?

4. Ensure transparency and accountability

Particularly where an initiative has used outside funding, evaluations help justify the project. They can also serve as a form of stakeholder engagement, helping to gain buy-in from local community members, local authorities, and target audiences.

  • Are suitable systems in place to ensure sound financial reporting, monitoring, etc.?
  • Have lessons been taken on board for future initiatives?

5. Provide broader lessons about good practice

  • What lessons can be learned from this approach?
  • Are there lessons about policy options?
  • Do the results support existing evidence?

2.2       Types of Evaluation 

Most evaluations fall into one of three categories: 

  • process-based
  • outcomes-based
  • impact-based  

The choice of the most appropriate type of evaluation is guided by several factors, including the availability of resources and whether the evaluation is needed for internal or external purposes (see Section 3.1: Who Should Evaluate?):

  • Process-based evaluations are useful in assessing how an intervention is being implemented or whether it is producing the necessary measurements.
  • Outcomes-based and impact-based evaluations are best for tracking the results of an intervention.
  • Process assessment is likely to be useful internally, whereas the focus on outcomes and impact can help justify the intervention both internally and externally. 

Whichever evaluation model is used, data need to be collected in a systematic manner. Data may be

  • quantitative (measurable and definable in absolute numerical terms: e.g., counting the number of drink-driving fatalities or the percentage awareness of a risk)
  • qualitative (descriptive and subjective: e.g., recording subjective views on whether a program has changed perceptions)

Successful evaluations often blend quantitative and qualitative data collection, since there is usually more than one way to answer any given question (see Section 3.3: Data Collection).
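
To make the distinction concrete, the sketch below (in Python) shows how a single data-collection record might hold both kinds of data side by side; the field names and values are hypothetical illustrations, not part of the toolkit.

    # Hypothetical survey record mixing both data types (illustrative only).
    record = {
        # Quantitative: measurable in absolute numerical terms.
        "drink_driving_fatalities": 14,   # count for the reporting period
        "aware_of_risk_pct": 62.5,        # percentage awareness of a risk
        # Qualitative: descriptive and subjective.
        "perception_comment": "I now think twice before driving after a party.",
    }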

 

 

2.2.1   Process-based Evaluations

 

Process-based evaluations are used to understand how a program works and delivers its results. They assess the activities being implemented and the materials used.

Process-based evaluations are intended to answer questions such as the following:

  • What is required to deliver the program in terms of resources, products, and services?
  • How are the individuals implementing the intervention trained?
  • How are participants selected and recruited?
  • What are considered the program's strengths and weaknesses?
  • What is the feedback from participants and partners about the implementation of the program?

 

 

2.2.2   Outcomes-based Evaluations

 

Outcomes-based evaluations are used to measure changes immediately after program implementation and to establish that these changes have occurred in response to the intervention being evaluated.

Outcomes-based evaluations focus on the following questions:

  • Which outcomes are being measured (e.g., behavior change or a change in knowledge or awareness), and why?
  • How, specifically, will these outcomes be measured?
  • What proportion of participants is expected to have undergone a change as a result of the intervention? Has this proportion been reached? (See the sketch after the requirements list below.)

 

To be successful, outcomes-based evaluations require the following:

  • detailed information on the indicators that can be used to measure the desired outcomes (the best indicators are those that can be verified from administrative databases, surveys, third-party reports, or official statistics—e.g., the number of individuals participating in the program)
  • a thorough assessment of how best to gather the necessary information—in other words, which methodology to use (see Section 3.3: Data Collection)
  • a reliable and rigorous method for analyzing and reporting findings (see Section 4: Data Analysis and Interpretation)
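
As a minimal sketch of the "desired proportion" check mentioned above, the Python below counts participants whose awareness changed and compares the result against a target; the record fields, sample data, and the 60% target are all hypothetical.

    # Minimal sketch: was the desired proportion of changed participants reached?
    # Field names, sample data, and the 60% target are hypothetical.
    participants = [
        {"aware_before": False, "aware_after": True},
        {"aware_before": False, "aware_after": True},
        {"aware_before": True,  "aware_after": True},
        {"aware_before": False, "aware_after": False},
    ]

    # Count participants whose awareness changed from "no" to "yes".
    changed = sum(1 for p in participants
                  if p["aware_after"] and not p["aware_before"])
    proportion = changed / len(participants)

    TARGET = 0.60  # desired proportion set when the program was planned
    status = "reached" if proportion >= TARGET else "not reached"
    print(f"{proportion:.0%} of participants changed (target {TARGET:.0%}): {status}")
    # Prints: 50% of participants changed (target 60%): not reached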

  

2.2.3   Impact-based Evaluations

 

By far the most complex and difficult to carry out, impact-based evaluations examine the long-term effects of an intervention on participants:

  • The most successful type of impact-based evaluation tracks effects over extended periods of time, rather than simply examining conditions immediately "before" and "after" the intervention has been implemented.
  • Impact-based evaluations can be further enhanced by including a "control" or comparison group against which to measure the "exposed" group, i.e., the one that has received the intervention (see Section 3.3: Data Collection); a sketch of this comparison appears at the end of this subsection.
  • Unfortunately, there can be confounding contributors to long-term "before" and "after" changes aside from the program being evaluated.

Impact-based evaluation also requires information about the conditions before the intervention was implemented:

  • For example, an initiative aimed at alcohol-impaired driving will require that current statistics and information about drink-drive crashes and fatalities be available to provide context.
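
To illustrate the "exposed" versus "control" comparison described above, here is a minimal difference-in-differences sketch in Python; the function, group labels, and figures are hypothetical, and a real evaluation would add significance testing and checks for confounding influences.

    # Minimal difference-in-differences sketch (hypothetical figures).
    # Each value is an outcome rate, e.g., the share of surveyed drivers
    # who report driving after drinking, measured before and after.

    def difference_in_differences(exposed_before, exposed_after,
                                  control_before, control_after):
        """Change in the exposed group minus change in the control group.

        The control group's change estimates what would have happened
        without the program (background trends), so subtracting it helps
        isolate the effect attributable to the intervention itself.
        """
        return (exposed_after - exposed_before) - (control_after - control_before)

    # Rates fell in both communities, but fell further where the program ran.
    effect = difference_in_differences(
        exposed_before=0.20, exposed_after=0.12,   # program community
        control_before=0.21, control_after=0.18,   # comparison community
    )
    print(f"Estimated program effect: {effect:+.2f} (change in proportion)")
    # Prints: Estimated program effect: -0.05 (change in proportion)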

 
