
4. Data Analysis and Interpretation


The purpose of the data analysis and interpretation phase is to transform the data collected into credible evidence about the development of the intervention and its performance.

Analysis can help answer some key questions:

 

·         Has the program made a difference? 

·         How big is this difference or change in knowledge, attitudes, or behavior?

 

This process usually includes the following steps:

 

·         Organizing the data for analysis (data preparation)

·         Describing the data

·         Interpreting the data (assessing the findings against the adopted evaluation criteria)

 

Where quantitative data have been collected, statistical analysis can:

 

·         help measure the degree of change that has taken place

·         allow an assessment of how consistent the data are
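As a minimal sketch of what such statistical analysis might look like, the following compares hypothetical pre- and post-intervention knowledge scores (the data and scale are invented for illustration): the mean of the per-respondent differences measures the degree of change, and the spread of those differences gives a rough sense of how consistent the change is across respondents.

```python
from statistics import mean, stdev

# Hypothetical pre/post knowledge scores (0-100) for the same
# eight respondents -- illustrative data only
pre = [52, 60, 48, 55, 63, 50, 58, 61]
post = [68, 71, 55, 70, 74, 62, 69, 73]

# Degree of change: per-respondent difference and its mean
changes = [b - a for a, b in zip(pre, post)]
mean_change = mean(changes)

# Consistency: a small spread in individual changes suggests the
# shift is shared across respondents rather than driven by outliers
spread = stdev(changes)

print(f"mean change: {mean_change:.1f} points (spread: {spread:.1f})")
```

In practice a formal test (e.g., a paired t-test) would accompany such a summary; this sketch shows only the descriptive step.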

 

Where qualitative data have been collected, interpretation is more difficult.

 

·         Here, it is important to group similar responses into categories and identify common patterns that can help derive meaning from responses that may at first seem unrelated and diffuse.

·         This is particularly important when trying to assess the outcomes of focus groups and interviews.
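The grouping step described above can be sketched as a simple keyword-based coding pass. The responses, the category names, and the keyword lists below are all invented for illustration; real qualitative coding frames are developed from the data themselves.

```python
from collections import Counter

# Hypothetical free-text responses from focus-group participants
responses = [
    "I learned when to stop serving someone",
    "staff now check IDs more carefully",
    "not much changed in how we serve",
    "we check every ID at the door now",
]

# Illustrative coding frame: each category is signaled by keywords
coding_frame = {
    "serving practices": ["serve", "serving"],
    "id checks": ["id", "ids"],
}

# Count each response toward every category whose keywords it mentions
counts = Counter()
for text in responses:
    words = text.lower().split()
    for category, keywords in coding_frame.items():
        if any(k in words for k in keywords):
            counts[category] += 1

print(dict(counts))
```

The resulting frequencies show which themes recur, which is the kind of pattern that gives meaning to otherwise diffuse focus-group material.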

 

It may be helpful to use some or all of the following five evaluation criteria as the basis for organizing and analyzing data:

  • Relevance: Does the intervention address an existing need? (Were the outcomes achieved aligned with current priorities in prevention? Is the outcome the best one for the target group—e.g., did the program take place in the area or the kind of setting where exposure is greatest?)
  • Effectiveness: Did the intervention achieve what it set out to achieve?
  • Efficiency: Did the intervention achieve maximum results with given resources?
  • Results/Impact: Have there been any changes in the target group as a result of the intervention?
  • Sustainability: Will the outcomes continue after the intervention has ceased?

 

Particularly in outcomes-based and impact-based evaluations, the focus on impact and sustainability can be further refined by organizing data around the intervention’s:

  • Extent: How many of the identified key stakeholders were eventually covered, and to what degree did they absorb the outcomes of the program? Were the optimal groups/people involved in the program?
  • Duration: Was the project’s timing appropriate? Did it last long enough? Was the repetition of the project’s components (if done) useful? Were the outcomes sustainable? 

 

4.1       Association, Causation, and Confounding

 

One of the most important issues in interpreting research findings is understanding how outcomes relate to the intervention that is being evaluated. This involves making the distinction between association and causation and the role that can be played by confounding factors in skewing the evidence.

 

4.1.1 Association

An association exists when one event is more likely to occur because another event has taken place. However, although the two events may be associated, one does not necessarily cause the other; the second event can still occur independently of the first.

 

·         For example, some research supports an association between certain patterns of drinking and the incidence of violence. However, even though harmful drinking and violent behavior may co-occur, their co-occurrence does not in itself show that drinking causes violence.
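A common way to quantify such an association is the relative risk computed from a 2x2 table of exposure and outcome counts. The counts below are invented purely to illustrate the arithmetic; a relative risk above 1 signals an association but, as the text stresses, says nothing about causation.

```python
# Hypothetical 2x2 counts relating an exposure (e.g., a drinking
# pattern) to an outcome (e.g., an incident) -- illustrative only
exposed_with = 30       # exposed, outcome observed
exposed_without = 70    # exposed, no outcome
unexposed_with = 10     # unexposed, outcome observed
unexposed_without = 90  # unexposed, no outcome

# Relative risk: how much more likely the outcome is among the
# exposed group than among the unexposed group
risk_exposed = exposed_with / (exposed_with + exposed_without)
risk_unexposed = unexposed_with / (unexposed_with + unexposed_without)
relative_risk = risk_exposed / risk_unexposed

print(f"relative risk: {relative_risk:.1f}")
```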

 

4.1.2 Causation

 

A causal relationship exists when one event (the cause) is necessary for a second event (the effect) to occur. The order in which the two occur is also critical. For example, intoxication cannot occur without heavy drinking, and the drinking must come first.

 

Determining cause and effect is an important function of evaluation, but it is also a major challenge.  Causation can be complex:

 

·         Some causes may be necessary for an effect to be observed, but may not be sufficient; other factors may also be needed.

·         Or, while one cause may result in a particular outcome, other causes may have the same effect.

 

Being able to correctly attribute causation is critical, particularly when conducting an evaluation and interpreting the findings.

 

4.1.3 Confounding

 

To rule out the possibility that a relationship between two events has been distorted by other, external factors, it is necessary to control for confounding. Confounding factors may be the real reason particular outcomes are observed, even though they have nothing to do with the intervention being measured.

 

To rule out confounding, additional information must be gathered and analyzed, including any information about factors that could plausibly influence outcomes.

 

When evaluating the impact of a prevention program on a particular behavior, we must know whether the program may have coincided with any of the following:

 

·         Other concurrent prevention initiatives and campaigns

·         New legislation or regulations in relevant areas

·         Relevant changes in law enforcement

·         For example, when mounting a campaign against alcohol-impaired driving, it is important to know whether other interventions aimed at road traffic safety are being undertaken at the same time. Similarly, if the campaign coincides with tighter blood alcohol concentration (BAC) limits and with increased enforcement and roadside testing by police, it would be difficult to say whether any drop in the rate of drunk-driving crashes was attributable to the campaign or to these other measures.
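One standard way to check for this kind of distortion is to stratify the data by the suspected confounder and compare groups within each stratum. The counts below are invented to make the point starkly: the crude comparison suggests a large difference between program and comparison groups, but within each stratum the groups are identical, so the apparent effect comes entirely from the confounder (here imagined as whether stepped-up enforcement was also active).

```python
# Hypothetical counts, illustrating how a confounder can create a
# crude difference that vanishes within strata
data = {
    # (group, stratum): (outcomes, total)
    ("program", "enforcement"):    (9, 90),
    ("program", "no enforcement"): (5, 10),
    ("control", "enforcement"):    (1, 10),
    ("control", "no enforcement"): (45, 90),
}

def rate(group, stratum=None):
    """Outcome rate for a group, overall or within one stratum."""
    pairs = [v for (g, s), v in data.items()
             if g == group and (stratum is None or s == stratum)]
    outcomes = sum(o for o, _ in pairs)
    total = sum(t for _, t in pairs)
    return outcomes / total

# Crude comparison suggests a large difference between groups...
print(f"crude: program {rate('program'):.0%} vs control {rate('control'):.0%}")

# ...but within each stratum the groups are identical: the crude
# difference was produced by the confounder, not the program
for s in ("enforcement", "no enforcement"):
    print(f"{s}: program {rate('program', s):.0%}"
          f" vs control {rate('control', s):.0%}")
```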

 

Addressing possible confounders is an important element for proper interpretation of results.

 

·         However, it is often impossible to rule out entirely the influence of confounders.

·         Care must be taken not to misinterpret the results of an evaluation and to avoid exaggerated or unwarranted claims of effectiveness; such claims inevitably lead to a loss of credibility.

·         Any potential confounders should be openly acknowledged in the analysis of the evaluation results.

·         It is important to state all results in a clear and unambiguous way so that they are easy to interpret. 

 

4.2       Short- and Long-term Outcomes

 

The outcomes resulting from an intervention may be seen in a number of different areas, including changes in skills, attitudes, knowledge, or behaviors.

 

·         Outcomes require time to develop. As a result, while some are likely to become apparent in the short term, immediately following an intervention, others may not be obvious until time has passed.

·         It is often of interest to see whether short-term outcomes persist over the medium and long term.

 

 

Evaluators should try to address short-, medium-, and long-term outcomes of an intervention separately.

 

·         If the design of a program allows, it is desirable to be able to monitor whether its impact is sustained beyond the short term.

·         Care should be taken to apply an intervention over a sufficiently long period of time so that outcomes (and impact) can be observed and measured.

 

Short- and long-term outcomes can be measured by using different methodologies for collecting data.

 

·         Cross-sectional studies involve measurement at a single point in time after the intervention has been applied and allow short-term results to be measured.

 

·         Longitudinal study designs, on the other hand, follow progress over longer periods and allow measurements to be taken at two or more points in time. They can help assess outcomes into the medium and long term.
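The contrast between the two designs can be sketched with hypothetical behavior scores for a single cohort (all numbers and wave labels below are made up for illustration): a cross-sectional design yields one snapshot, while a longitudinal design's repeated waves reveal whether the short-term gain is holding up.

```python
from statistics import mean

# Hypothetical behavior scores for the same cohort -- illustrative
cross_sectional = {"post": [70, 65, 72, 68]}  # one measurement point

longitudinal = {  # repeated follow-ups of the same cohort
    "post":      [70, 65, 72, 68],
    "6 months":  [66, 62, 70, 64],
    "12 months": [61, 58, 66, 60],
}

# A cross-sectional design yields a single snapshot...
print(f"post-only mean: {mean(cross_sectional['post']):.1f}")

# ...while a longitudinal design shows whether the short-term
# result persists into the medium and long term
for wave, scores in longitudinal.items():
    print(f"{wave}: mean {mean(scores):.1f}")
```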

 

Unfortunately, the reality is that, for most projects, resources and time frames available are likely to allow only for the measurement of short- and perhaps medium-term outcomes.

 

4.3       Providing the Proper Context

 

Interpreting results is possible only in the proper context. This includes knowing what outcomes one can reasonably expect from implementing a particular intervention, based on similar interventions that have been conducted previously.

 

For instance, when setting up a server training program, it is useful to know that such interventions have in the past helped reduce the incidence of violence in bars.

Therefore, if the results at the end of the intervention are at odds with what others have observed, this may indicate that the program was not implemented as intended or that some other problem occurred.

 
