Impact Evaluation in Conservation Science

This is the first blog in a weekly series exploring impact evaluation and building the evidence base in conservation science and policy.

Impact. It’s a term we hear every day in the news without batting an eye. Here are some headlines from just today:

http://www.bloombergview.com/articles/2015-09-11/human-impact-on-global-environment-may-be-peaking

http://www.usatoday.com/story/news/2015/09/11/house-votes-reject-iran-nuclear-deal-but-action-has-little-impact/72061716/

http://dailycaller.com/2015/09/11/new-data-shows-legalization-had-no-impact-on-teen-marijuana-use/

But what does the word impact actually mean? How can we really measure the impact of one thing (let’s say, a low-fat diet) on another (weight loss)? Aren’t there many other factors at play that could actually be causing the change? What if, while a subject begins a low-fat diet, he also starts exercising more and counting calories? Those factors are likely to affect the measured outcome (weight loss) as well and have to be “teased out” somehow. How can this be done?

Scientifically, we use a technical process called impact evaluation. This process attempts to estimate the actual effect of a treatment or intervention (e.g. a drug) on an important outcome (e.g. weight). This approach is used regularly in the medical research community when testing the effects of drugs on patients.

Researchers must carefully design the trial so that the population taking the drug and the population taking the placebo each represent a random cross-section of people. This helps to ensure that the two groups do not differ significantly in other ways that could affect the outcome. For example, if the treated (drug-taking) group in the trial contains a disproportionate number of individuals who have hypothyroidism (which is associated with low metabolism), this will bias the results. The hypothyroidism group may be inherently less (or perhaps more!) likely to respond to the effects of the drug. Although the drug could indeed have an effect, it would not be effectively measured by trying it on this non-random sub-sample of the population. Through random assignment of participants to the treatment and control groups, researchers ensure that the groups are not inherently different from each other. This way, the groups are fairly comparable and researchers can compare apples to apples.
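To make the randomization idea concrete, here is a minimal sketch with entirely made-up numbers: shuffling subjects into two groups tends to balance a confounder (here, a “metabolism” score standing in for something like hypothyroidism) across treatment and control.

```python
import random
import statistics

random.seed(42)

# Hypothetical trial population: each subject has a metabolism score,
# a stand-in for confounders such as hypothyroidism.
population = [{"id": i, "metabolism": random.gauss(100, 15)} for i in range(1000)]

# Random assignment: shuffle, then split in half.
random.shuffle(population)
treatment, control = population[:500], population[500:]

mean_t = statistics.mean(p["metabolism"] for p in treatment)
mean_c = statistics.mean(p["metabolism"] for p in control)

# With random assignment, the two groups end up with similar average
# metabolism, so any difference in outcomes can be attributed to the
# treatment rather than to this confounder.
print(round(abs(mean_t - mean_c), 1))
```

With 500 subjects per group, the chance difference in mean metabolism is small relative to its spread, which is exactly what makes the groups comparable.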

How a randomized control trial works. From http://library.downstate.edu/EBM2/2200.htm

The randomized control trial is considered the gold standard in scientific research. Because of the randomization, the evidence these studies produce provides solid estimates of the impact of treatments (e.g. drugs) on relevant outcomes (e.g. weight loss). However, in conservation and policy evaluation, it is nearly impossible to design a randomized experiment. For instance, would it be possible to test the effects of a Payments for Ecosystem Services program using a randomized control trial with control and treatment (i.e. paid) groups? This would be a highly inequitable public policy, as the payments would reach only certain people. What about evaluating a protected area using randomization? Is it possible to randomly protect some areas and not others and then measure the outcomes? Of course, this approach would be politically and practically impossible.

To get around these constraints, conservation scientists have borrowed from econometrics and adopted approaches known as quasi-experimental methods for impact evaluation. Because we cannot directly experiment with conservation policies, we instead must employ a roundabout approach – hence the “quasi.” Using sophisticated statistical techniques, conservation scientists attempt to simulate a randomized control trial. These approaches have been applied to evaluate protected areas and community-based conservation projects alike. The main idea is that they attempt to estimate the counterfactual – what would the outcome have been in the absence of the policy? We cannot observe this directly, so we use statistical approaches to estimate it. What are some examples of these approaches? I’ll review three here: matching, structural equation modeling, and instrumental variables.

For example, with an approach called matching, treated groups are compared with selected control groups that are as similar as possible to the treated groups. In other words, they are matched up with each other. This matching process is designed to eliminate or reduce selection bias to the extent possible. Selection bias is essentially the converse of randomization: certain localities are more likely than others to be selected for a policy treatment (such as a protected area). By using matched controls with similar characteristics, selection bias can be reduced. Imagine a simple matching situation: you want to estimate the impact of a protected area on forest cover. To select matches, you’ll use covariates – variables that correlate with both the treatment assignment (protection) and the outcome variable (forest cover). One example of a covariate is distance to roads. (There are many other possible covariates, but we can explore just one in this simple example.) To select the appropriate matched control, simply choose an unprotected locality with the closest possible value for distance to roads.

Figure: matching example.

In reality, the analysis would include many covariates that would need to match as closely as possible between treated and control groups. Adding more covariates helps reduce selection bias, but it also makes it more difficult to identify close matches.
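The one-covariate example above can be sketched in a few lines of Python; all site values here are invented for illustration.

```python
# Toy one-covariate matching: for each protected (treated) site, pick the
# unprotected site whose "distance to roads" value is closest, then compare
# average forest cover. All numbers are made up for illustration.

treated = [  # (distance_to_roads_km, forest_cover_pct)
    (12.0, 85), (30.0, 92), (8.0, 70),
]
controls = [
    (2.0, 40), (11.5, 80), (29.0, 88), (7.5, 66), (50.0, 95),
]

def nearest_control(site, pool):
    """Return the control site whose road distance is closest to the treated site's."""
    return min(pool, key=lambda c: abs(c[0] - site[0]))

matched = [nearest_control(t, controls) for t in treated]

# Naive estimate of the protection effect: mean difference in forest cover
# between treated sites and their matched controls.
effect = sum(t[1] - m[1] for t, m in zip(treated, matched)) / len(treated)
print(round(effect, 2))  # 4.33
```

Note how the very close control (2.0 km, 40% cover) is never used: without matching on distance to roads, such sites would drag the control average down and inflate the estimated effect of protection.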

A second approach is called structural equation modeling (SEM). This method is somewhat similar to matching, as it employs covariates to reduce selection bias. However, SEM also allows the inclusion of components other than the treatment, called mediating factors, that may also contribute to the outcome. Using this method, it is possible to identify mechanisms – exactly how, and through what pathways, the treatment affected the outcome. In addition, the interactions between mediating variables can be taken into account – an advantage of SEM over matching, which leaves those interactions out.

Figure: structural equation modeling (SEM).
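As a rough illustration of the mediation idea behind SEM (not a full SEM implementation), the sketch below simulates a hypothetical pathway – protection increases enforcement patrols, which in turn increase forest cover – and then recovers the direct and mediated path coefficients with ordinary least squares. All variable names and coefficients are invented.

```python
import random

random.seed(0)

# Toy path model: protection (treatment) raises patrols (a hypothetical
# mediating factor), and both protection and patrols affect forest cover.
# True direct effect of protection = 0.5; true effect of patrols = 1.5.
n = 2000
treat = [random.random() < 0.5 for _ in range(n)]
patrols = [2.0 * t + random.gauss(0, 1) for t in treat]
forest = [1.5 * m + 0.5 * t + random.gauss(0, 1) for t, m in zip(treat, patrols)]

def ols2(y, x1, x2):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the normal equations."""
    n = len(y)
    mx1, mx2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((a - mx2) ** 2 for a in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - mx2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Regressing the outcome on both treatment and mediator separates the
# direct path (protection -> forest) from the mediated path (via patrols).
direct, via_patrols = ols2(forest, [float(t) for t in treat], patrols)
print(round(direct, 1), round(via_patrols, 1))
```

Separating the two paths is exactly the kind of mechanism question that a full SEM answers for larger networks of mediating variables.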

Another approach employs instrumental variables. In this case, you use an instrument – a variable that affects the probability of treatment but does not affect the outcome except through the treatment. In other words, the instrument is not correlated with unobserved confounding variables. The advantage here is that this approach reduces selection bias by isolating variation in the treatment that is not “contaminated” by unobserved confounders. However, it is difficult in practice to identify instruments that actually work. One example of an instrumental variable that has been used in protected area evaluation is distance to rivers. The idea is that protected areas tend to be located near rivers, while distance to rivers is unlikely to directly affect an outcome variable such as forest loss.

Figure: instrumental variables.
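A minimal sketch of the instrumental-variables logic, using simulated data with made-up coefficients: a naive regression is biased by an unobserved confounder, while the simple IV (Wald) estimator – the instrument’s effect on the outcome divided by its effect on the treatment – gets much closer to the true effect.

```python
import random

random.seed(1)

# Toy setup: "distance to rivers" (the instrument) shifts the probability
# of protection (the treatment) but, by assumption, affects forest loss
# only through protection. An unobserved confounder biases the naive
# comparison. True effect of protection on forest loss = -2.0.
n = 5000
river = [random.random() for _ in range(n)]          # instrument
confound = [random.gauss(0, 1) for _ in range(n)]    # unobserved confounder
protect = [1.0 if (0.8 * r + 0.3 * c + random.gauss(0, 0.3)) > 0.4 else 0.0
           for r, c in zip(river, confound)]
forest_loss = [-2.0 * p + 1.0 * c + random.gauss(0, 1)
               for p, c in zip(protect, confound)]

def slope(y, x):
    """Simple OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Naive OLS is biased because the confounder drives both protection and
# forest loss; the IV (Wald) estimator divides the instrument's effect
# on the outcome by its effect on the treatment.
naive = slope(forest_loss, protect)
iv = slope(forest_loss, river) / slope(protect, river)
print(round(naive, 1), round(iv, 1))
```

The IV estimate lands near the true value of -2.0, while the naive estimate is pulled toward zero by the confounder – the “contamination” the instrument is meant to strip out.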

These are just three examples of evaluation approaches that can be used in conservation science. The appropriate method for your research will depend on several factors, especially the availability of data and the scale of analysis.

For more information about impact evaluation in conservation science, check out these useful references:

Baylis, K., Honey-Rosés, J., Börner, J., Corbera, E., Ezzine-de-Blas, D., Ferraro, P. J., Lapeyre, R., Persson, U. M., Pfaff, A. and Wunder, S. (2015), Mainstreaming Impact Evaluation in Nature Conservation. Conservation Letters. doi: 10.1111/conl.12180

Ferraro, P. J. (2009), Counterfactual thinking and impact evaluation in environmental policy. New Directions for Evaluation, 2009: 75–84. doi: 10.1002/ev.297