When applying for my PhD I had to provide a research proposal. When I wrote my proposal, I wrote it for people who were economists but who were not familiar with my research area, causal inference. Since I was accepted, many people have asked me what my PhD is about, and these people have had a variety of backgrounds with differing knowledge of statistics and impact evaluation. This first post is an attempt to explain my proposal to someone with no background knowledge, e.g. my grandma. First, I’ll briefly explain randomised controlled trials and why they are useful, and then I’ll explain how I’m going to approach my main research question, ‘Do we need randomised controlled trials?’.
We often care about measuring the impact of some policy or ‘treatment’, for example, measuring the effect of a job training program on future wages. One common method for doing this is randomised controlled trials. You might be more familiar with them in a medical context, for testing the effect of different drugs or other medical treatments. With randomised controlled trials, some people are randomly assigned to receive the job training program (the treatment group), while others do not receive the training (the control group). Since we randomly assigned people to receive the training, we can measure the impact of the job training program by comparing the wages of people who received the training with those who did not.
Imagine we did not randomly assign the training and instead just let people choose whether or not to attend. It’s likely that more motivated people would choose to attend the training, and that motivation also has a direct effect on wages, with more motivated people earning more. Now if we just compared the people who attended the training with those who did not, we wouldn’t just be measuring the effect of the training; we’d also be capturing the effect of being more motivated. This is known as the problem of selection bias, as different types of people selecting into the treatment biases our result. The power of the randomised controlled trial comes from the fact that randomisation makes it highly likely that characteristics such as motivation are similarly distributed in both the treatment and control groups. This means that any difference in wages between the two groups must be because of the job training program, and not because of other differences between them. If you want to know more, here’s a really accessible article from the New York Times on workplace wellness programs and how randomised controlled trials provide a more convincing measurement of their impacts.
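If you like code more than words, here’s a tiny simulation of the story above. All the numbers are made up purely for illustration (a true training effect of $2/hour, motivation nudging both take-up and wages): it just shows that a self-selected comparison overstates the effect, while a randomised one lands near the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: each person has a latent "motivation" level.
motivation = rng.normal(0, 1, n)
true_effect = 2.0  # the training really adds $2/hour (assumed)

# Scenario 1: people self-select -- motivated people train more often.
p_train = 1 / (1 + np.exp(-2 * motivation))
selected = rng.random(n) < p_train

# Scenario 2: a coin flip assigns training at random.
randomised = rng.random(n) < 0.5

def wages(trained):
    # Wages depend on motivation AND on whether you were trained.
    return 15 + 3 * motivation + true_effect * trained + rng.normal(0, 1, n)

w_sel = wages(selected)
w_rct = wages(randomised)

naive_sel = w_sel[selected].mean() - w_sel[~selected].mean()
naive_rct = w_rct[randomised].mean() - w_rct[~randomised].mean()

print(f"True effect:              {true_effect:.2f}")
print(f"Self-selected comparison: {naive_sel:.2f}")  # badly biased upwards
print(f"Randomised comparison:    {naive_rct:.2f}")  # close to the truth
```

The self-selected comparison mixes the training effect with the motivation effect, so it comes out far above $2; the randomised comparison balances motivation across the two groups and recovers roughly $2.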
Now, my research is focused on answering the question, do we need randomised controlled trials? We can break this down into two sub-questions. Firstly, do we need to evaluate the program to know what its impact is? And secondly, if we decide we do need to evaluate the program, do we need to evaluate it with a randomised controlled trial or is an alternative method for impact evaluation suitable?
There are a couple of reasons why we could think we don’t need to run an impact evaluation of a program. Firstly, we might have very accurate intuitions (or priors) about what the impact of the program is. This might be because the program is simple and its effect is obvious, or maybe there have been lots of evaluations of similar programs done previously. Alternatively, we might not have accurate intuitions, but perhaps we could work it out by looking at the previous evaluations and applying some statistics and economic theory. So one thing I’ll be looking at is under what conditions people have accurate intuitions about the effects of programs. For example, I’ll look at whether people have better intuitions about the effects of certain types of programs (e.g. job training vs cash transfers), and whether people have better intuitions for the impact in different places and contexts (e.g. the US vs India). I’ll also try to explore different statistical techniques for predicting the impact of a program and see which of these techniques work well.
Now, if we don’t think we can accurately predict the impact of a program, then we have to do an impact evaluation of it. We could do a randomised controlled trial, but these are often expensive and time-consuming and not always possible or ethical. Other methods (e.g. regression, matching, or difference-in-differences) are often cheaper and easier to implement, but we might be more concerned that they suffer from the problem of selection bias as described above. What I will do is measure how big a problem selection bias is and how well these other methods are able to correct for it, in other words, how far away they are from the true effect. People have done this for individual studies before, but I will do it in a systematic way for many different studies. By doing this systematically, I’ll be able to explore under what conditions selection bias is a big problem and when these other methods are able to correct for this selection bias. If they are able to successfully correct for the selection bias then there is no need to run a randomised controlled trial, whereas randomised controlled trials are necessary in cases where these other methods do not work well.
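Here’s a sketch of the idea in code, again with made-up numbers rather than real data. Suppose people self-select into training based on motivation, but this time we can actually measure motivation. Then a regression that controls for it (one of the "other methods" above) can correct the selection bias and land close to the true effect, while the naive comparison cannot.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_effect = 2.0  # assumed true training effect, as before

# Self-selection: motivated people train more often, and earn more anyway.
motivation = rng.normal(0, 1, n)
trained = rng.random(n) < 1 / (1 + np.exp(-2 * motivation))
wage = 15 + 3 * motivation + true_effect * trained + rng.normal(0, 1, n)

# Naive comparison: suffers from selection bias.
naive = wage[trained].mean() - wage[~trained].mean()

# Regression adjustment: least squares for wage ~ 1 + trained + motivation.
X = np.column_stack([np.ones(n), trained.astype(float), motivation])
coef, *_ = np.linalg.lstsq(X, wage, rcond=None)
adjusted = coef[1]  # coefficient on "trained"

print(f"True effect:         {true_effect:.2f}")
print(f"Naive difference:    {naive:.2f}")     # biased
print(f"Regression-adjusted: {adjusted:.2f}")  # close to the truth
```

The catch, of course, is that this only works because the confounder (motivation) is observed here. In real studies we rarely measure everything that matters, and that is exactly what my research will test: comparing estimates like `adjusted` against RCT benchmarks across many studies to see when the correction succeeds.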
So that’s the simplified version of my research proposal. It may seem quite boring and technical but I think it is quite important. Impact evaluation research does influence policy so it is important to make sure we do it well. It can also be quite expensive so if we can use alternative methods to more cheaply get the same results, this frees up money to actually be used for implementing high-impact programs, instead of just evaluating them. I hope my research is able to improve the work of other researchers evaluating the effects of programs and policies all over the world.