Part of making science more open is taking our pre-existing ways of disseminating and practising science, as seen in journals and statistical programs, and making them open. But a larger change is also taking place: domains previously reliant on argumentation and advocacy are starting to equip themselves with the methodological toolkits we scientists know so well.
For instance, support is steadily growing for the introduction of Randomised Controlled Trials (RCTs) into social policy. A key distinguishing feature of RCTs is that subjects, after recruitment and assessment, are randomly allocated to the treatments under study. Randomisation minimises allocation bias when sampling from a population. As you might already know, RCTs pretty much form the backbone of medical testing: you compare two or more interventions, say drug A [control placebo], drug B [current market drug] and drug C [a new drug], and then determine which is more effective in a controlled environment.
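To make the allocation step concrete, here is a minimal sketch in Python of how random assignment to treatment arms might look. The function name, arm labels and cohort are purely illustrative, not drawn from any real trial:

```python
import random

def randomise_allocation(participants, arms=("placebo", "current_drug", "new_drug"), seed=None):
    """Randomly allocate recruited participants across treatment arms.

    Shuffling the cohort and then dealing round-robin gives every
    participant an equal chance of landing in any arm, which is what
    minimises allocation bias.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    allocation = {arm: [] for arm in arms}
    for i, participant in enumerate(shuffled):
        allocation[arms[i % len(arms)]].append(participant)
    return allocation

# Hypothetical cohort of 12 recruited subjects
cohort = [f"subject_{n:02d}" for n in range(12)]
allocation = randomise_allocation(cohort, seed=42)
for arm, members in allocation.items():
    print(arm, len(members))
```

The key point is that assignment depends only on the shuffle, never on anything the researcher knows about the subject, so neither recruiter preference nor participant characteristics can systematically favour one arm.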
If we trust this method to produce our drugs, why aren’t we seeing it applied to social policy? We’ll come to that in a moment. For now, I just want you to watch a quick clip of Neil deGrasse Tyson who, appearing on Bill Maher’s show, nicely summed up the mode of thinking that takes place in political circles:
Well, as I said, the tide is turning, and we can best witness this change in the recent publication of Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. Not only does it offer an introduction to what RCTs are and how to run them, it also provides several instances where RCTs have already been profitably applied to social policy, along with some nice diagrams (see below). One great example is a study by Brooks et al. (2008), which ran an RCT of incentives to improve attendance at adult literacy classes. Not only did the incentive fail to produce any significant improvement in attendance compared with the non-incentive group; participants receiving incentives actually attended approximately two fewer classes per term.
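To illustrate how such a between-group comparison might be evaluated, here is a small sketch of a permutation test on the difference in mean attendance. The attendance figures below are invented for illustration only; they are not the Brooks et al. data, though they mimic the reported pattern of roughly two fewer classes in the incentive group:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.

    Repeatedly reshuffles the pooled observations into two pseudo-groups
    and counts how often a difference at least as extreme as the observed
    one arises by chance alone.
    """
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Hypothetical classes attended per term in each arm
incentive = [8, 7, 9, 6, 7, 8, 5, 7]
control = [10, 9, 8, 11, 9, 10, 8, 9]
diff, p_value = permutation_test(incentive, control)
print(f"mean difference: {diff:.3f}, p-value: {p_value:.4f}")
```

A negative mean difference with a small p-value is exactly the shape of the counterintuitive result described above: the incentivised group attended fewer classes, and the gap is unlikely to be a fluke of allocation.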
Such seemingly counterintuitive results highlight an important point: what we think are good ideas for social policy can turn out to have unintended, negative consequences. Science taught us long ago that resting on a priori reasoning and intuition simply won’t cut it. There is far more about the world that we don’t know than that we do, and the scientific process is the best available method for testing our hypotheses and theories against observations.
Still, even though I’m a bit of a cheerleader for RCTs, we need to know when to use them, and there are certainly instances where they won’t help. My main point is that we should strive to adopt a more scientific and empirical approach to social policy and, if we’re going for broad generalisations, in our everyday lives.
Brooks, G. (2008). Randomised controlled trial of incentives to improve attendance at adult literacy classes. Oxford Review of Education, 317(5), 362-504. DOI: 10.1080/03054980701768741