If you’re even mildly interested in RCTs for international development, you’ve probably seen Lant Pritchett’s post on development as a faith-based activity by now, as well as Chris Blattman’s insightful reply. I had an interesting conversation about this with Michael Clemens, Gabriel Demombynes, and Rohit Naimpally on Twitter today, which helped me parse Lant’s views more closely. (Storified in case anyone would like to read along.) But what this discussion really made me think further about was the way in which RCT results have become a privileged type of knowledge in development. I’m still a big supporter of using RCTs to compare the effectiveness of different development programs, but the point remains that this type of information is largely produced by academics in high-income countries, for major aid donors from high-income countries. And I think this raises some major questions of voice and agency in international development that don’t usually come up in discussions about whether RCTs are worthwhile.
As an example of the latter debate, Evidence Matters had a thoughtful post recently about “how much evidence is enough.” They made the excellent point that even well-conducted studies aren’t generalizable on their own, and that replication and systematic reviews should be the minimum standard for claiming to have verified the impact of a development program. Of course, even great results from a worldwide replication aren’t sufficient to ensure that policymakers actually pay attention to them, and hence we also have people like Heather Lanthorn and Suvojit Chattopadhyay thinking critically about how policymakers work, and when evidence is likely to get used.
All really good stuff, which, if done well, should ideally increase the supply of effective development programs in the world. And yet, whose voices come out in this? Comments from individual users of development programs rarely make their way into quantitatively oriented RCT results (I see signs that this is starting to change, but still very slowly). And even if they do get to voice their opinions, those users – whether favela residents in São Paulo or smallholder farmers in Mali – effectively have no say over whether the program is continued, or whether it was remotely close to the type of program they wanted for their town in the first place. Working towards program effectiveness via RCTs is very useful, and it generally doesn’t touch on these political questions about whether impoverished people get to make these important decisions about their own lives in the first place. (I am using “and” as the conjunction here instead of “but” quite purposefully. I think both facts are true; they don’t cancel each other out.)
There’s obviously no easy way to empower everybody and bring truly inclusive democracy, in the short term, to the people who systemically get excluded in every country – the poor. And even in an inclusive democracy, there would still be an important place for RCTs, because there will always be questions about which design of a social program is more effective. But I think development practitioners, and especially randomistas, need to think much more critically about making sure that the push for evidence doesn’t displace opportunities for citizens of low-income countries to have a real say about the type of “development” they’re participating in.