Making x-centric less eccentric

Lant Pritchett’s latest post about the limits of randomized controlled trials (RCTs) in development economics has been making the rounds for a few weeks in the small universe of people who care deeply about them.  His critique, in short, is that there’s a fad for examining whether “intervention X affects outcome Y” (hence “x-centric” research), but researchers often give too little attention to whether the proposed intervention would be feasible and cost-effective outside the context of an academic study.

This line of criticism isn’t new, and most people I know who do development RCTs would probably agree with it.  There’s a lot of work already underway to remedy some of these shortcomings.  To take several of Lant’s points in order:

“X-centric can become eccentric by being driven by statistical power.”  Lant’s point here is that many questions we might care about, such as why China grew so rapidly after the 1970s, or why some countries have better educational outcomes than others, aren’t amenable to randomization.  This is very obviously true, and I don’t know a single person who argues that RCTs are the only valid research method for every question in economics.  As the graph below shows, RCTs are still a minority of all published research in the discipline.  There’s also a lot of interesting case-based research that addresses these questions, although you sometimes have to go next door to political science to find it.  Two examples that come to mind are Douglass North, John Wallis, and Barry Weingast’s work on the institutional prerequisites for economic growth, and Stephen Kosack’s work on the politics of education in Taiwan, Ghana, and Brazil.

[Graph: RCTs are still a small fraction of all published papers in most economics journals.  Via David McKenzie.]

“X-centric can become eccentric by never asking how big.”  The idea here is that many published development RCTs have results that are statistically significant but substantively small.  For example, a study might report the headline result that tutoring improves students’ test scores, when the actual impact is a difference of only one percentage point.  This is a real challenge, and I think it’s exacerbated by economists’ tendency to present their results to non-specialists using statistical terms of art (like standard deviations) rather than more straightforward measures (like percentage-point changes in test scores).  One organization taking good steps towards comparing impact size across interventions is AidGrade, which has built an online tool that lets anyone carry out their own meta-analysis of aid effectiveness.
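
To see how a result can clear the bar for statistical significance while staying substantively small, here’s a back-of-the-envelope sketch in Python.  All the numbers (sample size, score spread, effect size) are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical tutoring RCT: 5,000 pupils per arm, test scores measured
# in percentage points with a standard deviation of 20, and a true
# effect of 0.05 standard deviations.
n_per_arm = 5_000
score_sd = 20.0    # SD of test scores, in percentage points
effect_sd = 0.05   # effect size in standard-deviation units

effect_pp = effect_sd * score_sd               # = 1 percentage point
se_diff = score_sd * math.sqrt(2 / n_per_arm)  # SE of the difference in means
z = effect_pp / se_diff                        # = 2.5, so p < 0.05

print(f"Impact: {effect_pp:.1f} percentage points on the test")
print(f"z-statistic: {z:.2f} -> statistically significant at the 5% level")
```

“Tutoring significantly raises test scores” and “tutoring raises test scores by one percentage point” are both accurate descriptions of this result, but only the second tells a policymaker whether the intervention is worth paying for.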

“X-centric can become eccentric by ignoring external validity.”  This is the issue addressed by Evidence in Governance and Politics’ Metaketa Initiative, which offers funding for clusters of studies that examine similar interventions in different countries.  Current projects focus on questions of information and accountability, taxation, natural resource governance, and community policing.  There are also one-off initiatives like IPA’s series of Ultra Poor Graduation pilots, which replicated the same social protection intervention in seven different countries.

“X-centric can become eccentric by ignoring implementation feasibility.”  I find this critique a bit curious, because it assumes we know ex ante which types of interventions will or won’t work in a given context.  One could easily have assumed that it wouldn’t be possible to provide biometric identification to 99% of Indian citizens, or to get 94% of children in Burundi into primary school (twenty percentage points higher than the regional average, in one of the poorest countries in Africa).  But there is a valid point here: simply knowing that an intervention is effective doesn’t automatically translate into the political will to implement it on a large scale.  Organizations like Evidence Action and Evidence Aid are tackling this challenge by working with governments and NGOs to share information about successful interventions and scale them up.  Rachel Glennerster and Mary Ann Bates of J-PAL have also created a new framework for assessing when an intervention can be successfully scaled or used in different country contexts.