RCTs and the democracy of the poor

If you’re even mildly interested in RCTs for international development, you’ve probably seen Lant Pritchett’s post on development as a faith-based activity by now, as well as Chris Blattman’s insightful reply.  I had an interesting conversation about this with Michael Clemens, Gabriel Demombynes, and Rohit Naimpally on Twitter today, which was useful in helping to parse Lant’s views more closely.  (Storified in case anyone would like to read along).  But what this discussion really made me think further about was the way in which RCT results have become a privileged type of knowledge in development.  I’m still a big supporter of using RCTs to compare the effectiveness of different development programs, but the point remains that this type of information is largely produced by academics in high-income countries, for major aid donors from high-income countries.  And I think this raises some major questions of voice and agency in international development that don’t usually come up in discussions about whether RCTs are worthwhile.

As an example of the latter debate, Evidence Matters had a thoughtful post recently about “how much evidence is enough.”  They made the excellent point that even well-conducted studies aren’t generalizable on their own, and that replication and systematic reviews should be the minimum standard for claiming to have verified the impact of a development program.  Of course, even great results from a worldwide replication aren’t sufficient to ensure that policymakers actually pay attention to them, and hence we also have people like Heather Lanthorn and Suvojit Chattopadhyay thinking critically about how policymakers work, and when evidence is likely to get used.

All really good stuff, which, if done well, should ideally increase the supply of effective development programs in the world.  And yet, whose voices come out in this?  Comments from individual users of development programs rarely make their way into quantitatively-oriented RCT results.  And if they do get to voice their opinions, those users – whether favela residents in Sao Paulo or smallholder farmers in Mali – don’t effectively have any say on whether the program is continued, or whether it was remotely close to the type of program they wanted for their town in the first place.  Working towards program effectiveness via RCTs is very useful, and it generally doesn’t touch on these political questions about whether impoverished people get to make these important decisions about their own lives in the first place.  (I am using “and” as the conjunction here instead of “but” quite purposefully.  I think both facts are true; they don’t cancel each other out.)

There’s obviously no easy way to empower everybody and bring truly inclusive democracy to the people who systemically get excluded in every country – the poor – in the short term.  And even in an inclusive democracy, there would still be a great place for RCTs, because there will always be questions about which design of a social program is more effective.  But I think development practitioners, and especially randomistas, need to think much more critically about making sure that the push for evidence doesn’t displace opportunities for citizens of low-income countries to have a real say about the type of “development” they’re participating in.

7 thoughts on “RCTs and the democracy of the poor”

  1. Rachel, thanks for good conversation fodder – as usual. To echo what has been said, I’d like to re-highlight that RCTs have become a privileged type of knowledge. Among whom and to what end? Among certain researchers? For sure. Among policymakers? Kinda. Insofar as they feel that they are supposed to ask for an impact evaluation, even if they aren’t totally sure what to do with it. But I don’t see why anyone in the field is satisfied with that.

    I’d also like to highlight the smart language you use in describing the productive role of RCTs: to compare the effectiveness of different development programs. Usefully set-up RCTs do that: they look at X versus Y variants of a program and figure out a way forward. RCTs that ask X versus not-X often aren’t going to be politically or practically useful (and the more “X” is an entitlement program, the less useful the result becomes, assuming that the result could be in favor of X or not-X).

    Suvojit points out that questionnaires technically gather program participant opinions. I think where that intersects with your broader point – and where such questionnaires often fail – is whether or not those participants had any role in defining what a “successful” program would look like (i.e. what questions they would be asked at the end). It’s not even clear if decision/policy-makers have a chance to play that role. I think this is a serious weakness in an awful lot of research… and I am frankly tired of getting to the end of a lot of articles only to ask “so what?” and “who cares?”

    Big ups to Ken’s call for infrastructure and enabling environments — I hope we start seeing more of this (recognizing that this partially loops us back to the foci of development in the 1950s and 1960s and that it needs to be done better/differently this time around).

    I do believe there is probably scope within even these ‘big’ programs for RCTs or IEs: the best ways to incentivize infrastructure builders, for example, or to make sure that increasing primary school enrollment is translating into solid job opportunities later. It’s about using RCTs to ask the questions they can answer and questions that matter in the sense of being linked to real decisions that need to be made…

    Hope the weather is getting better in Berkeley. Windows open in Delhi and so, so happy about it.


  2. Technically, surveys gather the opinion of programme participants. It gets their feedback, perception and data on whether their lives changed as a result of the intervention. Should results be taken back to them for their feedback? Of course. Is that done? Hardly ever.

    The other problem is that of mis-selling: when RCTs are sold to policymakers, the promise is that of reliable evidence; I think we need to do much more on policymaker education.

    Third: adherence to water-tight methodologies – randomistas do not fully understand or appreciate micro-level qualitative research, let alone the work of those who focus on macro issues.

    Fourth: I feel sorry for the development economist who is today expected to be an expert on governance, education, health, microfinance, employment, conflict, culture – all at the same time.


    1. All very good points! And I think #3 makes it especially hard to do #2 – how can we educate policymakers when we’re still working to understand the strengths and limits of the methodology ourselves?


  3. The number of households turned into a “small business” can be measured. How much of an enabling environment you’ve created can’t, and so a lot of folks won’t do it if they can’t measure it. Everybody wants metrics now; you apparently can’t do anything without them, even though we spent the entire history of the human race without measuring/justifying everything we did. Like the author mentioned, I want to see the “trial” where a bunch of people go out and ask the recipients about what is or is not working. It would be a miracle of measurement called: “Asking Reasonable Questions to those who Actually Receive your Program”.


  4. Great take on this subject. I like RCTs, but I think their usefulness to developing country governments is limited to specific public policy areas. I think work on public health initiatives is the most valuable, followed by work on agricultural productivity improvements. Most other stuff tends to fall into the category of “pro-poor” initiatives that don’t hold any transformative promise and at best would only keep people a bit more comfortable in their poverty. And as you imply in your post, most of the output would only be relevant to Western aid agencies.

    It is unfortunate that the most brilliant minds in development economics still shy away from the macro-stuff – how to build efficient infrastructure and create mass employment. Perhaps it is because they need to do both development work and publish in top journals. And RCTs are the best way to get yourself published in the AER or the QJE.

    When all is said and done I think that randomistas should be up front about the limitations of the approach. Turning every poor household into a small business is not a credible national development policy. The real BIG question in development is how to build infrastructure and create an enabling environment for mass job creation through industrialization and the development of large firms.


    1. Totally agreed! It is really interesting that questions about how to carry out macroeconomic reform seem to be out of fashion right now (perhaps as a backlash to the shortcomings of structural adjustment), while “good governance” projects are often vaguely defined and shy away from the often fraught political realities of low-income countries. It seems like a lot of today’s Western academics and development practitioners are caught trying to balance the promotion of effective development programs with an avoidance of overtly neocolonial meddling in macro-level policies. I also support both of these goals, but I’m with you that this leaves people in a bit of a bind if they feel that macroeconomic and governance reforms are in fact the best path to sustained economic growth.

