Given my background in development economics and political science, it’s no surprise that I’m excited by the work that Evidence Action does to translate rigorous economic research into policy implementation. Karen Levy and Varna Sri Raman recently published a remarkably frank blog post discussing the challenges they faced when scaling up an anti-poverty program in Bangladesh after a successful pilot. The post stood out to me not only for its honesty about the difficulties of implementing at scale, but also for the amount of thought that EA and its implementing partner put into diagnosing and correcting the problems at hand.
The intervention in question was the “No Lean Season” program. In a pilot project, Gharad Bryan, Shyamal Chowdhury, and Mushfiq Mobarak gave rural residents small subsidies to temporarily migrate to cities to look for work during the hungry season before annual harvests. They found that this substantially increased consumption in the sending households. It’s a clever response to the shortage of non-farm employment opportunities in rural areas, and it also demonstrates how even small costs can prevent people from accessing better-paid opportunities elsewhere.
EA’s Beta Incubator subsequently worked with a Bangladeshi NGO to expand the subsidy program from about 5,000 households per year up to 40,000. It was switched from a pure subsidy to a loan in the process. However, they found that the NGO employees who were supposed to deliver the loans handed out fewer loans than expected. In addition, the loans didn’t seem to have the same effect: recipients weren’t much more likely to migrate than a comparison group that didn’t receive any money.
The section of the EA post that’s really worth reading is the analysis of why the scaling didn’t go according to plan. It stood out to me for its use of both qualitative and quantitative methods to better understand the newly scaled-up context in which the program operated, as well as the internal operations decisions of its partner NGO. Among the salient findings: the program had been expanded into new districts with much higher baseline rates of migration than the district where it was piloted, and a miscommunication with the NGO meant that employees’ performance targets for loan disbursement were set lower than the program actually required.
This is arguably the best example I’ve ever seen of why questions about the external validity of social policy RCTs are beside the point. Any program has to be adapted to its local context — and that context can vary significantly even at different scales of implementation, or between different districts in the same country.