Do PDAs reduce transcription error compared to paper surveys?

This came up on IPA’s research methods discussion list recently, and Nate Barker provided links to two papers suggesting that PDAs do help surveyors capture respondents’ answers accurately.  They’re both from a few years ago, so with improvements in hardware and survey software the advantage should be even greater today.  Blaya, Cohen, Rodriguez, Kim & Fraser (2008) compare PDAs to paper surveys in Peru, and a World Bank working paper from 2011 compares mobile phones to paper in Guatemala.  (From the comments, Giacomo Zanello also suggests this 2012 paper by Fafchamps, McKenzie, Quinn & Woodruff.)

What do American policymakers want from academics?

Paul Avey and Michael Desch had an interesting post on this question at the Monkey Cage a few weeks ago.  While the authors focused on American policymakers, I suspect that these findings generalize to policymakers outside the US and, on a slightly different set of topics, to managers at development NGOs as well.  The graph of their findings is striking:

[Figure: respondents’ ratings of how useful they find each research approach, from area studies to formal modeling]

The categories with a clear preponderance of “very” or “somewhat” useful results are area studies, case studies and policy analysis.  Respondents appeared more divided over quantitative and theoretical analysis and operations research, but still generally favorable.  The only category to receive majority unfavorable responses was formal modeling.

Note that the favorability of these approaches rises with the amount of context and detail they tend to provide.  Formal modeling is based on the idea that a set of simplified yet powerful assumptions about human nature can yield predictions about behavior which would apply to any actor in the same situation, regardless of context.  This is about as far as it gets from the types of qualitative, richly detailed works which often show up in area studies or policy analysis.

The point I took away was not that formal modeling is useless, but that research which provides detailed, contextualized descriptions of the problem at hand is more likely to be accessible to policymakers.  Barbara Walter’s book on the use of third parties to enforce civil war settlements is a great example of a work which uses formal modeling to derive its conclusions, but then highlights their policy relevance with a series of case studies.  It’s clearly not the case that policy-oriented research should sacrifice rigor, but rather that even the most rigorous research isn’t worth much if practitioners can’t understand it.

That said, even research which does not immediately appear to have policy implications can turn out to be useful in the long run.  Walter’s work was based on research like Bob Powell’s article on war as a commitment problem, which is a heavily mathematical study of “the inefficiency puzzle in the context of complete-information games” (p. 195).  Sounds about as far removed as possible from the messy real world, no?  And yet, while the policy implications of Powell’s article may not have been clear to practitioners, later researchers were able to build on it to make well-informed policy recommendations.  It’s the political science version of developing an incredible adhesive from biomechanical studies of gecko feet.

Contextualizing RCTs

The World Bank’s Development Impact blog had two great posts recently which touched on the idea of contextualizing RCTs.  David McKenzie, writing about clinical equipoise in RCTs, says that it would be useful to do more experiments on targeting interventions, to understand how targeting needs might differ across countries.  And both he and Eva Vivalt are concerned that impact evaluations rarely include the costs of the intervention in published materials, or, better yet, compare the intervention to a cash transfer of equivalent value.  Development agencies that hope to implement a “proven” intervention will have more difficulty doing so if they can’t learn about the costs of implementation.  (I think this is why Ted Miguel & Michael Kremer’s paper on school-based deworming has gained so many eager implementers: they included a persuasive cost-benefit analysis.  Of course, it helps that deworming is very cheap in the first place.)
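
To make the cash-benchmark idea concrete, here is a minimal sketch of that comparison in Python.  Everything in it (the function, the dollar amounts, the effect sizes) is hypothetical and invented for illustration; none of it is drawn from the evaluations mentioned above.

```python
# Hypothetical sketch: cost per unit of impact for an intervention vs. a cash
# transfer of the same per-person cost.  All numbers are made up for illustration.

def cost_per_unit_impact(total_cost, effect_per_person, n_treated):
    """Dollars spent to produce one unit of the outcome (e.g. one extra school year)."""
    return total_cost / (effect_per_person * n_treated)

# Imagined intervention: $5,000 spread across 1,000 people, raising the outcome
# by 0.14 units per person on average.
intervention = cost_per_unit_impact(total_cost=5_000, effect_per_person=0.14, n_treated=1_000)

# Benchmark: hand each person the same $5 as an unconditional cash transfer,
# which (hypothetically) raises the outcome by 0.02 units per person.
cash_benchmark = cost_per_unit_impact(total_cost=5_000, effect_per_person=0.02, n_treated=1_000)

print(f"Intervention:  ${intervention:.2f} per unit of impact")
print(f"Cash transfer: ${cash_benchmark:.2f} per unit of impact")
```

The point of publishing numbers like these alongside the impact estimates is that an implementer can see at a glance whether the intervention beats the simplest alternative use of the same money.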

I would love to see more information on two additional topics: how an intervention was administered, and how the respondents themselves understand the effects that it had on their lives.  Most impact evaluations provide a solid overview of the program or policy being implemented, but don’t offer much other local context.  For example, what is the relationship of the implementing NGO/bank/government branch to the respondents?  Is it seen as favoring some community members over others, or as even-handed towards all?  Has it been active in the community for a long time, or is it a newcomer?  A new organization may not have earned much trust among the community.  At the same time, people may participate more often than they otherwise would if the intervention is offered by a well-established organization, in order not to offend them and possibly lose access to future services.*  It’s reasonable to think that respondents’ beliefs about the implementing organization may have an effect on their participation in an intervention, and it would be useful to understand more of this context, perhaps through key informant interviews.  (In fact, you could write a much longer list of contextual effects – whether there was a banking scandal recently, whether the last round of animal donations included sick goats which infected the rest of the town’s herd and then died, whether there was an especially good harvest the previous year which left everyone flush with cash, etc.)

I also think that there’s a great need to hear more from respondents themselves about how an intervention affected them.  As one of my former colleagues at IPA (I think it was Liz) pointed out to me recently, there’s no reason for RCTs to be constrained to quantitative data collection.  Doing more qualitative process tracing would be a useful way for researchers to examine whether they’ve correctly identified the mechanisms underlying an observed social change, or whether respondents themselves perceive things differently.**  For instance, in a 2007 paper, Xavier Gine and Dean Yang found that farmers in Malawi were half as likely to take out a loan when it came bundled with a completely free crop insurance product.  Based on some suggestive correlations in the data, they believe this occurred because it was harder for less-educated farmers to evaluate the value of the crop insurance, and that this dissuaded them from accepting the loan.  It’s a reasonable explanation, but one can easily imagine how it might have been more informative to ask the farmers themselves to discuss their decisions.
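
For what it’s worth, the quantitative comparison behind a result like this is simple enough to sketch in a few lines; what it cannot do is tell you why the farmers decided as they did.  Here is a toy version in Python, with invented counts rather than the study’s actual data:

```python
# Rough sketch: comparing loan take-up between a loan-only arm and a
# loan-plus-free-insurance arm.  The counts below are invented for illustration.
from math import erf, sqrt

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in take-up rates between two arms."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return p1, p2, z, p_value

# Hypothetical arms: 400 farmers offered the plain loan, 400 offered the bundled one.
p_plain, p_bundled, z, p = two_proportion_ztest(x1=132, n1=400, x2=68, n2=400)
print(f"Take-up: loan-only {p_plain:.1%}, bundled {p_bundled:.1%}  (z = {z:.2f}, p = {p:.4f})")
```

A difference in proportions like this establishes that something happened; interviews or focus groups with the farmers are the natural complement for understanding why.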

* While every implementing organization should make clear that the decision to participate or decline participation will not affect eligibility for future benefits, I’m skeptical about whether people actually believe this.  If I had a rich aunt who was funding my studies, and she said, “I’d love it if you could reschedule those other plans for next week and come to my birthday party, but it’s quite all right if you can’t make it,” I would definitely think twice before skipping the party, even if I were completely convinced that she wasn’t about to defund me if I missed it.  In many areas of our lives, access to resources depends on maintaining good relationships with others.  I suspect that it’s hard to put this idea aside, even if an NGO clearly promises that future benefits don’t hinge on current behavior.  It would be interesting to test this somehow, although I can’t think of any ethical way to do so at the moment.

** There is, of course, a much larger set of questions about voice and agency in the practice of international development, which I think is beyond the scope of this post but should still be acknowledged.

Impact evaluation of conflict prevention programs

Jeannie Annan and Marie Gaarder have a recent paper out on using experimental and quasi-experimental methods to evaluate programs in countries which have experienced conflict (link, PDF). They review the methodological approaches of a number of recent post-conflict evaluations, and address the ethical implications of doing research in conflict zones. Their list of questions about the ethics and feasibility of such evaluations is very good:

(ii) Does the sample size factor in the potential for higher attrition due to potential security issues, migration, or ethical concerns?  …

(v) Is there a security protocol or guidelines for evaluation staff? Does evaluation staff fall under any organizational protection for security?

(vi) Who carries the legal responsibility for the risks taken? Have the researchers partnered with an organization able to bear the risks? …

(viii) Does the evaluation team have strong key informants who can provide thoughtful analysis about the security situation and the research implications at the design phase and throughout the evaluation?

Required reading for anyone who’s considering doing research in post-conflict countries.

Research snapshots

That said, my last post doesn’t particularly convey the sense that I like my job, which I very much do.  There are all these small human moments that account for that liking, such as the following.

  • Perhaps the most wondrous thing about field research is people’s grace in allowing strangers into their homes, their lives, to pose a series of questions whose purpose must surely seem cryptic to them.  (The surveyors do introduce themselves, and the purpose of the research, of course.  But I think it’s a far cry from those introductions to understanding the worlds of academic publishing, or [in the case of this study] insurance product design, that are the prime movers behind these surveys.)  And yet they do let them in.  They even let me in, when I am monitoring surveyors in the field, and I have been profoundly grateful for these chances to sit under respondents’ carefully thatched roofs and listen to snatches of their lives in my mediocre Dagbani.
  • It’s been great getting to know our surveyors.  The team leaders are just great – thoughtful, organized, and intelligent – and I’ve slowly moved past my initial monolithic impression of the larger survey team as “that group of 20 men (and one woman) who do a lightning strike on the office for their netbooks each morning” to individual interactions, individual personalities.  There’s L., who willingly took on additional work when his team leader fell ill, and D., who is perpetually flashing the friendliest smile at everyone, and many others.  They have been a fantastic group of people to work with.
  • And honestly, much of what’s enjoyable is a succession of small daily things.   The temporary cooling of buying cold Pure Water sachets on the way to the office and drinking them as quickly as possible.  Chasing chickens and small beautiful children out of the open door of the Walewale office, somewhat halfheartedly, because they think it’s a game to come into the office and get chased, and I enjoy the break.  An unexpected frog hopping out of a backpack containing soil samples and into my hands, to be set free outside.  All of these, perfect pleasant diversions from a job that is at times overwhelmingly busy, but always worthwhile.

Where I’ve been

A random sample of a different sort

There is a very strong correlation between my returning to Africa and my completely neglecting this blog – which says less about African internet than about how busy I always find myself when I’m here!  I came into my current position with IPA at the beginning of a two-month household survey examining underinvestment in agriculture in northern Ghana, and since then our whole team has been working non-stop.  Our surveyors leave between 7 and 8 am every day, so I’m usually at the office by 6.30 to make sure that everything’s prepared.  Then it’s a long day of tracking survey documents, sorting soil samples, assigning survey teams to new communities, preparing per diem payments, troubleshooting the netbooks & survey software, selecting respondents for audits, taking calls from surveyors, and making frequent three-hour round trips up to our satellite office in Walewale, among any number of other things.  An early day might end at 7 pm, and a late one at 10 pm.  The sheer amount of work has forced me to grow more as a manager than I have in any other position I’ve yet had, which has been fantastic.  It simply doesn’t leave much space at the edges of my days for anything else.