Power dynamics and research ethics

From a recent article on the nexus between rape and access to healthcare in eastern DRC, by Nicole D’Errico, Tshibangu Kalala, Louise Bashige Nzigire, Felicien Maisha and Luc Malemo Kalisya:

In public forums, Congolese people have also questioned whether or not they benefit from efforts made towards documenting gender-based violence. At one such event organised by patients in a hospital in Goma in June 2010, one survivor of rape stated, ‘[Researchers] say they can’t pay us [for research] because that would be unethical, but they take our dignity for free. They are paid to come here to talk to us but we get nothing!’ Many listeners agreed with this speaker, and a subsequent speaker asked whether or not foreign professors are paid to teach classes based on the knowledge gained from visits to the DRC, and suggested that such payments should be shared with their informants. (p. 53)

This is a huge issue in thinking about the ethics of research in low-income countries, particularly (but not exclusively) as a researcher from a different country.  Western academics are frequently reminded to strive for neutrality – not to let personal opinions get bound up in their projects, not to pay interview subjects lest they create incentives to participate and bias their samples – but I think we’re often so focused on this that we lose sight of the power dynamics that are also inherently part of the research process.

So what can be done about this?  At a minimum, researchers ought to be compensating their respondents for the time it takes to participate in an interview.  Sharing the completed research with respondents is also best practice, although I imagine this might have been cold comfort to the speaker quoted above.  I also respect the work that IPA is doing in disseminating results to government agencies and NGOs in the countries where it works, like this education conference in Burkina Faso, and this savings & payments conference in Uganda.  Other researchers I know have spent some time working as lecturers in universities in the countries where they work.  These ideas don’t address all of the speaker’s concerns, but they’re a useful step towards ensuring that research results aren’t locked in gated journals in the academic’s home country.

What are some other ways researchers can make sure they’re not simply taking from their respondents without giving back?

Survey software for mixed methods research

I attended a great course last month on mixed methods evaluation techniques for humanitarian programs at the Harvard Humanitarian Academy.  One of the most useful things I got out of the course (aside from a copy of this primer on mixed methods research designs) was a stronger sense of the types of survey software that are available for mixed-methods research.

The star of the show was definitely KoBoToolbox.  This free software was developed in partnership with researchers at the Harvard Humanitarian Initiative, and consequently is well suited to research in places without steady electricity or internet access.  We played around with the online form builder, which was incredibly easy to use, but surveys can also be designed and deployed to Android devices offline.  Once data has been collected, it can be synced to a local computer running any operating system.  The software also has some very useful functionalities beyond standard survey design, like collection of geospatial data and an option for integrating audio recordings into quantitative questionnaires.  The latter makes it a useful tool for organizing qualitative interviews – you could create a form to automatically track the date and location of the interview, and add other meta questions at the end (like the presence of other people, or whether the respondent seemed comfortable with the questions).
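For anyone curious what the guts of such a form look like, here’s a minimal sketch of an XLSForm-style definition built with pandas in Python.  The question names, labels and output filename are my own illustrations rather than anything from the course, but a spreadsheet along these lines is the kind of thing KoBo and other ODK-based tools can import.

```python
# Minimal sketch of an XLSForm-style interview-tracking form, written out as an
# Excel workbook with the standard "survey" and "choices" sheets.
# All field names and labels are illustrative.
import pandas as pd

survey = pd.DataFrame(
    [
        ("start", "start_time", ""),      # auto-recorded start timestamp
        ("today", "interview_date", ""),  # auto-recorded date
        ("geopoint", "location", "Record the interview location"),
        ("audio", "recording", "Attach the audio recording"),
        ("text", "respondent_id", "Respondent ID"),
        ("select_one yesno", "others_present",
         "Were other people present during the interview?"),
        ("select_one yesno", "respondent_comfortable",
         "Did the respondent seem comfortable with the questions?"),
        ("text", "interviewer_notes", "Any other notes on the interview context"),
    ],
    columns=["type", "name", "label"],
)

choices = pd.DataFrame(
    [("yesno", "yes", "Yes"), ("yesno", "no", "No")],
    columns=["list_name", "name", "label"],
)

# Write both sheets to a single workbook, ready to upload to the form builder.
with pd.ExcelWriter("interview_meta_form.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    choices.to_excel(writer, sheet_name="choices", index=False)
```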

KoBo is one of a number of survey tools built on Google’s workhorse program Open Data Kit.  ODK is free and open-source (Android only), with many of the same functions as KoBo, but according to other participants in the HHA course, its survey builder isn’t as easy to use.  Other paid services also built on ODK include SurveyCTO (which is used by IPA) and Enketo.  I haven’t looked into these options as much, but I believe they offer tech support and possibly help with database management.  SurveyCTO is also Android-based, while Enketo is platform independent.

The other two packages in use at IPA are Surveybe and Blaise.  These are both paid, Windows-based services.  Surveybe sounds pretty similar to the ODK-based programs above in terms of ease of programming.  Blaise is the heavy hitter of the survey software world.  There’s a very steep learning curve to the programming, but it’s capable of handling more complex survey designs than any of the others mentioned here.  (For example, the first project I worked on with IPA used Blaise to preload baseline data on farmers’ fields and crops into the midline questionnaire.  I’m pretty sure none of the other programs here could do that.)
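I won’t try to reproduce Blaise’s own scripting language here, but the general idea behind preloading is easy to sketch in Python: merge the baseline records into a file, keyed on respondent ID, that the midline questionnaire can draw on.  The file and column names below are hypothetical.

```python
# Rough sketch of building a preload file for a midline survey from baseline data.
# File and column names are hypothetical.
import pandas as pd

baseline = pd.read_csv("baseline_farmers.csv")      # e.g. farmer_id, field_size, main_crop
midline_sample = pd.read_csv("midline_sample.csv")  # farmer_id plus tracking details

preload = midline_sample.merge(
    baseline[["farmer_id", "field_size", "main_crop"]],
    on="farmer_id",
    how="left",
    validate="one_to_one",  # each farmer should appear at most once in each file
)

# Flag sampled farmers with no baseline record so the field team can follow up.
missing = preload[preload["main_crop"].isna()]
print(f"{len(missing)} farmers in the midline sample have no baseline data")

preload.to_csv("midline_preload.csv", index=False)
```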

Finally, hardware.  Everyone I’ve spoken to who’s deployed any of the Android-based programs has used Samsung Galaxy tablets for it.  I’ve got the 7″ version, which is quite portable but still large enough to comfortably type on.  The battery life is also good; it can be used for at least eight hours straight without charging.  When I was doing some consulting for a mixed methods evaluation in the DRC earlier this summer, we planned to send the survey teams out with these tablets and 6-watt solar chargers from Voltaic.  The other interesting hardware recommendation that came out of the HHA course was the Livescribe recording pen, which is a functional pen with an audio recorder inside.  A bit specialized for most researchers’ purposes, I think, but the course leader recommended it for qualitative interviews where the presence of a more obvious recording device might make people uncomfortable.  (No comment on its suitability for surreptitiously recording politicians doing shady things.)

RCTs and the democracy of the poor

If you’re even mildly interested in RCTs for international development, you’ve probably seen Lant Pritchett’s post on development as a faith-based activity by now, as well as Chris Blattman’s insightful reply.  I had an interesting conversation about this with Michael Clemens, Gabriel Demombynes, and Rohit Naimpally on Twitter today, which was useful in helping to parse Lant’s views more closely.  (Storified in case anyone would like to read along).  But what this discussion really made me think further about was the way in which RCT results have become a privileged type of knowledge in development.  I’m still a big supporter of using RCTs to compare the effectiveness of different development programs, but the point remains that this type of information is largely produced by academics in high-income countries, for major aid donors from high-income countries.  And I think this raises some major questions of voice and agency in international development that don’t usually come up in discussions about whether RCTs are worthwhile.

As an example of the latter debate, Evidence Matters had a thoughtful post recently about “how much evidence is enough.”  They made the excellent point that even well-conducted studies aren’t generalizable on their own, and that replication and systematic reviews should be the minimum standard for claiming to have verified the impact of a development program.  Of course, even great results from a worldwide replication aren’t sufficient to ensure that policymakers actually pay attention to them, and hence we also have people like Heather Lanthorn and Suvojit Chattopadhyay thinking critically about how policymakers work, and when evidence is likely to get used.

All really good stuff, which, if done well, should ideally increase the supply of effective development programs in the world.  And yet, whose voices come out in this?  Comments from individual users of development programs rarely make their way into quantitatively-oriented RCT results (I see signs that this is starting to change, but still very slowly).  And if they do get to voice their opinions, those users – whether favela residents in Sao Paulo or smallholder farmers in Mali – don’t effectively have any say on whether the program is continued, or whether it was remotely close to the type of program they wanted for their town in the first place.  Working towards program effectiveness via RCTs is very useful, and it generally doesn’t touch on these political questions about whether impoverished people get to make these important decisions about their own lives in the first place.  (I am using “and” as the conjunction here instead of “but” quite purposefully.  I think both facts are true; they don’t cancel each other out.)

There’s obviously no easy way to empower everybody and bring truly inclusive democracy to the people who systemically get excluded in every country – the poor – in the short term.  And even in an inclusive democracy, there would still be a great place for RCTs, because there will always be questions about which design of a social program is more effective.  But I think development practitioners, and especially randomistas, need to think much more critically about making sure that the push for evidence doesn’t displace opportunities for citizens of low-income countries to have a real say about the type of “development” they’re participating in.

Creating meaningful narratives for policymakers

Anyone who’s interested in doing policy-relevant research knows that making your findings accessible to information-overloaded policymakers is a challenge.  Duncan Green has written a good summary of a recent paper by Paul Avey and Michael Desch on this topic.  To further summarize Duncan’s points:

  • The more politicians know about a subject, the less they believe “experts”
  • Public visibility (including social media and blogging) is important for credibility
  • However, most policymakers still prefer to get information from major newspapers rather than more specialized (but possibly less credible) online sources
  • The best narrative, and not the best evidence, will win

The takeaway?  “Tell clearer, shorter stories and you may actually be listened to.”

(I also wrote about some of Avey & Desch’s work a few months ago, focusing on the types of academic work that policymakers found most accessible.)

Do PDAs reduce transcription error compared to paper surveys?

This came up on IPA’s research methods discussion list recently, and Nate Barker provided links to two papers suggesting that PDAs do help surveyors record respondents’ answers accurately.  They’re both from a few years ago, so with improvements in hardware and survey software, the advantage should be even greater today.  Blaya, Cohen, Rodriguez, Kim & Fraser (2008) compare PDAs to paper surveys in Peru, and a World Bank working paper from 2011 compares mobile phones to paper in Guatemala.  (From the comments, Giacomo Zanello also suggests this 2012 paper by Fafchamps, McKenzie, Quinn & Woodruff.)

What do American policymakers want from academics?

Paul Avey and Michael Desch had an interesting post on this question at the Monkey Cage a few weeks ago.  While the authors focused on American policymakers, I suspect that these findings are generalizable to policymakers outside of the US, and, on a slightly different set of topics, to managers at development NGOs as well.  The graph of their findings is striking:

[Figure: survey results on how useful policymakers find different types of academic research]

The categories with a clear preponderance of “very” or “somewhat” useful results are area studies, case studies and policy analysis.  Respondents appeared more divided over quantitative and theoretical analysis and operations research, though their views were still generally favorable.  The only category to receive majority unfavorable responses was formal modeling.

Note that the favorability of these approaches tends to increase with the amount of context and detail they provide.  Formal modeling is based on the idea that a set of simplified yet powerful assumptions about human nature can yield predictions about behavior which would apply to any actor in the same situation, regardless of context.  This is about as far as it gets from the types of qualitative, richly detailed works which often show up in area studies or policy analysis.

The point I took away was not that formal modeling is useless, but that research which provides detailed, contextualized descriptions of the problem at hand is more likely to be accessible to policymakers.  Barbara Walter’s book on the use of third parties to enforce civil war settlements is a great example of a work which uses formal modeling to derive its conclusions, but then highlights their policy relevance with a series of case studies.  It’s clearly not the case that policy-oriented research should sacrifice rigor, but rather that even the most rigorous research isn’t worth much if practitioners can’t understand it.

That said, even research which does not immediately appear to have policy implications can turn out to be useful in the long run.  Walter’s work was based on research like Bob Powell’s article on war as a commitment problem, which is a heavily mathematical study of “the inefficiency puzzle in the context of complete-information games” (p. 195).  Sounds about as far removed as possible from the messy real world, no?  And yet, while the policy implications of Powell’s article may not have been clear to practitioners, later researchers were able to build on it to make well-informed policy recommendations.  It’s the political science version of developing an incredible adhesive from biomechanical studies of gecko feet.
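(For anyone wondering what “the inefficiency puzzle” actually refers to, here is a stripped-down version of the standard rationalist bargaining setup that this literature works with.  The notation is my own simplification, not Powell’s.)

```latex
% Stripped-down version of the bargaining model behind the "inefficiency puzzle".
% Notation is illustrative, not Powell's own.
\documentclass{article}
\begin{document}
Two states bargain over a prize of size $1$. If they fight, state $A$ wins with
probability $p$, and the two sides pay costs $c_A, c_B > 0$, so their expected
payoffs from war are
\[
  u_A^{\mathrm{war}} = p - c_A, \qquad u_B^{\mathrm{war}} = (1 - p) - c_B .
\]
Any peaceful division $(x, 1-x)$ of the prize with
\[
  p - c_A \;\le\; x \;\le\; p + c_B
\]
leaves both sides at least as well off as fighting, and because $c_A + c_B > 0$
this range is never empty. War is therefore always ex post inefficient; the
puzzle is explaining why it happens anyway, which is where commitment problems
come in.
\end{document}
```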