Heather Lanthorn recently wrote a great post about defining “policy relevant evaluation” that really pushed me on my priors on this concept. As she points out:
just because research is conducted on policy does not automatically make it ‘policy relevant’ — or, more specifically, decision-relevant. it is, indeed, ‘policy adjacent,’ by walking and working alongside a real, live policy to do empirical work and answer interesting questions about whether and why that policy brought about the intended results. but this does not necessarily make it relevant to policymakers and stakeholders trying to make prioritization, programmatic, or policy decisions. in fact, by this point, it may be politically and operationally hard to make major changes to the program or policy, regardless of the evaluation outcome. …
jeff hammer has pointed out that even though researchers in some form of applied work on development are increasingly doing work on ‘real’ policies and programs, they are not necessarily in a better position to help high-level policymakers choose the best way forward. this needs to be taken seriously, though it is not surprising that a chief minister is asking over-arching allocative questions (invest in transport or infrastructure?) whereas researchers may work with lower-level bureaucrats and NGO managers or even street-level/front-line workers, who have more modest goals of improving workings and (cost-)effectiveness of an existing program or trying something new.
I think this is a great step towards an acknowledgement that different types of research will be useful to policymakers at different levels of government and with different policy goals. Most of the RCTs I’ve seen operate within a fairly narrow set of parameters that correspond to the types of programming decisions made by senior managers at social welfare ministries, like health or education. There’s a specific policy goal that someone wants to achieve (improving primary school children’s reading performance), a known segment of the population targeted by the policy (children ages 5 – 16 currently enrolled in school), and a strong sense of the limits of the type of solution that can be proposed, particularly financially (we can afford one hour of tutoring per day by a literate adult, but can’t build fully equipped libraries in every town). Within these parameters, RCTs can be a great way to evaluate the effectiveness of different types of programs that might meet this policy goal.
That said, if you change any of these parameters, RCTs are often no longer an efficient way to make programming decisions. Outside of social welfare ministries, many important policy choices either can’t be randomized (providing military support to an ally, deciding whether to invest in nuclear power) or don’t need to be (it’s already quite well-documented that expansionary monetary policy leads to inflation). As Heather noted, RCTs frequently can’t offer much guidance to policymakers making the inherently political choice between different policy goals. And they often don’t generate new insights effectively when the underlying process that produces a social problem, and the particular segments of the population affected by this process, aren’t known.
This is especially visible in recent RCTs examining the effects of institution-building after civil war. While people frequently speculate that the combination of poverty, inequality, and unemployed young people increases the risk of civil war, the majority of countries fitting this description never experience war. And even among those that do, the question of why some people choose to rebel, and what can be done to prevent these people or similar ones from fighting again in the future, remains basically unanswered. Virtually every country caught up in civil war has a large population of poor, politically excluded young people, but only a tiny minority of those people will ever join a rebellion, making it very difficult to figure out how to target programs aimed at reducing the likelihood of future conflict.
The point here isn’t that RCTs are useless, but that “policy relevant research” might take very different forms depending on the type of question being answered and the underlying knowledge base on the issue.