Over the past two weeks, I have had the opportunity to travel across Liberia for two very different projects: the World Bank-funded Youth Opportunities Project, which gives small business grants to youths in Greater Monrovia, and the USAID-funded Give Directly Project, which gives unconditional cash transfers to farmers in rural Nimba and Bong counties north of Monrovia. In both cases, my host organization, Innovations for Poverty Action, is conducting randomized-controlled trials of the programs to determine their effects on a range of health, education, and consumption outcomes.
A randomized-controlled trial, of course, requires both a treatment group and a control group in order to assess the impact of the program. In other words, a key feature of this form of impact evaluation is that one group of individuals does not receive “treatment,” or the program benefits. In theory, this study design is perfectly rational: in the absence of a control group, policymakers and evaluators would not be able to discern the causal impact of the program, making it challenging, if not impossible, to determine whether the program is truly effective in changing the outcomes of interest. Yet, while working in the field, the reality of randomization can be somewhat more difficult to swallow.
I first experienced these discomforts while working in the field for the Youth Opportunities Project. To allocate its grants in a random manner, YOP was conducting live lottery events in communities across Monrovia. My role as a member of the Innovations for Poverty Action team was to ensure that these lotteries were conducted transparently and in adherence to data quality protocols. Our team did everything—from placing lottery tickets in sealed buckets to having young children from the communities draw the tickets—to ensure that community members saw the lottery as fair. But of course, at the end of the day, not all of the participants who entered the lottery were chosen, and many appeared deeply disappointed by this fact. While I respected the need to keep some participants in the control group for research purposes, it was difficult to look these participants in the eye and explain to them that they would not be receiving a potentially life-changing grant.
Later in the week, I had the opportunity to travel to rural Nimba county with the Give Directly research team. While there, we visited several treatment and control villages to get on-the-ground insights into the program, local farmers’ lives, and the broader context of the region. At the end of one particularly lively conversation with a woman in a control village, we were asked, “Now that we’ve answered all of your questions, what are you giving us in return?” While completely reasonable, the question took us aback. A response along the lines of, “Getting more information about your life so that your government can formulate better policy!”—however true—isn’t particularly satisfying to a poor farmer concerned about feeding her family, maintaining her farm, and sending her children to school. Yet in reality, there wasn’t much more we could give her. As members of the control group, this woman and many others like her were asked to sit through our lengthy interviews and surveys—but would not receive much in return. It is difficult to communicate to a single individual the macro-level impact of her actions—to tell her, in easy-to-understand terms, that her voice and experiences matter for a broader purpose, even if she won’t see the benefits in her day-to-day life.
I finished the week in a reflective mood, thinking about these interactions with control group participants who, despite being just as deserving, do not receive any benefits for the purpose of research. These concerns are not new, and some scholars have gone as far as to propose a principle of “no survey without service,” suggesting that control groups should receive benefits beyond “policy learning,” provided that these benefits do not confound any inference of causal program impact (Orsin et al. 2008). Of course, in most cases, it would likely be extremely difficult to find a service that could be provided to control groups without significantly confounding the study results.
Ultimately, I am ending my summer with a strong belief in the power of randomized-controlled trials. Despite their challenges, I still firmly believe that this type of experimentation is a critical means of conducting rigorous impact evaluations and a crucial tool in any policymaker’s arsenal for making evidence-based decisions. Moreover, in the face of limited funding and program capacity, randomized allocation is often the only fair way to distribute benefits across a population. Yet I did come away from my field visits with a reaffirmed desire to continually reflect upon and improve the ways in which researchers share results with the communities being studied. Policymakers and researchers must strive to be creative in bringing benefits to all individuals involved in a study, regardless of the group in which they are placed.