What is it about?

Online education is rapidly expanding in response to rising demand for higher and continuing education, but many online students struggle to achieve their educational goals. Several behavioral science interventions have shown promise in raising student persistence and completion rates in a handful of courses, but evidence of their effectiveness across diverse educational contexts is limited. In this study, we test a set of established interventions over 2.5 years, with a quarter-million students, from nearly every country, across 247 online courses offered by Harvard, the Massachusetts Institute of Technology, and Stanford. We hypothesized that the interventions would produce medium-to-large effects, as in prior studies, but this is not supported by our results. Scaling behavioral science interventions across various online learning contexts can reduce their average effectiveness by an order of magnitude. However, iterative scientific investigations can uncover what works where, and for whom.


Why is it important?

Low persistence in educational programs is a major obstacle to social mobility. Scientists have proposed many scalable interventions to support students learning online. In one of the largest international field experiments in education, we iteratively tested established behavioral science interventions and found small benefits depending on individual and contextual characteristics. Forecasting intervention efficacy using state-of-the-art methods yields limited improvements. Online education provides unprecedented access to learning opportunities, as evidenced by its role during the 2020 coronavirus pandemic, but adequately supporting diverse students will require more than a light-touch intervention. Our findings encourage funding agencies and researchers conducting large-scale field trials to consider dynamic investigations to uncover and design for contextual heterogeneity to complement static investigations of overall effects.


This study makes three important contributions. First, the authors have previously published studies suggesting that pre-course survey prompts could substantially help learners meet their goals and persist through courses. We now revise those expectations to be much more modest: if our interventions help, they only do so a little (which might still be OK, because they are nearly free!). Second, our evidence suggests that the course context matters; an intervention that works in one context might not work in another. This is a very thorny challenge for education technology design and pedagogy, since scale is hard to achieve if every context is different. Third, from a policy perspective, perhaps funding agencies should be more cautious about efforts to "scale up what works" and more interested in identifying effective ways to press for local adaptations.

Justin Reich
Massachusetts Institute of Technology

Read the Original

This page is a summary of: Scaling up behavioral science interventions in online education, Proceedings of the National Academy of Sciences, June 2020, DOI: 10.1073/pnas.1921417117.