What’s the problem with making data-driven and evidence-based decisions to guide social policy? If the data and evidence are derived from randomized controlled trials (RCTs), then perhaps quite a lot.
In a recent Stanford Social Innovation Review article, Srik Gopal and Lisbeth B. Schorr make a compelling case that the uncritical application of the “Moneyball” ideal to social policy is a flawed approach that overlooks “the fundamental realities of how complex social change happens.”
To make their case, Gopal and Schorr challenge the view that evidence from RCTs — often regarded as the “gold standard” of evaluation methodology — is automatically more valuable than other types of evidence. In particular, they question the assumption that an effective intervention in one setting can be transferred to a new context with fidelity.
Despite their reservations about the usefulness of RCTs, they do not advocate abandoning the rigorous use of data altogether. Rather, in acknowledging that “complex problems demand complex interventions,” they call for an expanded methodological approach that 1) broadens the base of evidence, 2) focuses on principles of practice, and 3) embraces adaptive integration over fidelity.