School Choice Boosts Test Scores

Private school choice remains a controversial education reform. Choice programs, whether school vouchers, tax-credit scholarships, or Education Savings Accounts (ESAs), provide financial support to families who wish to access private schooling for their child. Although professional commentators such as Diane Ravitch and Greg Anrig once declared it dead in the U.S., there are now 50 private school choice programs in 26 states plus the District of Columbia. Well over half of those initiatives have been enacted in the past five years. Private school choice is all the rage.

But does it work? M. Danish Shakeel, Kaitlin Anderson, and I just released a meta-analysis of 19 “gold standard” experimental evaluations of the test-score effects of private school choice programs around the world. The sum of the reliable evidence indicates that, on average, private school choice increases the reading scores of choice users by about 0.27 standard deviations and their math scores by about 0.15 standard deviations. These are highly statistically significant, educationally meaningful gains, equivalent to several months of additional learning. The achievement benefits of private school choice appear to be somewhat larger for programs in developing countries than for those in the U.S., and publicly funded programs produce larger test-score gains than privately funded ones.
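For readers who want to see what “0.27 standard deviations” actually means, here is a minimal sketch in Python of how a standardized effect size is computed. The scores below are fabricated for illustration only; they are not data from our meta-analysis or from any of the underlying studies.

```python
import numpy as np

# Illustrative only: fabricated test scores, not data from the meta-analysis.
# A standardized effect size expresses a treatment-control difference in
# standard-deviation units, the scale on which the 0.27 and 0.15 figures sit.
rng = np.random.default_rng(0)
treatment = rng.normal(loc=505, scale=100, size=1000)  # hypothetical choice users
control = rng.normal(loc=490, scale=100, size=1000)    # hypothetical control group

pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
effect_size = (treatment.mean() - control.mean()) / pooled_sd
print(f"standardized effect size: {effect_size:.2f}")
```

Expressing gains in standard-deviation units is what allows results from different tests, grades, and countries to be compared and combined.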

The clarity of the results from our statistical meta-analysis contrasts with the fog of dispute that often surrounds discussions of the effectiveness of private school choice. Why does our summing of the evidence identify school choice as a clear success while others have claimed that it is a failure (see here and here)? Three factors have contributed to the muddled view regarding the effectiveness of school choice: ideology, the limitations of individual studies, and flawed prior reviews of the evidence.

School choice programs support parents who want access to private schooling for their child. Some people are ideologically opposed to such programs, regardless of the effects of school choice. Others have a vested interest in the public school system and resist the competition for students and funds that comes with private school choice. No amount of evidence is going to change their opinion that school choice is bad.

A second source of dispute over the effectiveness of choice is the limitations of each individual empirical study of school choice. Some studies are non-experimental and can’t entirely rule out selection bias as a factor in their results (see here and here). Fortunately, over the past 20 years, some education researchers have been able to use experimental methods to evaluate privately- and publicly-funded private school choice programs. Experimental evaluations take the complete population of students who are eligible for a choice program and motivated to use it, then employ a lottery to randomly assign some students to receive a school-choice voucher or scholarship and the rest to serve in the experimental control group. Since only random chance, and not parental motivation, determines who gets private school choice and who doesn’t, gold standard experimental evaluations produce the most reliable evidence regarding the effectiveness of choice programs. We limited our meta-analysis to the 19 gold standard studies of private school choice programs worldwide.
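For the curious, here is a minimal sketch of the logic behind a lottery-based evaluation, using entirely hypothetical numbers and a simple difference in means. Real evaluations are more elaborate, adjusting for baseline scores, attrition, and whether winners actually used the scholarship, but the core idea is the same: random assignment makes the comparison fair.

```python
import numpy as np
from scipy import stats

# A minimal sketch of a lottery-based (experimental) evaluation.
# All numbers are hypothetical, chosen purely for illustration.
rng = np.random.default_rng(42)

applicants = 2000
won_lottery = rng.permutation(applicants) < applicants // 2  # random assignment

# Hypothetical outcomes: a common baseline distribution plus an assumed
# boost of 0.15 standard deviations for lottery winners.
true_effect = 0.15
scores = rng.normal(0, 1, applicants) + true_effect * won_lottery

# Because assignment is random, a simple difference in means is an unbiased
# estimate of the effect of the scholarship offer (the "intent-to-treat" effect).
itt = scores[won_lottery].mean() - scores[~won_lottery].mean()
t_stat, p_value = stats.ttest_ind(scores[won_lottery], scores[~won_lottery])
print(f"estimated effect: {itt:.3f} SD, p = {p_value:.3f}")
```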

Each of the gold standard studies, in isolation, has certain limitations. In the experimental evaluation of the initial DC Opportunity Scholarship Program that I led from 2004 to 2011, the number of students in testing grades dropped substantially from year 3 to year 4, leading to a much noisier estimate of the reading impacts of the program, which were positive but just missed being statistically significant at the 95% confidence level. Two experimental studies of the Charlotte privately-funded scholarship program, here and here, reported clear positive effects on student test scores but were limited to just a single year after random assignment. Two recent experimental evaluations of the Louisiana Scholarship Program found negative effects of the program on student test scores, but one study was limited to just a single year of outcome data and the second (which I am leading) has analyzed only two years of outcome data so far. The Louisiana program, and the state itself, are unique in certain ways, as are many of the programs and locations that have been evaluated. What are we to conclude from any of these individual studies?

Meta-analysis is an ideal approach to identifying the common effect of a policy when many rigorous but small and particular empirical studies vary in their individual conclusions. It is a systematic and scientific way to summarize what we know about the effectiveness of a program like private school choice. The sum of the evidence points to positive achievement effects of choice.
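To illustrate the basic mechanics, here is a minimal sketch of inverse-variance (fixed-effect) pooling, the simplest form of meta-analytic averaging. The effect sizes and standard errors below are placeholders, not the estimates from the 19 studies, and our published analysis uses more sophisticated procedures than this sketch.

```python
import numpy as np

# A minimal sketch of inverse-variance (fixed-effect) pooling.
# The per-study effect sizes and standard errors are hypothetical placeholders.
effects = np.array([0.30, -0.10, 0.20, 0.05, 0.25])   # effect sizes (SD units)
std_errs = np.array([0.10, 0.12, 0.08, 0.15, 0.09])   # standard errors

weights = 1.0 / std_errs**2                 # more precise studies get more weight
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect: {pooled:.2f} SD "
      f"(95% CI {pooled - 1.96*pooled_se:.2f} to {pooled + 1.96*pooled_se:.2f})")
```

Weighting each study by the precision of its estimate is what lets a collection of small, noisy studies yield a single, much more precise summary of the overall effect.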

Finally, most of the previous reviews of the evidence on school choice have generated more fog than light, mainly because they have been arbitrary or incomplete in their selection of studies to review. The most commonly cited school choice review, by economists Cecilia Rouse and Lisa Barrow, declares that it will focus on the evidence from existing experimental studies but then leaves out four such studies (three of which reported positive choice effects) and includes one study that was non-experimental (and found no significant effect of choice). A more recent summary, by Epple, Romano, and Urquiola, selectively included only 48% of the empirical private school choice studies available in the research literature. Greg Forster’s Win-Win report from 2013 is a welcome exception and gets the award for the school choice review closest to covering all of the studies that fit his inclusion criteria – 93.3%. (Greg for the win!)

Our meta-analysis avoided all three factors that have muddied the waters on the test-score effects of private school choice. It is a non-ideological scientific enterprise, as we followed strict meta-analytic principles such as including every experimental evaluation of choice produced to date, anywhere in the world. Our study was accepted for presentation at competitive scientific conferences including those of the Society for Research on Education Effectiveness, the Association for Education Finance and Policy, and the Association for Policy Analysis and Management. Our study is not limited by small sample sizes or only a few years of outcome data. It is informed by all the evidence from all the gold standard studies. Finally, there is nothing arbitrary or selective in our sample of experimental evaluations. We included all of them, regardless of their findings. When you do the math, students achieve more when they have access to private school choice.

— Patrick J. Wolf
