Comparing Public Schools to Private

Lubienskis’ conclusions rely on flawed research design

CHECKED: Christopher A. and Sarah Theule Lubienski, The Public School Advantage: Why Public Schools Outperform Private Schools (University of Chicago Press, 2013)

Checked by Patrick J. Wolf

When The Public School Advantage hit the shelves, critics of private school choice were elated. The Lubienskis, whose prior research has been highly critical of school choice, had employed the tools of social science to make a bold claim: if one controls for the characteristics of students who attend them, public schools “outperform” private schools. Finally, defenders of the public school establishment could marshal hard evidence in their drive to halt school voucher programs.

What are we to make of this? Research on this question goes back some 30 years. From James Coleman’s early observational studies of high schools to the experimental voucher evaluations of the past 15 years, researchers have routinely found that similar students do at least as well and, at times, better academically in private schools than in public schools. How have the Lubienskis come up with this surprising finding?

Four methodological choices likely account for their discovery:

• a narrow definition of school performance

• the use of tests that align more closely with public school than with private school curricula

• the use of control variables for student characteristics that are measured differently across school sectors

• the improper handling of students who switch sectors.

First, the researchers use a very limited definition of school performance. The Lubienskis compare public and private schools solely on the basis of student performance in math, even though their data come from the 2003 administration of the National Assessment of Educational Progress (NAEP) and the Early Childhood Longitudinal Study Kindergarten Class of 1998–99 (ECLS-K), both of which include reading as well as math scores. The authors justify their omission of half of the available evidence by claiming that math performance is a better measure of school performance than reading, since students likely get nearly all of their math instruction in school but much of their reading instruction at home. But all previous evaluations of the effects of private schools or of school voucher programs reported test-score results for both reading and math, or a composite measure of the two, even if the researchers thought that one or the other was a better measure of school performance. The fact that these authors failed to follow a standard research convention is curious and frustrating.

More complete treatments of the relative performance of private and public schools nationally are available from other researchers. Two separate analyses of the NAEP data using methods similar to those employed by the Lubienskis—one by researchers Henry Braun, Frank Jenkins, and Wendy Grigg at the National Center for Education Statistics and another by Paul Peterson and Elena Llaudet at Harvard University—reported significantly higher reading scores for private school students, even after controlling for individual and school-level student demographics. A separate study of the ECLS-K data, also by Peterson and Llaudet, similarly showed that private school students gained significantly more in reading achievement than demographically similar public school students in schools with similar student populations.

Standardized test scores, moreover, capture a small sliver of what we expect schools to deliver for students. A dozen or more prominent education researchers have gone beyond test scores to evaluate the effects of schools and school-choice programs on such student outcomes as high school graduation rates, postsecondary schooling, tolerance, satisfaction, and criminal behavior, all significant concerns for both parents and policymakers. Defining “school performance” solely as math performance is a major limitation of the book, especially since the ECLS-K included some of these additional outcome measures.

Second, the Lubienskis’ own writing indicates that the math tests they used to evaluate student achievement are biased in favor of public schools. They discuss how the professional development of math teachers changed in the late 1980s to emphasize math reasoning and problem solving and de-emphasize math facts and computations. Public school teachers were more likely to embrace this new math curriculum, while private school teachers tended to continue to emphasize traditional math content. The Lubienskis point out that both the NAEP and ECLS-K were altered, prior to the data collection for their study, to focus more heavily on the math content that was being taught in the public schools but not as much in the private schools. Thus, theirs is a study of how well private and public school students have learned the brand of math taught in the public schools. In researcher parlance, the math tests used in this study are “overaligned” with the public school condition and thus a biased measure of relative performance.

Third, in the statistical models for their NAEP analysis, the authors use measures of student participation in government-sponsored programs as key control variables. Controlling statistically for differences in student characteristics avoids crediting schools for producing outcomes that are instead the result of differences in the students who attend them. But such statistical modeling of comparative student test-score outcomes is a tricky business, and the results of such exercises often vary dramatically depending on which control variables are included in the model and how those variables are constructed. In the NAEP analysis, the authors estimate student poverty with data from the federal lunch program and estimate additional student characteristics using data on possession of an Individualized Education Program (IEP) and English Language Learner (ELL) status, admittedly a common practice when analyzing education data.

Variables that measure student differences based on participation in government programs are problematic, however, especially when comparing different school sectors, since government-run public schools are much more likely to participate in such programs than are privately run schools, even if both types of schools have similar student populations. The authors acknowledge that their original measure of family income via the federal lunch program was biased against the private schools, many of which do not participate. They therefore impute family income for private school students at nonparticipating schools based on students’ answers to questions about resources in their homes. If the “resources in the home” variable is a reliable proxy for family income data, as the authors claim, then why use the flawed federal lunch program variable at all?

A similar problem arises with measuring special education needs with the IEP data. The authors acknowledge this concern in a footnote in an appendix, noting, “There may be differences in IEP use in public and private schools.” On this point the Lubienskis are absolutely correct. My colleagues and I have shown that such differences exist in a study that followed a group of students into and out of public and private schools in Milwaukee (see “Special Choices,” features, Summer 2012). The same student was about twice as likely to be classified as having a disability when observed in a public school as when observed in a private school. Simply omitting the IEP variable from the statistical model, the authors admit, improves the statistical estimate of the relative performance of the private school sector so that it is approximately equal to or only slightly below the public school sector.
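The direction of this bias is easy to demonstrate. The following is a minimal simulation sketch in Python, my own illustration rather than anything from the book: the rates and effect sizes are assumptions, not estimates from the NAEP, ECLS-K, or Milwaukee data. In it, the two sectors perform identically and have identical underlying disability rates, but the public sector attaches the IEP label twice as often. Including the mismeasured flag as a control makes the private sector appear worse; omitting it recovers the true null difference.

```python
# Illustrative simulation only: all rates below are assumptions for
# demonstration, not estimates from any data set discussed in this review.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
private = rng.binomial(1, 0.5, n)                # 0 = public, 1 = private
disabled = rng.binomial(1, 0.20, n)              # same latent rate in both sectors
label_rate = np.where(private == 1, 0.40, 0.80)  # public labels twice as often
iep = disabled * rng.binomial(1, label_rate)     # observed IEP flag
score = 100 - 10 * disabled + rng.normal(0, 10, n)  # true sector effect is zero

with_iep = sm.OLS(score, sm.add_constant(np.column_stack([private, iep]))).fit()
without_iep = sm.OLS(score, sm.add_constant(private)).fit()
print(with_iep.params[1])     # negative: the private sector appears worse
print(without_iep.params[1])  # approximately zero: the true difference
```

The point is not that omitting controls is generally better; it is that a control measured inconsistently across sectors can manufacture a sector difference where none exists.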

The authors do not acknowledge the potential problem of inconsistent ELL-designation practices across the public and private sectors; they neither adjust that key control variable accordingly nor report what happens if it is omitted from the statistical model.

In their ECLS-K analysis, the authors use measures of student characteristics that are different and almost certainly less biased than the ones they use in their NAEP analysis. They similarly find that public schools generate higher student math scores than private schools. The performance differences, however, while statistically significant, are much smaller than those found in the NAEP analysis, which supports the view that control variables used in the NAEP models are biased against private schools.

Finally, the Lubienskis exclude from their analysis the students in the ECLS-K database who switched school sectors in the course of the longitudinal study. Doing so raises the threat of bias in their comparisons, as the students who leave private and public schools may differ in unmeasurable ways. They accuse experimental studies of private school voucher programs, which track study participants over time, of doing the same thing: “As voucher studies have demonstrated, significant numbers of lower performing students tend to drop out of the private schools, leaving more motivated voucher students in the study and thereby perverting the integrity of the treatment group.”

The Lubienskis are claiming that voucher experiments treat program attrition (i.e., leaving a private school) as study attrition (i.e., leaving the study), when no such experiments have done that. In every experimental evaluation of private school voucher programs, the students who won the voucher lottery but did not consistently use their voucher to attend private schools have remained in the study over time as members of the treatment group, and the students who lost the voucher lottery but enrolled in private school have remained in the study as members of the control group. Doing so preserves the equivalence of the two groups of students over time. Every reliable longitudinal study of private versus public schooling handles sector switchers in this scientific way, and the Lubienskis should have done so as well, but did not.
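For readers unfamiliar with this intent-to-treat convention, a short simulation sketch (again my own illustration, with an assumed effect size and an assumed “motivation” trait rather than data from any actual voucher experiment) shows why groups must be defined by the lottery rather than by eventual enrollment. Comparing voucher users to nonusers conflates the program’s effect with the unmeasured motivation that drives usage; the lottery-based comparison does not.

```python
# Illustrative simulation only: the effect size and "motivation" trait are
# assumptions for demonstration, not data from any voucher study.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
won_lottery = rng.binomial(1, 0.5, n)   # random assignment by lottery
motivation = rng.normal(0, 1, n)        # unmeasured student trait
# Only some winners use the voucher, and usage rises with motivation:
used_voucher = won_lottery * (motivation + rng.normal(0, 1, n) > 0)
score = 50 + 2 * used_voucher + 5 * motivation + rng.normal(0, 10, n)

# Intent-to-treat: compare by lottery status, keeping switchers in their groups
itt = score[won_lottery == 1].mean() - score[won_lottery == 0].mean()
# Naive comparison: compare by actual usage, discarding the randomization
naive = score[used_voucher == 1].mean() - score[used_voucher == 0].mean()
print(itt)    # about 1.0: half of winners use a voucher worth 2 points each
print(naive)  # much larger: inflated by the motivation of the users
```

Because the lottery is random, motivation is balanced across winners and losers, so the intent-to-treat contrast isolates the effect of the voucher offer itself.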

The authors devote the concluding chapter to claims that their findings undermine the case for private school vouchers. They do not. Even putting aside the methodological flaws discussed above, all of which bias the comparison of results against the private school sector, this book has nothing to say empirically about private school voucher programs. Voucher recipients make up a tiny fraction of private school students in the data sets the authors examine, especially since the data predate most of the still very small voucher programs scattered across the country. Thus, the authors of The Public School Advantage claim to invalidate private school vouchers by studying an environment in which they are largely absent.

Patrick J. Wolf is professor in the department of education reform at the University of Arkansas.

This article appeared in the Summer 2014 issue of Education Next. Suggested citation format:

Wolf, P.J. (2014). Comparing Public Schools to Private: Lubienskis’ conclusions rely on flawed research design. Education Next, 14(3), 52-54.
