This is the second article in a series that looks at a recent AEI paper by Collin Hitt, Michael Q. McShane, and Patrick J. Wolf, “Do Impacts on Test Scores Even Matter? Lessons from Long-Run Outcomes in School Choice Research.” The first essay is here.
The AEI paper focuses on a specific question: Is there a disconnect for school choice programs between their impact on student test scores and their impact on student attainment outcomes, namely high school graduation, college entrance, and college graduation rates?
It claims to find such a disconnect. As the authors put it, “A school choice program’s impact on test scores is a weak predictor of its impacts on longer-term outcomes.” But read the fine print, because this conclusion follows from two big decisions the authors made, both of which are highly debatable. Had they gone the other way, the results would show an overwhelmingly positive relationship between short-term test score changes and long-term outcomes.
What were those decisions?
1. The authors included programs in the review that are only tangentially related to school choice and that drove the alleged mismatch, namely early-college high schools, selective-admission exam schools, and career and technical education initiatives.
2. Their coding system—which they admit is “rigid”—sets an unfairly high standard because it requires both the direction and statistical significance of the short- and long-term findings to line up.
I am also concerned that the authors extrapolated from their findings about programs and inappropriately applied them to schools.
In this post, I tackle the first concern: whether Hitt, McShane, and Wolf included studies they should not have. How might their results look had they chosen differently?
What counts as a school choice program?
The AEI authors were extremely inclusive when deciding which studies to review. In their words:
We use as broad a definition as possible for school choice. We do so for two reasons. First, we wanted to gather the largest number of studies possible to examine the relationship between achievement and attainment impacts across studies. Second, school choice is bigger than voucher programs and charter schools. Many large districts have embraced a portfolio model of school choice governance, which intentionally offers a wide array of public (and sometimes private) school choices to parents. The diversity of the studies we collected mirrors the diversity of choice options that portfolio school districts attempt to offer.
That’s fair up to a point; surely looking beyond just vouchers and charter schools makes sense in a world with many kinds of choice. But I would add two common-sense criteria for deciding what should be in and what should be out. First, is school choice central to the program, or is it incidental? And second, is it a program for which a reasonable person would expect to see both achievement gains and positive long-term outcomes if the program is successful? Let’s apply these criteria to the types of programs the authors reviewed.
There’s no doubt that private school choice, open enrollment, charter schools, STEM schools, and small schools of choice count as “school choice” programs, ones that legitimately could be expected to boost both short-term test scores and long-term outcomes for their participants if effective.
But the other categories fail our test. Early-college high schools are nominally schools of choice. No one is forced to attend them, but choice is incidental to the model. The heart of the approach is getting students on college campuses, taking college courses, and pointing them toward a possible associate degree along with their high school diploma.
Likewise with selective enrollment high schools, such as Stuyvesant High School in New York City. Sure, students “choose” to go to Stuyvesant in the same way that they “choose” to go to Harvard. But if ever there was an example in which schools choose the students rather than the other way around, this is it. And given these schools’ highly unusual, super-high-achieving student populations, we wouldn’t be surprised if it were hard to detect either achievement or attainment effects. Everyone who goes to these schools or almost gets in (the students used as controls in the relevant studies) is super high achieving and likely to graduate from college, not to mention high school, regardless.
Or consider Career Academies and other forms of career and technical education. Students enroll voluntarily in these programs, but what’s much more important is their unique missions and designs. We wouldn’t necessarily expect CTE students to show as much academic progress as their peers in traditional high schools because they are spending less of their time on academics! But high-quality CTE could still boost high school graduation, postsecondary enrollment, and postsecondary completion.
So let’s see how the results look for the bona fide school choice programs versus the others. (See here for a table listing all studies, how they were coded by the AEI authors, and which ones I counted as “school choice.”)
Impact on Achievement: English Language Arts
| | School Choice Programs (23) | Non-School Choice Programs (13) | Total (36) |
| --- | --- | --- | --- |
| Positive and Significant | 8 (38%) | 3 (21%) | 11 (31%) |
| Positive and Insignificant | 8 (38%) | 5 (36%) | 13 (36%) |
| Negative and Insignificant | 4 (19%) | 5 (36%) | 9 (25%) |
| Negative and Significant | 3 (14%) | 0 (0%) | 3 (8%) |
Impact on Achievement: Mathematics
| | School Choice Programs (21) | Non-School Choice Programs (13) | Total (34) |
| --- | --- | --- | --- |
| Positive and Significant | 9 (43%) | 2 (15%) | 11 (32%) |
| Positive and Insignificant | 7 (33%) | 8 (62%) | 15 (44%) |
| Negative and Insignificant | 4 (19%) | 3 (23%) | 7 (21%) |
| Negative and Significant | 1 (5%) | 0 (0%) | 1 (3%) |
Impact on High School Graduation
| | School Choice Programs (22) | Non-School Choice Programs (12) | Total (34) |
| --- | --- | --- | --- |
| Positive and Significant | 10 (45%) | 7 (58%) | 17 (50%) |
| Positive and Insignificant | 7 (32%) | 5 (42%) | 12 (35%) |
| Negative and Insignificant | 2 (9%) | 0 (0%) | 2 (6%) |
| Negative and Significant | 3 (14%) | 0 (0%) | 3 (9%) |
Impact on College Enrollment
| | School Choice Programs (8) | Non-School Choice Programs (11) | Total (19) |
| --- | --- | --- | --- |
| Positive and Significant | 4 (50%) | 5 (45%) | 9 (47%) |
| Positive and Insignificant | 3 (38%) | 4 (36%) | 7 (37%) |
| Negative and Insignificant | 1 (12%) | 2 (18%) | 3 (16%) |
| Negative and Significant | 0 (0%) | 0 (0%) | 0 (0%) |
Impact on College Graduation
| | School Choice Programs (2) | Non-School Choice Programs (9) | Total (11) |
| --- | --- | --- | --- |
| Positive and Significant | 1 (50%) | 3 (33%) | 4 (36%) |
| Positive and Insignificant | 1 (50%) | 3 (33%) | 4 (36%) |
| Negative and Insignificant | 0 (0%) | 3 (33%) | 3 (27%) |
| Negative and Significant | 0 (0%) | 0 (0%) | 0 (0%) |
Several findings emerge from these tables. First, the school choice programs show overwhelmingly positive impacts on both achievement and attainment; it’s more of a mixed bag for the non-school choice programs, which tend to do worse on achievement but well on attainment. And that leads to the second key point: the mismatch between short-term test scores and long-term outcomes—to the extent that it happens—is driven largely by the non-school choice programs. For bona fide school choice, there’s a very good match: 38 percent of the studies show a statistically significant positive impact in ELA, 43 percent in math, 45 percent for high school graduation, 50 percent for college enrollment, and 50 percent for college graduation. If we look at all positive findings regardless of whether they are statistically significant, the numbers for school choice programs are 76 percent (ELA), 76 percent (math), 77 percent (high school graduation), 88 percent (college enrollment), and 100 percent (college graduation). Everything points in the same direction, and the outcomes for achievement and high school graduation—the outcomes that most of the studies examine—are almost identical.
So is it true, as Hitt, McShane, and Wolf claim, that “a school choice program’s impact on test scores is a weak predictor of its impacts on longer-term outcomes”? It sure doesn’t appear so. But perhaps these aggregate numbers are hiding a lot of mismatch at the level of individual programs. We’ll dig into that possibility next week.
— Mike Petrilli
Mike Petrilli is president of the Thomas B. Fordham Institute, research fellow at Stanford University’s Hoover Institution, and executive editor of Education Next.
This first appeared on Flypaper.