The single best thing that could happen to American education in the next few years would be for the National Assessment of Educational Progress (NAEP) to begin regularly reporting state-by-state results at the twelfth grade level.
That this isn’t happening today is a lamentable omission, albeit one with multiple causes. But it’s fixable. All it requires is determination by the National Assessment Governing Board (NAGB) to make this change, some more contracting and sampling by the National Center for Education Statistics (NCES), and either a smallish additional appropriation or some repurposing of present NAEP budgets.
Way back in the late Middle Ages (i.e., 1988), Congress authorized NAEP to begin gathering and reporting state-level data to states that wanted it. This was a direct response to governors’ frustration, in the aftermath of A Nation at Risk, that they could not validly compare their own states’ academic performance to that of the nation or other states. Secretary Bell had responded with the famous “Wall Chart” of the mid-1980s, but its comparisons were based on SAT scores and other measures that were neither representative nor helpful for gauging achievement in the elementary and middle grades.
The Council of Chief State School Officers finally abandoned its ancient hostility to interstate comparisons, the Southern Regional Education Board piloted some comparisons among its member states, and the Alexander-James commission (chaired by then-Tennessee governor Lamar Alexander) recommended that NAEP take this on. In short order, an old-fashioned bipartisan agreement emerged, this time between the Reagan Administration and Senator Ted Kennedy.
Initially dubbed the “Trial State Assessment,” it was voluntary, it was limited to grades four and eight, and states that wanted in had to share the cost. But participation grew swiftly. By 1996, for example, forty-seven states and territories took part in the fourth grade math assessment and forty-four opted into eighth grade math. Already, more than thirty jurisdictions could see how their 1996 results compared with their 1992 results in math—and much the same thing happened in reading.
Then came NCLB in 2001 and suddenly all states were required to take part in fourth and eighth grade math and reading. The assessment cycle was accelerated to every two years, and Uncle Sam paid the full tab. (The Trial Urban District Assessment also launched in 2002 with six participating cities. By 2017, there were twenty-seven.)
Yet twelfth grade NAEP remained almost entirely confined to the national level, although NAGB and NCES piloted some state participation in 2009 and 2013, with thirteen states volunteering in the latter year. That it was tried shows that it can be done. But there’s been no follow-up. State-level twelfth grade data wasn’t even an option in the 2015 or 2017 assessment cycles.
Why the neglect? Budget has surely been a factor, but only one. Keep in mind that federal policy—at least the test-related elements of it—has concentrated on the elementary and middle grades, and the only statutory NAEP mandates are for grades four and eight. Moreover, high-school curricula are more varied, and a lot of kids are no longer in school by twelfth grade, meaning that a twelfth grade sample represents enrolled students but not all young people in that age group. There’s also been widespread mistrust of the twelfth grade assessment itself, particularly over whether students take it seriously enough to produce reliable data.
To examine the latter point, NCES and NAGB undertook many studies, convened expert panels, and more. The upshot is pretty convincing evidence that kids do complete the twelfth grade test faithfully enough to yield solid results. What’s more, another set of studies undertaken by NAGB showed that a “proficient” score on the twelfth grade reading assessment—and a somewhat lower cut-point on the math test—are good proxies for “college readiness.” (Career readiness is less clear: jobs are so varied, and proficiency in reading and math is only part of what they require, that NAEP isn’t an optimal gauge.)
Now consider where things stand in American education in 2018. Several developments seem to me compelling.
First, as is well known, ESSA has restored greater authority over education—and school accountability—to the states. States therefore have greater need for trustworthy data on education outcomes.
Second, the country is all but obsessed with whether kids are “college and career ready” at the end of high school—and also obsessed (with increasingly worrisome side effects) with graduation rates.
Yet, third, it’s really hard for educators and policy makers to know what college-ready means, how many of the students in their school, district, or state are attaining that level, and how that compares with other states and the nation as a whole. The Common Core State Standards did a good job of building cumulatively toward college and (they said) career readiness by the end of high school, but that’s only helpful if states use those or equally rigorous academic standards and if the assessments based on such standards are truly aligned with them, are rigorously scored, and set their “cut scores” at levels that denote readiness for college-level work.
For political reasons, however, dozens of states have bailed out of—or at least repackaged—the Common Core (at Fordham, we plan to evaluate the standards that were substituted), and the multi-state testing consortia that were supposed to deliver comparable state data are both shrinking and refusing to make interstate comparisons.
The upshot: Even as they write “college and career readiness” rates into their ESSA plans, many states have no reliable way to determine how many of their high school seniors are reaching that point, and, regardless of what they use for standards and tests, practically none will be able to make valid comparisons with other states. Which is apt to put even more ill-considered pressure on graduation rates or else throw states back onto SAT and ACT results, even though those tell us nothing about students who don’t take the tests. (Some states therefore now mandate—and pay for—SAT or ACT testing for all their high school students. But those scores can’t be compared with anything from earlier grades.)
Back to NAEP: Now is the perfect time to resume reporting results for twelfth graders on a state-by-state basis and to do so on a regular cycle. As happened with grades four and eight, this could restart on a voluntary basis for jurisdictions that want it and—if federal budgets are tight—they could be asked to cover some of the cost. Reading, writing, and math are the obvious subjects to do this with, but how great it would also be to report twelfth grade state results in other core subjects, particularly science and history!
How often? NAEP scores don’t change much in two years. Four-year intervals would likely suffice for twelfth grade. (Indeed, much money could be recaptured for the budget if fourth and eighth grade reading and math testing were switched back to a four-year cycle, although that change needs Congressional assent.)
What would this do for the country? It would—obviously—give participating states a valid and reliable metric for how many of their students are truly college-ready at the end of high school. Because NAEP is based on a sample, it would discourage the kinds of test prep, credit recovery, grade changing, and rate faking that afflict graduation data—and that often afflict state assessments. (Sampling just some students and schools also means less curricular distortion and less pressure on teachers.) And it would enable state leaders to see precisely how their twelfth graders are doing compared with other states and with the country as a whole.
Yes, this really feels like the single best thing that could happen to American education in the next few years. If NAGB, NCES, the Education Department and Congressional appropriators get moving, it could surely happen by the 2021 NAEP cycle—and I’ll bet it could be revived and re-piloted in at least a few states in 2019.
Might such an initiative be announced when the 2017 NAEP results are (finally) unveiled in April?
— Chester E. Finn, Jr.
Chester E. Finn, Jr., is a Distinguished Senior Fellow and President Emeritus at the Thomas B. Fordham Institute. He is also a Senior Fellow at Stanford’s Hoover Institution.
This post originally appeared in Flypaper.