Since 2002, federal law has conditioned Title I funding on states’ participation in the biennial administration of the National Assessment of Educational Progress (NAEP) in math and reading in grades four and eight. This is a boon to us policy wonks because we can study the progress (or lack thereof) of individual states and use sophisticated research methodologies to relate score changes to differences in education policies or practices. That’s the approach that allowed Tom Dee and Brian Jacob, for example, to inform us that NCLB-style accountability likely boosted math achievement in the 2000s.
Over the years, NAEP has also made stars out of leading states and their governors or education leaders, and has galvanized reformers to try to learn from their successes. It started with North Carolina and Texas, which saw stratospheric increases in the 1990s, especially in math, and across all racial groups. Then it was Jeb Bush’s moment in the sun, as Florida’s scores climbed quickly from the late 1990s through the 2000s, with particular progress for black and Hispanic youngsters. Delaware, Minnesota, and even New York have also produced some big improvements at various times. And let us not forget the Massachusetts Miracle.
The release of the 2017 scores on April 10 will give us fresh data for analyses of this sort. As I warned a few weeks ago, rigorous studies will take time. We should be skeptical of anyone who claims on release day to know why certain states are surging ahead or falling behind—though interesting patterns will surely be fodder for hypotheses and further study.
To help us prepare, Fordham’s research interns and I dug into the NAEP data to see which state-level trends are worth watching. As I’ve argued before, we don’t want to over-interpret short-term changes, so it’s better to look at trends that span four years or more, i.e., trends that are based on at least three test iterations. Here’s a look, then, at scale score changes from 2011 to 2015 for every state and the District of Columbia. The numbers represent statistically significant changes; empty cells indicate that no statistically significant change occurred.
Table 1: Statistically significant changes on NAEP, 2011–15
National trends are essentially flat over this time period, but that hides a good deal of state-by-state variation. Nineteen states, for example, improved their reading results over that four-year period, while just five saw declines. Math is nearly the reverse: scores declined in twenty states, rose in just nine, and were mixed in two.
This table makes it easy to understand why there’s been so much recent buzz about the District of Columbia and Tennessee. But it also suggests that Indiana and Nebraska deserve a lot of love for posting gains in three of the four categories since 2011. And we should also laud Louisiana, Mississippi, and Wyoming for producing statistically significant gains among fourth graders in both reading and math.
But let’s not just look at the leaders; the laggards are interesting as well. Maryland had the worst showing. In 2013, the state was called out for having long excluded many students with disabilities from taking the assessment. Now that it’s properly including these children, its scores have plummeted. But it’s hardly alone; there were plenty of other backsliders, especially in math: Arkansas, Colorado, Delaware, Kansas, Montana, Nevada, and Vermont all saw declines in both fourth and eighth grades. Yikes.
As with the national data, we should be mindful that demographic changes can affect achievement trends, especially as Hispanic students make up an ever-larger proportion of our population. So if we want to understand which state policies and practices might be helping or hurting, we need to find a way to deal with these demographic trends. Matt Chingos and his Urban Institute colleagues offer one clever way of doing that, adjusting the scores for demographic changes over time. Another approach—which we’ll use here—is to analyze results for each of the three major racial groups, rather than for the overall student population. So let’s take a look at that, first for reading and then for math.
Table 2: Statistically significant changes in reading scale scores, 2011–15
Disaggregating by race reaffirms Indiana’s and Tennessee’s success, as both produced gains in four of the six categories. But it also identifies some additional states worthy of praise for making gains across several categories in reading, including Alaska, Arizona, California, D.C., Hawaii, Iowa, North Carolina, Oregon, Utah, and West Virginia.
Now let’s look at math.
Table 3: Statistically significant changes in math scale scores, 2011–15
As we saw for states overall, there’s less good news in math than in reading, but D.C., Indiana, Tennessee, Nebraska, Louisiana, and Mississippi, all flagged above for producing good progress for students overall, also deserve praise for improved subgroup performance. Arizona looks good when viewed this way as well. On the other hand, Texas should worry about its across-the-board declines in eighth-grade math. And what’s the matter with Kansas?
* * *
What to make of all this? The District of Columbia, Indiana, and Tennessee clearly have momentum going into the 2017 NAEP release, with the broadest gains in both subjects and grade levels. All three have been reform hot spots, making them all the more interesting. (Though no, we can’t prove that their reforms—much less which of their reforms—get the credit.)
The District of Columbia has of course been in the news of late for scandals that have taken the shine off the Michelle Rhee/Kaya Henderson apple. Everyone will be watching to see what the NAEP scores show, and what that means for the debate about whether the progress in DCPS is real. (Keep in mind that the data above are for all students—those in public and charter schools.)
Louisiana and Mississippi are both showing some promise, which is praiseworthy given how far they still have to go. We should keep our eyes on a few other “sleepers”: Iowa, Nebraska, Oregon, and Utah. And are we seeing a resurgence from North Carolina? What about California?
If the states are our laboratories of democracy, we should get some great new data from their education policy experimentation on April 10.
— Mike Petrilli
Mike Petrilli is president of the Thomas B. Fordham Institute, research fellow at Stanford University’s Hoover Institution, and executive editor of Education Next.
This post is the third in a series of commentaries leading up to the release of new NAEP results on April 10. The first post discussed the value of the NAEP; the second looked at recent national trends.
The original version of this article stated that the results reported here were for public, charter, and private schools. That was incorrect; it’s only for public and charter schools. Thanks to the Bluegrass Institute’s Richard Innes for pointing out the mistake.
This post originally appeared in Flypaper.