Keeping an Eye on State Standards
A race to the bottom?
While No Child Left Behind (NCLB) requires all students to be “proficient” in math and reading by 2014, the precedent-setting 2002 federal law also allows each state to determine its own level of proficiency. It’s an odd discordance at best. It has led to the bizarre situation in which some states achieve handsome proficiency results by grading their students against low standards, while other states suffer poor proficiency ratings only because they have high standards.
A year ago, we first sought to quantify this discrepancy (“Johnny Can Read … in Some States,” features, Summer 2005), showing which states were upholding rigorous standards and which were not.
We return to the subject now, with the latest available data, to update our ratings. The standard we again use is the National Assessment of Educational Progress (NAEP), the nation’s “report card,” and still the only metric that allows strict comparisons between states. For each state where both NAEP and state accountability measures were available, we computed a score based on the difference between the percentage of students said to be proficient by the state and the percentage identified as proficient on the NAEP in 2003 and 2005.
We are not evaluating state tests, nor are we grading states on the performance of their students. Instead, we are checking for “truth in advertising,” investigating whether the proficiency levels mean what they say. We are thus able to ascertain whether states lowered the bar for student proficiency as the full panoply of NCLB provisions took effect.
When we conducted the first of our checkups on the rigor of the standards, we gave each state the same kind of grade students receive. Where the requisite information was available, states with the highest standards were given an A; those with the lowest standards, an F. Last year, the requisite data were available for only 40 states. This time around, 48 states have been graded, including nine “ new” states providing the necessary information for the first time (see Figure 1). While the fact that these nine are now in compliance with NCLB is a laudable accomplishment, it is not clear how committed they are to the enterprise: among the nine, only the District of Columbia and New Mexico scored a grade higher than C, and Nebraska, Utah, Iowa, Oregon, and Nevada could do no better than a mediocre C or D. The first grades garnered by Alabama, Nebraska, and West Virginia were D minuses. Clearly, student proficiency has entirely different meanings in different parts of the country.
In 2003 and 2005, both state and NAEP tests were given in math and reading for 4th- and 8th-grade students. The grades reported here are based on the comparison of state and NAEP proficiency scores in 2005, and changes for each state are calculated relative to 2003. For each available test we computed the difference between the percentage of students reported to be proficient on the state’s own tests and the percentage who were proficient on the NAEP for the same year. We also computed the standard deviation of this difference across states. We then determined how many standard deviations each state’s difference was above or below the average difference on each test. As with last year, the scale for the grades was set so that if grades had been randomly assigned, 10 percent of the states would earn As, 20 percent Bs, 40 percent Cs, 20 percent Ds, and 10 percent Fs. Each state’s grade is based on how much easier it was to be labeled proficient on the state assessment than on the NAEP: the smaller the gap, the tougher the standard and the better the grade. For example, on the 4th-grade math test in 2005, South Carolina reported that 41 percent of its students had achieved proficiency, while 36 percent were proficient on the NAEP. The difference (41 percent – 36 percent = 5 percent) is about 1.4 standard deviations smaller than the 31 percent average difference between state tests and the NAEP. This was good enough for South Carolina to earn an A for its standards in 4th-grade math. The overall grade for each state was determined by averaging these standardized differences across the tests for which the state reported proficiency percentages.
— Paul Peterson and Frederick Hess
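The grading scheme in the sidebar can be sketched in a few lines of code. This is a minimal illustration, not the authors’ actual program: the letter-grade cutoffs below are the normal-distribution quantiles that would produce the stated 10/20/40/20/10 split if scores were random, and the standard deviation of 18.6 is back-solved from the article’s “about 1.4 standard deviations” figure for South Carolina; neither value is reported in the article itself.

```python
# Hypothetical sketch of the Education Next grading procedure.
# z-score cutoffs chosen so that randomly assigned scores would yield
# 10% As, 20% Bs, 40% Cs, 20% Ds, 10% Fs (assumed, not published).
CUTOFFS = [(-1.28, "A"), (-0.52, "B"), (0.52, "C"), (1.28, "D")]

def letter_grade(state_pct, naep_pct, mean_diff, sd_diff):
    """Grade one state on one test.

    diff = state-reported proficiency minus NAEP proficiency.
    A smaller diff means tougher standards, a more negative z-score,
    and a better grade.
    """
    diff = state_pct - naep_pct
    z = (diff - mean_diff) / sd_diff
    for cutoff, grade in CUTOFFS:
        if z <= cutoff:
            return grade
    return "F"

# South Carolina, 4th-grade math, 2005: 41% proficient on the state
# test vs. 36% on NAEP; the average gap across states was 31%.
# sd_diff=18.6 is an assumption back-solved from the article.
print(letter_grade(41, 36, mean_diff=31, sd_diff=18.6))  # prints "A"
```

A state’s overall grade would then be the average of these standardized differences across all the tests it reported.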
Meanwhile, five states that previously had their accountability systems in place are letting their standards slide. The biggest decline was in Arizona, with significant drops also found (in order of magnitude) in Maryland, Ohio, North Dakota, and Idaho. If parents in these states read that students are making great strides on state proficiency tests, they would be advised to consider the message with a healthy dose of skepticism. At least some of the reported student gains appear to be the product of gamesmanship.
In addition, states with already low standards have done nothing to raise them. Oklahoma and Tennessee once again share the cream puff award, with both states earning Fs because their self-reported performance is much higher than can be justified by the NAEP results. States with nearly equally embarrassing D minuses included Mississippi, Georgia, and North Carolina. Once again, we discover that Suzy could be a good reader in North Carolina, where standards are low, but a failure in neighboring South Carolina, where standards are higher.
Still, there are happier stories to tell. Montana is the most improved state. Others that have significantly boosted their proficiency standards relative to the NAEP include Texas, Arkansas, and Wisconsin.
Best of all, a handful of states continued to impress for a second consecutive year, grading their own performance on a particularly tough curve. Massachusetts, South Carolina, Wyoming, Maine, and Missouri all once again earned As.
Shining a light on the standards that states set is crucial, as it helps remind state officials that there is a right way and a wrong way to ace a test. Of course, having high standards is not enough. It is the crucial first step, but the next, and more difficult one, is to make sure that a high percentage of students reach that standard. In that regard, all states need to do much better, if no child is to be left behind.
Paul E. Peterson and Frederick M. Hess are editors of Education Next. Mark Linnen provided research assistance.