One of the most common rationales offered for the Common Core State Standards project is that it will eliminate differences in the definition of student proficiency in core academic subjects across states. As is well known, the federal No Child Left Behind Act of 2002 (NCLB) required states to test students annually in grades 3-8 (and once in high school), to report the share of students in each school performing at a proficient level in math and reading, and to intervene in schools not on track to achieve universal student proficiency by 2014. Yet it permitted states to define proficiency as they saw fit, producing wide variation in expectations for student performance from one state to the next. While a few states, including several that had set performance standards prior to NCLB’s enactment, have maintained relatively demanding definitions of proficiency, most have been more lenient.
The differences in expectations for students across states are striking. In 2011, for example, Alabama reported that 77 percent of its 8th grade students were proficient in math, while the National Assessment of Educational Progress (NAEP) tests administered that same year indicated that just 20 percent of Alabama’s 8th graders were proficient against NAEP standards. In Massachusetts, on the other hand, roughly the same share of 8th graders achieved proficiency on the state test (52 percent) as did so on the NAEP (51 percent). In other words, Alabama deemed 25 percentage points more of its students proficient than did Massachusetts despite the fact that its students performed at markedly lower levels when evaluated against a common standard. U.S. Secretary of Education Arne Duncan has gone so far as to accuse states like Alabama of “lying to children and parents” by setting low expectations for student performance.
There’s no doubt that the definition of proficiency in many states provides a misleading view of the extent to which students are prepared for success in college or careers. Yet whether the way in which states define proficiency matters for student achievement is far from clear. As Tom Loveless demonstrated in the 2012 Brown Center Report on American Education, the rigor of state proficiency definitions is largely unrelated to the level of student achievement on the NAEP across states. Similarly, Russ Whitehurst and Michelle Croft have shown that the quality of state standards (as assessed by third-party organizations) is unrelated to NAEP scores, a finding confirmed by the Harvard Kennedy School’s Josh Goodman in an analysis that examined the effects of changes in the quality of standards within states over time. The lack of a systematic relationship between either the rigor or the quality of state standards and student achievement casts doubt on claims that higher and better standards under the Common Core will, in and of themselves, spur higher student achievement.
Less attention has been paid to whether the rigor of state standards matters for public perceptions of the quality of the schools in their states and local communities. If using a more lenient definition of proficiency leads citizens to evaluate their schools more favorably, then the advent of common expectations under the Common Core could alter public perceptions quite dramatically – perhaps increasing pressure for reform in regions of the country in which state proficiency definitions have provided an inflated view of student accomplishment. Is such an outcome likely?
To shed light on this question, I use data from two surveys conducted in 2011 and 2012 under the auspices of Education Next and the Program on Education Policy and Governance at Harvard University. In each year, my colleagues and I asked a nationally representative sample of roughly 2,500 Americans to grade the public schools in their local community on a standard A-F scale. In the figures below, I examine whether the average grade the residents of each state assigned to their local schools is associated with the share of 2011 8th graders deemed proficient by the state’s own test and by the NAEP. To the extent that differences in the definition of proficiency from one state to the next interfere with citizens’ ability to discern the performance of their local schools, we should see that the average grades citizens assign their schools hew more closely to proficiency rates as determined by state tests than by the NAEP.
The figures demonstrate the opposite. Figure 1a shows that average citizen ratings of local schools across states are only weakly correlated with 8th grade proficiency rates on state tests. Although the relationship is statistically significant, it is quite small: a 10-percentage-point increase in the share of students deemed proficient is associated with an increase in citizen ratings of just 0.03 points on a GPA-style scale (i.e., A=4.0; F=0). Figure 1b, in contrast, reveals a markedly stronger relationship between citizen ratings and NAEP proficiency rates, with a 10-percentage-point increase in proficiency associated with an increase in citizen ratings of 0.17 grade points.
Figure 1a: Relationship between the Average Grades Assigned to Local Public Schools and Proficiency Rates on State Tests
Figure 1b: Relationship between the Average Grades Assigned to Local Public Schools and Proficiency Rates on the National Assessment of Educational Progress (NAEP)
Source: Author’s calculations based on data from the 2011 and 2012 EdNext-PEPG Surveys, state education agency websites, and the NAEP Data Explorer.
Notes: Average grades are reported on a standard GPA scale (i.e., A=4, F=0). State and NAEP proficiency rates are the average of 8th grade proficiency rates in math and reading. The regression analyses used to generate fitted values are weighted by the inverse of each observation’s estimated variance to account for differences in the number of respondents from each state; unweighted regressions yield substantively similar results.
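To make the estimation approach described in the notes concrete, here is a minimal sketch in Python of an inverse-variance-weighted regression of state-average grades on NAEP proficiency rates. The data are made up, and the variable names and use of statsmodels are my own illustrative assumptions, not the original analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical state-level inputs (stand-ins, not the actual survey data):
# avg_grade: mean A-F grade residents assigned local schools (GPA scale, A=4, F=0)
# naep_prof: share of 8th graders proficient on NAEP (math/reading average)
# n_resp:    number of survey respondents from the state
rng = np.random.default_rng(0)
n_states = 50
naep_prof = rng.uniform(0.20, 0.55, n_states)              # proficiency as a share
n_resp = rng.integers(20, 200, n_states)                    # respondents per state
avg_grade = 1.5 + 1.7 * naep_prof + rng.normal(0, 0.2, n_states)

# Weight each state by the inverse of the estimated variance of its average grade,
# so states with more respondents (more precise averages) count more. Here the
# variance is assumed proportional to 1 / n_resp, so the weight is just n_resp.
weights = n_resp

X = sm.add_constant(naep_prof)                              # intercept + NAEP proficiency
wls = sm.WLS(avg_grade, X, weights=weights).fit()
print(wls.params)  # slope = change in grade points per 1-unit change in proficiency share
```

The same setup would apply to Figure 1a by swapping in state-test proficiency rates as the predictor; the point of the weighting is simply that noisier state averages should pull less on the fitted line.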
A regression of the average grades citizens assign to local schools in each state on NAEP and state proficiency rates simultaneously confirms that average grades (1) are strongly correlated with NAEP proficiency rates and (2) after controlling for NAEP proficiency rates, have no relationship whatsoever with proficiency rates on state tests. An increase in NAEP proficiency rates of 32 percentage points – the difference between Washington, DC, and Massachusetts – is associated with an increase in citizen ratings of more than half a letter grade. Holding NAEP scores constant, a difference in state test proficiency rates matters not at all.
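As a rough illustration of this two-predictor specification, the sketch below again uses hypothetical data; the variable names, data-generating choices, and weighting are assumptions for demonstration only, not the original analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical state-level data (same stand-ins as the sketch above).
rng = np.random.default_rng(1)
n_states = 50
naep_prof = rng.uniform(0.20, 0.55, n_states)   # NAEP proficiency share
state_prof = rng.uniform(0.40, 0.90, n_states)  # state-test proficiency share
n_resp = rng.integers(20, 200, n_states)        # survey respondents per state
avg_grade = 1.5 + 1.7 * naep_prof + rng.normal(0, 0.2, n_states)

# Regress average grades on both proficiency measures at once, weighting by the
# (assumed) inverse variance of each state's average grade. With NAEP proficiency
# held constant, the state-test coefficient captures any additional association
# with state-reported proficiency rates.
X = sm.add_constant(np.column_stack([naep_prof, state_prof]))
fit = sm.WLS(avg_grade, X, weights=n_resp).fit()
print(fit.params)  # [intercept, NAEP coefficient, state-test coefficient]
```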
In short, this evidence suggests that Americans have been wise enough to ignore the woefully misleading information about student proficiency rates generated by state testing systems when forming judgments about the quality of their state’s schools. This does not mean that they ignore state testing data altogether. Indeed, Matthew Chingos, Michael Henderson, and I have shown that, within a given state, the grades citizens assign to specific elementary and middle schools are highly correlated with state proficiency rates in those schools. Nor does it necessarily imply that information from the NAEP has a causal effect on perceptions of school quality. The relationship between NAEP performance and the grades citizens assign their schools could easily be driven by other variables, such as the prosperity level of the state, that influence student achievement levels and could also influence school grades. Yet it does suggest that the implementation of the Common Core, by providing information about performance against a common standard, may have less of an impact on public perceptions of school quality than many have projected.
—Martin West
This blog entry first appeared on the Brown Center Chalkboard from the Brookings Institution.