Americans assign far higher grades to the public schools in their local community than to the public schools of the nation as a whole. In the 2014 Education Next survey, for example, 47 percent of the public gave their local public schools a grade of “A” or “B,” while 18 percent gave them a “D” or “F.” When asked to rate the nation’s public schools, just 20 percent awarded an “A” or a “B,” and 24 percent handed out a “D” or “F” (see Figure 1).
This is hardly news. The annual Phi Delta Kappa poll first documented this pattern in 1981, and it has recurred every year since. Yet just what to make of it remains a point of debate.
For some, the pattern suggests that American public schools are better than is widely perceived. If most Americans are reasonably satisfied with the public schools in their local community, which they know best, then perhaps their more critical views of public schools nationally are a product of distorted or sensational media coverage. And, by extension, perhaps the urgency of school reform has been exaggerated.
For others, the pattern simply confirms Americans’ willingness to delude themselves when responding to surveys. After all, they ask, how many parents would admit to an interviewer – or even to themselves – that the school their child attends is mediocre? And, in fact, parents of school-aged children are even more positive than other Americans about their local public schools, with 58 percent assigning them an “A” or “B” grade. From this perspective, evaluations of the nation’s public schools offer the more accurate gauge of system performance.
Whatever the interpretation, the far higher ratings the public assigns to its local schools are a legitimate puzzle. The nation’s schools are simply the sum of all the local schools in the country, and opinions in a nationally representative survey are representative of attitudes toward local schools across the country. How, then, can there be such a sharp divergence?
Some may attempt to answer this question by pointing to a similar dichotomy between evaluations of individual members of Congress and of Congress itself. Surveys consistently show that people hold Congress as an institution in low esteem, while maintaining a high opinion of their own representatives. Yet that apparent inconsistency is less puzzling than it initially appears. After all, it is quite possible for a voter to value their own representative’s service within a larger body they regard as dysfunctional. In contrast, when respondents separately evaluate local schools and the nation’s schools, the overall pattern of their responses should be similar, as they are being asked about the same institutions.
One potential explanation is that Americans are overly optimistic about how students in their local schools are faring academically. It may be that we suffer from a collective Lake Wobegon Effect, believing that the students in our local schools are above average, and that this belief shapes our evaluations of local schools. This conjecture has been difficult to assess empirically, however, due to data limitations. In particular, no one has previously had access to information on Americans’ perceptions of the level of student achievement in their local schools in order to discern whether their perceptions are systematically inflated.
As part of the 2014 EdNext survey, my colleagues Michael Henderson, Paul E. Peterson, and I therefore asked a representative sample of 5,266 Americans to estimate how the math achievement of students in their local school district compares to that of students nationwide. Specifically, after explaining that “Students in different parts of the country perform differently in math,” the survey asked them to complete the following sentence: “The average student in your district performs better than ____ percent of students across the country.” Respondents who initially declined to answer were encouraged to make their best guess, ensuring that we received a valid response from more than 99 percent of the sample.
To our surprise, we found that Americans’ views of the level of student achievement in their local school district were quite accurate overall. In fact, their average estimate of the 52nd percentile was only one percentile point higher than the actual average level of achievement in their local districts as reported by the George W. Bush Institute’s Global Report Card, which uses NAEP and state test data to generate rough estimates of the national percentile rank of the average student in each American school district.
Nor does it appear that this correspondence was a result of chance, with most respondents simply guessing that student achievement in their local schools was about average. Respondents did show a clear preference for round numbers, and for the 50th percentile in particular, but their responses in the aggregate were strongly correlated with actual performance as reported by the Global Report Card (see Figure 2). A one percentile increase in actual performance in the district in which the respondent lived was associated with an increase of roughly 0.4 percentile points in the respondent’s estimate.
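For readers who want to see what a slope of that size looks like in practice, here is a minimal sketch in Python of how such a relationship can be estimated. The data below are simulated purely for illustration; this is not the survey data or our actual analysis.

```python
# A minimal sketch, not the survey's actual analysis: regress respondents'
# estimates of their district's national percentile rank on the district's
# actual rank. All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical actual district percentiles (as the Global Report Card would report them)
actual = rng.uniform(10, 90, size=5266)

# Simulated estimates: built to track actual performance with a slope near 0.4,
# then rounded to mimic respondents' preference for round numbers
estimate = 0.4 * actual + 32 + rng.normal(0, 15, size=actual.size)
estimate = np.clip(np.round(estimate / 5) * 5, 0, 100)

slope, intercept = np.polyfit(actual, estimate, 1)
print(f"slope: {slope:.2f}")                     # close to 0.4 by construction
print(f"mean estimate: {estimate.mean():.1f}")   # near the 52nd percentile
```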
In retrospect, perhaps we should not have been surprised by the accuracy of the public’s responses. It is consistent with a variety of indicators from recent surveys that suggest citizens are quite well-informed about the level of student performance in American schools. When asked how American students rank among developed countries on international tests of student achievement, their responses are, on average, very close to our actual ranking. They offer reasonably accurate estimates of the nation’s on-time high school graduation rate. And, as Matthew Chingos, Michael Henderson, and I have shown, the grades they assign to specific public schools are strongly correlated with the level of student achievement in those schools. While we lack comparable indicators of the public’s knowledge of student performance from earlier periods, it seems that the accountability movement has succeeded in ensuring citizens have good information about key academic outcomes.
But the fact that Americans do not systematically over-estimate achievement levels in their local schools only deepens the puzzle of why Americans rate those schools so favorably. In theory, it could be the case that the level of student achievement is simply not an important criterion for citizens when evaluating school performance. A variety of evidence casts doubt on this possibility, however. For example, 88 percent of the Education Next survey respondents indicated that it was either “somewhat” or “very” important to them that our country perform well on international tests of student achievement. Moreover, their estimates of average math achievement in their school district were highly predictive of the letter grades they assigned to their local schools (see Figure 3). While this relationship in no way proves that perceptions of math achievement directly inform school evaluations, it does indicate that the criteria citizens use to evaluate their schools are, at a minimum, strongly correlated with achievement levels.
What else might explain the high marks Americans assign their local schools? One candidate involves their understanding of per-pupil spending levels. When asked to estimate how much is spent per pupil nationwide, the public offers an average estimate of $10,155—quite close to the Census Bureau’s estimate of $10,608 in current per-pupil spending for 2012 and only modestly lower than the Department of Education’s estimate of $12,608 for 2011 (which includes capital and debt expenses). But when asked about spending in their local school district, they estimate only $6,486 per pupil on average. In other words, Americans believe that their local schools spend just two-thirds the amount they believe public schools spend nationally – and roughly half what their local schools actually spend.
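A quick back-of-the-envelope check of those ratios, using the figures cited above (the “actual” figure below takes the Department of Education’s national number as a stand-in for average actual local spending, which is what the “roughly half” comparison implies):

```python
# Back-of-the-envelope check of the spending comparisons, using the survey
# and government figures cited in the text. The Department of Education's
# national figure stands in for average actual local spending.
perceived_national = 10_155  # average estimate of nationwide per-pupil spending
perceived_local = 6_486      # average estimate of own district's per-pupil spending
actual_spending = 12_608     # Dept. of Education, 2011, incl. capital and debt

print(perceived_local / perceived_national)  # ~0.64, about two-thirds
print(perceived_local / actual_spending)     # ~0.51, roughly half
```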
It seems only reasonable that citizens would take their local schools’ presumed efficiency into account when evaluating their performance. And, in fact, our data confirm that respondents who believe their local schools spend less assign those schools higher grades than respondents with accurate information on school spending. So, too, do respondents whose estimates of local and national spending levels differ by a larger amount.
In short, Americans are not unduly optimistic about the outcomes produced by their local schools, but they do systematically understate the resources to which those schools have access. This, too, should come as no surprise. School funding is notoriously complex, with financial responsibility divided between local, state, and federal governments in ways that vary widely from place to place. No local official has an incentive to provide voters with accurate information about local school spending. Meanwhile, federally mandated accountability systems focus exclusively on improving test scores without providing financial data that would allow citizens to determine whether districts accomplish those goals in a cost-effective manner.
Scholars from across the political spectrum have suggested that improving transparency about school finance could benefit students and taxpayers by encouraging educators to develop strategies to improve spending productivity. And, who knows? It also just might bring citizens’ evaluations of local schools more in line with their evaluations of those of the nation as a whole.
—Martin R. West
This post originally appeared on the Brown Center Chalkboard.