Accountability Lost

Student learning is seldom a factor in school board elections

In school districts across the nation, voters elect fellow citizens to their local school boards and charge them with the core tasks of district management: hiring administrators, writing budgets, negotiating teacher contracts, and setting standards and curriculum, among others. Whatever the task, the basic purpose of all school board activities is to facilitate the day-to-day functioning of schools. If board members do their jobs well, schools should do a better job of educating students.

Not surprisingly, school board members agree that one of their most important goals is to help students learn. According to a 2002 national survey, student achievement ranks second only to financial concerns among school board members’ priorities. We wondered, though: Do voters hold school board members accountable for the academic performance of the schools they oversee? Do they support sitting board members when published student test scores rise? Do they vote against members when schools and students struggle under their watch?

Existing accountability policies assume that they do: states shine light on school performance by providing the public with achievement data. Voters and parents are expected to make use of these data in choosing school districts or schools, and to hold administrators and school board members accountable for the schools’ performance at each election. The idea is that voters will replace incumbents with new members when performance is poor and support incumbents over challengers when performance is strong. Indeed, there are very few other ways in which district officials can be held accountable for school performance. Neither the federal No Child Left Behind Act (NCLB) nor the states impose direct sanctions on members of school boards that oversee large numbers of underperforming schools.

Our questions led us to undertake the first large-scale study of how voters and candidates respond to student learning trends in school board elections. We analyzed test-score data and election results from 499 races over three election cycles in South Carolina to study whether voters punish and reward incumbent school board members on the basis of changes in student learning, as measured by standardized tests, in district schools. In addition, we assessed the impact of school performance on incumbents’ decisions to seek reelection and potential challengers’ decisions to join the race.

We found that in the 2000 elections, South Carolina voters did appear to evaluate school board members on the basis of student learning. Yet in the 2002 and 2004 elections, published test scores did not influence incumbents’ electoral fortunes. As we’ll see, the possible reasons our results differed so dramatically from one time period to the next hold important implications for the design of school accountability policies. But let’s first take a closer look at our methods and findings.

South Carolina

When we set out to study local school board races, we encountered formidable hurdles to obtaining election results. Only one state, South Carolina, centrally collects precinct-level election data for school board races. In every other state, obtaining precinct-level election results would require gathering and organizing returns from hundreds of individual counties and election districts.

So we took a close look at South Carolina. In most respects, South Carolina elections and school boards are similar to those across the rest of the country. All but 4 of the state’s 46 counties hold nonpartisan school board elections. Approximately 80 percent of school board members receive some compensation, whether a salary, per diem payments, or reimbursement of expenses. Over 90 percent of South Carolina’s 85 school boards have between 5 and 9 members, and the largest board has 11. And, as is common practice in other states, nearly 9 out of 10 South Carolina school districts hold board elections during the general election in November.

Perhaps the most important difference between South Carolina and most other states when it comes to local school politics is the role played by the state’s teachers unions, which are among the weakest in the country. In other states, strong teachers unions may mobilize high turnout among members, their families, and friends, and may punish and reward board members for their treatment of teachers rather than hold them accountable for student test scores. South Carolina school boards are unlikely to be beholden to the unions, which should make the boards more responsive to the broader public.

Roughly half of the state’s 85 districts hold school board elections in any two-year election cycle. We collected precinct-level election returns for all school board races in three election cycles, 2000, 2002, and 2004. We also obtained school-level student achievement data from the South Carolina Department of Education. We began our analysis with 2000 because it was the first cycle of elections after South Carolina started administering the Palmetto Achievement Challenge Test (PACT) to students in grades 3 to 8 in 1999. These tests, based on the South Carolina Curriculum and Standards, are given in both reading and math. We averaged the reading and math percentile scores to produce a composite score for each school. Because we wanted to examine whether voters are more concerned with student performance districtwide or in their local neighborhood, we computed two measures of average school performance to include in our analysis. The first is the average test score for each district. The second is the average test score for the public school that is located closest to an election precinct.
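
For readers who want to see the mechanics, here is a minimal sketch of the composite score and the two performance measures in Python. All column names, coordinates, and data values are hypothetical stand-ins, and the article does not say how "closest" school was determined; straight-line distance is our assumption.

```python
# A sketch of the composite score and the two performance measures;
# all column names, coordinates, and data values are hypothetical.
import numpy as np
import pandas as pd

# Hypothetical school-level PACT percentile scores.
schools = pd.DataFrame({
    "school_id":   [1, 2, 3],
    "district_id": [101, 101, 102],
    "reading_pct": [52.0, 47.5, 61.0],
    "math_pct":    [49.0, 45.0, 58.5],
    "lat": [34.00, 34.05, 33.90],
    "lon": [-81.03, -81.10, -80.95],
})

# Composite score: the average of reading and math percentiles.
schools["composite"] = schools[["reading_pct", "math_pct"]].mean(axis=1)

# Measure 1: the average test score for each district.
district_avg = schools.groupby("district_id")["composite"].mean()

# Hypothetical precinct locations.
precincts = pd.DataFrame({
    "precinct_id": ["A", "B"],
    "lat": [34.01, 33.92],
    "lon": [-81.05, -80.96],
})

# Measure 2: the composite score of the school closest to each precinct
# (straight-line distance is our stand-in for "closest").
def nearest_school_score(p_lat, p_lon):
    dist = np.hypot(schools["lat"] - p_lat, schools["lon"] - p_lon)
    return schools.loc[dist.idxmin(), "composite"]

precincts["nearest_school_score"] = [
    nearest_school_score(row.lat, row.lon) for row in precincts.itertuples()
]
print(district_avg)
print(precincts)
```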

Searching for Accountability

We began our analysis by comparing the vote shares of incumbent school board members who ran and faced an opponent with the test-score performance of the schools and districts they represented. We were careful to separate the effect of school performance from the effects of other factors that could reasonably influence an incumbent school board member’s vote share. For example, we considered whether voters evaluate student outcomes relative to spending by measuring the effect of changes in the district’s property tax rate. We also took into account features of the election, including whether it was held as part of the November general election or on another date, when turnout is likely to be lower. Additionally, we accounted for the partisanship of the electorate, measured by the Democratic candidate’s share of the presidential vote, and demographic characteristics, such as race, age, and gender. We also adjusted for potential differences in how voters from precincts with higher and lower average test scores respond to changes in test scores. For example, voters from precincts with lower test scores might respond more strongly when test scores improve than do voters from precincts with test scores that already were very high.
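
The paragraph above describes a regression of incumbent vote share on test-score change, a set of controls, and an interaction between the change and the baseline score level. A minimal sketch of that kind of specification follows; the data are synthetic and every variable name is a hypothetical stand-in, not the authors' actual model.

```python
# An illustrative vote-share regression; synthetic data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "score_change":     rng.normal(0, 4, n),       # percentile-point change
    "score_level":      rng.uniform(30, 70, n),    # prior-year level
    "tax_rate_change":  rng.normal(0, 1, n),       # spending vs. outcomes
    "general_election": rng.integers(0, 2, n),     # November vs. off-cycle
    "dem_pres_share":   rng.uniform(0.3, 0.7, n),  # partisanship
    "pct_black":        rng.uniform(0.0, 0.6, n),  # demographics
    "median_age":       rng.uniform(30, 50, n),
})
# Fake outcome with a modest true effect of score_change.
df["incumbent_vote_share"] = 55 + 0.4 * df["score_change"] + rng.normal(0, 8, n)

# `score_change * score_level` expands to both main effects plus their
# interaction, letting low- and high-scoring precincts respond
# differently to the same improvement.
fit = smf.ols(
    "incumbent_vote_share ~ score_change * score_level"
    " + tax_rate_change + general_election + dem_pres_share"
    " + pct_black + median_age",
    data=df,
).fit(cov_type="HC1")
print(fit.summary())
```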

In 2000, 67 incumbents from 37 school boards ran for reelection in contested races in South Carolina. Of these 67 incumbents, 50 were reelected, and the median vote share for all incumbents in competitive races was 58 percent.

We found that incumbent school board members won a larger share of the total vote in a precinct when test scores in that precinct improved. We estimate that improvement from the 25th to the 75th percentile of test-score change (that is, moving from a loss of 4 percentile points to a gain of 3.8 percentile points between 1999 and 2000) produced, on average, an increase of 3 percentage points in an incumbent’s vote share. If precinct test scores instead dropped from the 75th to the 25th percentile of test-score change, the associated 3-percentage-point decrease in vote share could substantially erode the incumbent’s margin of victory. In districts where percentile scores had increased in the year preceding the election, incumbents won 81 percent of the time in competitive elections; in districts where scores had declined, incumbents won only 69 percent of the time.

Citizens therefore did seem to base their assessment of incumbents on changes in test-score performance during a board member’s tenure, exactly the type of accountability many supporters of NCLB had hoped for.

We were interested to find that it was the average school test score for the precinct, rather than for the district, that had a significant effect on an incumbent’s vote share. The significant relationship with precinct test scores and the absence of a relationship with district scores suggest that voters were more concerned with school performance in their immediate neighborhood than across the district.

The Later Elections

With the evidence from 2000 in hand, we were initially surprised that all indications of a relationship between school performance and an incumbent school board member’s vote share vanished after the passage of NCLB in 2002.

We reanalyzed the data in a number of different ways, but were unable to find any indication that voters cast their ballots based on changes in test scores. We included administrative data from teacher, parent, and student ratings of local schools; we considered the potential relationship between vote share and test-score changes over the previous two or three years; we examined the deviation of precinct test scores from district means; we looked at changes in the percentage of students who received failing scores on the PACT; we evaluated the relationship between vote share and the percentage change in the percentile scores rather than the raw percentile point changes; and we turned to alternative measures of student achievement, such as SAT scores, exit exams, and graduation rates. None of these approaches yielded clear evidence of a link between school performance and voter behavior in school board elections.

Even when we estimated the probability that an incumbent won a majority of the votes in each precinct, or accounted for test-score changes and levels as a function of dollars spent on students, or measured the relationship between an incumbent’s vote share in one election and his or her share in the previous election, the overwhelming weight of the evidence indicated that school board members were not being judged on improvement or weakening in school test scores.
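
A compact way to picture the robustness exercise described in the two paragraphs above is a loop that re-estimates the same vote-share model under each alternative achievement measure. Everything here, from the measure names to the synthetic data, is an illustrative stand-in, not the authors' files.

```python
# The robustness loop, sketched: re-estimate the same vote-share model
# under alternative achievement measures (all names hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 250
measures = ["pact_change_1yr", "pact_change_2yr", "pct_failing_change",
            "sat_change", "grad_rate_change"]
df = pd.DataFrame({m: rng.normal(0, 1, n) for m in measures})
df["incumbent_vote_share"] = rng.normal(55, 8, n)  # no true relationship

for m in measures:
    fit = smf.ols(f"incumbent_vote_share ~ {m}", data=df).fit()
    print(f"{m:>20}: coef={fit.params[m]: .3f}  p={fit.pvalues[m]:.3f}")
```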

Strategic Politicians

So far, we’ve discussed the experience of incumbents who ran against an opponent. Many incumbents, however, either did not run for reelection or ran unopposed. For example, in 2000, 45 of the 157 sitting board members in 39 school districts who were up for reelection did not run for office. Among the remaining 112 who sought to retain their seats, more than one-third, 45, did not face a challenger. The 67 incumbents who ran opposed in 2000 thus represented less than half of the sitting board members whose seats were in play that election.

School performance as measured by test scores may have helped determine which candidates sought reelection and which faced a challenger. If board members and potential challengers anticipate that voters will punish incumbents for poor school performance, declining test scores may lead board members to retire rather than endure defeat. A drop in test scores may also encourage opponents to run for office, either because they believe that incumbents are now vulnerable to defeat or because disgruntled citizens feel compelled to run for office when schools perform poorly.

Although exact election filing dates vary by school district, most candidates for seats on South Carolina’s school boards must decide whether to run by mid-September for a November election. PACT scores, however, are typically released to the public in late September or early October. Incumbents and potential challengers may not know the exact size of precinct or district test-score changes, but they could very well have impressions of the direction and rate of student learning trends. School board members and some challengers have observed the schools firsthand and have listened to accounts from principals and teachers. By monitoring the coverage of education issues on local television and in the print media, candidates may also have a sense of the extent to which voters are likely to use student test-score performance to evaluate candidates. And although we do not know this with any certainty, it is possible that school board members have access to test-score results before they are released to the public.

We decided to assess the relationship between test-score trends and incumbents’ decisions to run for reelection, and then to estimate the effect of test-score trends on the probability that an incumbent who runs faces an opponent. Our basic approach in this analysis was to compare the probability of running (or running and facing a challenger) between incumbents who oversaw districts with stronger and weaker year-over-year test scores. Because candidates either run for election in every precinct or do not run at all, we focused only on district test scores. As with our analysis of the relationship between test scores and vote share, we accounted for a number of factors that could reasonably influence a candidate’s decision to run for office. These included the incumbent’s vote share in the previous election, which might serve as a signal of the likelihood of victory to both the incumbent and potential challengers, and whether board members received compensation for their service, under the assumption that paid positions would be more attractive.
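
The entry analysis amounts to a binary-outcome model: did the incumbent run again (or, in the second analysis, face a challenger)? A minimal logit sketch under those assumptions follows, again with synthetic data and hypothetical variable names rather than the authors' actual specification.

```python
# A minimal logit of the entry decision; synthetic data, hypothetical
# variable names. The same structure applies to "faced a challenger."
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "district_score_change": rng.normal(0, 4, n),     # year-over-year change
    "prev_vote_share":       rng.uniform(50, 90, n),  # signal of a safe seat
    "paid_position":         rng.integers(0, 2, n),   # compensated seat
})
# Fake outcome: better scores and safer seats make running more likely.
xb = -1.0 + 0.08 * df["district_score_change"] + 0.02 * df["prev_vote_share"]
df["ran_again"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-xb)))

fit = smf.logit(
    "ran_again ~ district_score_change + prev_vote_share + paid_position",
    data=df,
).fit(disp=0)
print(fit.summary())
```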

Our results indicate that incumbents may bow out in anticipation of being held accountable for poor test-score performance by schools in their district. During the 2000 election, incumbents were less likely to seek reelection when their district’s test scores had declined over the preceding school year. If a district experienced a drop from the 75th to the 25th percentile of test-score change, our results lead us to expect that incumbents would be 13 percentage points less likely to run for reelection. In fact, 76 percent of incumbents sought reelection in districts with improving test scores; in districts with falling scores, only 66 percent did. The results did not hold for the later elections. Just as we found no evidence in the 2002 and 2004 elections that a large bloc of voters held incumbents accountable for poor test scores, we failed to find any indication that incumbents in 2002 and 2004 based their decisions about running for reelection on student learning trends.

When we looked at the behavior of the challengers, we once again saw evidence of their responding to test scores during the 2000 election, but no indication in 2002 or 2004 (see Figure 1). In 2000, a drop in test scores within the district significantly increased the likelihood that an incumbent would face a challenger. If a district’s test-score change fell in the 25th rather than the 75th percentile, we estimate that an incumbent experienced an 18-percentage-point increase in the probability of facing a challenger. On the ground, the data show that 74 percent of incumbents who ran for reelection in districts with declining scores faced a challenger; in districts with improving scores, only 49 percent of incumbents faced a challenger.

What Happened in 2000?

Why did voters, incumbents, and potential challengers care about test scores in 2000 but not in 2002 or 2004? The most likely explanation involves media coverage: both the amount and the content of coverage of student test scores differed substantially between 2000 and the two later election years.

The 2000 elections were the first to follow the enactment of the state’s accountability system. Journalists devoted ample space to issues that directly or indirectly concerned student learning trends. Charleston’s Post and Courier, the Herald in Rock Hill, Columbia’s The State, and the Associated Press State & Local Wire, which serves numerous other South Carolina papers, regularly carried stories about the state of South Carolina’s schools. Both incumbents and challengers frequently identified student achievement generally, and test scores in particular, as the single most important issue in the 2000 school board elections. Newspaper editorials that endorsed candidates in the 2000 election regularly underscored ways in which individual incumbents and challengers did, or said they would, improve student achievement. And 45 percent of the newspaper articles about school board races in the two months before the election mentioned student test scores.

In the 2002 and 2004 elections, however, media coverage shifted to other issues, such as the closing of schools, the racial composition of schools and boards, disciplinary problems, and sports programs. In these years, only 30 and 34 percent of articles, respectively, touched on test scores. The decline in media attention leads us to suspect that concerns about student learning trends probably did not stand at the forefront of voters’ or candidates’ thinking in the 2002 and 2004 elections.

The tone of articles about the state’s accountability system also shifted drastically during the 2002 and 2004 election cycles. From 1998 to 2000, most stories adopted a fairly neutral tone, introducing the public to the new accountability system and offering tepid praise and criticism of the testing regimen. After the 2000 election, journalists displayed considerably more skepticism in their coverage of student achievement trends. Reporters devoted stories to errors in PACT’s scoring, security breaches in school testing, flaws in the science and social studies portions of PACT, district efforts to gain an edge by changing their test dates, confusion regarding the comparability of test scores over time, missing PACT scores, and conflicts between school evaluations under the state and national accountability systems.

At the same time that administrative irregularities and mishaps attracted public scrutiny, teachers, district officials, and various other interest groups began to challenge the value of standardized tests more generally. One 3rd-grade teacher was quoted as saying, “These tests cannot and never will truly measure what a child actually knows, how a child sees the world, what a child genuinely understands and grasps, and what kind of life that child lives outside the school walls.” A school district associate superintendent claimed, “The problem with PACT is it doesn’t tell you what your child knows and doesn’t know.” The Palmetto State Teachers Association questioned the value of the state’s testing regimen, noting on its web site, “The current statewide tests do not provide immediate diagnostic information needed to improve student achievement or provide information to help teachers plan to meet the needs of each student. The testing process is time consuming, and spending weeks on high-stake testing is NOT in the best interest of children.” And as Andrew HaLevi, the Charleston County School District 2000 Teacher of the Year, wrote in a 2001 op-ed for the Post and Courier, “The PACT needs to be seen for what it is: a vehicle for politicians to say that they are tough on education (and educators). This may make for good politics, but it makes for bad educational policy.” Reacting to the rising criticism of PACT, voters may have grown disenchanted with the state’s accountability system and removed test-score performance from among the criteria on which they evaluated school board candidates.

There are, of course, several other plausible explanations for why South Carolinians voted based on test-score performance in 2000 but not in 2002 and 2004. The timing of the public release of the test scores is one. The 2000 scores were released in late October, whereas scores in 2002 and 2004 were released in early October and early September, respectively. In 2000, the release of scores so close to election day and the media coverage that followed may have primed voters to evaluate candidates on student test scores. In the other two election years, the gap of a month or two between the release of scores and election day may have allowed the issue of test scores to fade from voters’ minds.

Another possibility is a major change in the reporting of test information. NCLB requires schools to notify parents directly about the performance of their schools. In 1999 and 2000, the first two years of PACT testing, scores were reported in their raw form in the materials that parents received. Beginning in 2001, official PACT reports to parents used a simpler rating scale that classified each school into one of five performance categories ranging from unsatisfactory to excellent. Under this scheme, almost every school received a rating of at least average. Indeed, a Department of Education news release in 2002 ran with the headline, “Schools receive higher Absolute ratings on report cards; 80% average or better.” Although the raw scores appeared deeper in the reports, if most schools seemed to be average or better, parents may not have been prompted to hold incumbents accountable for poor school performance. Incumbents and potential challengers may also have become less responsive to scores once the testing regimen began to give nearly every school a passing mark.

Implications for Policy

The evidence from South Carolina shows that voters do at least sometimes evaluate school board members on the basis of student learning trends as measured by average school test scores. Changes in average school test scores from year to year can affect the number of votes incumbents receive, the probabilities that they run for reelection, and the likelihood that they face competition when they do.

But the absence of a relationship between average school test scores and incumbents’ electoral fortunes in the 2002 and 2004 school board elections raises important questions about the assumptions underlying accountability systems. School board elections give the public the leverage to improve their schools. If voters do not cast out incumbents when local school performance is poor, they forfeit that opportunity. As debate continues over components of NCLB, policymakers should consider whether it is realistic to assume voters will in fact use the polls to drive school improvement.

Christopher R. Berry is assistant professor at the Harris School of Public Policy Studies at the University of Chicago, where William G. Howell is associate professor.
