A Scholarly Approach to School Accountability



By Michael J. Petrilli and Brandon L. Wright | 06/23/2016


Though it sometimes appears that Education Secretary John King didn’t get the memo, the Every Student Succeeds Act (ESSA) represents a significant devolution of authority from the federal government to the states. This is a praiseworthy development that, in our view, better fits America’s constitutional principles of federalism and opens up many areas of education policy for innovation and improvement.

twenty20.com

That devolution includes the heart of ESSA: school-level accountability. States now enjoy a freer hand to decide how they want to rate (or “grade”) their schools and determine which are worthy of either praise or aggressive intervention. The new law doesn’t give states carte blanche; they can’t move away from student achievement as a major indicator of quality, for example. But they certainly have more leeway than under No Child Left Behind.

So what forms might—and should—this take? How might states approach the particular challenge of redesigning their accountability systems? The contestants in our “accountability design competition” in February surfaced ideas aplenty and made many promising suggestions. With a few months of reflection on them, we see that there are competing camps or worldviews when it comes to ESSA accountability (much as there are regarding school choice). We see four such factions forming. Let’s identify them by their slogans:

1. Every School is A-OK!
2. Attack the Algorithms
3. Living in the Scholars’ Paradise
4. NCLB Extended, Not Ended

Let’s take a look.

Every School is A-OK!

Proponents of this model—the teachers’ unions and other educator groups—fundamentally abhor results-based accountability. They hate it when state officials give their schools black eyes or low marks for not meeting targets that they view as arbitrary and beyond their control. They’d rather get rid of testing and accountability altogether, but since they can’t quite pull that off, they want to at least create a system that depicts schools in the best possible light. Look for them to push for systems in which schools could get good ratings for either high proficiency rates or strong growth; to embrace squishy “other indicators of student success or school quality” (such as “teacher engagement”) and make those indicators count for as much as possible; and to lobby for school categories that all sound positive. (Nebraska’s performance levels might be a model: Cornhusker schools can earn ratings of Excellent, Great, Good, or Needs Improvement.)

Attack the Algorithms

This approach is also skeptical of test scores, but not of accountability per se. It seeks a system that uses as much human judgment as possible and captures a full, vivid, multifaceted picture of school quality. (Indeed, it resembles what’s called “qualitative research” at AERA conferences.) At its core is the school inspection: Experts visit schools to conduct stakeholder interviews, observe classrooms, administer surveys, and more. The results of such inspections would count for as much as ESSA allows. Systems like this are rare in this country, but they’re a significant part of education accountability in some European countries and much of the British Commonwealth. And they aren’t unlike what the best charter authorizers already do when their schools are up for renewal.

Living in the Scholars’ Paradise

This approach uses sophisticated, rigorous models to evaluate schools’ impact on student achievement, making sure not to conflate factors (like student demographics or prior achievement) that are outside of schools’ control. It also seeks to avoid the perverse incentives that were baked into NCLB, especially a narrow focus on “bubble kids” just above or below the proficiency line. This would result in a system that is maximally fair, and it encourages schools to help all students make as much progress as possible over the course of the school year. The Scholars’ Paradise model would use “scale scores” or a “performance index” for the “academic achievement” indicator; measure growth using a two-step value-added metric; pick robust “indicators of student success or school quality,” such as chronic absenteeism; and make value added count the most in a school’s final score. (If you think that sounds a lot like the model proposed by uber-scholar Morgan Polikoff and his colleagues at our ESSA Accountability Design Competition, you are not mistaken.)

NCLB Extended, Not Ended

NCLB is gone but not forgotten. Or maybe it’s not exactly gone, in the minds of folks who yearn for Uncle Sam to mandate accountability models that obsess about achievement gaps and give failing grades to any school with low proficiency rates for any subgroup. Under the NCLB Extended approach, embraced by many on the education reform/civil rights Left, achievement would continue to be measured by proficiency rates alone (with rising annual goals for what is good enough); growth data would be used sparingly and/or focused on “growth to proficiency”; “other indicators of student success or school quality” would be minimized; and evidence of achievement gaps would sink schools’ ratings significantly. NCLB rides again.

***

In our view, Scholars’ Paradise has a lot going for it. Its focus on fairness should mean greater buy-in from educators; its ability to differentiate between high-growth and low-growth schools makes it effective at signaling to policy makers which campuses deserve praise and which need major overhauls. And it focuses equally on all kids regardless of their achievement levels. That also seems like the fairest and smartest approach to us. Yes, it’s wonky—maybe too wonky to be understood by most parents, educators, and policy makers. But we don’t know how our smartphones work either; that doesn’t mean we don’t love them.

Unfortunately, John King’s proposed regulations would make parts of this model illegal. That’s because our friends at the Department of Education read ESSA’s language to mean that proficiency rates—and proficiency rates alone—must be the sole measure of “academic achievement.” We believe that the department’s famously smart lawyers could find plenty of wiggle room and allow states to use an Ohio-style index to give partial credit to schools for getting kids to basic (and additional credit for getting them to advanced). Here’s hoping they wiggle before these regulations are finalized.

Attack the Algorithms also shows promise. This approach puts stock in people’s ability to identify and adjust for nuances in ways that quantitative models can’t. For it to work, however, states would have to ensure that inspectors are thoroughly trained, highly competent, impeccably impartial, and willing to differentiate between high- and low-performing schools. To be permissible under ESSA, they would also need to report findings for schools’ subgroups, not just the schools as a whole. But if we all agree that it’s insane to measure teachers based on test scores alone, why should we keep doing that for schools?

You can probably tell by now that we’re not so bullish on the other two approaches. “Every School is A-OK” is simply not truthful. Schools with low growth and low achievement, year after year, are not OK; they are brain-dead and in need of resuscitation or euthanasia. Mediocre suburban schools are not OK, and their “stakeholders” should know it. It’s clear that Secretary King and colleagues feel the same way; they are now working via regulation to bar this model.

On the other end of the spectrum, NCLB Extended would amplify the many problems that ESSA was meant to overcome: the utopian expectations, the sense that we’re setting up every school to fail, the narrow-minded focus on reading and math scores and kids just below or above the “proficiency” line. Turn the page, people. Turn the page.

***

The first state plans for implementing ESSA accountability are due in March. In the meantime, everyone should start designing their ESSA Accountability Camp t-shirts; the tug-of-war is about to begin!

—Michael J. Petrilli and Brandon L. Wright

This post first appeared on Flypaper.



