Colorado’s Plan for Rating Schools Gets the Fundamentals Right



By David Griffith | 04/03/2017


So far, watching state ESSA plans roll in has been a bit like rooting for the Washington Redskins (or, if you prefer, the Washington Football Team). Every fall starts with fresh hopes. Yet every spring fans are asking the same questions: What went wrong? Why can’t management learn from its mistakes? Why does it always have to be this way?

[Photo: Mark Gstohl/Flickr]

Meanwhile, Broncos fans have enjoyed John Elway, Peyton Manning, and the second most Super Bowl appearances in NFL history.

Obviously, building a high-performing education system is harder than building a winning football team. But as in football (or any sport, really), it helps to focus on the fundamentals in education policy, because you won’t get far without them.

So with that in mind, here are four ways that Colorado’s plan for rating schools, like its annoyingly successful football team, gets the fundamentals right:

1. Colorado uses a mean scale score as its measure of achievement.

Instead of using proficiency rates to gauge achievement, Colorado will take an average of students’ test scores, which sounds simple (like blocking and tackling) because it is simple—assuming you do it.

As Morgan Polikoff and other accountability scholars have argued, “a narrow focus on proficiency rates incentivizes schools to focus on those students near the proficiency cut score, while an approach that takes into account all levels of performance incentivizes a focus on all students.” ESSA requires that states measure “proficiency,” which literally means “a high degree of competence or skill.” But it doesn’t say anything about proficiency rates, so there’s nothing to prevent states from adopting a broader measure.

Unfortunately, when it comes to this issue, Colorado is still in the minority because many states have yet to move beyond their NCLB-era obsession with proficiency rates. What does it say about education reformers that it’s taken us a decade to wrap our heads around the concept of an average?
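To see why the choice of statistic matters, here is a minimal sketch of the two measures. All of the scores and the proficiency cut of 740 are invented for illustration; the point is only that the two measures can rank the same pair of schools in opposite orders.

```python
# Hypothetical scale scores for two schools; "proficient" cut = 740 (invented).
CUT = 740

def proficiency_rate(scores):
    """Share of students at or above the proficiency cut."""
    return sum(s >= CUT for s in scores) / len(scores)

def mean_scale_score(scores):
    """Average of all students' scores -- every point of growth counts."""
    return sum(scores) / len(scores)

school_a = [700, 700, 741, 741, 741]  # "bubble kids" nudged just past the cut
school_b = [720, 735, 738, 739, 760]  # broad gains, but fewer over the line

print(proficiency_rate(school_a), proficiency_rate(school_b))  # 0.6 vs. 0.2
print(mean_scale_score(school_a), mean_scale_score(school_b))  # 724.6 vs. 738.4
```

By the proficiency rate, School A looks three times better; by the mean scale score, School B is clearly ahead. That gap is exactly the incentive problem Polikoff and company describe.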

2. Colorado uses a true growth model.

Instead of a “growth-to-proficiency” model or some other contrivance, Colorado uses a bona fide growth model to gauge the progress a school is making with students. Specifically, it uses a “student growth percentile” model, which compares the progress of each student at a school to the progress of similar students at other schools and then assigns the student a “percentile rank” between one and ninety-nine based on how his or her progress stacks up.

The advantage of this approach is that it is grounded in reality rather than the fantasies of policymakers or reformers. Instead of trying to specify the amount of progress students should make based on some utopian ideal, it rewards or sanctions schools for making more (or less) progress than one might expect under the circumstances.

If you doubt the wisdom of this approach, just ask yourself this: Which is the better measure of a running back’s contribution—touchdowns or yards per carry? Would you cut Barry Sanders because he wasn’t scoring on every play? Or would you try to get the ball in his hands more often?
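Colorado’s actual model uses quantile regression over each student’s full score history, but the basic idea can be sketched with a much cruder stand-in: rank a student’s one-year score gain against peers who started from the same prior score. Every number below is hypothetical.

```python
def growth_percentile(student_gain, peer_gains):
    """Percentile rank of a student's score gain among peers who started at a
    similar prior score. A deliberately crude stand-in for the
    quantile-regression machinery of a real student growth percentile model."""
    below = sum(g < student_gain for g in peer_gains)
    pct = round(100 * below / len(peer_gains))
    return min(max(pct, 1), 99)  # SGPs are conventionally reported on a 1-99 scale

# Hypothetical one-year gains for twenty peers with the same prior score:
peers = [2, 3, 5, 5, 6, 7, 8, 8, 9, 10, 10, 11, 12, 13, 14, 15, 16, 18, 20, 25]
print(growth_percentile(12, peers))  # 60: this student out-grew 60% of peers
```

Note what the measure does not care about: whether any of these students cleared a proficiency cut. It asks only whether each one gained more or less than comparable students elsewhere.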

3. Colorado assigns more weight to growth than achievement.

According to its draft plan, Colorado hasn’t finalized its weighting system yet. But the draft does cite the state’s weights from 2016:

In 2016, for elementary and middle schools 40% of points came from Academic Achievement measures and 60% from Academic Growth measures, while for high school the weighting was 30% Academic Achievement, 40% Academic Growth, and 30% Postsecondary and Workforce Readiness. Once the Colorado State Board of Education decides on the relative weights between indicators, CDE will update the state plan with this information.

I’m making a bit of a leap here, but to me this passage suggests that Colorado will continue favoring growth over achievement (unlike most states). Hopefully that’s the case, because right now Colorado is one of the few states with a school rating system that isn’t the accountability equivalent of the old-school T formation. For some reason, even though we’ve known for decades that different schools face different challenges, only a few states have embraced this insight by creating systems that judge schools based on things they control. No wonder we haven’t seen as much progress as we’d like.
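For the arithmetic-minded, the 2016 weights quoted above amount to a simple weighted average. The sketch below assumes each indicator is scored on a 0–100 scale, which is an illustrative assumption, not CDE’s actual rubric.

```python
# 2016 weights as quoted from Colorado's draft plan.
WEIGHTS = {
    "elementary_middle": {"achievement": 0.40, "growth": 0.60},
    "high_school": {"achievement": 0.30, "growth": 0.40, "readiness": 0.30},
}

def composite(scores, level):
    """Weighted average of indicator scores. The 0-100 indicator scale here
    is an assumption for illustration, not CDE's actual scoring rubric."""
    w = WEIGHTS[level]
    assert set(scores) == set(w), "need a score for every weighted indicator"
    return sum(w[k] * scores[k] for k in w)

# A school with middling achievement but strong growth:
print(composite({"achievement": 50, "growth": 80}, "elementary_middle"))  # 68.0
```

Under a 50/50 split, that same school would score 65; under a proficiency-heavy split, lower still. The weights are where a state reveals whether it actually believes growth matters.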

4. Colorado protects the football.

I was tempted to put this first. However, for the sake of D.C.’s long-suffering football fans, I decided to bury it below the fold.

Obviously, ESSA gives states an opportunity to experiment with new measures, which is great. But without a little practice, there are lots of ways for this to turn into a fumble. And the fact that so many states are still assigning the same weight to (dumb) proficiency rates and the outputs of (smart) growth models doesn’t inspire much confidence in their ability to innovate.

From the text of Colorado’s plan, it’s clear that the state takes a hardheaded approach to ensuring the validity and reliability of new measures, such as chronic absenteeism, which is reassuring under the circumstances.

Pretty much everyone wants to get beyond test scores. But that’s easier said than done, so it’s best to take it slow and “protect the football.” Let’s start by fixing what’s obviously broken, and then move on to the hard stuff.

***

Unless you still believe in holding schools accountable for things they can’t control—and in those bold timelines politicians and bureaucrats are so fond of concocting—a school rating system like Colorado’s should suit you. After all, if you believe in top-down accountability, it will point you toward those schools that are truly failing their students. And if you believe in bottom-up accountability, it will point parents toward those schools where kids are making the most progress.

Either way, there’s nothing wrong with keeping things simple and focusing on the basics. Ask the pros: They’ll tell you that’s how most championships are won.

— David Griffith

David Griffith is a Research and Policy Associate at the Thomas B. Fordham Institute.

This first appeared on Flypaper.



