Quick Thoughts on the Screwed Up DC IMPACT Ratings

By Rick Hess | 12/27/2013


Usually big edu-news doesn’t break during Christmas week. But, on Monday, DC Public Schools officials announced some troubling news concerning their acclaimed IMPACT teacher evaluation program. As the Washington Post’s savvy Nick Anderson reported, “Faulty calculations of the ‘value’ that D.C. teachers added to student achievement in the last school year resulted in erroneous performance evaluations for 44 teachers, including one who was fired because of a low rating.”

In response, Elizabeth Davis, president of the Washington Teachers’ Union, said, “IMPACT needs to be reevaluated. The idea of attaching test scores to a teacher’s evaluation — that idea needs to be junked.” DCPS chief of human capital Jason Kamras said, “We take these kind of things extremely seriously. Any mistake is unacceptable to us.”

First, here’s a bit on what DCPS had to say. In an e-mail, Kamras wrote, “We (DCPS) made no mistakes. Our vendor, Mathematica, incorrectly calculated the value-added scores” due to “a programming error.” He said the teachers involved constitute “about 1% of the [DCPS] teacher force,” with 22 of the flawed ratings too high and 22 too low. Kamras said, “We are holding harmless anyone who should have had a lower rating [and] we are moving up anyone who should have had a higher rating. Those folks will get all the benefits they’re entitled to.” He said “only 1 person was erroneously fired” and “we’ve already offered” to reinstate them.

As we head into 2014, with lots of states and districts rolling out or amping up new teacher evaluation systems, there are at least four points worth keeping in mind.

One, this should be a huge caution flag for teacher evaluation enthusiasts. Remember, IMPACT is the gold standard for teacher evaluation and pay. DCPS has taken the time, money, talent, and discipline to really go after the data and technology issues. Even so, DC has stumbled. In other places, I’m struck by how often I chat with district or state leaders whose take on the number-crunching, vendors, and data systems seems to be, “Hey, the experts will take care of that technical stuff.” (I’ve come to think of this as the HealthCare.gov response.) As we’ve seen with health care reform, that can be a mistake. Folks had better buck up.

Two, let’s maintain a sense of proportion. Remember that Tom Dee and Jim Wyckoff released an influential NBER study just this fall, which found that the system seems to be working pretty much as intended. To the extent that we view reading and math gain scores as one element of teacher effectiveness, the system seems to be both retaining the right teachers and helping teachers improve.

Three, these systems are nascent and fragile. Proponents need to do everything they can to show that these systems will be fair, reliable, and workable. After all, unions will be well within their rights to bring legal challenges, especially when there are concerns that systems are inequitable or unreliable. Good intentions and complex algorithms won’t be enough to trump those concerns; that will be a matter of how these systems are designed and executed. And the inconvenient truth is that it’s not a question of whether the results are more right than wrong; it’s a question of whether the flaws are minor and defensible enough to pass muster in the courts of law and of public opinion.

Four, on this count, DCPS’s response should help. After all, some mistakes are inevitable. What matters is how officials respond to them. DCPS has seemingly done a responsible job of ’fessing up and moving to clean up its mess. Let’s hope that’s the norm for districts and states that misstep on teacher evaluation in the year ahead.

-Rick Hess

This first appeared on Rick Hess Straight Up.

Comments
  • edgeek says:

    As you indicate, we all must become better consumers of data in education. We’ve come to understand some of the basic ramifications of standardized testing results, but even with a decade or more of this type of data in many places, I do not see the level of ownership that is necessary to appropriately use, criticize, and continuously improve these systems. Now, as districts and states across the nation increase their use of qualitative and quantitative data for evaluation purposes, the stakes are higher in many cases and the ownership remains at arm’s length.

    Districts must ensure that they have staff who are not only ensuring proper procurement of vendor contracts, but also analysts capable of doing some cross-validation and review of results. This is also true of observation results, which we can only assume have at least the same rate of error, but for which we cannot easily “check” results without inquiring into every observation and rating process.

    There is always a level of risk involved with any complex analysis, and with a minority of participants receiving reward or punishment (the outliers), special care should be taken to regularly review the validity of the data points (value-added, observation, survey, and other) that feed into a multiple-measures system. Teachers and other school-level staff should also take ownership of and deeply understand their data, both to make their best evidence-based cases during evaluation and to be empowered to provide specific, targeted feedback to these systems.
