Is It Really Possible That Professional Development Doesn’t Work?



By Andy Smarick | 10/08/2015


TNTP’s new report, “The Mirage,” is essential reading for anyone interested in educator effectiveness. It’s smartly researched and delivers an uppercut of a conclusion: Today’s professional development doesn’t work.

There’s just one small problem. I’m not sure I believe it.

To trust its findings would mean admitting that we’ve wasted hundreds of billions of dollars. It would mean we’ve misled millions of educators and families about improving the profession. It would mean a load-bearing wall of the Race-to-the-Top and ESEA-waiver talent architecture is made of sand. All of this would be hard to swallow, but I suppose it’s possible.

But to accept and act on these findings would mean putting our full faith in today’s approach to evaluating educator effectiveness. It would mean believing generations of schools, school systems, PD providers, institutions of higher education, and parents were wrong when it comes to assessing and improving teacher performance. For me, this is a bridge too far.

The study encompassed four large school operators and surveyed thousands of educators. It used multiple measures to assess teacher effectiveness and tried to find variables that influenced whether a teacher improved (things like “growth mindset,” school culture, and access to different types of PD).

Some of the findings are staggering. The districts spent about $18,000 per teacher, per year on development. This amounts to nineteen full school days of PD annually. Despite all of this, “most teachers do not appear to improve substantially from year to year.” The average veteran teacher (ten years of experience or beyond) “has a growth rate barely above zero.”

Moreover, the researchers found no commonalities that distinguished teachers who did improve from those who didn’t. “We looked at dozens of variables…every development strategy, no matter how intensive, seems to be the equivalent of a coin flip.”

The measures of effectiveness seem to be wholly untethered to teachers’ self-assessments. Despite wide variations in assessed performance, more than 80 percent of teachers gave themselves a four or five on a five-point scale. More than 60 percent of teachers found to be low-performing rated themselves as a four or five. Among teachers whose classroom practice was found to have declined, the vast majority still said their instruction had improved.

If TNTP is right, we should be beside ourselves. We’re spending billions, most teachers aren’t seeing their performance rise, and we have no idea why improving teachers are getting better. If TNTP is right, teachers aren’t getting better like they think they are.

If TNTP is right, a major federal push seems terribly unfair. The teacher evaluation reforms encouraged by RTTT and ESEA waivers were sold with promises that they weren’t meant to punish teachers, but instead as a means to help them improve. Now we have state laws with tough consequences for teachers who persistently underperform, but we’re saying, “Oops, we actually don’t know how to help you get better.”

If TNTP is right, this would be like dystopian YA lit meets education policy—bleak as the day is long.

Maybe I’m whistling past the graveyard or just obstinately refusing to accept evidence. But it seems implausible to me that our systems of developing educators have had virtually no utility. So I want to offer an alternative hypothesis. My point is not to defend PD. It’s to question how we’re assessing educators.

To be clear, I think that for way too long, our systems for evaluating teachers were primitive, poorly implemented, too detached from student performance, and warped by policies that disincentivized critical ratings. I believe that the last several years of reforms have moved us in the right direction; I’m a supporter of new observation rubrics, student surveys, SLOs, VAMs/SGPs, and other innovative ways of triangulating teacher performance. But I also believe that we still have miles to go.

I’m of the mind that we’re still not fully or fairly articulating—at least in the policy world—what it means to be a great and improving teacher. So my inclination is to rely (probably more heavily than my reform-oriented friends) on the accumulated wisdom reflected in current practice. That means I’m skeptical when any organization, even one that I respect as much as TNTP, argues that longstanding practice is misguided.

My immediate reaction after reading “The Mirage” was to advocate for a total realignment of PD. But now I’m not so sure, because I don’t think we’re clear about the target at which it should be aimed—what it means to be that great, improving educator. My view is that we still have lots of work to do here.

For example, many of us increasingly believe that teaching “grit” is invaluable, but the leading researcher wants to pump the brakes on how it’s measured and tied to teachers. This very good piece by Peter Greene argues that since public schools emanate from communities, each community should have a say in what effective teaching in its schools looks like. Robert Pondiscio and Kate Stringer recently made a compelling case about the civic role of schools, which is seldom discussed in the context of educator evaluations. Some schools seem to be fostering social capital inside and outside their walls, but we’re still not sure how. The “Moneyball for Education” project argues there are good measures we’re not using and probably even better measures we haven’t thought of yet.

In short, I’m wondering if important elements of great teaching and continuous improvement are found in today’s PD but are not captured by our evaluation systems.

Maybe the real mirage is today’s too-confident definition of “highly effective teacher.”

– Andy Smarick

This first appeared on Flypaper.

Dan Weisberg of TNTP responds to this article here.




Comment on this article
  • Helen Hoffman says:

    As you admit, we have no good definition of a “highly effective teacher,” and trying to make that definition quantifiable is, frankly, a fool’s errand, because we can’t measure the right things. “Value added” is a joke if all it means is that some standardized scores improved. Did the students actually learn anything? Did what they learned have any real “value” beyond the resulting test score? Did the students learn to enjoy learning? Was it a bad school year because of things outside the teacher’s control? (In fact, did the teacher, despite a poor evaluation, do heroic work that was simply not recognized?)

    We don’t and we can’t know. Our instruments for evaluating teachers are flawed, ask the wrong questions, and try to quantify what can’t be quantified.

    That doesn’t make PD effective or ineffective. Maybe we are asking the wrong questions of it too. Not did it affect test and evaluation scores, but did it change something for the better?
