Did the Chetty Teacher Effectiveness Study Use Data That Are No Longer Relevant?

In a two steps forward, one step back dance worthy of Vladimir Lenin himself, the New York Times properly gave front-page coverage to the breathtaking new teacher effectiveness study by Raj Chetty and his colleagues, but then allowed Michael Winerip space to give teacher unions a denial opportunity.

The Chetty study shows that over a ten-year period, the payoff for the students of a very effective teacher amounts to a total of $2.5 million in lifetime earnings. The harm done by a very ineffective teacher is of the same magnitude. So if we could replace a terrible teacher with a great one, the switch would be worth $5 million in total to the students affected, and losing a great teacher only to hire a bad one would cost the same. That is convincing evidence for those who want to limit the tenure of non-performing teachers while giving the excellent ones their just reward.
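To make the arithmetic explicit (this is simply a restatement of the figures above, not a separate calculation from the study): the students gain $2.5 million by getting a great teacher, and they avoid a $2.5 million loss by being spared a terrible one, for a total swing of $5 million from a single replacement.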

But unions want to protect teacher tenure and pay all teachers the same, regardless of effectiveness.  So denying the Chetty study is absolutely crucial.

Though he lacks the necessary econometric skills, Michael Winerip takes up the assignment, claiming that the data on teacher effectiveness, which come from student testing during the 1990s, are too old to tell us anything.

But to ascertain the impact of teaching on student earnings, which accrue much later in life, it is of course necessary to look at those educated in the 1990s. Those students have now finished high school (or not), gone to college (or not), and entered the workforce (or not). For today's students, no one has that information, for the obvious reason that they are still too young.

Aha! says Mr. Winerip. That is the fatal flaw. Back in the 1990s, when students took standardized tests, No Child Left Behind did not exist, so “whether those results are applicable to our post-2004 high-stakes world, we cannot tell.”

If we buy this argument, the data will always be too old to tell us anything. To learn what works, we have to wait twenty years, and by the time those data are available, they will be dismissed as just too old.

But is it? Why should we assume that the tests taken back in the 1990s were more accurate than the post-NCLB tests given in 2005, when both teachers and students took the later tests more seriously? Student performance is measured more accurately when students take a test seriously and when teachers make sure the students understand the testing procedures to be followed. All of that is more likely when tests count for something.

So if Chetty and his colleagues could identify large impacts of effective teaching using data from the 1990s, their successors will probably find even larger impacts from the more accurate information gathered in the first decade of the 21st century.

Of course, I cannot prove that, but it is certainly more likely than Winerip's counter-hypothesis. While he admits that the 1990s tests were accurate, he claims that today's tests no longer are. That assertion holds only if Winerip is willing to make the astounding claim that most teachers today are cheating deliberately and systematically. Otherwise, we can characterize his argument in one word: silly.

– Paul Peterson
