I got a number of notes regarding yesterday’s post, mostly either dinging me for my concerns about value-added systems or asking how I can raise such concerns and still write, “Value-added does tell us something useful and I’m in favor of integrating it into evaluation and pay decisions, accordingly.” Let me clarify. I think that two things are both true:
First, teachers vary widely in ability and performance, and many people teaching today probably shouldn’t be.
Second, teaching is complex, and no simple score or algorithm usefully captures that variation in ability and performance, or reveals which teachers shouldn’t be teaching.
(Oh, and a third relevant premise is that teacher education programs and school districts generally do a mediocre job of preparing educators and a pretty awful job of screening out lousy educators.)
Together, these premises argue for systems that aim to evaluate, recognize, and remove teachers based on performance, but that do so while respecting the bluntness of various measures. Today’s value-added metrics may be, as I wrote, “at best, a pale measure of teacher quality,” but they tell us something. Structured observation tells us something. Peer feedback tells us something, as do blinded, forced-rank evaluations by peers. Principal judgment, especially in a world of increasing accountability and transparency, tells us something. Well-run firms and nonprofits use these kinds of tools, in various ways, depending on their culture and workforce.
This is why I believe value-added metrics should be one useful component, but also why “I worry when it becomes the foundation upon which everything else is constructed.” My quarrel is not with value-added, but with the assumption that we can and should gauge the validity and utility of all other measures against today’s math and ELA value-added results.
Now, don’t give me too much credit. I trust that few RHSU readers will mistake my concerns for squeamishness or kind-heartedness. Any evaluation system will entail some misidentification. Some individuals will be unfairly terminated. That’s the way of the world, and I can live with that. I’m not worried about imperfections, and I’m not holding out hope for a perfectly “fair” system; I’m just concerned about enshrining a narrow, stifling, and incomplete notion of good teaching as the benchmark. I want a system that champions and rewards a robust vision of good teaching and that doesn’t settle for a narrow, distorted conception just because that’s what econometricians can measure.
Firms and nonprofits use a variety of evaluative tools that identify a nontrivial number of employees as low-performing. That’s kind of the point of the exercise. Especially in education, where I fear there is remarkably little front-end quality control, it is entirely appropriate for any system of evaluation to routinely identify teachers as low-performing and to remediate or terminate them. The mistake is imagining that we can or should do this almost entirely through reliance on value-added or its proxies.
-Frederick Hess