My Response to Marc Tucker’s Defense of Surpassing Shanghai

In a reply posted on the Ed Next blog that is longer than my original review of his book, Surpassing Shanghai, Marc Tucker throws quite a bit of dust in the air – more than I can address in this brief response – but one thing remains perfectly clear: Marc Tucker does not understand basic principles of research design.  The “best practices” method that is gaining popularity among more-impressionable education policy wonks and that Tucker used in Surpassing Shanghai simply cannot support causal claims about “what works.”

The fundamental problem is that “best practices” analyses lack variation in the dependent variable: they examine in detail only successful organizations or countries, so they cannot link particular practices or policies to success.  To make such a link they would need to observe that the presence or absence of those practices or policies is related to the presence or absence of success.  If they look only at successful organizations, they cannot know whether those organizations would have been less (or more) successful had they not adopted a particular policy or practice.  Nor can they rule out the possibility that others who adopt the same “best practices” do so without success.
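To make the logic concrete, here is a minimal simulation sketch of my own (nothing like it appears in Tucker’s book or in my review; the country counts and score scale are invented for illustration).  It generates a “practice” that has no effect on achievement at all, then shows that examining only the winners cannot distinguish an effective practice from a useless one, while comparing across the full range of outcomes recovers the (null) effect:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical world: 100 countries, a binary "best practice" adopted at
# random, and test scores that do NOT depend on the practice at all.
n = 100
practice = rng.integers(0, 2, size=n)   # 1 = country uses the practice
scores = rng.normal(500, 30, size=n)    # achievement, unrelated to practice

# "Best practices" method: examine only the top 10 performers.  Among the
# winners the outcome barely varies, so whatever adoption rate we observe
# there tells us nothing about whether the practice causes success.
top10 = np.argsort(scores)[-10:]
print(f"Practice adoption among top 10: {practice[top10].mean():.0%}")

# Proper method: use the full range of outcomes and compare adopters to
# non-adopters.  The estimated effect is approximately zero, as it should be.
effect = scores[practice == 1].mean() - scores[practice == 0].mean()
print(f"Estimated effect across all 100 countries: {effect:+.1f} points")
```

Whatever adoption rate turns up among the top performers, it is uninformative: with no variation in the outcome, there is nothing for the practice to explain.  Only the full-sample comparison reveals that the practice does nothing.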

But Tucker claims that he didn’t look only at successful countries because “the strategy we used was to compare the top performing countries to the United States.”  Making (mostly implicit) comparisons to the United States does not solve the problem.  Again, without considering a broad spectrum of successful and unsuccessful countries, it is impossible to attribute the superior performance of another country to any particular policy or practice.

Many things differ between the U.S. and Shanghai, Finland, Japan, Singapore, and Canada.  How can Tucker or anyone else know which of those differences caused the superior performance?  Tucker simply picks and chooses the policies and practices he favors, ignoring that his recommendations are not even universally present in the handful of successful places he examines.  And by restricting variation in the dependent variable to exclude places that perform worse than the United States, Tucker cannot discover whether lower-achieving countries are also employing the practices and policies he recommends, a finding that would debunk his claim of having found the formula for success.

I’m far from being the only one who is aware of the problems with Tucker’s method of “selection on the dependent variable.”  Virtually every introductory text on research design warns readers not to do as Tucker and other “best practices” enthusiasts do when they focus only on successful organizations or countries.  For example, Gary King, Robert Keohane, and Sidney Verba, in their classic Designing Social Inquiry, make the point emphatically:

That brings us to a basic and obvious rule: selection should allow for the possibility of at least some variation on the dependent variable. This point seems so obvious that we would think it hardly needs to be mentioned. How can we explain variations on a dependent variable if it does not vary? Unfortunately, the literature is full of work that makes just this mistake of failing to let the dependent variable vary…. The cases of extreme selection bias—where there is by design no variation on the dependent variable—are easy to deal with: avoid them! We will not learn about causal effects from them.

In my review I recommend analyses of international policies and practices done by Eric Hanushek, Ludger Woessmann, Martin West, Michael Kremer, Karthik Muralidharan, and Charles Glenn because, unlike Tucker and other “best practices” gurus, they avoid the error of selection on the dependent variable by considering the full range of outcomes, not just focusing on successful places.

Tucker is apparently unable to grasp the difference between what he does and what these reputable researchers do, as when he mistakenly declares:

Greene appears to realize that his war on “best practices” has led him to inadvertently attack the kinds of studies done by people whose policy prescriptions he prefers, like Ludger Woessmann and Eric Hanushek, who have done well-regarded statistical analyses of survey data from OECD-PISA and other sources…. So, in the end, all the methods we used meet with Jay Greene’s approval.  It is only our conclusions that are odious.

Tucker’s inability to understand the difference and his dismissal of the selection-on-the-dependent-variable criticism as “highfalutin language” are just plain embarrassing.  It’s not so much embarrassing for him, since he appears to be proud of his ignorance, as it is for the Gates Foundation that pays for his work and for the supporters of Common Core who rely on Tucker as one of its principal architects and advocates.

There is a cynical habit in the education policy world of funding and promoting analyses that people know, or should know, to be faulty, so long as those analyses advance their cause.  The purpose of my review was to shame those who engage in this cynical practice by revealing the obvious flaws in Tucker’s work.  I fear it will not end the use of “best practices” in education, but I hope it will exact a price from those who engage in such hucksterism.

-Jay Greene
