My Response to Jay Greene’s “Best Practices are the Worst”

Jay Greene’s review of Surpassing Shanghai in Education Next was not so much a review as a hatchet job.  Unhappy with our conclusions, he chose not to debate them but to savagely attack our goals, our methods, and me personally.

Greene derides our goal of identifying “best practices,” that is, the policies and practices that have enabled students in an increasing number of countries to surpass student achievement in the United States.  He seems to suggest that this is a fool’s errand, undertaken only by industry gurus in the business community like Tom Peters and Jim Collins.  It is obvious to him that this is a form of “quackery.”  The evidence he offers is that some of the firms Peters and Collins identified as top performers subsequently failed.

Firms rise and fall.  Only a handful of the firms in the Dow Jones Industrial Average fifty years ago are in it today, and many don’t exist any more.  But that hardly means they were not once great or that firms today have nothing to learn from other firms that are eating their lunch now in the same market they serve.

Quite the contrary.  When the Japanese attacked American manufacturers in the late 1970s, many American firms went out of business in the face of superior manufacturing methods.  Most of those that survived did so, in part, because they took their challengers seriously and studied their methods in detail.  They studied their “best practices.”  They did it with industrial benchmarking, the method we have used.  I would like Jay Greene to explain to all of us why this method, which proved so successful in helping to restore American manufacturing to its leading position in the 1980s, should be derided when it comes to restoring American education to its former world-leading status.

In our book, we point out that the research methods most valued by American researchers, those involving the random assignment of research subjects to “treatments,” cannot be used when researching entire national education systems, because it is not possible to randomly assign national populations to the national education systems of other countries.  Oh yes they can, says Greene, and he points to the work of Karthik Muralidharan and Michael Kremer.  Well, we engaged Muralidharan to accompany us on our three weeks of benchmarking research in India, and I know his work well.  He is best known for his own research in that country, in which he looks at the widespread implementation of a program to provide a form of private schooling to the children of impoverished rural farmers.  It turns out that these schools are more effective than the public schools they replace, partly because the teachers in the public schools rarely show up for work and partly because more teachers can be purchased for the same amount of money.  Interesting, but irrelevant to the argument at hand.  No one in his right mind would characterize this program as an entire national education system.  Not for the first time, Greene grossly mischaracterizes the evidence in order to make his point.

Greene not only attacks the methods used in the chapters on each country in our book, but he then goes on to announce that the conclusions drawn in the last chapter have almost nothing to do with the preceding chapters.  He offers two pieces of evidence for this outrageous assertion.

One is Kai-ming Cheng’s observation, in his chapter on the Shanghai system, that a certain number of slots in Shanghai’s key schools are set aside for students from outside those schools’ enrollment areas, who can choose such a school if they wish.  But I learned from our own benchmarking in Shanghai that those slots are sold to parents, and the poorer their children’s performance in their sending school, the more the receiving school charges.  This system was not designed to facilitate school choice, nor was it designed to improve student performance.  It was designed to enable formerly elite schools serving members of the Communist Party to stay afloat as they are decommissioned as key elite schools.  That is why I did not include it in my list of strategies in wide use in countries that are outperforming the United States.

The other piece of evidence that Greene offers for his assertion that my analysis and summary ignored the work of the chapter authors is that I ignored what they had to say about the decentralization of decision-making in these systems.  But that is not true.  What I describe is a process that many others have observed.  The top-performing countries have centralized the setting of goals, the setting of standards, and the measurement of student achievement, and have relaxed their control over the way schools choose to get their students to high standards.  Over time, as they have succeeded in raising the quality of their teaching forces, they have started to relax the degree to which they specify their standards and curriculum, moving from a bureaucratic form of accountability to a more professional form of accountability.  This whole process cannot be accurately described as either centralization or decentralization.  It is much more accurately described as a process of professionalizing the teaching force, a point that is made repeatedly in Surpassing Shanghai.

If Greene were right, and I had ignored the chapter authors’ presentation of the facts when writing my analysis and summary, you could reasonably expect that they would be, to say the least, annoyed.  But, in fact, I did what any editor and summarizer could be expected to do: I shared my draft analysis and summary of the chapters with my fellow chapter authors, who seemed, on the whole, quite satisfied that I had captured the essence of their findings.

After denouncing the “best practices” identified by the authors of Surpassing Shanghai on the basis of the methods we used, Greene appears to realize that his war on “best practices” has led him to inadvertently attack the kinds of studies done by people whose policy prescriptions he prefers, like Ludger Woessmann and Eric Hanushek, who have done well-regarded statistical analyses of survey data from OECD-PISA and other sources.  We have, by the way, a high regard for these researchers and relied on them in our own work.  So he retreats from his blanket condemnation of “best practices” study methods to exempt quantitative studies.  But, then, to my astonishment, he even announces that case studies are OK if they are “well-constructed.”  This is after directing what he takes to be withering fire at our case studies.  He mentions in particular Charles Glenn’s case studies, describing them as “well constructed,” but never explains what distinguishes “well-constructed” case studies from ours, which—apparently—are not.

So, in the end, all the methods we used meet with Jay Greene’s approval.  It is only our conclusions that are odious.  He is left clutching at a very weak reed indeed.  The problem with the best practices approach, he says at the end of his review, is that, “by avoiding variation in the dependent variable,” it prevents any scientific identification of causation.  What?  Our aim was to look at the top-performing countries to find out how they are doing it.  If we strip the highfalutin language from Greene’s assertion, he is saying that we cannot possibly figure out what is causing their top performance, because all or most of the factors we think might be causing it might be found in low-performing countries, too, and, if we haven’t looked at those countries, we have no way of knowing that.

But Jay Greene evidently did not read the introductory chapter of our book, in which we lay out our method, or the concluding chapters, in which we conduct the analysis promised in the first chapter.  The strategy we used was to compare the top-performing countries to the United States.  What we found was that the top-performing countries, as different from one another as Finland and Shanghai, Canada and Japan, share with one another a set of principles underlying their reform strategies that they do not share with the United States, while the United States is pursuing a set of strategies based on principles that are not found in the countries doing the best job of educating their students.  Greene, you will note, failed to tell his readers that.

Why?  It is not because he does not like our methods.  His colleagues are using the same methods.  It is not because there is “no variation in our dependent variable.”  There is variation in our dependent variable: we are comparing countries in which student achievement (the dependent variable) is high with one, the United States, in which it is mediocre.

It is because he does not like our results.  We found that the principles of school reform he has been advocating don’t work.  They are not being used in the countries with the top performance, and the country that has been most influenced by his message turns out to be a mediocre performer.  That is a very important finding.  And it is apparently a little difficult to take.

-Marc Tucker

Marc Tucker is the President and Chief Executive Officer of the National Center on Education and the Economy.

NB: Jay Greene has responded to this response here.
