Defending PISA

Andreas Schleicher is Head of the Indicators and Analysis Division of the Directorate for Education at OECD.

In an article in the Fall 2009 issue of Education Next, “The International PISA Test,” Mark Schneider argues that American states ought to think twice before participating in the PISA exam and that the policy advice offered in connection with PISA is not based on solid research.

If Mark Schneider has doubts about the usefulness of PISA, he should ask whether the United States has, under his leadership, used PISA effectively. Among the G8 economies, the U.S. assessed the second-smallest number of students for PISA and collected the least contextual information, limiting the inferences that can be drawn for states and narrowing the usefulness of PISA for policy. While much of the industrialized world has opted to extend PISA toward interactive electronic tests, the U.S. has stuck to paper-and-pencil versions. And while Schneider rightly notes that longitudinal studies are needed to establish causality, countries such as Australia, Canada, and Denmark are already implementing them, tracking the students assessed in PISA to find out how their knowledge and skills shape their subsequent life opportunities.

In virtually every other federal nation, whether Canada or Mexico in North America; Belgium, Germany, Italy, Spain, Switzerland, or the United Kingdom in Europe; or Australia in the Pacific, individual states, provinces, and regions have implemented PISA successfully, are using it effectively, and have found it useful for policy formation. They recognize that the yardstick for educational success is no longer improvement against state and national standards alone, but performance relative to the best-performing education systems internationally. States, like the nations that now make up almost 90 percent of the world economy, gain a number of benefits from PISA:

* an internationally benchmarked assessment of the overall performance of their student populations in key school subjects

* indicators of the proportion of their students who reach global standards of excellence, as well as the proportion failing to reach baseline performance standards

* indicators of equity-related performance, in terms of the extent to which student and school performance is influenced by the socioeconomic context of students and schools

* indicators of coherence in educational standards, in terms of the extent to which high standards are consistently achieved throughout the system

Schneider worries that international assessments, like any assessment, embody judgments about what should be measured and that such judgments may differ from those made at the school, state, or federal levels. While the countries that have studied this empirically have generally found PISA well aligned with their national goals and standards, much of the value of international assessments lies precisely in allowing states to see their own standards through the prism of the judgments that the principal industrialized countries make collectively about what knowledge and skills matter for the success of individuals in a global economy. The decision to have PISA go beyond the reproduction of subject-matter content and examine the extent to which students can extrapolate from what they have learned and apply their knowledge in novel situations was made deliberately by the governments of the principal industrialized nations, because those capacities are at a premium in modern economies. No nation uses that perspective to replace its national curriculum, but many states and nations now use PISA to benchmark their own standards. When a nation discovers that its students are unable to do things that students in other countries can do, whether that success came from learning in school or out of school is a secondary question. The crucial question is: do our students need these skills, too, in order to succeed? If the answer is yes, then it is worth taking a serious look at state standards and assessments, improving them where these skills are covered but not learned, and including them where they are not covered.

Do international assessments provide causal evidence on what makes school systems succeed? No, they do not, and they do not claim to. But they can shed light on important features in which education systems show similarities and differences and, by making those features visible, they can help to ask the right questions. They can show what is possible in education, in terms of the quality and equity of outcomes achieved by the best-performing education systems, and they can foster better understanding of how different education systems address similar problems. They can also help gauge the pace of progress and review the reality of education delivery on the front line. Are the contextual data currently used for this perfect? Certainly not, and OECD nations therefore constantly review and refine them. The methods used for this are, however, far more robust than the ways in which Schneider's organization patches together data from U.S. states and international assessments to suggest to states that they can bypass a process of thorough international benchmarking.

And, yes, at the request of individual states and nations, the OECD does carry out peer reviews that can result in policy recommendations. Several nations have found this a useful way to translate experiences from other nations into their own policy context. However, contrary to what Schneider claims, this work is done through an entirely different organizational structure and governance arrangement within the OECD than the one that manages PISA and collects the data, and the results are collectively reviewed by OECD member nations before they are finalized and published.

The United States was the driving force behind the ideas that shaped PISA and, together with Japan, chaired the governing board of member countries that manages the program. The fact that the United States has lost much of its intellectual leadership in the field of comparative assessment may be because other countries have shown growing interest and are doing better, but it may also reflect the fact that the U.S. has reduced comparative assessments to a stale horse race. When the U.S. had to withdraw key outcomes from PISA 2006 because of technical flaws by Schneider's subcontractors, much of the rest of the world was simply bewildered.

Update: Mark Schneider has written a response to this blog entry, which can be found here.
