The Test of Time


A Nation at Risk was a historic document for its time. Now we know that while its findings were dead-on, its reform agenda relied too much on the existing system.



By DIANE RAVITCH


Spring 2003 / Vol. 3, No. 2

Illustration by Craig Frazier.


With the perspective of two decades, it is now apparent that A Nation at Risk was the most important education reform document of the 20th century. It captured the attention not only of educators and political and business leaders, but also of the general public, thus shaping the terms of the debate about schooling for a generation after its publication.

Though unique for its relative fame and influence, Risk actually followed a long tradition of “reform by commission” among American educators. Over the past century, whenever it seemed important to rouse the public or fellow members of their profession to a particular course of action, educators formed a commission, staffed it with high-powered members, and produced some sort of consensus platform. The first such commission, famously known as the Committee of Ten, released its recommendations in 1893. Over the next two decades, there would also be a Committee of Five, a Committee of Seven, and a Committee of Fifteen, but none was as renowned as the Ten. The Ten addressed the question of whether there should be one set of curricula for college-bound students and another for the great majority who did not intend to go to college. At the time, only a tiny proportion of youth ever attended high school or prepared for college. Nonetheless, the Ten called for strong academic preparation for all, on the grounds that this was the best preparation for life regardless of one’s future occupation.

Subsequent commissions were often formed in direct opposition to the Ten’s recommendations. For instance, the most significant report of the early 1900s, known as the Cardinal Principles of Secondary Education (released in 1918), declared that academic studies should share equal status with instruction in health, vocation, “worthy home-membership,” citizenship, character, and “worthy use of leisure.”

Over the decades, however, such reports gained less and less attention as more and more of them poured forth, keyed to the latest crisis in school or society. The fact that so many of them were written by education professionals for other education professionals often made them incomprehensible to the larger public.

In Risk, by contrast, the public found a report that was written in plain English. Here was a message that noneducators understood. The public’s powerful response signaled that Risk had spoken to deeply held concerns; its calls for higher expectations and higher standards had clearly struck a chord. It reached far beyond the professionals and energized reforms that 20 years later have still not run their course.



Making a Splash

The roots of Risk began in an effort to salvage the U.S. Department of Education. During his first presidential campaign, Ronald Reagan had promised to abolish the department, which had been created in the closing months of the Carter administration. Reagan believed that the department would inevitably expand the reach of the federal government into issues that he thought should be left to state and local officials. However, Reagan’s secretary of education, Terrel Bell, didn’t agree with Reagan’s plan (nor did Reagan have the votes in Congress to get rid of the department). In his effort to demonstrate the power of the bully pulpit, Secretary Bell asked Reagan to appoint an independent commission to study the condition of American education. When the president declined to do so, Secretary Bell created the National Commission on Excellence in Education as a cabinet-level operation. The favorable attention accorded the commission’s report, which was released in April 1983, ended the debate about abolishing the department, guaranteeing its political survival.

The commission included several eminent educators: its chairman, David P. Gardner, president of the University of Utah and soon-to-be president of the University of California; Nobel laureate Glenn T. Seaborg of the University of California; Gerald Holton of Harvard University; and A. Bartlett Giamatti, president of Yale University. In its most memorable phrase, the commission warned of the American education system’s “being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people.” The commission maintained, “If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves. . . . We have, in effect, been committing an act of unthinking, unilateral educational disarmament.”

The commission argued that the nation’s future prosperity was being imperiled by recent declines in student achievement. In the industrial era, it held, an educated elite was sufficient, but in the emerging “information age,” knowledge, learning, and “skilled intelligence” were necessary for all. The commission supported its claims with data on achievement drawn from national and international sources, including the SAT, the National Assessment of Educational Progress, the College Board achievement tests, and international exams. If achievement continued to decline, it implied, other nations would overtake the American economy and leave us behind.

At the time, a few critics from the academic world derided the report’s gloomy diagnosis and war-soaked imagery. They questioned the commission’s use of data and its assertions about grade inflation and low standards (although they did not object to its recommendations for more spending). A decade later, they regarded the booming American economy and Japan’s long recession as proof that Risk had been wrong. How, after all, could the economy be so successful if it relied on workers who, as Risk had alleged, were poorly educated? Either no relationship existed between the quality of a nation’s education system and its economic success, or America’s K-12 schools were in fact good enough to produce a robust economy.

In retrospect, it seems clear that the report’s attempt to draw a straight line between the quality of the schools and the health of the economy was on shaky ground. It would be ridiculous to claim that a nation’s economic well-being is unaffected by the quality of education available to its citizens. But the connections are not as clear-cut as Risk asserted; many other factors can compensate for the failings of the formal K-12 system, such as the immigration of educated workers, opportunities for remediation, out-of-school learning opportunities, and the abundance of postsecondary institutions.



Historical Context

Risk was uniquely a document of the early 1980s. The American economy was in recession, while the economies of Japan and several other Asian nations were booming. Since the early 20th century, educators in search of new funds or programs had developed a long tradition of hitching education to whatever issue was the foremost national concern. The formula went like this: whatever the crisis, new education programs would solve it. In the early decades of the century, reformers insisted that the schools needed more vocational and industrial programs to meet the needs of American industry. The rhetoric of education-as-panacea was continued during the Depression, World War II, the atomic age, and in the wake of the Soviets’ launch of Sputnik. With new funds and pedagogical changes, educators promised, the schools would solve the crisis of the day.

The early 1980s presented, in addition to the economic crisis, a heap of public discontent about schooling that had been accumulating since the 1960s. At the time, numerous journalistic accounts were telling of schools’ abandoning many academic requirements, replacing them with frivolous, fluffy electives, like cooking for singles. Parents worried whether students were learning basic skills, especially when they saw in the newspaper allegations that high-school graduates couldn’t read their own diplomas. Many state legislatures responded by mandating “minimum competency” tests to ensure that students were able to read, write, and figure.

The revolution in the schools that led to the abandonment of many academic requirements began during the late 1960s and the early 1970s, as radical critics hammered away at the public school system for whatever faults they discerned in American society. Jonathan Kozol’s award-winning Death at an Early Age portrayed the Boston public schools as havens for sadistic, racist teachers; other critics claimed that teachers were rigid, insensitive, hostile to children, and ignorant about pedagogical innovation, among other things. The radical critics held that planned curricula, testing, textbooks, homework, and the other practices associated with traditional schooling were instruments of oppression. Their goal was child liberation, the creation of permissive environments in which there was no authority, in which children learned because they wanted to and studied what interested them most. In one of the more temperate tracts of that era, Charles Silberman insisted that the public schools were not really malign, just mindless.

Under attack from the left, educators sought to reinvent traditional schooling, trying innovations such as open education, schools “without walls,” curricula relevant to student interests, and student-designed curricula. Schools of education embraced these innovations and identified themselves with the radical attacks on traditional teacher-led schooling and public education. The ferment excited those pedagogical leaders who agreed with its direction, but it was disheartening for those teachers and parents who wanted schools and classrooms where the adults were in charge. It also played havoc with curriculum, standards, grades, and other traditional elements of schooling.

By the mid-1970s, after nearly a decade of fevered change, troubling reports had begun to emerge. In 1975 the New York Times reported that scores on the Scholastic Aptitude Test had been falling since the mid-1960s. The shock value of this information cannot be overestimated. Not only were average scores falling on both the verbal and mathematical tests, but the percentage of students scoring at high levels (over 600 and over 700) had also fallen sharply.

In response to this disheartening news, the College Board, which was responsible for the SAT, created a blue-ribbon commission to examine the causes of the decline in scores. The commission’s 1977 report, On Further Examination, was virtually a rehearsal for Risk. The panel, headed by Willard Wirtz, former secretary of labor, and including prominent educators like Harold Howe II, Ralph W. Tyler, Benjamin Bloom, and Robert L. Thorndike, concluded that most of the initial score decline, from 1963 to 1970, had been caused by changes in the composition of the pool of test-takers; that is, by increases in the number of low-scoring students who took the college-entry test. However, after 1970, scores fell even faster than before, and little of that decline was caused by the changing demography of the test-taking population (for an examination of the decline in SAT scores, see Paul Peterson, “Ticket to Nowhere,” p. 39). Most of the post-1970 decline was the result of what the panel called “pervasive changes” in schools and society.

Of the school-based culprits, the panel regarded as most significant the fact that students were taking fewer basic academic courses and more nonacademic electives; studies from Massachusetts showed that schools had been adding such courses as “Film Making,” even as course offerings in 11th-grade English and world history were being eliminated. The panel also pointed out that “less thoughtful and critical reading is now being demanded and done” and that “careful writing has apparently about gone out of style.” The panel cast blame on absenteeism, social promotion, less homework being assigned, and a general lowering of standards. Coming as they did from a blue-ribbon commission with impeccable educationist credentials, these charges set the stage for Risk only six years later.

In the late 1970s, no one suggested that criticism of the education system was motivated by partisanship or that it emanated from “enemies of the public schools.” In the closing years of the Carter administration, two presidentially appointed commissions lamented the flawed teaching of specific subject areas. In 1979 a commission created to examine foreign-language instruction in the United States concluded that “Americans’ incompetence in foreign languages is nothing short of scandalous, and it is becoming worse.” High-school enrollments in foreign-language study, it pointed out, had fallen from 24 percent of each grade in 1965 to 15 percent in the late 1970s. Only 1 of every 20 high-school students ever studied a second year of a foreign language. Colleges had ceased to require foreign-language study for admission, in response to campus revolts against requirements in the late 1960s, and high-school students had stopped taking foreign languages once it was no longer necessary for college admission. In 1980 another Carter-appointed commission lamented the condition of education in mathematics, science, and engineering; it pointed to lower standards in the schools and to the weakening of college-entrance requirements as causes.



High Expectations

These earlier studies and critiques by highly respectable, nonpartisan agencies paved the way for Risk. When the National Commission on Excellence in Education began its deliberations in 1981, the public was already reacting against the pedagogical faddism and extremism of the 1970s. Schools that had torn down the walls between classrooms were rebuilding them; schools that had been built without walls were installing them. A noisy “back to basics” movement prompted several state legislatures to adopt new testing requirements for high-school graduation.

As it set to work, the National Commission solicited papers from educators. One of the most influential was Clifford Adelman’s study of high-school transcripts from 1964 to 1981. Adelman, a researcher at the U.S. Department of Education, concluded that during this period there had been a “systematic devaluation of academic (and some vocational) courses.” Students were spending less time in academic study and more time in nonacademic courses for which they received credit toward graduation. The typical high-school curriculum was divided into three tracks: academic, vocational, and general. As graduation requirements diminished, enrollment in the general track, which was neither academic nor vocational, jumped from 12 percent in the late 1960s to 42 percent by the late 1970s. Consisting of courses like driver education, general shop, business math, remedial studies, consumer education, and home economics, the general track had become the dominant “program” in American high schools.

One of the most important notions advanced by Risk was that schools should have high expectations for all children and should expect them to complete a reasonably demanding academic curriculum. This was a radical message. In the checkered history of reform-by-commission, only the 1893 Committee of Ten had made a similarly egalitarian claim on behalf of the intellectual capacity of all children. Ninety years later, Risk stated: “All, regardless of race or class or economic status, are entitled to a fair chance and to the tools for developing their individual powers of mind and spirit to the utmost.”

Among educators, this message was translated to mean, “All children can learn.” This earnest maxim repudiated the long-established practice of separating children into different programs on the basis of their likelihood of going to college. “All children can learn” changed the rules of the game in American education; it shifted the debate from discussions about access and resources to discussion about results. It was no longer enough to provide equal facilities; it became necessary to justify programs and expenditures on the basis of whether students made genuine gains. The rhetoric and philosophy of “all children can learn” had a large impact on education issues, as it became increasingly clear that educators needed not only to set higher expectations, but also to devise methods and incentives to get almost all students to learn more and to exert greater effort. After Risk, every state and school district scrutinized its standards and curricula, changed high-school graduation requirements, and insisted that students take more courses in academic subjects.

If a report may be said to have an Achilles’ heel, then Risk’s was its thesis of educational decline. Critics could rightly charge that the report had waxed nostalgic about an imaginary golden age. They could then blast this image with counterclaims that the schools were as good as ever, that any decline was a blip, and that a golden age never existed. We now know that the drop in test scores, which was real, actually ended about the time that Risk was released. But to argue about whether there was a golden age or whether the schools were better in 1983 than in 1973 or 1963 or 1953 is pointless. The issues are the same as they were a half-century ago, when Arthur Bestor wrote in Educational Wastelands:

If we are to have improvement, we must learn to make comparisons, not with the wretchedly inadequate public schools of earlier generations, but with the very best schools, public or private, American or foreign, past or present, of which we can obtain any knowledge. . . . If some other nation designs a better military plane, our aeronautical engineers do not point smugly to the fact that our own aircraft are better than they were in 1920 or 1930 or 1940.

The challenge before us is, as it has always been, to secure equal educational opportunity. Every American child should have the same opportunities for an excellent education. All should have the same chance to maximize their potential, to contribute to the common good, and to live a full and rewarding life. The real issue always has been whether the schools are good enough to prepare students for the challenges that will confront them. For the schools of 2003 to be better than the schools of 1983 is no great feat. For the schools of 2003 to be among the best in the world is what matters most for students today and for the future of American society.

-Diane Ravitch is a research professor at New York University and a visiting fellow at the Hoover Institution, Stanford University.



