New American Schools; bullying and school violence

Summer 2002
Missing evidence?
At New American Schools, it has always been our practice to cooperate with—even to initiate—in-depth review that leads to improvement. That’s why we find it baffling that Jeffrey Mirel (“Unrequited Promise,” Research, Summer 2002) did not interview anyone at New American Schools (NAS) while researching this study.

Too often the aim of this type of research is to make a splash rather than a credible contribution to the record. What, ultimately, is the point of criticizing NAS for being a mainstream education organization? All revolutionaries want their ideas and practices to become mainstream. Otherwise, what is the point of the revolution?

The fact that NAS has attained a reputation as an advocate for public school improvement and, in some ways, has changed over the years hardly seems worthy of criticism. In fact, it is a credit to the organization that it is willing to change, based on lessons learned from research and classroom experience.

Comprehensive school reform has been identified by both Democratic and Republican administrations and Congress as a key strategy in turning around the country’s lowest performing schools, but this fact does not make NAS just like any other education group in D.C. Instead, it means that after a great deal of review, comprehensive school reform emerged as one of the country’s best hopes for public school improvement on a grand scale.

Mirel criticizes the school reform models funded by NAS for having “progressive” roots. But who cares if Mirel defines them as progressive or traditional? We care about their performance. Do they work? We know, from our own research and that of others, that the NAS-affiliated school designs do work when implemented properly. They don’t always work, and we have reported on why that is so in a number of widely distributed evaluations conducted by the RAND Corporation.

Instead of an impressionistic study of the sort offered by Mirel, the kinds of reports that would be helpful to education professionals, and ultimately to students, include evaluations of designs that track individual student performance from year to year; reports on the percentage of students reaching local and state standards; more widely disseminated studies of design implementation, so that others can benefit from lessons learned; and district-wide roadmaps for bringing comprehensive school improvement to fruition.

This is the type of research that New American Schools welcomes, has funded, and continues to perform. (For the full text of this letter, log on to www.educationnext.org.)

Mary Anne Schmitt
President and CEO
New American Schools
Washington, D.C.

Jeffrey Mirel claims that there is no research to support our America’s Choice School Design program, citing studies by the RAND Corporation and the American Institutes for Research (AIR). Both the RAND and AIR reports were done years ago. At the time they were released, the America’s Choice program was less than a year old. That is why there was no research on it.

The America’s Choice School Design network is now completing its fourth year in the field. When we started the program, we committed ourselves to making sure that the program would receive a rigorous, independent evaluation from a highly regarded research outfit. We gave the Consortium for Policy Research in Education a sizable contract to conduct both formative and summative evaluation work on America’s Choice. Its first report, issued last year, was titled, “Moving Mountains.” The title is an indication of the nature of the findings. The researchers were surprised at our progress both in implementing the program in the field and in producing such solid evidence of student achievement gains in the course of just one year.

No one should have been surprised that the AIR study showed no results for virtually all of the programs that had been in place for less than seven years. Just do the arithmetic. First, you have to design the program. Then you have to put it in place and work out the kinks. Then you have to perform a formal multiyear field test. Then you have to write it up and get it published. This process normally takes many years.

Ours had been in place for less than one year. Now we have what we and others regard as pretty impressive data, which have been publicly available for a year. The AIR study results were published in 1999 and were based on data no more recent than 1998. That was four years ago. It is very clear that Jeff Mirel made no effort whatsoever to obtain more recent data on any of these programs. That is not responsible scholarship.

Marc Tucker
National Center on Education
and the Economy
Washington, D.C.

Jeffrey Mirel responds: Mary Anne Schmitt asks two important questions: Does it matter whether the NAS reforms are revolutionary or mainstream? Does it matter if the curricula of the NAS designs are progressive or traditional? She argues rightly that such distinctions do not matter in themselves. What matters is whether they work. But there’s the rub. The founders of NAS sought to create a revolutionary education organization that would break with the “failed policies” and tired ideas of the education establishment. But members of the education establishment created most of the designs that NAS funded. Moreover, most of the winning designs were inspired by the progressive education movement. The problem with progressivism, as scholars such as Jeanne Chall and others have argued, is that reforms guided by its tenets generally have had a poor track record in improving the academic achievement of disadvantaged and minority children—precisely the children most NAS designs are serving. The RAND studies commissioned by NAS, which provide the most thorough and reliable analysis of the NAS designs, bear that out. Schmitt is right: what matters is what works.

Tucker makes a different point. He criticizes me for relying on old data, notably the 1999 AIR study of various whole-school reform designs. However, for a historical analysis of NAS, these data are quite appropriate because of what they reveal about how NAS further entrenched itself within the education mainstream.

The passage of the 1997 Obey-Porter amendment, which provided millions of federal dollars to support whole-school reform, was a pivotal development in the transformation of NAS from a revolutionary upstart to an important player inside the Beltway. As I point out, supporters of Obey-Porter consistently highlighted NAS designs as models of what they hoped the amendment would fund. Moreover, supporters of the amendment described whole-school reform as a strategy that had been “proven effective.” However, at that time there was no substantial evidence on the effectiveness of most of the NAS designs. The AIR study that Tucker criticizes me for citing corroborated that point. As he notes, the AIR study was based on 1998 data, which seems perfectly appropriate for arguing that Obey-Porter jumped the gun.

Vouchers on Trial

Missionary zeal

Joseph Viteritti (“Vouchers on Trial,” Feature, Summer 2002) writes that many supporters see vouchers as “a fulfillment of the promise articulated in Brown v. Board of Education: to make education available to all ‘on equal terms.’ ” But supporters’ fondness for using the language of civil rights cannot obscure a broader agenda to defund and, eventually, destroy public education. Earlier this year Joseph Bast, president of the pro-voucher Heartland Institute, wrote: “Pilot voucher programs for the urban poor will lead the way to statewide universal voucher plans. Soon, most government schools will be converted into private schools or simply close their doors.”

In its Zelman brief, the NAACP—the organization that waged the legal battle culminating in Brown—refuted the notion that vouchers level the playing field for students. The NAACP noted that Ohio’s operation of its education system “is inescapably inadequate to deliver on Brown’s promise.” Indeed, the voucher program serves only a handful of students, most of whom have never attended a public school. Vouchers offer nothing to the 76,000 students who attend Cleveland public schools.

In fact, by draining critical funds from public schools, the voucher program obstructs Brown’s mandate for states to educate all “on equal terms.” In the voucher program’s first five years, more than $27 million that could have gone toward reduction of class size or other reforms for the 76,000 children who attend Cleveland’s public schools was instead diverted to vouchers.

Viteritti states that vouchers show “there is an alternative to failing inner-city schools.” Yet it never occurs to him that a sensible alternative to failing public schools is improvement, not abandonment. While voucher proponents claim that public schools won’t improve without competition from vouchers, the evidence shows otherwise. Last year, public school districts in Los Angeles, Baltimore, Dallas, Minneapolis, Seattle, and several other cities raised both their reading and math scores in every grade tested—and each of these districts did so without the competition of state-funded vouchers.

Elliot Mincberg
People For the American Way Foundation
Washington, D.C.

Joseph Viteritti responds: The claim that choice depletes public school resources is false. In Cleveland, children who accept a voucher receive only $2,250 in government funding; those in public schools receive $7,746, the highest of any district in Ohio. Public schools in Cleveland actually have more money per pupil as a result of school vouchers, because they keep the money not used to pay for the voucher. If anyone is financially short-changed, it is the poor families who are exercising choice in the hope of getting a decent education for their children.

Like most school choice supporters, I support public schools whenever they work effectively. Thus, I am cheered by evidence of progress in some urban school districts and continue to support reforms that result in their better academic performance. Unfortunately, in American cities, educational excellence is the exception. Until it becomes the rule, it is only fair to allow poor students to escape inner-city schools most middle-class parents refuse to consider for their own children.

Bipartisan Schoolmates

Ready to enforce

While Siobhan Gorman (“Bipartisan Schoolmates,” Feature, Summer 2002) can be faulted for underestimating the degree to which President Bush’s education policy represents major change, she is correct to point out the obvious: Public policy matters only if it is enforced. The No Child Left Behind Act does indeed have the potential to change education in America by ushering in meaningful accountability, along with greater opportunity and choices for parents and broader flexibility for state and local decisionmakers. However, that potential will be realized only through a comprehensive and robust implementation of the law. Gorman is correct in writing, “The federal government sports a sobering track record when it comes to enforcing its education reforms.” When President Bush entered office, only 16 states were fully in compliance with the 1994 reauthorization of the Elementary and Secondary Education Act. But the landscape of education politics has changed dramatically since then.

The most obvious change is the bipartisan nature of the consensus around education reform and President Bush’s proposals. And the consensus is spilling over into the implementation of the new law as well. The education committee leaders in both the House and the Senate have repeatedly expressed a desire to be kept abreast of implementation, and they have consistently stated their intention to make sure state and local education officials do what the new law requires of them. While the more typical partisan bickering over the budget has returned, a resolute attitude on implementation remains. This can only help those of us charged with making sure No Child Left Behind becomes a reality. It makes a difference when the federal Department of Education tells a state or school district what it must do. But it sometimes makes a bigger difference when one’s own congressional delegation echoes the message.

In the months ahead, there will be moments that test just how serious the federal government is about making sure No Child Left Behind is the law of the land. Sadly, as Gorman points out, there are those who care less about educating children and more about getting by without exerting too much effort. But the message coming from the president and Congress has been loud and clear, and the need for reform and improvement is obvious and urgent.

Eugene Hickok
Undersecretary of Education
Washington, D.C.

Monster Hype

School violence

Joel Best (“Monster Hype,” Feature, Summer 2002) suggests that our study overstates the prevalence of bullying in America’s schools.

In particular, Best criticizes our choice to report that slightly less than a third of students were involved in bullying. But had we reported figures for bullies and victims separately, we would have had to say that 17 percent were victims of bullying and 19 percent were bullies. Confused readers might have added these numbers and said that 36 percent of students were involved in bullying, not understanding that 6 percent of the youth were both bullies and victims. Reporting the figures separately, as Best urges us to do, would have overstated the bullying problem.

We certainly never stated that 30 percent of youth were “subject” to bullying—rather that 30 percent were involved in bullying as bully, victim, or both. Addressing both bullies and victims (as opposed to only victims) is important because previous research has indicated that bullying is a problem not only for the victims, but for the bullies as well, with former bullies demonstrating significantly increased rates of criminal behavior in adulthood.

Tonja R. Nansel
National Institute of Child Health & Human Development
Bethesda, Maryland

Joel Best’s article was a great commentary on violence and especially bullying. What amazes me is that so many of the academics who compose these surveys actually believe the teens who fill in the circles. Have they no teenagers of their own? I remember my own teens (especially the boys) coming home from school, after they had been forced to sit in the gym and fill out these questionnaires, and laughing with their friends about all the nutty answers they gave. (“Have you ever been bullied? If so, how many times?” Answer: “Every day I get mean looks and sexual overtures.”) Given the age of the children involved, these surveys are a total waste of time and money.

Ginny Yanyar
Providence, Rhode Island

Battle tested

Dale Ballou’s “Sizing Up Test Scores” (Forum, Summer 2002) discusses a variety of limitations and uncertainties regarding value-added assessment. However, no reference is made to the published reports that have addressed these issues.

The reader is left with the impression that unexamined shortcomings and questions severely challenge value-added assessment’s positive reputation among educators and policymakers. In truth, if the issues were as unsettled as this critique implies, attempting to judge teacher effectiveness on the basis of improved student achievement would have to be considered a dubious enterprise.

In fact, these criticisms are based principally on theory, not evident deficiencies, and most have been considered by other scholars. For example, the article notes the well-known problem of error in test scores and its contribution to “noise” in gain scores. However, instead of reporting how value-added assessment has minimized this problem, the author worries that most educators won’t understand the solution.

On a related matter, the article argues that Tennessee’s value-added data show that most teachers are within an average range of effectiveness—particularly in subjects like reading. Instead of allowing for the possibility that most teachers use similar methods and are, therefore, similarly effective, Ballou assumes that they differ substantially and that value-added assessment fails to detect those differences.

But the dominance of teaching methods such as “whole language” reading instruction may be exactly why education is so hard to improve. Identifying and rewarding the few exceptions may be the key to improvement.

Ballou worries that critics will fault value-added scores on the ground that they may not fully exclude the effects of circumstances over which schools have no control. He concedes, however, that his own research shows that race, gender, and socioeconomic status have little effect on value-added measures of teacher effectiveness. The evidence should allay his fears, yet he continues to suspect a flaw.

Compared with the assessments used by teacher licensure agencies, value-added assessment is an important advancement since it is clearly linked to student achievement. Value-added assessment has been subjected to more independent and critical review than these institutionally accepted alternatives. Moreover, it is a direct measure of what policymakers want in teacher quality, not a proxy for student achievement fashioned by education’s internal stakeholders.

There is no question that value-added assessment should be subject to continuing scholarly review. What is questionable, however, is the suggestion that value-added assessment is novel, unexamined, and so fraught with uncertainties and limitations that it should not be used in personnel decisions. Fairness to teachers is an important aspect of accountability, but it must not be treated as the ultimate criterion.

J. E. Stone
East Tennessee State University
Johnson City, Tennessee

Niche market

In “Cooking the Questions” (Check the Facts, Spring 2002), Terry Moe criticizes Phi Delta Kappa and the Gallup Organization for using survey questions that are biased against vouchers. But even if the PDK/Gallup finding that only 34 percent of Americans support vouchers were correct, who cares? This would still mean that millions of families want an alternative to the public schools. If “only” 34 percent of Americans wanted to eat at restaurants outside of their neighborhoods, we wouldn’t cite opinion polls as reason enough to stop them.

Casey J. Lartigue Jr.
Cato Institute
Washington, D.C.

Dramatic effect

Regarding Andrew Rotherham’s “A New Partnership” (Forum, Spring 2002): the bold print over Figure 1 reinforces the visual message that the scores of Massachusetts 10th graders “changed dramatically” once that state’s test, the Massachusetts Comprehensive Assessment System (MCAS), acquired “high stakes.” The reality, according to the figures on the graph, is that the average English score increased by 4 percent (from 229 to 239), the math scores by less than that (from 228 to 237). Hardly a dramatic change.

Francis Schrag
University of Wisconsin
Madison, Wisconsin

The editors respond: The lowest possible scaled score on the MCAS exam in English or math is 200, the highest just 280. While some of the gain reported in the graph was influenced by changes in scaling procedures, even when corrections are introduced that take into account these changes, the size of the improvement in the average English score between 2000 and 2001 was 7 to 8 percent, not 4 percent. And the corrected math gain was even larger. One may also calculate the gain by examining the percentage of students passing the exam, a calculation unaffected by changes in scaling procedures. The share passing the English exam jumped from 66 percent to 82 percent; in math, the pass rate leaped from 55 percent to 75 percent. Since the passing percentage does not reveal what happened to all students, the graph reports average scores for the two years, a more conservative presentation of the gains than one based on passing rates.

But no matter how the results are presented, the one-year change was indeed dramatic. Boston College’s testing critic, Walter Haney, commented that these sorts of one-year changes struck him as “too large to be attributed to learning gains,” leading him to search for alternative explanations. The debate in Massachusetts has been over the source of the gains, not their size.
