Authorizing Matters

As EdNext readers know, Doug Harris’s New York Times critique of Betsy DeVos set off a round-robin of blogs and tweets pitting “choice purists” against “regulators,” with the performance of charter sectors in Detroit and New Orleans at issue.

Scott Fujita greets more than 200 students of Belle Chasse Academy (2007). U.S. Navy photo by Mass Communication Specialist 1st Class Shawn D. Graham

One aspect of Jay Greene’s initial volley hasn’t received the attention it deserves. Despite Greene’s claims to the contrary, Harris determined that an external, professional evaluation of charter school applications in New Orleans predicted important aspects of school quality.

A few clicks will take you through debates over whether tests matter to long-term outcomes; whether the performance of Detroit charters is weak enough to render moot the fact that they outperform district-run schools; and whether it’s fair to compare the two places at all. In joining this debate, Greene mischaracterizes generally positive findings by Harris’s Education Research Alliance for New Orleans (ERA) about the role of my organization, the National Association of Charter School Authorizers (NACSA), in managing the Louisiana Recovery School District’s (RSD) application processes between 2008 and 2013.

Here’s the accurate version: ERA sifted through possible inputs in looking at the quality and sustainability of charters approved by the RSD. They found that NACSA’s recommendations of which schools to approve and which to deny were among the very few elements that correlated convincingly with a school’s chances of renewal.

Greene arrives at his view of this work by an odd route. While he personally doesn’t think test scores predict outcomes, others do. So, he asks “whether regulators are any good at identifying which schools will contribute to test score gains” and then says this: “The bottom line is that none of the factors used by authorizers to open or renew charter schools in New Orleans were predictive of how much test score growth these schools could produce later on.”

Here’s how Harris characterizes the ERA findings on growth in a related brief: “None of the application measures predict the value-added performance of schools, though there are signs of a positive relationship between the NACSA ratings and value-added (emphasis added). It is not surprising that our statistical confidence is weak here because value-added measures are imprecise and the NACSA ratings did not vary much among approved applications.” In other words, it’s hard to detect correlation to specific outcomes when the approved applications all scored at high levels.

Far more important, NACSA’s ratings did clearly predict schools’ chances of being renewed at the end of their first charter term—and through a renewal process that relies on Louisiana’s test-based School Performance Score (SPS) measure. The study says: “…schools were more likely to be renewed if they had a higher [SPS] (a state-determined school accountability measure based on test score levels), higher school value-added, or a higher NACSA rating (emphasis added).”

Successful renewals also signal that schools have been good stewards of public funds and have sound management and operational practices. Part of NACSA’s rigorous approval process is to probe deeply—through analysis of applications, interviews, and hearings—to see whether a proposed operator has the capacity and experience to handle these essential tasks well. The renewal record shows that these processes worked. As Harris says in the brief: “The NACSA rating is also positively related to future years of renewal, signaling that NACSA is able to discern from the application process what kinds of charter schools the state wants to maintain.”

Of course, there is an obvious limitation of any study looking only at schools that were approved: you can compare outcomes for those that make it through the application gauntlet, but you can’t look at outcomes for schools that were turned down.

Nationally, and with remarkable consistency over the past several years, large authorizers (those with portfolios of 10 or more schools) have approved roughly one-third of the applications presented to them (which hardcore choice advocates think is horribly low and even anti-charter). We don’t know the quality of the other two-thirds—the schools that never start—but we do know something about why they don’t make it. As the Research Alliance study indicates, 42 of 114 New Orleans applications (36 percent) were approved during this period, based mainly on NACSA recommendations. On average, those denied drew low ratings on finance and governance plans, while doing only slightly better on academics. Critics of this process should think hard about whether they would send their own kids to one of the schools that might have opened with such poor preparation.

In his zeal to lump charter authorizers in with other “regulators,” Greene makes some sweeping and unfounded claims. For example: “[M]ost regulators making decisions about what choice schools should be opened, expanded, or closed are not relying on rigorously identified gains in test scores—they just look primarily at the levels of test scores and call those with low scores bad.” This oversimplifies what actually happens at each of these decision points. NACSA surveys show that about 90 percent of large authorizers now use performance frameworks—either their own or those created by states—which almost always give weight to growth as well as absolute scores. Moreover, high-stakes renewal decisions are seldom made on the basis of test scores alone; they reflect financial and operational evidence, verification of compliance with the law, site visit reports, and reams of other quantitative and qualitative evidence.

So, as someone who has worked to expand quality choices for parents for the past 30 years, I have a request. Let’s stop the name-calling and litmus tests. Reasonable people should be able to agree that any form of parent choice that involves the use of public funds should have some prudent and thoughtful oversight. Within the authorizing community there is robust debate about how to do what’s needed while promoting continued growth and protecting charter autonomy. Let’s have that debate without forcing each other into opposing corners.

– Nelson Smith

Nelson Smith is Senior Advisor to the National Association of Charter School Authorizers.

 
