Shut Bad Schools for Low Performance, But Don’t Draw Conclusions from Test Scores Alone

Editor’s note: This post is the second in an ongoing discussion between Fordham’s Mike Petrilli and the University of Arkansas’s Jay Greene that seeks to answer this question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with test results? The first entry can be found here.

The prompt for this forum promised that we would explore “areas of agreement and disagreement.” I’m pleased, Jay (and not altogether surprised), to see that we share a lot of common ground. Let me start with that, then save what I see as our major dispute (what we can learn from reading and math scores) for another post.

I’m thrilled that you dismissed the extreme position of some libertarians, who argue that society should never override the choices of parents. You write:

I…do not mean to suggest that policy makers should never close a school or shutter a program in the face of parental demand. I’m just arguing that it should take a lot more than “bad” test scores to do that.

I agree entirely, on both counts. First, let me explain why “we” should, on rare occasions, close a school that some parents prefer. Second, let me discuss what else, beyond “bad” test scores, we might consider when doing so.

You and others have heard me argue ad nauseam that because education is both a public and a private good, it’s only fair that both parties to the deal have a say in whether that good is, well, good enough. We both abhor a system whereby a district monopoly assigns children to schools and parents must accept whatever is handed to them. But the flip side is that we should also reject chronically low-performing schools—those that don’t prepare their young charges for success academically or otherwise—and deem them undeserving of taxpayer support. They aren’t fulfilling their public duty to help create a well-educated and self-sufficient citizenry, which is what taxpayers are giving them money to do.

Furthermore, there are real financial and political costs to letting bad schools—including schools of choice—fester. We see this in many cities of the industrial Midwest (Detroit, Cleveland, and Dayton come to mind), where too many schools are chasing too few students. Perhaps the marketplace forces of “creative destruction” will eventually take hold and the weakest schools will disappear, allowing the remaining ones sufficient enrollment to ensure their financial sustainability and a higher level of program quality. But that process is taking an awfully long time, particularly when we’re talking about disadvantaged children who have no time to waste. The charter sectors in these cities would be stronger—academically, financially, and politically—if authorizers stepped in to close the worst schools. But some libertarians see that as paternalistic government intrusion. I think they are misguided; I hope that you agree.

Now to your second point, that “it should take a lot more than ‘bad’ test scores” to “close a school or shutter a program in the face of parental demand.” Hear, hear! This is the genius of effective charter school authorizers that look at a school’s big picture as well as its scores. Fordham’s Dayton office strives hard (and with fair success) to be that kind of authorizer. We certainly look at test scores—especially individual student progress over time (a.k.a. “value added”). But we also examine lots of other indicators of school quality, operational efficiency, and financial sustainability. (See our current accountability framework in the appendix here.) And most importantly, we know our schools intimately. We attend their board meetings, conduct site visits frequently, and get to know their teams.

So when we consider the painful step of closing a school (which we’ve had to do a handful of times), we’re hardly just sitting in our offices “looking at spreadsheets of test scores.” The same goes for other leading authorizers nationwide.

Not that it’s easy to identify measures beyond reading and math scores that are valid and reliable indicators of school success. I share your enthusiasm for character education, non-cognitive skills, high school graduation rates, and long-term outcomes such as college completion and labor market earnings. And I’d love to see states maintain regular testing in history, geography, science, and more. Whenever we can use those scores, we absolutely do. But as the early debate around the Every Student Succeeds Act illustrates, measures of character and non-cognitive skills don’t appear ready for prime time, and they may never be appropriate for high-stakes decisions. High school graduation rates, meanwhile, are perhaps the phoniest numbers in education. Long-term outcomes are just that—long-term, making them both difficult to tie to a school (especially an elementary or middle school) and not very helpful for making decisions in the here and now. And there’s no political appetite for more testing; if anything, everyone wants less. (Let me know if you disagree with my analysis.)

So where does that leave us? As far as I can tell, facing a trade-off, which is the normal state of affairs in public policy. We can either use reading and math gains as imperfect indicators of effectiveness while working to build better measures—buttressed by school visits and the like—or we can succumb to “analysis paralysis” and do nothing.

I know which one I prefer. What about you?

– Mike Petrilli

This first appeared on Flypaper.
