Update on the Milwaukee School Choice Evaluation Dust-Up

By Guest Blogger 04/03/2013


My post of April 1 criticizing Diane Ravitch has raised quite a stir.  In that post and in this one, I defend and explain the work of my research team but I want to be clear that, in doing so, I speak only for myself.

To briefly review, I admonished Ravitch for repeating factual errors about my team’s school voucher evaluations, for relying on secondary sources for her information, and for mischaracterizing our scientific research methodologies, which she apparently does not understand.  Kevin Welner of the National Education Policy Center (NEPC) has been especially forceful in objecting to my post in text posted on Ravitch’s blog.  Here I respond to his charges.

First, Welner argues that I owe Ravitch and NEPC an apology because the initial version of our Milwaukee Parental Choice Program (MPCP) educational attainment study was the source of one of Ravitch’s factual errors, and our error was merely repeated by the person NEPC hired to review our study.  Since Ravitch used that review to source her claim, she (and NEPC) are not responsible for the mistake.

Specifically, we are discussing the claim that 75% of the students who started in the voucher program in 9th grade were not in the program four years later.  That was an error in the initial draft of our report which Welner points out was quickly corrected to 56% in a second and final version of the report identified as “Updated and Corrected”.  Welner claims that the initial version, with the incorrect figure, was the one sent to their reviewer of our study, Casey Cobb, and that “Nobody had thought to go back and see whether Wolf or his colleagues had changed important numbers in the SCDP report.”

Welner is obviously mistaken on that last point.  Someone did think to go back and access the updated report.  Casey Cobb did.  We know this because, after mentioning the incorrect 75% figure in his executive summary and on page 2 of his review, on page 4 Cobb writes:

“Notably, more than half the students (56%) in the MPCP 9th grade sample were not in the MPCP four years later.”

Cobb could only have gotten the correct 56% figure from the updated and corrected report, which means that he knew that the 75% figure was outdated and incorrect, but he mentioned that number as well, even though it clearly conflicted with the 56% figure.  People make mistakes.  We made a mistake in the form of the initial 75% program attrition figure.  Welner made a mistake in claiming with certainty that “Nobody had thought to go back and see” whether our report had been updated.  Cobb made a mistake in failing to delete the incorrect program attrition figure from his review after he had taken the correct 56% figure from the “Updated and Corrected” version of our report.  And Welner and his colleagues made a further mistake in not catching the inconsistency between the 75% and 56% figures in Cobb’s review before they published and publicized it.  The big question is whether people correct their mistakes after they recognize them.  We did, because that’s what scholars do.  I expect that the NEPC will issue an “Updated and Corrected” version of Cobb’s review promptly.

While Casey Cobb is correcting his review of our report, he should also revise his charge on page 4 that, “Curiously, it [meaning the report] fails to state how many program-switchers there were, when they switched and in which direction, and how many graduated.”  True, we did not provide those details in the report, but we referred readers to yet another publication of ours that does.  It is even called “Going Public:  Who Leaves a Large, Longstanding, and Widely Available Urban Voucher Program?”  It was published in the prestigious American Educational Research Journal, the flagship journal of the American Educational Research Association, more than a year ago.  Its mere existence definitively refutes Diane Ravitch’s charge that “Nobody knows” what happened to the students in our study who left the voucher program.  Not only do we know, we published an entire article about it that she and her colleagues really should read.

In a sense, the dust-up over the “75% versus 56%” number and the false charge that nobody knows what happened to students who left the MPCP during our study was both avoidable and immaterial.  Obviously it could have been avoided if we hadn’t initially reported the incorrect percentage of attriters.  It also could have been avoided if Diane Ravitch had actually read our updated report or, better yet, our published article on students who left the program, before issuing the charge in her March 29 blog post.  Instead, it is obvious that she relied solely on Cobb’s review and never read our report before criticizing it.  My original point was that this is not something that serious scholars do.

The difference between the 75% and 56% figure is largely immaterial because our “intention-to-treat” analysis exclusively measures the effect of starting high school in the voucher program on future levels of educational attainment regardless of how long you stayed in the program.  Okay, let’s all say this together, “Program attrition has no effect on the internal validity of intention-to-treat analyses of program effects.”  None.  Period.  Anyone who doesn’t accept that doesn’t understand the basics of program evaluation and shouldn’t be discussing studies that employ such scientific methodologies.
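The intention-to-treat logic described above can be made concrete with a toy simulation.  This is a minimal sketch only: the baseline graduation rate, effect size, and attrition rate below are invented for illustration and are not taken from the MPCP study.

```python
import random

random.seed(42)

def simulate_itt(n=10_000, true_effect=0.05, attrition_rate=0.56):
    """Toy illustration of intention-to-treat (ITT) analysis.

    Students are randomly offered a hypothetical program.  Many who are
    offered it later leave ("attrite"), but ITT groups every student by
    the ORIGINAL offer, never by who stayed.  All rates here are made up.
    """
    baseline = 0.60  # hypothetical graduation rate with no offer
    treat_outcomes, control_outcomes = [], []
    for _ in range(n):
        offered = random.random() < 0.5            # randomized offer
        stayed = random.random() > attrition_rate  # many leavers
        # The effect accrues only to students actually exposed; leavers
        # revert to baseline, so ITT estimates a *diluted* effect.
        p = baseline + (true_effect if (offered and stayed) else 0.0)
        outcome = 1 if random.random() < p else 0
        # Key point: classification follows the initial assignment.
        (treat_outcomes if offered else control_outcomes).append(outcome)
    return (sum(treat_outcomes) / len(treat_outcomes)
            - sum(control_outcomes) / len(control_outcomes))

print(round(simulate_itt(), 3))
```

Because every student stays in the group defined by the initial assignment, attrition dilutes the estimated effect toward zero but does not bias the randomized comparison itself, which is why ITT preserves internal validity regardless of how many students leave.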

So, these are the facts:  First, 56%, and not 75%, of MPCP 9th graders left the program before the end of 12th grade.  Second, even in the face of substantial program attrition, students who were in the MPCP in 9th grade in 2006 graduated from high school, enrolled in college, and persisted in college at rates higher than similar students in Milwaukee Public Schools (MPS).  Third, at the end of the study, students who started the study in the MPCP had higher reading scores than comparable MPS students.  Fourth, the researchers carefully tracked the students who left the Milwaukee voucher program and even published an article in the top education journal about it.  Unfortunately, I worry that some people are determined to avoid acknowledging these facts.

-Patrick Wolf

Comment on this article
  • Dr. Zaeus says:

    Can’t wait to see their response. They’ll need something new to try and draw attention away from the actual empirical results.

  • CN says:

    100 people arrive at the ER to alleviate a migraine. While waiting, the migraines of 56 people subsided. Most received little to no treatment from hospital staff, yet, obviously, the hospital should be given credit for each cure since its staff intended to treat all 100 people. The result is irrefutable.

  • Kevin Welner says:

    My response, including a response from Casey Cobb, is posted on Diane Ravitch’s blog: http://dianeravitch.net/2013/04/04/kevin-welner-responds-again-to-patrick-wolf/

  • John says:

    CN with the win. Exposure to the word “voucher” has been proven to improve academic outcomes of children.

  • […] third issue involves what ITT use allows Wolf to conceal about his study.  It is all too convenient to note that neither a 75% nor a 56% attrition rate matters if one uses ITT. Even the lower of the two […]

  • JB says:

    Mercedes Schneider wrote a couple of blog posts on Wolf. She apparently has a PhD in applied statistics but was appallingly ignorant of basic research practices: her first blog post said that instead of intent-to-treat analysis Wolf should have looked only at “completers,” meaning kids who stuck with the same sector for all 4 years. When I disagreed with her in the comments, she suggested that “intent-to-treat” analysis was only for medical research. She now has a blog post purporting to explain intent-to-treat analysis, although she still seems confused about whether it is applicable outside of randomized trials, and she moreover insinuates that Wolf is unethical in failing to study the kids who left the voucher program (even after he has pointed out above that there is an entire scholarly article devoted to just that).

    What’s sad is that because Schneider appears to the average person to be an expert (as do Chris Lubienski and Julian Vasquez Heilig, both of whom liked her original post decrying intent-to-treat analysis), the typical Ravitch follower could easily be misled into thinking that actual experts have disagreed with Wolf.

  • Jack says:


    “There are questions that have STILL not been answered:

    “ ‘What was the cause of the initial incorrect 75% figure?’

    “ ‘What mistake(s) was (were) made that led to the initially inaccurate 75% figure being published?’

    “ ‘What new information or re-calculation led to the 75% figure being altered to a lower figure, one that is conveniently more favorable to those who favor and promote vouchers?’

    “ ‘Why won’t those responsible present the answers to these questions, and allow independent analysts access to all the data and methodology used so that those analysts can verify whether or not those who produced the study are telling the truth NOW, and acted in an ethical manner THEN when they made the change from 75 – to – 56%?’

    “You proffered the following lame-o excuse for the 75 – to – 56% alteration on Schneider’s blog:

    J.B.: “I have no idea about the alteration. Seems odd, but typos do happen.”

    “Seriously, J.B., after a stretch like that, you’re going to need muscle relaxants for at least a week.

    “With billions—if not trillions—of dollars at stake that would result from the proliferation of vouchers, don’t you believe that this report was proofread and re-proofread, then checked and re-checked, before it was initially released? (Or that, with so much at stake, it SHOULD have been.)


    “Please! Just “a typo”?

  • Patrick Wolf says:


    Take a chill pill. The 75% figure was an initial placeholder statistic until we completed all of our student-search activities to determine where the students were enrolled in the final year of high school. We initially neglected to update it when all the data were in, but did so quickly after realizing our mistake.

    You and other critics of our work are peddling a story that we are a bunch of voucher advocates who are biased in favor of school choice, yet we published a report with an inaccurate number that made the voucher program look like a failure. Hmmm, that just doesn’t hang together. If we were such committed voucher advocates, we would never have published the artificially high 75% attrition figure. But we did, because we thought it was the correct figure, until we knew it wasn’t, at which point we corrected it to 56%. Sounds like the behavior of conscientious scholars, not program advocates.

    Yes, I wish we had caught the mistake during proofing, but it wasn’t obvious to us that it was wrong until we did some digging. Program advocates would have seriously doubted the 75% attrition figure but we are not such people so we did not.

    The same could not be said for the Casey Cobb review of our study, which included two blatantly inconsistent program attrition figures (the incorrect one and the correct one) in the same document. How did no one catch THAT obvious error during proofing, and why have voucher critics deliberately chosen to emphasize the 75% figure when the 56% figure was right there in the Cobb review? Curious, isn’t it?

  • Ted Cook says:

    Vouchers are an interesting idea. Why do people spend so much money to go to private schools? Are they that good, or is it the need to get away from the unwashed masses of the public schools?

    Perhaps public schools promote homogenization, while vouchers promote separation. Do vouchers improve the overall average of the system? I suppose I could believe that. Do vouchers close the achievement gap? I would doubt that. Does the achievement gap need to be closed? Maybe it is impossible to standardize people’s intelligence, which by nature falls along a broad distribution. How much money would it take to close the achievement gap? If you took the bottom 5% of students and tripled per-student spending to 30K per year, would you even get passing results? Is this a fool’s errand?

  • Joan J. Strong says:

    Unfortunately this dust-up with the 56/75 number etc. diverts attention from a very crucial problem with *all* studies involving any sort of choice program: that they are all, by definition, tainted by selection bias.

    Parents who go out of their way to choose a non-default educational program for their children are *always* going to be, on average, parents who produce better outcomes because you eliminate non-engaged parents from your sample.

    Moreover, they miss the fact that the choice school’s overall effect of isolating better students will inevitably make it a better school. The best kids do better when they are in a school confined to only the best kids. This isn’t surprising.

    It doesn’t prove that privatization achieves anything overall, and it doesn’t prove the crucial point that if you moved the *entire system* to a privatized one that it would have better outcomes.

    All it proves is that segregation is great for the ones who are doing the segregating.

  • Albert says:

    Joan is correct about the selection bias that taints many existing studies of choice. However, it’s also incorrect to say that all studies are tainted like that. Using experimental design where admission into a choice school or voucher is awarded by random lottery is increasingly common. Dr. Wolf has reviewed 10 experimental studies of vouchers (http://lawreview.byu.edu/archives/2008/2/90WOLF.FIN.pdf), and there are lots of experimental studies on charter schools (http://jaypgreene.com/2012/12/17/a-guide-for-the-perplexed-a-review-of-rigorous-charter-research/)
