Dodging the Questions

Somehow I expected more. When I challenged Phi Delta Kappa (PDK) and Gallup’s claim that they had discovered a “significant decline” in voucher support, I figured they would respond with detailed justifications of their procedures and findings. But they haven’t done that. Their response reads more like an exercise in public relations than a serious attempt to deal with the issues I’ve raised.

Lowell Rose and Alec Gallup, writing the official response for PDK, begin by setting out their annual data from 1993 through 2001 on support for school vouchers. As anyone can see, the data seem to show that support for vouchers has “significantly declined” in recent years. The authors note that, while I think the first of their two questions is biased, I regard the second as well worded. Yet this second item documents the same trend as the first: that voucher support has dropped considerably.

This is a useful lead-in for them because it allows them to show, visually, that their numbers really do go down. But however obvious the trend might look, it does not prove a thing. The reason we are having this debate is that the data are in dispute—and, if I am right, are not to be believed.

I do happen to think their second question is well worded. But that doesn’t mean I think the support scores it generates are valid. In fact, I think they are not valid—and I say so quite clearly in the article and give my reasons. The key problem is that PDK/Gallup changed the survey in recent years, and these internal changes—not changes in public opinion—are probably responsible for the drop-off in numbers.

Rose and Gallup ignore all this. Right from the start, they fail to deal with or even acknowledge the argument I actually make.

Shifting Measures

I began my article by noting that from the 1970s until 1991 PDK measured support for vouchers with a question that portrayed vouchers as a government-financed program allowing parents to choose among public, private, and parochial schools. By 1991 public support for vouchers had risen to 50 percent (with 39 percent opposed) on this measure—whereupon it was unceremoniously dumped by PDK in favor of a new question. This one asked, “Do you favor or oppose allowing students and parents to choose a private school to attend at public expense?” The findings from this new measure were—surprise!—astonishingly more negative. In 1991 it yielded a support level of just 26 percent; in 1993, an abysmal 24 percent.

It is only reasonable to ask, Why did PDK dump its traditional voucher item? Why did it embrace the “at public expense” item as an alternative? And, as both items are allegedly measuring support for vouchers, what accounts for their strikingly different results?

Rose and Gallup ignore the whole thing, thus providing no objective rationale to dispel the unavoidable suspicion that the “at public expense” item may have been favored precisely because it gives much lower support scores. They had a chance to provide such a rationale here, and they chose not to.

The “At Public Expense” Item

I go on in the article to give reasons why the “at public expense” item is a downwardly biased measure of voucher support. Most people are poorly informed about vouchers (and other policy issues as well) and are thus heavily influenced by the wording and ordering of survey questions. Questions must give people enough information to know what the policy issue is about, and the information must be balanced. The “at public expense” item fails on these counts.

First, the central purpose of a voucher program is to expand choices for all parents, especially those with kids currently in public schools, but the PDK item doesn’t convey this information. Instead, it cryptically asks whether parents should be allowed to send their kids to private schools at public expense—which, especially for ill-informed respondents, tends to frame vouchers (implicitly, through the images it evokes) as a special-interest program for private school parents rather than as a program of expanded choice for parents generally. Second, the phrase “at public expense” is a pejorative way of referring to government financing, likely to elicit more negative responses still.

Here Rose and Gallup have an opportunity to argue that their “at public expense” item is not negatively biased, that it is indeed an excellent measure, preferable to others PDK might have used. But they don’t do this. Instead, they begin by recounting my brief definition of the central purpose of a voucher program (which they don’t dispute) and by claiming that it is inappropriate to convey such information in a survey question: “The purpose of an opinion poll is to survey public opinion based on the information the public has at the time; it is not to educate the public.”

This claim may sound authoritative, but it actually makes no sense. If it were true, PDK would be best off simply asking its survey respondents, “Do you support school vouchers?” Period. No information about what vouchers are. No attempt to educate. No nothing. But of course PDK doesn’t do that, nor does any other polling organization. The reason is that most Americans wouldn’t know what to make of such a bare-bones question. They need some kind of information to let them know what a voucher program is so that they can get their bearings and express an opinion. In practice, whether Rose and Gallup want to admit it or not, the “at public expense” item is an attempt to provide respondents with information that tells them something about vouchers. What Rose and Gallup should be arguing is not that survey items have no business conveying basic information, but rather that the information their “at public expense” item does provide is entirely appropriate and unbiased. This is an argument they never make.

Instead they offer a lame diversion. They note that, in their 1997 poll, they tested whether a change in wording to “at government expense” would lead to different results. And it did. The responses were more positive toward vouchers (by 4 percentage points). They go on to speculate that, had they changed the wording to “at taxpayer expense,” the shift in results would have been negative. They embraced the “at public expense” item, they say, because it “represents an effort to chart a middle course and avoid bias”—which presumably justifies their choice of measures.

What kind of logic is this? There are many ways that support for vouchers might be measured and, within each, many ways that the element of government financing can be worded: “government-funded,” “publicly funded,” “with government paying all or part of the tuition,” and so on. The question is not whether “at public expense” is preferable to “at government expense” or “at taxpayer expense”—phrases that are themselves pejorative in tone. The question is whether it is preferable to all the other, more neutrally worded possibilities. Which they never even consider. So once again, Rose and Gallup avoid the real issues here. Their justification is no justification at all.

Although they don’t tell us why the “at public expense” item is a good measure, they do offer a concrete reason for continuing to ask it year after year without modification. Their answer: “to preserve the trend line the question had established.” If this were so important to them, however, why did they dump their traditional voucher item in the first place? It provided a long series of data stretching back to the early 1970s and represented the best single source of information—anywhere—on the historical development of public attitudes toward vouchers. Yet this time series was abruptly ended, and its value entirely lost, when PDK/Gallup shifted to the “at public expense” item. It’s a little hard to accept, then, that this latter item has been continued over the years in order to “preserve the trend line.” Its very adoption ended the best trend line we had.

The Second Item

Since 1995, the PDK/Gallup poll has included a second voucher item in addition to the “at public expense” item. This one, interestingly, is well worded and does convey the essence of a voucher program, asking respondents to consider a proposal that “would allow parents to send their school-age children to any public, private, or church-related school they choose.” It is a mystery why PDK thought it needed such an item, given its faith in the “at public expense” item and given its belief that no information about the purpose of vouchers (expanded choice) should be provided.

Be this as it may, we should expect this item to yield higher voucher support scores than the negatively biased “at public expense” item. And it does, consistently, year after year. Yet it doesn’t yield support scores that are as high as one might expect, given the findings of other surveys. A possible explanation, I argue, is that this item is always asked immediately after the “at public expense” item on the PDK survey—and thus after the voucher issue is already framed in a negative way. Responses to the second item, then, may be downwardly biased as well.

Rose and Gallup have nothing to say about this. Instead, they lead us into a Kafkaesque world of bureaucratic obfuscation. They say that I misunderstand the relationship between PDK and the Gallup Organization. While I talk in my article as though PDK has researchers that design surveys, the researchers and design decisions are all Gallup’s. By shifting the responsibility in this way, Rose and Gallup have a rationale for not responding to the basic points at issue and (implicitly) kicking the ball over to the Gallup Organization, which has “exclusive responsibility.” Yet Gallup does not pick up the ball and run with it. In fact, the official Gallup response, written by senior editor David Moore, is all of three paragraphs long, and has nothing to say about most of the issues.

I am not privy to inside information about how PDK and Gallup design their annual survey and cannot know who is responsible for what. But I do know this. The rejoinder by Lowell Rose and Alec Gallup was submitted to Education Next as the official response of PDK. Yet Alec Gallup is the cochairman of the Gallup Organization. His name is not only on this official PDK response; it has also been on every PDK/Gallup annual poll since 1986. And PDK and Gallup have teamed up to produce these polls every year for the past 34 years. In the final analysis, it doesn’t make any difference who is responsible for the design decisions. The decisions were made, and these authors—Gallup in particular—are in a position to know how and why the decisions were made as they were. As is David Moore.

Getting back to substance: Why is the well-worded question always asked after the “at public expense” item, and what are the likely consequences? Rose and Gallup are silent, but Moore offers a response (if ever so brief). He admits that placing the well-worded item after the “at public expense” item could downwardly bias the survey responses. But he says that the initial ordering (in 1996) was determined “as a matter of chance,” and that since then the ordering has been kept intact “to protect the integrity of the trend data they produce. Otherwise, any changes in results could be due to a change in the order of the questions, not to any real change in public opinion.” (Remember this quote, folks.)

Moore is right to say that the ordering of questions is crucial. Precisely because this is so, however, it is hard to fathom why Gallup would leave the ordering of these two survey items to chance. In 1993 the “at public expense” item (asked alone) yielded a support score of 24 percent. Just a year later, in 1994, the well-worded item (asked alone) yielded a support score of 45 percent. This is a huge difference, and it clearly shows that the “at public expense” item portrays vouchers in a far more negative light than the well-worded item does. Asking one item right after the other on the same survey, then, could well affect the way people respond to whichever item comes second. I would think, in light of this, Gallup researchers would have pondered the question-ordering issue long and hard—and considered, as well, not putting the two on the same survey, or at least asking them in different sections of the survey, far apart. The notion that they just flipped a coin, and that the coin flip happened to favor putting the negative item immediately before the well-worded item, is difficult to accept.
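To put a number on “huge,” here is a back-of-the-envelope significance check in Python. The sample sizes are hypothetical: I assume independent national samples of roughly 1,000 respondents each, a typical figure for polls of this kind, since the actual PDK sample sizes are not reported here.

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 1993 "at public expense" item (24%) vs. 1994 well-worded item (45%).
# n = 1,000 per poll is an assumed, typical-national-poll figure, not PDK's.
z = two_prop_z(0.24, 1000, 0.45, 1000)
print(f"z = {z:.1f}")  # roughly z = 10: far beyond any plausible sampling noise
```

Even at half that sample size, the gap would remain several standard errors wide. Chance cannot explain it; the wording can.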

Changes in the Survey

The issues I’ve discussed thus far are important to an evaluation of the “facts” generated by PDK’s surveys. Still, they cannot explain why both the well-worded item and the “at public expense” item would register a sudden drop in voucher support over the last few years. Even if the measures are biased, the fact that their content and ordering have remained constant over the whole time period means that something else must have happened—some change—to cause scores to plummet in recent years. According to PDK, this something is that public opinion has changed: Americans are much less sympathetic toward vouchers than before. What PDK doesn’t say, however, is that the survey itself was changed—and in ways that almost surely caused their findings on voucher support to shift downward.

Here again, the issue is one of question ordering. Before the 2000 survey, the two voucher items were immediately preceded on the survey by items asking respondents to grade the public schools from A to F. But in 2000, PDK injected five additional items between the grading items and the two voucher items. These new items dramatically changed the lead-in to the voucher items in several ways. Most important, they portrayed vouchers as a stark alternative to the public school system and implied that people who support the public schools—which, of course, most people do—cannot at the same time support vouchers. In 2001 PDK compounded the confusion by changing the survey again, deleting three of the five items, but keeping the two items that portray vouchers as being opposed to public schooling.

Any competent researcher would expect such changes in question ordering to affect responses to the voucher items. It is obvious, moreover, that the effects should be negative, and that they could easily account for the “significant decline” that PDK attributes to public opinion in 2000 and 2001. I should note that they cannot account for the fact that the “at public expense” item (but not the other item) seems to begin its decline in 1999, a year before the survey was changed. But it would be a mistake to attribute much significance to this. The PDK/Gallup survey, like all surveys, is subject to sampling error; even if there were no change in public opinion whatsoever, its support scores could randomly vary by several percentage points from year to year. Indeed, the “margin of error” for this survey is plus or minus 4 percentage points. There is little basis, then, for asserting that a decline clearly began in 1999. There is a strong basis, however, for expecting a “significant decline” after that point, simply due to the survey’s change in question ordering.
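For readers who want the arithmetic, here is a minimal sketch assuming simple random sampling (a clustered design would widen these margins somewhat). Note that a margin of plus or minus 4 points at the conventional 95 percent level corresponds to an effective sample of only about 600 respondents at a 50 percent support level; the sample sizes below are illustrative, not the survey’s actual figures.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Margins implied by two illustrative sample sizes at p = 0.50.
for n in (600, 1000):
    print(f"n = {n}: +/-{margin_of_error(0.50, n):.1%}")
# n = 600:  +/-4.0%
# n = 1000: +/-3.1%
# Either way, year-to-year wobbles of a few points are consistent with pure noise.
```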

This is perhaps the key point in my entire argument, and it should have been the prime focus of the PDK and Gallup responses. But it wasn’t. After asserting early on that the Gallup Organization is “continuously alert to the possibility of ‘order bias,’” Rose and Gallup tell a story about how the new questions found their way onto the survey—but remarkably, they have nothing to say about why these items were placed immediately before the voucher items in 2000 and 2001, and do not even acknowledge that doing so threatened to bias the voucher responses.

Even more remarkable is Moore’s official response for Gallup. As I noted earlier, he explicitly acknowledges that changes in question ordering can lead to changes in results (recall the quotation I asked you to remember), and he uses this fact to explain why Gallup continued to place the “at public expense” item before the well-worded item year after year. Yet by changing the lead-in questions to these voucher items, Gallup had done exactly what Moore said should not be done. Egregiously. There can be no justification for this and, faced with a hopeless contradiction, Moore’s response is simply to ignore it. On this most pivotal of issues, he has nothing at all to say.

He does, at least, have something to say about a related issue. In my article, I argued that other surveys, including surveys by Gallup itself, do not support the “significant decline” thesis. Moore ignores the surveys carried out by other organizations, but he does make a point of rejecting my argument about Gallup, saying, “Apart from the trends in the PDK/Gallup polls, the Gallup Organization has asked no other series of voucher questions repeatedly during the 1990s.”

This sounds definitive, as Moore is the spokesman for Gallup, and Gallup must know its own surveys. Yet his statement is carefully parsed. It is true that Gallup did not ask a “series” of questions “repeatedly” over this entire time period. However, in 1996 Gallup asked the same well-worded voucher question on two separate occasions, leading in both cases to support scores of 59 percent. It asked the same question again in 2000, producing a support score of 56 percent. And it asked an almost-identical question in 2001, producing a support score of 62 percent. These data were generated by Gallup’s own surveys, just as I claimed, and they are clearly not compatible with the “significant decline” thesis. They suggest, as I think the larger body of survey evidence does, that public support for vouchers probably has not changed much over the past five years.
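The point is easy to check. Under the same hypothetical assumption of about 1,000 respondents per poll, each of these readings carries a 95 percent confidence interval of roughly plus or minus 3 points, and the intervals overlap one another; the 2001 reading, moreover, is the highest of the four.

```python
import math

def ci95(p: float, n: int) -> tuple[float, float]:
    """95% confidence interval for a sample proportion (simple random sampling)."""
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - moe, p + moe

# Gallup readings on the well-worded item; n = 1,000 per poll is an assumption.
readings = [("1996 (first)", 0.59), ("1996 (second)", 0.59),
            ("2000", 0.56), ("2001", 0.62)]
for label, p in readings:
    lo, hi = ci95(p, 1000)
    print(f"{label}: {p:.0%} (95% CI {lo:.1%} to {hi:.1%})")
# The intervals overlap, and the final reading is the highest of the four:
# nothing in this series looks like a "significant decline."
```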

Moore should have recognized the internal conflict within Gallup’s own data, seen the incompatibility as an important (and genuinely interesting) issue, and dealt with it. But he chose to avoid the whole thing.

The Gallup Experiment

In January 2001 Gallup’s own researchers conducted an experiment to see if question wording affects survey results on the voucher issue. They began with the two PDK items as models, made a few modifications (in which they dropped the “at public expense” phrase from one item, not surprisingly, but kept its focus on subsidies for going private), and presented each item to separate samples of respondents. The results: voucher support was 48 percent as measured by the modified “at public expense” item, but 62 percent (the figure I used above) as measured by the well-worded item, whose content was virtually the same as before.

This experiment is obviously of direct relevance here. It shows that there is a huge difference between the two measures when they are asked separately. It shows that Gallup’s own researchers think the phrase “at public expense” ought to be dropped. It suggests that support for vouchers may be much higher than PDK has been claiming and that something seems to be suppressing responses to the well-worded question on the PDK survey. It contradicts the claim that support for vouchers has “significantly declined.” And the entire experiment was a Gallup operation.

Yet Rose, Gallup, and Moore don’t even acknowledge that this experiment was ever carried out. Worse, they assert that there are no Gallup surveys, aside from the PDK/Gallup poll, that are of any relevance here—and that I have misled everyone. What can they possibly be thinking?

Conclusion

The issues I’ve raised are objective issues of survey methodology, having to do with the content and ordering of questions. PDK and Gallup should have responded by taking my charges one by one and dealing with them in a serious, scientific way. This is normal procedure in any area of research: researchers do studies, other researchers respond with objectively based criticisms, the former respond in turn, and through such back-and-forth exchanges the research community—and society—moves toward a better understanding of the world.

PDK and Gallup have shown little interest in participating in such a process. I am heartened that they plan to carry out a test to explore one of the issues: whether the second voucher item is biased downward by being asked right after the “at public expense” item. But this is a small step, and not really critical to our debate. It does nothing to explore the bias of the “at public expense” item itself. Most important, it does nothing to determine whether the changes they made to their survey in 2000 and 2001 biased their results.

All in all, PDK and Gallup simply haven’t dealt with the issues. Their responses here are largely vacuous. Even so, our exchange will have been worth it if PDK and Gallup design their future surveys with greater sensitivity to the need for justifiable measures and methods—and with some sense that, if they don’t, people will call them to account. That is the way all other researchers live their professional lives. PDK and Gallup should have to do the same.

—Terry Moe is a professor of political science at Stanford University and a senior fellow at the Hoover Institution.
