Cooking the Questions?
The 33rd Annual Phi Delta Kappa/Gallup Poll of the Public’s Attitudes Toward the Public Schools
By Lowell C. Rose & Alec M. Gallup
Phi Delta Kappa International, 2001.
Support for vouchers is declining. Or so we are told by Phi Delta Kappa (PDK), the nation’s premier association of education professionals, whose annual poll (conducted by Gallup) is widely accepted as the definitive measure of where Americans stand on education issues.
On the surface, it might seem that PDK is just documenting the obvious. After all, two voucher initiatives—one in California, the other in Michigan—were roundly defeated in the 2000 elections, and since then the buzz in education circles has been that vouchers have dropped in popularity.
Much of this talk, however, is part of a public relations campaign being waged by the opponents of vouchers, whose aim is to persuade policymakers to stay away from the issue. Such a campaign is to be expected from the teacher unions and their allies, because this is the way the game of politics is played. My fear, though, is that PDK is actively participating in this spin campaign and has been for years.
I cannot read the minds of PDK’s researchers, and I do not want to accuse them of such a thing. But I’m convinced that one of two conclusions is justified. Either PDK’s polls have purposely been designed to reflect negatively on the voucher issue, or its researchers have been careless in their design decisions—which I doubt.
PDK’s Key Measure
From the 1970s until 1991, PDK measured voucher support with a survey item that defined vouchers as a government-funded program allowing parents to choose among public, private, and parochial schools. After support rose to 50 percent (with 39 percent opposed) in 1991, PDK abruptly dropped this item in favor of a new one. The new question read: “Do you favor or oppose allowing students and parents to choose a private school to attend at public expense?” This question, first asked in 1993, gave results that were strikingly more negative: only 24 percent expressed support (see Figure 1). Indeed, it indicated that even private school parents were opposed to vouchers, a result no expert would be prepared to believe.
Why such different “facts”? Research has long shown that most Americans are poorly informed about public policy and don’t have well-developed views on most issues. Recent polls have shown the same for vouchers. This does not mean that Americans can’t connect the voucher issue to their own values and beliefs. But it does mean that, because they come to any survey with little information, they will be quite sensitive to information contained within the survey itself, especially to the specific wording of questions and the order in which they are asked. This information determines how the issue is “framed.” And the framing, in turn, influences which (of many possible) values and beliefs get activated in people’s minds, and thus how people respond.
All public-opinion researchers are well aware of this. And they all know that, if public opinion is to be well measured on an issue, the issue must be framed with great care. The framing should provide respondents with enough information to give them a good sense of what the issue is about. The information also needs to be balanced, so that respondents are not pushed to see the issue in a positive or a negative light.
PDK’s “at public expense” item does not even come close to meeting these basic criteria. The central purpose of a voucher program is to expand the choices available to all qualifying parents, especially those who now have kids in public schools. But the PDK item does absolutely nothing to convey this information. It says nothing about choice, nothing about public school parents’ being eligible to participate. Instead, it focuses entirely on private school parents and asks respondents whether the government ought to be subsidizing them. Vouchers are presented, in effect, as a special-interest program for an exclusive group.
This is bad enough. But PDK’s researchers compound the bias with the way they choose to inform survey respondents that vouchers are funded by the government. They could have said just that, or they could have found some other neutral way of wording it. Instead they settled on the phrase “at public expense”—which is implicitly pejorative and begs for a negative reaction.
By scientific standards, PDK’s “at public expense” question is a poor measure of voucher support. It should never have seen the light of day. Nevertheless, PDK not only adopted this item as its own, but has persisted in using it in every annual survey but one since 1993—with results that, predictably, are on the low end of what we would expect, given the results of better-worded polls on the subject.
PDK’s Second Measure
Interestingly, soon after the “at public expense” item made its appearance, PDK introduced a second question to measure support for vouchers. This one is actually informative and neutral, precisely the kind of item that should have been used all along. As such, it gives the appearance that PDK is seriously trying to get a valid measure of voucher support. It reads, “A proposal has been made that would allow parents to send their school-age children to any public, private, or church-related school they choose. For those parents choosing nonpublic schools, the government would pay all or part of the tuition. Would you favor or oppose this proposal in your state?”
Year after year, the results from this second item always show higher support for vouchers than the “at public expense” item does. In 2001, for instance, the “at public expense” item produced a support level of 34 percent, while the second item showed support to be 44 percent. In 1999 the comparable figures were 41 percent and 51 percent, and in 1996 they were 36 percent and 43 percent (see Figure 2).
These higher scores from the second support item, however, do not find their way into PDK’s press releases. The media annually turn to PDK as the official source of data on how Americans view the voucher issue, and PDK provides the lower scores from its “at public expense” question. These are the results that show up in the newspapers, on TV, and on the desks of policymakers.
For readers who want to dig deeper, the higher scores from the well-worded item can be extracted from PDK’s longer published report. But even these more believable scores must be interpreted with caution. The reason is that when the survey is actually administered to respondents, PDK always asks the well-worded question immediately after the “at public expense” question. This means that the voucher issue is already negatively framed before respondents get to the well-worded question, so its scores are also likely to be biased downward.
It doesn’t take a rocket scientist to see that there is a problem here. The effects of question ordering are well understood, and competent researchers always think carefully about where each question is placed. It is hard to believe that PDK’s researchers are somehow oblivious to all this and are suppressing the higher scores by accident.
The Gallup Experiment
In January 2001, researchers at Gallup carried out an experiment that was written up and posted on the Internet. Noting that measured support for vouchers seems to vary with question wording, they took the two PDK items as their models and submitted a version of each to separate samples of respondents. Would these items produce different levels of support, and how different would they be?
This exercise was not as enlightening as it could have been. The Gallup researchers did not retain the identical wording of each PDK item, but instead tried to improve the questions by tinkering with words they thought might be sources of bias. In the process, the phrase “at public expense” was actually eliminated from the “at public expense” item—an obvious improvement, but one that compromises the experiment.
The findings, however, are still eye-opening. The new version of the “at public expense” question asked, “Would you vote for or against a system giving parents government-funded school vouchers to pay for tuition at a private school?” This wording at least retained its focus on private school parents. The result: 48 percent in favor, 47 percent against. The second PDK item became the following: “Would you vote for or against a system giving parents the option of using government-funded school vouchers to pay for tuition at the public, private, or religious school of their choice?” This item suggests that the purpose of the program is to expand choices for a broader population of parents. The result: 62 percent in favor.
A few comments. First, PDK’s claim that Americans are turned off by vouchers is simply untrue. When the purpose of a voucher program is well conveyed, most Americans respond positively. In this case, by a very big margin. Second, if this experiment is any indication, the well-worded item on PDK’s own survey is downwardly biased, despite its reasonable wording. When it is not placed immediately after the “at public expense” question, it produces higher support scores. Third, Gallup’s well-worded question produces a support score that is 14 percentage points higher than the special-interest item’s score. The gap would likely have been bigger still if Gallup had retained the pejorative phrase “at public expense.” A reasonable measure of voucher support, fairly tested, gives much higher support scores than the “at public expense” item does (see Figure 3).
The Gallup experiment was available to PDK researchers well before they conducted their 2001 survey. One would think that, in light of this information, objective researchers would have modified their survey, or at least discussed the problems of measurement and interpretation that the Gallup experiment clearly raises. But nothing like this occurred. They designed their items as they always had, and when the results were in, they presented the “at public expense” findings to the media as hard evidence that Americans don’t like vouchers.
Other Voucher Surveys
Over the years, many organizations have carried out surveys that ask questions about vouchers. Their surveys just haven’t received as much attention as PDK’s have.
It would be nice if these studies could somehow yield a single, coherent perspective on the voucher issue, but comparing them with any precision is a tricky business. Each has its own voucher item, its own ordering of questions, and its own range of topics (which usually go well beyond education). All of these differences are likely to influence the results on voucher support. When these influences are combined with chance fluctuations due to sampling error—inherent in all surveys, regardless of how carefully they are designed—it is often impossible to tell exactly why surveys yield different results.
Even so, there is helpful information here. An important pattern in these studies is that voucher questions usually come in two types, which mirror the types we’ve been discussing. The first focuses attention on government subsidies for private school parents. The only real departure from PDK’s approach is that government funding is described in some neutral way, without the pejorative “at public expense”—a phrase that no one but PDK is inclined to use. The second type is worded to suggest that vouchers would expand choices for parents generally and that parents with children in public schools would be part of the program.
Not surprisingly, questions of the second type tend to produce much higher support scores than items of the first type do. Furthermore, they show that a majority of Americans tend to express support for the central purpose of a voucher program. The scores jump around from study to study, for all the reasons I’ve noted, and we are wise not to read too much meaning into particular findings or comparisons. On average, though, the existing studies tend to confirm—many times over—what we already know based on the PDK and Gallup results.
One issue needs addressing, however. Rather often, the survey questions used by these other organizations have been of the first type, which focus on government subsidies to private school parents. The researchers behind these surveys are presumably objective and competent. Why would they word their questions in ways that fail to convey the central purpose of a voucher program?
Again, I can’t read their minds, but here is my best guess. Sponsoring groups like CBS, NBC/Wall Street Journal, and ABC/Washington Post attempt to measure public opinion on a great variety of issues: presidential popularity, gun control, abortion, foreign policy, and many more. Education is but a small part of this, and the voucher issue is just a part of education. However well trained these researchers may be in survey methodology, they cannot be expected to have a nuanced understanding of each and every issue. As a result, they may sometimes adopt wording that seems perfectly acceptable, but that misses the mark.
In my view, that is what’s happening here. After all, the special-interest wording says something about vouchers that is quite true: a voucher program would indeed provide government subsidies to parents who go private. Moreover, this description is simple and short, properties that researchers value as they economize on survey time. So I’m not surprised that items of the first type have proved popular with general polling organizations. The only problem is that this simple, straightforward approach fails to capture the very purpose of a voucher program.
Given their diverse responsibilities, these researchers can be cut some slack. Their measures are inappropriate, but I suspect they would change them if they gained new perspective on the issue. I can’t say the same for PDK’s researchers, however. They are responsible, every year, for putting together a survey that deals entirely with education. They know the voucher issue only too well, and they have chosen to measure it in a way that is guaranteed to elicit low numbers.
Has Support for Vouchers Declined?
Now let’s return to the issue with which this essay began, the issue of whether there has been a marked drop-off in voucher support over the past few years. PDK claims to have discovered such a downturn. Here is an excerpt from its 2001 press release:
It is clear that the decade of the ’90s saw support for the use of public funds for parents and students to use in attending private and church-related schools increase, peak, and then begin what has become a significant decline. Support in this area was at 24 percent in 1993, climbed to 44 percent in 1997 and 1998, and has since dropped to the current level of 34 percent.
The figures, of course, are taken from PDK’s “at public expense” item, which is a terrible measure to begin with and consistently gives low scores. Even so, this item has been asked in identical form over time, and the fact that the findings seem to fit a pattern is at least interesting. The question is, does this pattern stand up to scrutiny?
The first point to keep in mind is that survey results are likely to fluctuate from year to year by chance alone. Annual shifts of a few percentage points in either direction may be quite meaningless, and analysts have to resist the temptation to overinterpret. Especially in a short time series, random fluctuations can sometimes look like patterns or significant events, when in fact nothing about public opinion has changed.
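The size of these chance fluctuations is easy to estimate. As a minimal sketch (the formula is the standard margin of error for a sample proportion; the sample size of 1,000 is a hypothetical figure, since the article does not report the actual sample sizes of these polls):

```python
import math

def moe(p, n, z=1.96):
    """95 percent margin of error for a sample proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical national sample of 1,000 respondents (an assumption;
# actual PDK/Gallup sample sizes are not given in this article).
n = 1000
worst_case = 100 * moe(0.5, n)  # p = 0.5 gives the largest margin
print(round(worst_case, 1))     # → 3.1 (percentage points)
```

On this rough arithmetic, a single poll reading can drift about three points in either direction by chance alone, which is why year-to-year shifts of that size should not be read as real movement in public opinion.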
Based on other surveys, there does seem to have been a real (nonrandom) increase in support from the early 1990s to the mid-1990s, as PDK claims. But beyond the mid-1990s, the evidence does not suggest any pattern at all. In particular, it does not suggest that vouchers have gone into “significant decline” in recent years.
Gallup’s own surveys are telling. They are well worded and so should give good measures of how people respond to the basic purposes of a voucher program. Yet there is no indication, by these measures, that support has budged much over the past five years. In 1996 Gallup tested support for vouchers on two separate occasions, and each time came up with support scores of 59 percent. In 2000 it surveyed opinion using the identical question, resulting in support of 56 percent. And in 2001—the experiment—it used a virtually identical question and found support to be 62 percent (see Figure 4). All this is consistent (given normal sampling error) with the notion that public opinion has not changed—and it is surely inconsistent with PDK’s claim that vouchers have gone into “significant decline” in the past few years.
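The claim that readings such as 59 percent and 56 percent are mutually consistent can be checked with the standard comparison of two independently sampled proportions. A minimal sketch, again assuming hypothetical samples of 1,000 respondents each (the actual sample sizes are not reported here):

```python
import math

def diff_moe(p1, n1, p2, n2, z=1.96):
    """95 percent margin of error for the difference between two
    proportions drawn from independent simple random samples."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return z * se

# Gallup's 1996 and 2000 readings, with assumed n = 1,000 per survey
gap = abs(0.59 - 0.56)
margin = diff_moe(0.59, 1000, 0.56, 1000)
print(gap < margin)  # → True: the 3-point gap is within sampling error
```

Under these assumptions the margin of error on the difference is roughly 4 points, so a 3-point swing between surveys is exactly the kind of movement sampling error alone would produce.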
Consider another example. In two recent NBC/Wall Street Journal polls, respondents were asked to choose between the following positions. “Position A: Government should give parents more educational choices by providing taxpayer-funded vouchers to help pay for private or religious schools. Position B: Government funding should be limited to children who attend public schools.” In 1999 the results were 47 percent in favor of vouchers, 47 percent against. But in 2000—at a time when vouchers were supposedly dropping like a stone in popularity—the results were 56 percent in favor and 38 percent against (see Figure 4).
I could provide more examples of vouchers’ increasing in popularity over the past few years. I could also find examples of vouchers’ declining somewhat in popularity. All of this, however, is quite normal and precisely what we ought to expect if the underlying reality hasn’t changed much. Due to sampling error alone, some numbers will go up and others will go down. But the fluctuations probably don’t mean much of anything.
PDK Changes the Survey
This is true of voucher surveys in general. The recent plunge in PDK’s own measures of voucher support, however, cannot be chalked up to sampling error alone. There is a concrete reason, I believe, why PDK’s results have so dramatically gone south. The reason is simply this: PDK has recently changed its survey.
Before 2000, PDK followed a format in which respondents were asked to give the public schools a grade from A to F and then were presented with the two voucher items. In 2000, however, PDK altered the survey in a way that any competent researcher would expect to be consequential. Respondents were asked the same A to F grading questions, but then—immediately before the key voucher items—they were asked five additional questions that surely had important framing effects.
The first two essentially set up a dichotomy between vouchers and the public schools. The second of them asks, “Which one of these two plans would you prefer—improving and strengthening the existing public schools, or providing vouchers for parents to use in selecting and paying for private and/or church-related schools?” PDK is thus clearly suggesting to respondents that people who support public education—as most Americans do—cannot at the same time support vouchers. From a framing standpoint, this is a killer. It is also factually incorrect. Most activists in the voucher movement are dedicated to improving the public schools, and they see vouchers as a powerful means of effecting improvement through greater choice and competition.
The next three new items are also problematic. Two focus attention on a long list of public-school ideals. The third contrasts parental choice with other “possibilities”—like rigorous academic standards and competent teachers—again giving the impression that they are alternatives to vouchers rather than (as is in fact the case) entirely complementary.
There is little doubt, in my view, that the introduction of these five new items just prior to the usual voucher items produced a more negative framing of the voucher issue and encouraged the lower support scores that became PDK’s findings in 2000. The same sort of thing happened in 2001, except in that year PDK’s researchers changed the survey again. This time, they eliminated three of the five lead-in items, and included just the two killer items: the ones that portray vouchers as antithetical to public education (see Table 1). Whether this lead-in is more or less negative in its framing than the 2000 lead-in is unclear. I suspect it is more negative, because it is so simple and forceful and avoids all the distractions of the other three items. In either event, it is surely more negative than the original framing from 1999 and before, and there can be no surprise that it again led to lower support scores.
The most reasonable conclusion, therefore, is that the “significant decline” in voucher support—loudly proclaimed by PDK and reported by the media as fact—is an artificial phenomenon of PDK’s own making. The important changes didn’t occur in public opinion. They occurred in the design of PDK’s survey—a factor that, needless to say, is under the conscious control of PDK’s own researchers.
Molding Public Opinion (Table 1)
It’s no wonder that the public’s support for vouchers was so low on Phi Delta Kappa’s most recent opinion survey. The survey asked a series of questions that framed education reform as a false trade-off between vouchers and fixing the public school system.
Survey Question #5: In order to improve public education in America, some people think the focus should be on reforming the existing public school system. Others believe the focus should be on finding an alternative to the existing public school system. Which approach do you think is preferable—reforming the existing public school system or finding an alternative to the existing public school system?
Survey Question #7: Do you favor or oppose allowing students and parents to choose a private school to attend at public expense?
Survey Question #8: A proposal has been made that would allow parents to send their school-age children to any public, private, or church-related school they choose. For those parents choosing nonpublic schools, the government would pay all or part of the tuition. Would you favor or oppose this proposal in your state?
I do not want to believe that PDK is using its survey to further its own political agenda. But what is the alternative? That PDK’s researchers have simply been careless in their design decisions? That these decisions have by sheer accident led to lower support scores for vouchers? That the most biased of these scores have unwittingly been urged on the media as good-faith evidence of American public opinion? This is a lot to swallow. It is much more reasonable to believe that PDK’s researchers are competent at their jobs and that they have not been making one mistake after another.
Whatever the explanation, one conclusion is sure. With the public relations campaign against vouchers in full swing, it is important for people who want the facts about public opinion to look askance at this most official of all education surveys. On the voucher issue, its findings are not to be believed.
Having said this, I don’t want to leave the impression that, when the truth of the matter is revealed, Americans turn out to be wild about vouchers. Granted, well-worded survey items do show majority support for the idea. This is very important. But in a population that is poorly informed about public policy, the positions people take on surveys are often soft—and complicated—and need to be evaluated with care.
This is what I have tried to do in my recent book, Schools, Vouchers, and the American Public. One of its central themes is that Americans are on both “sides” of the voucher issue at once: they like the public school system and they are positively inclined toward vouchers. In their minds (and mine), there is no reason one can’t be supportive of both, no reason vouchers can’t coexist with and promote a healthy public school system. Problems arise, however, when people are told—as they are during initiative campaigns, through massive media blitzes by the teacher unions—that vouchers will destroy the public schools. Faced with such draconian framing, and responding to the uncertainty that it creates, many people back away from vouchers and embrace the current system. They don’t want to lose what they have.
Support for vouchers, then, is a complex matter that cannot be summed up in simple survey numbers. I suspect, moreover, that this complexity may take a still different form over the next year or so, as the tragedy of September 11 generates an upswing in support for government institutions, along with an increasing aversion to change and uncertainty—all of which could show up (temporarily) in the way people respond to survey items about vouchers.
Whatever the future holds, however, the challenge for those of us who are studying this issue—regardless of which side we’re on—is to understand what Americans are really thinking. This means insisting on well-designed surveys. And on facts that can be believed.
–Terry M. Moe is a professor of political science at Stanford University and a senior fellow at the Hoover Institution.