Can Education Polls Be Scientific? Or Is It All Interest Group Politics?
Three polls have come out within the past week: the Education Next (EdNext) poll, the Associated Press poll (about which I have commented previously), and, now, the Phi Delta Kappan (PDK) poll, published by a journal with close ties to schools of education across the country. Readers of the three polls have noticed some differences in the results and wonder why that should be in an age when polling is supposed to be scientific.
Sampling the public can be done pretty accurately by sophisticated polling firms, and all three of the just-released surveys have that in common. But even though sampling can be done in a scientific manner, question formulation in survey research is an art form. Still, there are some rules that responsible pollsters generally follow. When information is provided to respondents, it should be as neutral in tone as possible. If it is difficult to find neutral language, then the positions on both sides of the issue are set forth so that one point of view is not favored over the other. A range of response options is generally preferred over a stark dichotomy, because the offer of only two choices may force the respondents to state their position with less nuance than they prefer. In recent years, scholars have increasingly offered a neutral position rather than forcing people to say they “don’t know” when they prefer not to take sides on a topic.
I have always admired the PDK poll. For one thing, it has asked the same question about the nation’s schools and local schools every year since the 1980s—and I am making use of that poll in my forthcoming Brookings book, Teachers vs. the Public, coauthored with Michael Henderson and Martin West. On these questions, EdNext and PDK pretty much agree. EdNext finds that 21 percent of the public thinks the nation’s schools deserve an A or a B, while PDK finds the percentage is 19 percent. As for local schools, EdNext finds 49 percent thinking they deserve one of the two top grades, while PDK finds the percentage to be 53 percent. That the two polls get much the same response to quite similar questions suggests that both polls are collecting information from a representative sample of the U.S. public.
Quite apart from quality sampling, the 2013 PDK poll is informative on many issues about which EdNext did not poll. But on certain hot-button issues, PDK sometimes listens more to its heart than its head.

Judge for yourself, though. Here are some differences between the EdNext and PDK results, each with a brief interpretation of the discrepancy.
Common Core State Standards

EdNext: “As you may know, all states are currently deciding whether or not to adopt the Common Core standards in reading and math. If adopted, these standards would be used to hold the state’s schools accountable for their performance. Do you support or oppose the adoption of the Common Core standards in your state?”
PDK: “Do you believe Common Core State Standards would help make education in the United States more competitive globally, less competitive globally, or have no effect globally? (Asked only of those who have heard of the Common Core).”
Interpretation: EdNext describes the Common Core and then asks respondents whether they support the initiative. We pose questions in this way because we think many people may support a policy without necessarily knowing the code words familiar to Beltway insiders. PDK did not provide a description of the Common Core but only asked people whether they knew the code words. Only 38 percent did; PDK then asked that 38 percent what they thought. When you drop nearly two-thirds of all respondents, you no longer have a representative sample of the American public. Further, PDK asked respondents to predict the future: Would adoption of the Common Core make the U.S. more competitive globally? When questions are asked in that way, apparent support for an initiative will drop, because many people may support the idea even though they cannot predict its effects.
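The statistical cost of filtering out respondents can be made concrete with the standard margin-of-error formula for a sample proportion. A minimal sketch (the overall sample size of 1,000 is a hypothetical illustration, not a figure from either poll; only the 38 percent awareness rate comes from the PDK results):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n_full = 1000                  # hypothetical full sample
n_aware = int(n_full * 0.38)   # the 38% who had heard of the Common Core

# Worst-case (p = 0.5) margin of error, full sample vs. filtered subsample
print(round(margin_of_error(0.5, n_full), 3))   # 0.031
print(round(margin_of_error(0.5, n_aware), 3))  # 0.05
```

Even this understates the problem: a wider margin of error is a precision issue, while the deeper objection in the text is that the filtered 38 percent are no longer representative of the public, a selection bias that no sample-size formula can repair.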
In short, I believe that on this one PDK fished for the answer they wanted.
Trust and Confidence in Teachers
EdNext: “How much trust and confidence do you have in public school teachers?”
| Response | % | % |
|---|---|---|
| Complete trust and confidence | 6 | 10 |
| A lot of trust and confidence | 36 | 29 |
| Some trust and confidence | 47 | 48 |
| Little trust and confidence | 11 | 13 |
PDK: “Do you have trust and confidence in the men and women who are teaching children in the public schools?”
Interpretation: EdNext asks straightforwardly about teachers. PDK encourages a positive response to its question by talking about the “men and women who are teaching children,” using evocative words such as “children” and hinting at that famous patriotic phrase—the “men and women who serve in our armed forces.” The response categories in the two surveys are also different. By offering four possible responses, EdNext allows the respondent to provide a nuanced response to the question. When that is done, only 42 percent of the public have “a lot of” or “complete” trust and confidence in public school teachers. PDK forces people to say they do have confidence unless they have “no confidence” in teachers, a polling strategy that will increase the proportion of positive responses.
Charter Schools

EdNext: “As you may know, many states permit the formation of charter schools, which are publicly funded but are not managed by the local school board. These schools are expected to meet promised objectives, but are exempt from many state regulations. Do you support or oppose the formation of charter schools?”
Neither support nor oppose: 24 percent.
PDK: “As you may know, charter schools operate under a charter or contract that frees them from many of the state regulations imposed on public schools and permits them to operate independently. Would you support new public charter schools in your community?”
Interpretation: In the case of charter schools, both EdNext and PDK provide an objective description of charter schools before posing questions about charters. EdNext provides five response categories, including the option to stay neutral (neither support nor oppose).
Once the different construction of response categories is taken into account, the two polls do not differ significantly, provided that respondents in the neutral position would split evenly between the pro- and anti-charter camps when encouraged to take sides, as they are in the PDK poll. Splitting the neutral responses evenly in the EdNext poll produces 71 percent support and 36 percent opposed, a result that is probably within the margin of statistical error of the PDK finding.
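The “split the neutral” adjustment is simple arithmetic: allocate half of the neutral share to each side. A sketch, using hypothetical category percentages for illustration (the function, not the numbers, is the point):

```python
def split_neutral(support: float, oppose: float, neutral: float) -> tuple[float, float]:
    """Allocate the neutral share evenly between the support and oppose columns."""
    half = neutral / 2
    return support + half, oppose + half

# Hypothetical five-category result: 50% support, 26% oppose, 24% neutral
print(split_neutral(50, 26, 24))  # (62.0, 38.0)
```

Note that the adjusted totals must sum to the original support + oppose + neutral, which is a quick sanity check on any reported split.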
School Vouchers

EdNext: “A proposal has been made that would use government funds to pay the tuition of low-income students who choose to attend private schools. Would you favor or oppose this proposal?”
Neither favor nor oppose: 13 percent.
PDK: “Do you favor or oppose allowing students and parents to choose a private school to attend at public expense?”
Interpretation: EdNext finds public opinion closely divided on the issue, whereas PDK finds a better than 2:1 split against vouchers. EdNext’s voucher question uses neutral terminology, perhaps even slightly negative terminology, as it speaks of using “government funds to pay the tuition.” (In another question, EdNext uses more positive language when inquiring about universal vouchers.) As I recall, the mildly negative language quoted above was originally used by PDK itself in a poll administered several years ago. In recent years, PDK has become more aggressive in loading its wording against vouchers by using the phrase “at public expense,” a standard phrase used when castigating someone or something. (I recall someone complaining about a public servant by saying: “He traveled to Antarctica to see the penguins ‘at public expense,’ no less!”) I find a deliberate attempt to load this question in a clearly negative direction to be an unfortunate lowering of PDK standards. I hope PDK rethinks this policy in the future, or at least poses differently worded versions of the voucher question to split halves of its sample.
Teacher Evaluations

EdNext: “Do you favor or oppose basing the salaries of teachers, in part, on their students’ academic progress on state tests?”
Neither favor nor oppose: 12 percent.
PDK: “Some states require that teacher evaluations include how well a teacher’s students perform on standardized tests. Do you favor or oppose this requirement?”
Interpretation: EdNext finds a plurality favoring the use of tests in the determination of teacher salaries, while PDK finds that the public does not think test information should be used to evaluate teachers.
Very likely, the difference is due to the fact that EdNext refers to “academic progress on state tests,” while PDK refers to how well students “perform” on standardized tests. PDK’s factual assertion in this question is probably erroneous: it says “some states require that teacher evaluations include how well a teacher’s students perform.” The correct wording would have been “include how well students progress” on standardized tests. Can anyone identify a state that evaluates teachers on the basis of student performance without adjusting for prior student performance?
—Paul E. Peterson