Just last month, Mark Schneider wrapped up his six-year term as the director of the Institute of Education Sciences. At IES, he was charged with overseeing the nation’s education research efforts, including such well-known programs as the National Assessment of Educational Progress and the What Works Clearinghouse. Before assuming his role at IES, Mark was a vice president at the American Institutes for Research and commissioner of the National Center for Education Statistics, and he spent many years chairing the political science department at SUNY-Stony Brook. Having known Mark for many, many years, I was interested in his reflections on his tenure at IES. Here’s what he had to say. (This is Part One of a two-part interview, the second of which is scheduled to be published on Wednesday.)
—Rick Hess
Rick Hess: You’ve just wrapped up your tenure as director of the Institute of Education Sciences. What’s the state of American educational research?
Mark Schneider: When I was a political science professor at Stony Brook University, fights over resources often pitted the “hard sciences” against the social sciences. At faculty meetings, I would use James March’s observation that “God gave all the easy problems to the physicists.” That would lead to plenty of eye-rolling from everyone but the social science chairs and faculty. But the fact is that humans have far more agency than electrons, and so many of our most beautiful and parsimonious models crash and burn when tested against real-world data. Our work is difficult, and we need to face the fact that most of our ideas, hypotheses, and interventions will not stand up to empirical reality. Indeed, IES-supported interventions have a “hit rate” that hovers around 15 percent, meaning most things don’t work. Depressing, but that success rate is not much different from what we see in most other human-centered fields. Indeed, by some estimates, only 10 percent of clinical trials succeed.
Over its 20-year history, IES has developed and perfected a standard “business model.” I summarize it as: “Five years, five million dollars, failure,” which reflects our standard grant length, funding level, and outcome. This may sound like an indictment of how IES has supported the education research “industry,” but we need to understand the reality we face. And we need to embrace the idea that “the only failure in failure is the failure to learn.”
Hess: Given the challenges you’ve just sketched, how have you sought to shift IES efforts?
Schneider: I have tried to push IES to change its standard business model. Most notably, there are now a growing number of large learning platforms—IES’s SEERNet among them—that let researchers access large numbers of students, enable rapid-cycle experiments, and support more frequent replications. IES’s mission is to determine “what works for whom under what conditions.” That can only be achieved by making replication with different audiences a routine part of our research. Of course, other sciences have supported replications for years. Many now face a “replication crisis,” in which other research teams—or even the research team that conducted the original study—cannot verify the findings. We have not been pursuing replication work long enough to generate a replication crisis in our field, but I look forward to that day.
The field is also beginning to face the problem that even after we identify practices and programs that “work,” we do not scale those interventions by moving from, say, 200 students in an experiment to 2,000 students in a school district to 2 million students in the nation. Without paying far more attention to scaling, we will never affect enough students to achieve our goal of creating a strong democratic citizenry capable of earning family-sustaining wages.
Hess: As director, what’s been your biggest frustration with the education research community?
Schneider: Education science, like many disciplines, is overly affected by jargon and fads. We are an applied science charged by law with helping legislators, educators, parents, and students understand what we are saying and how to join us in implementing changes. But we often fail to translate our work into plain, accessible English. I have put several policies in place to make IES’s publications and reports more user-friendly—for instance, by imposing a 15-page limit on our reports. But perhaps the biggest indictment of the field is that the writing in education research reports, including IES’s own, is often amazingly poor.
“Short sentences. Strong verbs.” That’s the approach I ask people to bring to their writing, yet we rarely hit the mark. Perhaps the problem is that so many IES staff and contractors have Ph.D.s, so they think a greater number of poorly crafted reports is better than fewer tightly focused ones. As a result, our journals drip with incomprehensible jargon and big words that signify little.
Hess: What kinds of changes could help on this count?
Schneider: For several years, I have been pushing for the creation of a new center in IES focused on informed risk, high reward, and rapid-cycle research. Like the establishment of centers throughout the federal government modeled on the Defense Advanced Research Projects Agency, or DARPA, the new IES center—currently called the National Center for Advanced Development in Education, or NCADE—would breathe life into a large segment of IES’s research expenditures. The creation of NCADE would require an act of Congress.
We also need to continue updating the SEER principles. The Standards for Excellence in Education Research lay out best practices that would guide the field toward stronger work, along with principles specific to education science research. Should the next director continue to refine these, I believe our field will be far better off than it was earlier in IES’s life—and even more so when comparing the quality and rigor of today’s work with that of the pre-2002, pre-IES world.
Hess: You argue eloquently for new research centers within IES. But IES already has four of them. What makes you confident that a new one, NCADE, would fare any better? Put another way, why can’t the existing centers just do the research that you’d ask NCADE to do?
Schneider: This is the hardest question to grapple with when considering the need for NCADE. I sometimes think that if COVID hadn’t punched a two-year hole in my tenure, I might have been able to make enough changes in the culture and the practices of IES’s two existing research centers to alleviate the need for NCADE. But changes in personnel, practices, and especially culture in a federal bureaucracy, even a relatively small one such as IES, are among the most persistent challenges any reform-minded leader faces.
I can document many barriers that are just hard to overcome. It is not surprising that a time-honored solution to such inertia is to create a new bureaucracy in which modern practices and new personnel can be brought in to drive new initiatives. On an abstract level, do we really need NCADE? Maybe not. But if we want to modernize and streamline education science research, NCADE is, I believe, the best and fastest way to bring about changes the field—and the nation—needs.
Hess: You’ve made a priority of accelerating the speed at which we release federal education data and research. What’s the problem and what’s holding up efforts to do better?
Schneider: The lack of timeliness is the single most common complaint that IES fields year after year. Many of these complaints come directly from Congress. Indeed, when the Senate HELP Committee drafted a reauthorization of the Education Sciences Reform Act, the senators added to each commissioner’s responsibilities a new paragraph with the same title: “timeliness.” Our research centers, for instance, are largely focused on measuring how students grow and learn—which can take years. On top of that, we often fail to push for timely publications and product development once that research is complete.
Meanwhile, IES’s National Center for Education Statistics, or NCES, has other problems that slow down the release of its work. Officials at NCES argue that they simply don’t have enough staff to produce timely products. And maybe they don’t. But they also do not seem to understand their audience’s need for access to fast and reliable information.
Researchers like Marguerite Roza, Emily Oster, and Sean Reardon are helping to fill the gap by standing up work that NCES could and should do. The signature example is how slow NCES has been in releasing critically needed information on how schools and districts are spending money. There will always be a lag between when data are collected after a fiscal year ends and when they can be released—but the lag has often been years rather than months. I don’t think that’s a result of limited staffing. I think it’s a result of limited vision.
Part of my push for NCADE has been to break the mindset of too many IES staffers and to bring in new blood committed to identifying critical gaps in education research and pushing for more rapid solutions, including greater sensitivity to the needs of research consumers—which would almost always mean faster turnaround.
Frederick Hess is director of education policy studies at the American Enterprise Institute and an executive editor of Education Next.
This post originally appeared on Rick Hess Straight Up.