We are big fans of using evidence to improve education. That’s why we wrote a book on it. But when it comes to promoting more and better evidence use in the field, more requirements are not necessarily better.
The U.S. Department of Education is updating the Education Department General Administrative Regulations (EDGAR), which govern how the agency awards and manages grants. While most of the proposed changes are housekeeping, ED is taking this opportunity to tuck in some substantive changes to evidence rules. Among other shifts, they would make the evidence tiers defined in the Every Student Succeeds Act applicable to all ED elementary and secondary programs, including competitive grant programs, where proposals backed by the kinds of evidence the law prefers could receive preferential treatment.
ESSA was the first federal education law to define the term “evidence-based” and to distinguish between activities based on the strength of the research supporting them. The law requires school districts to include activities that meet at least the “promising” tier when using federal funds to intervene in struggling schools. It also requires the department to prioritize proposals backed by evidence when awarding funds through seven competitive grant programs. ED’s proposed regulations would extend this logic across the agency’s full portfolio, giving the Department the option of prioritizing such proposals in any of its competitive grant programs.
The problem is that to meet the spirit, not just the letter, of the evidence requirements, state and district staff would need more time and expertise than they actually have. One of us, Carrie, served as a research director in a state education agency for over a decade and had the opportunity to observe educators’ skill in using evidence up close. Virtually every educator she worked with wanted to use evidence to inform their work but had little knowledge of how to do so.
The skills required to use evidence well are in short supply, and districts and states vary tremendously in their capacity. Some agencies have invested deeply in research staff and external partnerships. Others have the capacity to meet the requirements if they are understood as a narrow exercise in compliance: filling in the blanks of a logic model or finding one study to support their proposed work. But even this box-checking type of evidence work takes some time and expertise, and many more agencies struggle even to get this far. This means that increasing evidence requirements would magnify existing disparities in agency resources through inequitable access to competitive federal funds.
Some might argue that we could solve this problem of low and variable capacity through strong policy guidance. Since ESSA was passed in 2015, ED has attempted to do just that through its guidance on the law’s evidence requirements. That guidance is nuanced and thoughtful, yet districts and states still struggle to implement the requirements. Why would broadening the requirements to all federal grants result in a better outcome? No policy guidance, however well crafted, is going to make the proposed evidence regulations work in practice. That’s because no policy guidance can provide the real missing ingredients: time and training.
Even if we could solve all these implementation problems, practitioners would still be left to confront an evidence base in education that just isn’t built for the type of decisions they need to make. These decisions are much more granular than the typical research question: How much instructional time should I allocate to each standard in a given grade level and subject? To catch up students who are behind grade level, should I invest in double-dose math classes or intensive vacation week courses? How should I structure my teacher coaching program so that teachers get the maximum benefit?
While research can provide some insight, studies are rarely designed in a way to answer these types of questions convincingly. As a result, the evidence base on issues of concern to practitioners is thin in many areas, particularly at the higher tiers of rigor, and the settings where the available studies were conducted may limit the findings’ relevance to other places. When practitioners write grant proposals for programs regulated by EDGAR, they may be swayed by the new evidence requirements to propose projects that focus on a less relevant problem of practice but are backed by strong research. But they might have gotten better educational results by proposing a less-studied approach that addressed a more critical need or was more likely to work in their local context.
The proposed changes to EDGAR are as well intentioned as their result is predictable: more paperwork, less-equitable outcomes, and little to no increase in authentic evidence use in education settings. Fortunately, there is still time to convince the Department to change course: Public comment on the proposal is due on February 26. We have weighed in with our concerns and would encourage others in the education research and practice communities to do the same.
If we want practitioners to do evidence work well, we need to focus more on support and less on compliance. Leave EDGAR out of it.
Carrie Conaway is senior lecturer at the Harvard Graduate School of Education. Nora Gordon is professor of public policy at Georgetown University. They are the co-authors of Common-Sense Evidence: The Education Leaders’ Guide to Using Data and Research (Harvard Education Press 2020).