From Evidence-based Programs to an Evidence-based System: Opportunities Under the Every Student Succeeds Act

Coverage of the recent enactment of the Every Student Succeeds Act (ESSA), a major rewrite of the much-maligned No Child Left Behind Act (NCLB), has rightly focused on Congress’s decision to give states greater control over key issues such as the design of school accountability systems and the certification and evaluation of teachers. How states handle their newfound authority will directly affect student and teacher experiences and determine whether ESSA turns out to be an enduring shift in education governance or merely a temporary reversal of the decades-long trend toward greater federal control.

As important over the long run, however, may be a series of provisions in the law encouraging the use of evidence to inform the kinds of decisions states are now empowered to make. ESSA is the first federal education law to define the term “evidence-based” and to distinguish between activities with “strong,” “moderate,” and “promising” support based on the strength of existing research. Crucially, for many purposes the law also treats as evidence-based a fourth category comprising activities that have a research-based rationale but lack direct empirical support—provided, that is, that they are accompanied by “ongoing efforts to examine the effects” of the activity on important student outcomes. Those six words, if taken seriously and implemented with care, hold the potential to create and provide resources to sustain a new model for decision-making within state education agencies and school districts—a model that benefits students and taxpayers and, over time, enhances our knowledge of what works in education.

To understand the value of the law’s approach to evidence, it is useful to contrast ESSA with its predecessor. NCLB also sought to make the American education system more data-driven, famously using the term “scientifically based research” some 110 times in an attempt to limit the use of federal funds to activities with proven results.[i] The law defined scientifically based research narrowly, emphasizing the need for experimental or quasi-experimental studies (and expressing a clear preference for the former). The problem with this approach was that, in many areas, there simply weren’t any studies that met the law’s criteria. Education researchers had for decades largely ignored the emergence of more rigorous methods for program evaluation and, as a result, the evidentiary cupboard was bare. In some key areas, such as early reading instruction, NCLB did encourage the adoption of programs with a proven track record. It may also have played a constructive role by highlighting the lack of evidence concerning key questions of education policy and practice. As a practical matter, however, its evidence requirements became mere words on a page.

Fifteen years later, our nation’s education research infrastructure is much improved. While investment in education research and development still pales in comparison to that in other sectors, the Institute of Education Sciences has used its limited resources to push the field decisively toward the routine use of experimental methods to study program effectiveness. Federal investment in the creation of state longitudinal data systems that track students’ achievement over time has cut the cost of conducting rigorous program evaluations. The Obama Administration’s Investing in Innovation program set an important precedent by tying the amount of funding a grantee received to the strength of the evidence supporting its effectiveness and by requiring that its work be subjected to independent evaluation. The federally funded What Works Clearinghouse reviews and compiles the available studies on a range of topics to support state and local decision-makers.

Even so, the search remains on for proven strategies to address many of our most pressing education challenges. According to the Coalition for Evidence-Based Policy, the vast majority of the education experiments that have been conducted over the past decade have yielded null effects on student outcomes. This should be no surprise. As in the pharmaceutical industry, most of the new ideas tested in education at any point in time are unlikely to work. What is needed is a way to identify the small subset of ideas that actually do.

This is what makes ESSA’s definition of what it means for an activity to be evidence-based potentially so powerful. Consistent with existing Department of Education standards, the law defines evidence as “strong” when at least one experimental study shows a statistically significant effect on student outcomes. The “moderate” and “promising” tiers require, respectively, evidence from a quasi-experimental study or from a correlational study that makes statistical corrections for selection bias. When using federal funds to pay for interventions in low-performing schools, the law requires states and school districts to include activities that meet at least the promising standard.[ii] In order to have their accountability plans approved, states will need to demonstrate to the Department’s satisfaction that they have adhered to this requirement.

Everywhere else the law is more flexible, encouraging states and school districts to adopt “evidence-based” programs under numerous funding streams but permitting them to do so by subjecting novel programs to “ongoing” evaluation. These provisions do not require that funds be spent on evidence-based activities. In each case, doing so is simply listed as an allowable use of funds allocated for a particular purpose, such as improving teacher quality, engaging families, or meeting the needs of English language learners. But the clear implication is that states may use a portion of their federal funds to pay for the ongoing evaluation of untested programs; otherwise the evaluation activities would effectively constitute an unfunded mandate.

To eliminate any uncertainty, the Department of Education should issue guidance clarifying that federal funds can be used to support evaluation activities under any program within the law that provides states the “evidence-based” option. One benchmark could come from the Investing in Innovation program, under which many grantees have devoted as much as 20 percent of their awards to required independent evaluations. Such a step would dramatically increase the resources available for evaluation activities across the education sector. The Department should also specify that, if federal funds are used, those evaluations must be rigorous enough to meet the “promising” standard, and it should provide technical assistance to states and school districts in reaching that goal. Clearly Congress did not intend for the fourth category within its definition to serve as a permanent justification for a program to be considered evidence-based, but rather hoped to increase the supply of programs with empirical support.

To be clear, with the exception of interventions in low-performing schools, ESSA’s language on evidence-based activities is permissive rather than binding. The opportunity to use federal funds for evaluation purposes will only make a difference if state officials choose to exploit it. While some may have preferred a more heavy-handed approach, the NCLB era revealed that although the federal government can make states and school districts do something, it is hard to ensure that they do it well. This same logic surely applies to research and evaluation, and many states lack either the appetite or the capacity to engage in evidence-based policymaking.

Fortunately, there are steps that entities beyond the Department of Education can take to encourage states to take advantage of the opportunities the law provides to become more evidence-based systems. The law itself directs the Regional Educational Laboratories to provide technical assistance to states engaged in evidence-based activities, a responsibility that should become their top priority. Foundations could invest in developing states’ analytic capacity and condition their other philanthropy on the routine evaluation of new and existing programs. The National Governors Association and Council of Chief State School Officers could lead the development of common standards for evidence use within state school systems and encourage their implementation. An incoming presidential administration could even seek congressional approval for competitions to reward states and school districts for collecting, using, and disseminating evidence.

As with so much in the new law, what happens as a result of ESSA’s evidence provisions will depend less on what they require of states and more on what states make of the opportunities that they create. Let’s hope—and work to ensure—that states take full advantage.

—Martin R. West

This post originally appeared as part of the Evidence Speaks series at Brookings.


Notes:
[i] Manna, P. & Petrilli, M. J. (2008). “Double Standard? ‘Scientifically Based Research’ and the No Child Left Behind Act.” In F. M. Hess, ed. When Research Matters: How Scholarship Influences Education Policy. Harvard Education Press.
[ii] The Department is also required to give priority to applicants with strong, moderate, or promising evidence within seven competitive grant programs with combined funding of $832.5 million in fiscal year 2017.
