Time for School?

When the snow falls, test scores also drop

Students in the United States spend much less time in school than do students in most other industrialized nations, and the school year has been essentially unchanged for more than a century. This is not to say that there is no interest in extending the school year. While there has been little solid evidence that doing so will improve learning outcomes, the idea is often endorsed. U.S. Secretary of Education Arne Duncan has made clear his view that “our school day is too short, our week is too short, our year is too short.”

Researchers have recently begun to learn more about the effects of time spent on learning from natural experiments around the country. This new body of evidence, to which we have separately contributed, suggests that extending time in school would in fact likely raise student achievement. Below we review past research on this issue and then describe the new evidence and the additional insights it provides into the wisdom of increasing instructional time for American students.

We also discuss the importance of recognizing the role of instructional time, explicitly, in accountability systems. Whether or not policymakers change the length of the school year for the average American student, differences in instructional time can and do affect school performance as measured by No Child Left Behind. Ignoring this fact results in less-informative accountability systems and lost opportunities for improving learning outcomes.

Emerging Evidence

More than a century ago, William T. Harris in his 1894 Report of the Commissioner [of the U.S. Bureau of Education] lamented,

The boy of today must attend school 11.1 years in order to receive as much instruction, quantitatively, as the boy of fifty years ago received in 8 years…. It is scarcely necessary to look further than this for the explanation for the greater amount of work accomplished…in the German and French than in the American schools.

The National Education Commission on Time and Learning would echo his complaint one hundred years later. But the research summary issued by that same commission in 1994 included not one study on the impact of additional instruction on learning. Researchers at that time simply had little direct evidence to offer.

The general problem researchers confront here is that the length of the school year is a choice variable. Because longer school years require greater resources, comparing a district with a long school year to one with a shorter year has historically often amounted to comparing a rich school district to a poor one, thereby introducing many confounding factors. A further problem in the American context is that there is little recent variation in the length of the school year. Nationwide, districts generally adhere to (and seldom exceed) a school calendar of 180 instructional days. And while there was some variation in the first half of the 20th century, other policies and practices changed simultaneously, making it difficult to uncover the separate effect of changes in instructional time.

Among the first researchers to try to identify the impact of variation in instructional time were economists studying the effect of schooling on labor market outcomes such as earnings. Robert Margo in 1994 found evidence suggesting that historical differences in school-year length accounted for a large fraction of differences in earnings between black workers and white workers.

Using differences in the length of the school year across countries, researchers Jong-Wha Lee and Robert Barro reported in 2001 that more time in school improves math and science test scores. Oddly, though, their results also suggested that it lowers reading scores. In 2007, Ozkan Eren and Daniel Millimet examined the limited variation that does exist across American states and found weak evidence that longer school years improve math and reading test scores.

Work we conducted separately in 2007 and 2008 provides much stronger evidence of effects on test scores from year-to-year changes in the length of the school year due to bad weather. In a nutshell, we compared how individual Maryland and Colorado schools fared on state assessments in years with frequent snow-related cancellations to how the very same schools performed in relatively mild winters. Because the severity of winter weather is inarguably outside the control of schools, this research design addresses the concern that schools with longer school years differ from those with shorter years (see research design sidebar).

Research Design

Our studies use year-to-year variation in snowfall, or in the number of instructional days cancelled due to bad weather, to explain changes in each school’s test scores over time. We also take into account changing characteristics of schools and students, as well as trends in performance over time. The advantage of this approach is that weather is obviously outside the control of school districts and thereby provides a source of variation in instructional time that should be otherwise unrelated to school performance. Furthermore, Maryland and Colorado are ideal states in which to study weather-related cancellations. In addition to having large year-to-year fluctuations in snowfall, both states see annual snowfall that varies widely across districts: some districts are exposed to much greater variation in the severity of their winters than others, which allows us to use the remaining districts to control for common trends shared by all districts in the state. Further, because we have data from many years, we can compare students in years with many weather-related cancellations to students in the same school in previous or subsequent years with fewer cancellations. Although cancellations are eventually made up, tests in both states are administered in the spring, months before the makeup days held just prior to summer break.
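For readers who want to see the shape of this approach, the sketch below illustrates a school fixed-effects regression of the kind described. It is not the code used in our studies, and the data file and column names (pass_rate, snow_days, pct_frl, school_id, year) are hypothetical placeholders.

```python
# Minimal sketch of a school fixed-effects regression relating pass rates to
# weather-related lost days. Hypothetical data file and column names throughout.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("school_year_panel.csv")  # one row per school per year (hypothetical)

# School fixed effects absorb stable differences across schools; year effects
# absorb statewide trends shared by all districts; snow_days carries the
# variation in instructional time that is outside schools' control.
model = smf.ols(
    "pass_rate ~ snow_days + pct_frl + C(school_id) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["school_id"]})

print(model.params["snow_days"])  # estimated change in pass rate per lost day
```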

In Marcotte (2007) and Hansen (2008), we estimate that each additional inch of snow in a winter reduced the percentage of 3rd-, 5th-, and 8th-grade students who passed math assessments by between one-half and seven-tenths of a percentage point, or just under 0.0025 standard deviations. To put that seemingly small impact in context, Marcotte reports that in winters with average levels of snowfall (about 17 inches) the share of students testing proficient is about 1 to 2 percentage points lower than in winters with little to no snow. Hansen reports comparable impacts from additional days with more than four inches of snow on 8th-grade students’ performance on math tests in Colorado.

Marcotte and Steven Hemelt (2008) collected data on school closures from all but one school district in Maryland to estimate the impact on achievement. The percentage of students passing math assessments fell by about one-third to one-half a percentage point for each day school was closed, with the effect largest for students in lower grades. Hansen (2008) found effects in Maryland that are nearly identical to those reported by Marcotte and Hemelt, and larger, though statistically insignificant, results in Colorado. Hansen also took advantage of a different source of variation in instructional time in Minnesota. Utilizing the fact that the Minnesota Department of Education moved the date for its assessments each year for six years, Hansen estimated that the percentage of 3rd- and 5th-grade students with proficient scores on the math assessment increased by one-third to one-half of a percentage point for each additional day of schooling.

While our studies use data from different states and years, and employ somewhat different statistical methods, they yield very similar results on the value of additional instructional days for student performance. We estimate that an additional 10 days of instruction increases student performance on state math assessments by just under 0.2 standard deviations. In percentage-point terms, the share of students passing math assessments falls by about one-third to one-half of a percentage point for each day school is closed.
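Scaling the per-day estimate to 10 additional days is simple arithmetic, restated below using only the figures already reported; converting the result into standard deviations depends on the dispersion of school pass rates, which we do not reproduce here.

```python
# Back-of-the-envelope: per-day effect on math pass rates scaled to 10 days.
per_day_low, per_day_high = 1 / 3, 1 / 2   # percentage points per instructional day
extra_days = 10

gain_low = per_day_low * extra_days        # about 3.3 percentage points
gain_high = per_day_high * extra_days      # about 5.0 percentage points
print(f"roughly {gain_low:.1f} to {gain_high:.1f} percentage points over 10 days")
```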

Other researchers have examined impacts of instructional time on learning outcomes in other states, with similar results. For example, University of Virginia researcher Sarah Hastedt has shown that closures that eliminated 10 school days reduced math and reading performance on the Virginia Standards of Learning exams by 0.2 standard deviations, the same magnitude we estimate for the neighboring state of Maryland. Economist David Sims of Brigham Young University in 2008 took advantage of a 2001 law change in Wisconsin that required all school districts in that state to start after September 1. Because some districts were affected while others were not, he was also able to provide unusually convincing evidence on the effect of changes in the number of instructional days. He found additional instruction days to be associated with increased scores in math for 4th-grade students, though not in reading.

Collectively, this emerging body of research suggests that expanding instructional time is as effective as other commonly discussed educational interventions intended to boost learning. Figure 1 compares the magnitude of the effect of instructional days on standardized math scores to estimates drawn from other high-quality studies of the impact of changing class size, teacher quality, and retaining students in grade. The effect of additional instructional days is quite similar to that of increasing teacher quality and reducing class size. The impact of grade retention is comparable, too, though that intervention is pertinent only for low-achieving students.

Figure 1: The effect of additional instructional days on math scores, compared with estimates from studies of class-size reduction, teacher quality, and grade retention.

Although the evidence is mounting that expanding instructional time will result in real learning gains, evidence on the costs of extending the school year is much scarcer and involves a good deal of conjecture. Perhaps the best evidence comes from a recent study in Minnesota, which estimated that increasing the number of instructional days from 175 to 200 would cost close to $1,000 per student, in a state where the median per-pupil expenditure is about $9,000. The total annual cost was estimated at $750 million, an expense that proved politically and financially infeasible when the proposal was recently considered in that state. Comparing costs of expanding instructional days with the costs of other policy interventions will be an analytic and policy exercise of real importance if the call for expanded instructional time is to result in real change.
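To make the cost comparison concrete, the figures cited above can be put side by side; the implied enrollment is a rough back-calculation from the reported totals, not a number taken from the Minnesota study.

```python
# Cost comparison using the Minnesota figures cited in the text.
extra_days = 200 - 175                  # 25 additional instructional days
cost_per_student = 1_000                # dollars per student, estimated
median_per_pupil = 9_000                # dollars, current median expenditure
total_annual_cost = 750_000_000         # dollars, statewide estimate

share_more_days = extra_days / 175                         # ~14% more instructional days
share_more_spending = cost_per_student / median_per_pupil  # ~11% more spending per pupil
implied_students = total_annual_cost / cost_per_student    # ~750,000 students (rough back-out)

print(f"{share_more_days:.0%} more days for {share_more_spending:.0%} more spending; "
      f"about {implied_students:,.0f} students statewide")
```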

Complicating this analytic task are differences in costs across schools and states. Utilities, transportation, and teacher summer-labor markets vary widely across geographic areas, and all affect the cost of extending the school year. So, while the benefits of extending the school year may exceed the costs in some states or school districts, they may not in others. A further complication is the possibility of diminishing returns to additional instructional time. Our research has studied the effect of additional instructional days prior to testing, which typically occurs after approximately 120 school days; the effect of extending instructional time into the summer is unknown. Moreover, because our research has focused on variation in instructional days prior to exams, or accountable days, the effect of adding days after exams could be quite different.

Costs of extending school years are as much political as economic. Teachers have come to expect time off in the summer and have been among the most vocal opponents of extending school years in several locations. Additional compensation could likely overcome this obstacle, but how much is an unresolved and difficult question.

Teachers are not the only ones who have grown accustomed to a summer lasting from June through August. Students and families have camps, vacations, and work schedules set up around summer vacation. “Save Our Summers” movements have for years disputed the benefits of additional instructional days and extolled those of summer vacation, and the movements have grown as states have considered extending the school year and individual school districts have moved up their start dates. Longer school years might also reduce tourism and its accompanying tax revenue. These additional costs likely vary by state and district, but are clearly part of the analytic and political calculus.

Time and Accountability

As education policymakers consider lengthening the school year and face trade-offs and uncertainties, it is important to recognize that expanding instructional time offers both opportunities and hazards for another, well-established reform: the accountability movement. Educators, policymakers, parents, and economists are sure to agree that if students in one school learn content in half the time it takes comparable students at another school to learn the same content, the first school is doing a better job. How students would rank these schools is equally obvious. Yet state and federal accountability systems do not account for the time students actually spend in school when measuring gains, and so far have no way of determining how efficiently schools educate their students.

One implication of this oversight is that accountability systems are ignoring information relevant to understanding schools’ performance. Year-to-year improvements in the share of students performing well on state assessments can be accomplished by changes in school practices, or by increases in students’ exposure to school. Depending on the financial or political costs of extending school years, those with a stake in education might think differently about gains attributable to the quality of instruction provided and gains attributable to the quantity.

To see how the contributions of these inputs might be separated, consider data from Minnesota. Between 2002 and 2005, 3rd graders in that state exhibited substantial improvements in performance on math assessments, a fact clearly reflected by Minnesota’s accountability system. But during that period, there was substantial year-to-year variation in the number of instructional days students had prior to the test date. In Figure 2, we plot both the reported test scores for Minnesota 3rd graders (the solid line) and the number of days of instruction those students received (the bars). Also useful, and readily calculated, is the time series of test scores adjusted for differences in the number of instructional days (the dotted line).

Figure 2: Reported and time-adjusted math scores for Minnesota 3rd graders, and the number of instructional days prior to the test date, 2001–02 to 2004–05.

Comparing the reported and adjusted scores is useful for at least two reasons. First, it illustrates the role of time as a component of test gains. Overall, scale scores increased by 0.4 standard deviations from 2001–02 to 2004–05. Of this increase, a large portion was attributable to expansion in instructional time prior to the test date. Adjusting for the effect of instructional days, we estimate that scores increased by roughly 0.25 standard deviations, nearly 40 percent less than the reported gains.

Second, the comparatively steady gain in adjusted scores over the period provides evidence of improvements in instructional quality, independent of changes in the amount of time students were in class. The fast year-to-year increases in the first and last periods result in large part from increases in the amount of time in school, while the negligible change in overall scores between 2003 and 2004 does not pick up real gains made despite a shortened school year. Adjusted scores pick up increases in learning gains attributable to how schools used instructional time, such as through changing personnel, curricula, or leadership. The point here is that time-adjusted scores provide information that is just as important as the overall reported scores for understanding school improvements. A robust accountability system would recognize that more instructional time can be used to meet goals, but that more time is neither a perfect substitute for, nor the same thing as, better use of time.
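The adjustment itself is straightforward, as the sketch below illustrates: subtract from each year’s reported score the portion attributable to having more or fewer instructional days before the test than in a baseline year. The per-day effect and all scores shown are made-up numbers, chosen only to mirror the pattern in Figure 2.

```python
# Illustrative time adjustment of reported scores (all numbers hypothetical).
per_day_effect = 0.004   # assumed effect of one pre-test day, in standard deviations
baseline_days = 120      # assumed baseline number of instructional days before the test

reported_scores = {2002: 0.00, 2003: 0.15, 2004: 0.17, 2005: 0.40}  # SD units, illustrative
pretest_days = {2002: 115, 2003: 126, 2004: 118, 2005: 132}         # illustrative day counts

adjusted_scores = {
    year: score - per_day_effect * (pretest_days[year] - baseline_days)
    for year, score in reported_scores.items()
}
print(adjusted_scores)
```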

The Hazards of Ignoring Time

Failing to account for the role of time in student learning not only means missed opportunity, it also creates potential problems. First, it can allow districts to game accountability systems by rearranging school calendars so that students have more time in school prior to the exam, even as the overall length of the school year remains constant. Beginning in the 1990s, districts in a number of states began moving start dates earlier, with many starting just after the first of August. The question arose whether these changes might be linked to pressures on districts to improve performance on state assessments. David Sims showed that Wisconsin schools with low test scores in one year acted strategically by starting the next school year a bit earlier to raise scores. Evidence of gaming soon emerged in other states as well. Wisconsin passed its 2001 law requiring schools to begin after September 1 to prevent such gaming; similar laws were recently passed in Texas and Florida.

The motives driving earlier start dates could spill over into other instructional policies. Minnesota moved its testing regimen from February to April in the wake of accountability standards, while Colorado legislators have proposed moving their testing window from March into April, with advocates suggesting that the increased time for instruction would make meeting performance requirements under No Child Left Behind more feasible for struggling schools. While administering the test later in the year has potential benefits in measured performance, grading the tests over a shorter time frame costs more, estimated at some $3.9 million annually in Colorado. Schools thus sacrifice educational inputs (such as smaller classes or higher teacher salaries) to pay for the later test date.

A second hazard involves fairness to schools at risk of being sanctioned for poor performance: these schools can face longer odds if weather or other schedule disruptions limit school days. The impact of instructional time on learning means that one factor determining the ability of schools to meet performance goals is not under the control of administrators and teachers. We illustrate the effects of time on making adequate yearly progress (AYP) as defined by No Child Left Behind by comparing the performance of Maryland schools the law identified as underperforming to estimates of what the performance would have been had the schools been given a few more days for instruction.

We begin with data from all elementary schools in Maryland that did not make AYP in math and reading during the 2002–03 to 2004–05 school years. We adjust actual performance by the number of days lost in a given year multiplied by the marginal effect of an additional day on test performance as reported in Marcotte and Hemelt’s study of Maryland schools. This allows us to estimate what the proficiency rates in each subject would have been had those schools been open for all scheduled instructional days prior to the assessment. We then compare the predicted proficiency rate to the AYP threshold.
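The mechanics of this exercise are simple, as illustrated below with invented inputs: add back the estimated effect of each lost day to a school’s actual proficiency rate and compare the result to that year’s AYP threshold. The per-day effect shown is illustrative (within the range reported above), and the school names, rates, thresholds, and days lost are hypothetical.

```python
# Sketch of the AYP adjustment with invented inputs.
per_day_effect = 0.4   # percentage points of proficiency per lost day (illustrative)
ayp_threshold = 40.0   # hypothetical proficiency threshold for the year

schools = [
    # (name, actual proficiency rate, unscheduled days lost); all values hypothetical
    ("School A", 38.5, 8),
    ("School B", 36.0, 4),
]

for name, actual, days_lost in schools:
    predicted = actual + per_day_effect * days_lost   # proficiency had no days been lost
    status = "would have made AYP" if predicted >= ayp_threshold else "would still fall short"
    print(f"{name}: predicted {predicted:.1f}% proficient, {status}")
```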

We summarize the results of this exercise in Figure 3. The light bars represent the number of schools failing to make AYP in math and reading in various years. The dark bars are the number of those schools that we predict would have failed to make AYP if the schools had been able to meet on all scheduled days. We make these estimates assuming that low-performing schools would have made average gains with each additional day of instruction.

The average number of days lost to unscheduled school closings varied substantially over the period, from more than 10 to fewer than four and a half. Many schools that did not make AYP likely would have had they not lost so many school days. For example, we estimate that 35 of the 56 elementary schools that did not make AYP in math in 2002–03 would have met the AYP criterion if they had been open during all scheduled school days. Even if these schools were only half as productive as the typical school, 24 of the 56 flagged schools would likely have made AYP if they had been open for all scheduled days.

There is, however, a way to reduce risks like these for schools while also limiting incentives for administrators to move start or test dates: recognize and report time as an input in education. A simple and transparent way to do this is for state report cards, which inform parents about school outcomes and summarize information on AYP status, to include the number of instructional days at the test date as well as the total number of instructional days for the year. This information is readily available and already monitored by schools, districts, and states. Local and state education authorities could use it when assessing performance, for example, in hearing an appeal from a school that failed to meet its AYP goals. Further, this information could be used to estimate test scores adjusted for instructional days, to be reported alongside unadjusted changes in performance. Distinguishing between gains due to expanded instructional time and gains due to better use of that time can enrich accountability systems and provide more and better information to analysts and the public alike.

Looking Ahead

There can be no doubt that expanding the amount of time American students spend in school is an idea popular with many education policymakers and has long been so. What makes the present different is that we now have solid evidence that anticipated improvements in learning will materialize.

Practical obstacles to the extension of the school year include substantial expense and stakeholder attachment to the current school year and summer schedule. The benefits of additional instructional days could diminish as school years are lengthened. Further, it is unknown how teachers would use additional instructional days if they are provided after annual testing is already finished. Simply extending the year well after assessments are given might mean that students and teachers spend more days filling (or killing) time before the end of the year. This would make improvements in learning unlikely, and presumably make students unhappy for no good reason.

Though the issue has seen little movement in the past and faces real opposition going forward, the policy climate appears likely to be favorable once the fiscal challenges now facing public school systems recede. It is our hope that policymakers and administrators who try to take advantage of this window of opportunity neither harm reforms that have already succeeded in improving learning outcomes nor implement new ones in ways that fail to do the same. Advocates for extended school years have so far said virtually nothing about whether or how accountability systems should accommodate longer school years.

Across the country, a small number of schools and districts are modifying or extending the academic year. The Massachusetts 2020 initiative has provided resources for several dozen schools to increase the number of instructional days they offer from 180 to about 200. Other examples include low-performing schools that have lengthened their school day in an effort to improve, and the longer school days, weeks, and years in some charter schools. Such initiatives remain rare, however, and there has been no systemic change in the instructional time provided to American students. Our work confirms that increasing instructional time could have large positive effects on learning gains. Schools and districts should be encouraged, in both word and policy, to view the school calendar as a tool in the effort to improve learning outcomes.

Dave E. Marcotte is professor of public policy at the University of Maryland, Baltimore County. Benjamin Hansen is a research associate at IMPAQ International, LLC.

For more on this topic, please read “Do Schools Begin Too Early? The effect of start times on student achievement.”
