Is the U.S. Catching Up?


International and state trends in student achievement



By Eric A. Hanushek, Paul E. Peterson, and Ludger Woessmann


FALL 2012 / VOL. 12, NO. 4

Read the unabridged version of this report here.
Find an interactive map of the states’ annual gains here.


“The United States’ failure to educate its students leaves them unprepared to compete and threatens the country’s ability to thrive in a global economy.” Such was the dire warning issued recently by an education task force sponsored by the Council on Foreign Relations. Chaired by former New York City schools chancellor Joel I. Klein and former U.S. secretary of state Condoleezza Rice, the task force said the country “will not be able to keep pace—much less lead—globally unless it moves to fix the problems it has allowed to fester for too long.” Along much the same lines, President Barack Obama, in his 2011 State of the Union address, declared, “We need to out-innovate, out-educate, and out-build the rest of the world.”

Although these proclamations are only the latest in a long series of exhortations to restore America’s school system to a leading position in the world, the U.S. position remains problematic. In a report issued in 2010, we found only 6 percent of U.S. students performing at the advanced level in mathematics, a percentage lower than those attained by 30 other countries. And the problem isn’t limited to top-performing students. In 2011, we showed that just 32 percent of 8th graders in the United States were proficient in mathematics, placing the U.S. 32nd when ranked among the participating international jurisdictions (see “Are U.S. Students Ready to Compete?” features, Fall 2011).

Admittedly, American governments at every level have taken actions that would seem to be highly promising. Federal, state, and local governments spent 35 percent more per pupil—in real-dollar terms—in 2009 than they had in 1990. States began holding schools accountable for student performance in the 1990s, and the federal government developed its own nationwide school-accountability program in 2002.

And, in fact, U.S. students in elementary school do seem to be performing considerably better than they were a couple of decades ago. Most notably, the performance of 4th-grade students on math tests rose steeply between the mid-1990s and 2011. Perhaps, then, after a half century of concern and efforts, the United States may finally be taking the steps needed to catch up.

To find out whether the United States is narrowing the international education gap, we provide in this report estimates of learning gains over the period between 1995 and 2009 for 49 countries from most of the developed and some of the newly developing parts of the world. We also examine changes in student performance in 41 states within the United States, allowing us to compare these states with each other as well as with the 48 other countries.

Data and Analytic Approach

Data availability varies from one international jurisdiction to another, but for many countries enough information is available to provide estimates of change for the 14-year period between 1995 and 2009. For 41 U.S. states, one can estimate the improvement trend for a 19-year period—from 1992 to 2011. Those time frames are extensive enough to provide a reasonable estimate of the pace at which student test-score performance is improving in countries across the globe and within the United States. To facilitate a comparison between the United States as a whole and other nations, the aggregate U.S. trend is estimated for that 14-year period and each U.S. test is weighted to take into account the specific years that international tests were administered. (Because of the difference in length and because international tests are not administered in exactly the same years as the NAEP tests, the results for each state are not perfectly calibrated to the international tests, and each state appears to be doing slightly better internationally than would be the case if the calibration were exact. The differences are marginal, however, and the comparative ranking of states is not affected by this discrepancy.)

Our findings come from assessments in math, science, and reading of representative samples of students in particular political jurisdictions who, at the time of testing, were in 4th or 8th grade or were roughly ages 9‒10 or 14‒15. The political jurisdictions may be nations or states. The data come from one series of U.S. tests and three series of tests administered by international organizations. Because representative samples of U.S. students have taken all four series of tests, it is possible, using the equating method described in the methodology sidebar, to link states’ performance on the U.S. tests to countries’ performance on the international tests.

Comparisons across Countries

In absolute terms, the performance of U.S. students in 4th and 8th grade on the NAEP in math, reading, and science improved noticeably between 1995 and 2009. Using information from all administrations of NAEP tests to students in all three subjects over this time period, we observe that student achievement in the United States is estimated to have increased by 1.6 percent of a standard deviation per year, on average. Over the 14 years, these gains equate to 22 percent of a standard deviation. When interpreted in years of schooling, these gains are notable. On most measures of student performance, student growth is typically about 1 full standard deviation on standardized tests between 4th and 8th grade, or about 25 percent of a standard deviation from one grade to the next. Taking that as the benchmark, we can say that the rate of gain over the 14 years has been just short of the equivalent of one additional year’s worth of learning among students in their middle years of schooling.
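For readers who want to check the conversion, here is a minimal sketch of the years-of-learning arithmetic in Python, assuming the report’s rule of thumb of roughly 25 percent of a standard deviation of growth per grade; the helper name is ours, not the report’s:

```python
# Minimal sketch of the years-of-learning conversion, assuming the report's
# rule of thumb that students gain ~25% of a standard deviation per grade.
SD_PER_GRADE = 0.25

def years_of_learning(annual_gain_sd: float, years: int) -> float:
    """Convert an annual gain (in standard deviations) into years of learning."""
    return (annual_gain_sd * years) / SD_PER_GRADE

# U.S. trend: 1.6% of a standard deviation per year over 14 years.
print(years_of_learning(0.016, 14))  # ~0.9, just short of one year's worth of learning
```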

Yet when compared to gains made by students in other countries, progress within the United States is middling, not stellar (see Figure 1). While 24 countries trail the U.S. rate of improvement, another 24 countries appear to be improving at a faster rate. Nor is U.S. progress sufficiently rapid to allow it to catch up with the leaders of the industrialized world.

Students in three countries—Latvia, Chile, and Brazil—improved at an annual rate of 4 percent of a standard deviation, and students in another eight countries—Portugal, Hong Kong, Germany, Poland, Liechtenstein, Slovenia, Colombia, and Lithuania—were making gains at twice the rate of students in the United States. By the previous rule of thumb, gains made by students in these 11 countries are estimated to be at least two years’ worth of learning. Another 13 countries also appeared to be doing better than the U.S., although the differences between the average improvements of their students and those of U.S. students are marginal.

Student performance in nine countries declined over the same 14-year time period. Test-score declines were registered in Sweden, Bulgaria, Thailand, the Slovak and Czech Republics, Romania, Norway, Ireland, and France. The remaining 15 countries were showing rates of improvement that were somewhat slower than those of the United States.

In sum, the gains posted by the United States in recent years are hardly remarkable by world standards. Although the U.S. is not among the 9 countries that were losing ground over this period of time, 11 other countries were moving forward at better than twice the pace of the United States, and the remaining countries were improving at rates too similar to the U.S. rate to be clearly distinguished from it.

Which States Are the Big Gainers?

Progress was far from uniform across the United States. Indeed, the variation across states was about as large as the variation among the countries of the world. Maryland won the gold medal by having the steepest overall growth trend. Coming close behind, Florida won the silver medal and Delaware the bronze. The other seven states that rank among the top-10 improvers, all of which outpaced the United States as a whole, are Massachusetts, Louisiana, South Carolina, New Jersey, Kentucky, Arkansas, and Virginia. See Figure 2 for an ordering of the 41 states by rate of improvement.

Iowa shows the slowest rate of improvement. The other four states whose gains were clearly less than those of the United States as a whole are Maine, Oklahoma, Wisconsin, and Nebraska. Note, however, that because of nonparticipation in the early NAEP assessments, we cannot estimate an improvement trend for the 1992‒2011 time period for nine states—Alaska, Illinois, Kansas, Montana, Nevada, Oregon, South Dakota, Vermont, and Washington.

Cumulative growth rates vary widely. Average student gains over the 19-year period in Maryland, Florida, Delaware, and Massachusetts, with annual growth rates of 3.1 to 3.3 percent of a standard deviation, were some 59 percent to 63 percent of a standard deviation over the time period, or better than two years of learning. Meanwhile, annual gains in the states with the weakest growth rates—Iowa, Maine, Oklahoma, and Wisconsin—varied between 0.7 percent and 1.0 percent of a standard deviation, which translate over the 19-year period into learning gains of one-half to three-quarters of a year. In other words, the states making the largest gains are improving at a rate two to three times the rate in states with the smallest gains.
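The same conversion applied to the endpoints of the state ranges reported above reproduces the cumulative figures; the annual rates below are the report’s range endpoints, and the loop is only illustrative:

```python
# Cumulative gains for the reported range endpoints over the 19-year period.
SD_PER_GRADE = 25.0  # percent of a standard deviation per grade (rule of thumb)

for label, annual_pct in [("top four states, upper bound", 3.3),
                          ("top four states, lower bound", 3.1),
                          ("weakest states, upper bound", 1.0),
                          ("weakest states, lower bound", 0.7)]:
    cumulative_pct = annual_pct * 19
    print(f"{label}: {cumulative_pct:.0f}% of an SD "
          f"= {cumulative_pct / SD_PER_GRADE:.1f} years of learning")
```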

Had all students throughout the United States made the same average gains as did those in the four leading states, the U.S. would have been making progress roughly comparable to the rate of improvement in Germany and the United Kingdom, bringing the United States reasonably close to the top-performing countries in the world.

Is the South Rising Again?

Some regional concentration is evident within the United States. Five of the top-10 states were in the South, while no southern states were among the 18 with the slowest growth. The strong showing of the South may be related to energetic political efforts to enhance school quality in that region. During the 1990s, governors of several southern states—Tennessee, North Carolina, Florida, Texas, and Arkansas—provided much of the national leadership for the school accountability effort, as there was a widespread sentiment in the wake of the civil rights movement that steps had to be taken to equalize educational opportunity across racial groups. The results of our study suggest those efforts were at least partially successful.

Meanwhile, students in Wisconsin, Michigan, Minnesota, and Indiana were among those making the fewest average gains between 1992 and 2011. Once again, the larger political climate may have affected the progress on the ground. Unlike in the South, the reform movement has made little headway within midwestern states, at least until very recently. Many of the midwestern states had proud education histories symbolized by internationally acclaimed land-grant universities, which have become the pride of East Lansing, Michigan; Madison, Wisconsin; St. Paul, Minnesota; and West Lafayette, Indiana. Satisfaction with past accomplishments may have dampened interest in the school reform agenda sweeping through southern, border, and some western states.

Are Gains Simply Catch-ups?

According to a perspective we shall label “catch-up theory,” growth in student performance is easier for those political jurisdictions originally performing at a low level than for those originally performing at higher levels. Lower-performing systems may be able to copy existing approaches at lower cost than higher-performing systems can innovate. This would lead to a convergence in performance over time. An opposing perspective—which we shall label “building-on-strength theory”—posits that high-performing school systems find it relatively easy to build on their past achievements, while low-performing systems may struggle to acquire the human capital needed to improve. If that is generally the case, then the education gap among nations and among states should steadily widen over time.

Neither theory seems able to predict the international test-score changes that we have observed, as nations with rapid gains can be identified among countries that had high initial scores and countries that had low ones. Latvia, Chile, and Brazil, for example, were relatively low-ranking countries in 1995 that made rapid gains, a pattern that supports catch-up theory. But consistent with building-on-strength theory, a number of countries that have advanced relatively rapidly were already high-performing in 1995—Hong Kong and the United Kingdom, for example. Overall, there is no significant pattern between original performance and changes in performance across countries.

But if neither theory accounts for differences across countries, catch-up theory may help to explain variation among the U.S. states. The correlation between initial performance and rate of growth is a negative 0.58, which indicates that states with lower initial scores had larger gains. For example, students in Mississippi and Louisiana, originally among the lowest scoring, showed some of the most striking improvement. Meanwhile, Iowa and Maine, two of the highest-performing states in 1992, were among the laggards in subsequent years (see Figure 3). In other words, catch-up theory partially explains the pattern of change within the United States, probably because the barriers to the adoption of existing technologies are much lower within a single country than across national boundaries.
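As a sketch of how such a catch-up test works: with each state’s initial score and annual gain in hand, the correlation between the two (and its square, the share of variation explained) can be computed directly. The values below are placeholders, not the report’s data; the report’s own estimate is r = −0.58.

```python
# Sketch of the catch-up test with placeholder data (not the report's values).
from statistics import correlation  # Python 3.10+

initial_scores_1992 = [470.0, 510.0, 525.0, 490.0, 500.0]  # placeholder 1992 levels
annual_gains_pct_sd = [2.9, 1.6, 0.8, 2.4, 1.9]            # placeholder annual gains

r = correlation(initial_scores_1992, annual_gains_pct_sd)
print(f"r = {r:.2f}; r^2 = {r * r:.2f} (share of growth variation explained)")
```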

Catch-up theory nonetheless explains only about one-quarter of the total state variation in achievement growth. Notice in Figure 3 that some states (e.g., Maryland and Massachusetts) sit well above the line, while others (e.g., Iowa, Maine, Wisconsin, and Nebraska) fall well below it. Closing the interstate gap does not happen automatically.

What about Spending Increases?

According to another popular theory, additional spending on education will yield gains in test scores. To see whether expenditure theory can account for the interstate variation, we plotted test-score gains against increments in spending between 1990 and 2009. As can be seen from the scattering of states into all parts of Figure 4, the data offer precious little support for the theory. Just about as many high-spending states showed relatively small gains as showed large ones. Maryland, Massachusetts, and New Jersey enjoyed substantial gains in student performance after committing substantial new fiscal resources. But other states with large spending increments—New York, Wyoming, and West Virginia, for example—had only marginal test-score gains to show for all that additional expenditure. And many states defied the theory by showing gains even when they did not commit much in the way of additional resources. It is true that on average, an additional $1,000 in per-pupil spending is associated with an annual gain in achievement of one-tenth of 1 percent of a standard deviation. But that trivial amount is of no statistical or substantive significance. Overall, the 0.12 correlation between new expenditure and test-score gain is just barely positive.
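A hypothetical sketch of this kind of expenditure analysis: regress annual achievement gains on each state’s change in per-pupil spending and read off the slope and the correlation. The figures below are placeholders, not the report’s data.

```python
# Sketch of the expenditure analysis with placeholder state data.
from statistics import correlation, linear_regression  # Python 3.10+

spending_increase_k = [3.0, 5.5, 2.0, 6.5, 4.0]  # added $thousands per pupil, 1990-2009
annual_gain_pct_sd  = [1.9, 1.3, 2.4, 1.6, 1.4]  # annual gain, % of a standard deviation

slope, intercept = linear_regression(spending_increase_k, annual_gain_pct_sd)
r = correlation(spending_increase_k, annual_gain_pct_sd)
print(f"gain per added $1,000: {slope:.2f} %SD/yr; correlation r = {r:.2f}")
```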

Who Spends Incremental Funds Wisely?

Some states received more educational bang for their additional expenditure buck than others. To ascertain which states were receiving the most from their incremental dollars, we ranked states on a “points per added dollar” basis. Michigan, Indiana, Idaho, North Carolina, Colorado, and Florida made the most achievement gains for every incremental dollar spent over the past two decades. At the other end of the spectrum are the states that received little back in terms of improved test-score performance from increments in per-pupil expenditure—Maine, Wyoming, Iowa, New York, and Nebraska.
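One way to implement such a “points per added dollar” ranking is simply to divide each state’s cumulative scale-point gain by its added per-pupil spending and sort. The state names and figures below are placeholders, not the report’s data.

```python
# Sketch of a "points per added dollar" ranking with placeholder values.
state_data = {
    "State A": (19.0, 2500.0),  # (cumulative scale-point gain, added $ per pupil)
    "State B": (12.0, 6500.0),
    "State C": (16.0, 4000.0),
}

ranked = sorted(state_data.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for state, (gain, dollars) in ranked:
    print(f"{state}: {gain / dollars * 1000:.1f} points per added $1,000 per pupil")
```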

We do not know, however, which kinds of expenditures prove to be the most productive or whether there are other factors that could explain variation in productivity among the states.

Causes of Change

There is some hint that those parts of the United States that took school reform the most seriously—Florida and North Carolina, for example—have shown stronger rates of improvement, while states that have steadfastly resisted many school reforms (Iowa and Wisconsin, for instance) are among the nation’s test-score laggards. But the connection between reforms and gains adduced thus far is only anecdotal, not definitive. Although changes among states within the United States appear to be explained in part by catch-up theory, we cannot pinpoint the specific factors that underlie this pattern. We are also unable to find significant evidence that increased school expenditure, by itself, makes much of a difference. Changes in test-score performance could be due to broader patterns of economic growth or varying rates of in-migration among states and countries. Of course, none of these propositions has been tested rigorously, so any conclusions regarding the sources of educational gains must remain speculative.

Have We Painted Too Rosy a Portrait?

Even the extent of the gains that have been made is uncertain. We have estimated gains of 1.6 percent of a standard deviation each year for the United States as a whole, or a total gain of 22 percent of a standard deviation over 14 years, a forward movement that has lifted performance by nearly a full year’s worth of learning over the entire time period. A similar rate of gain is estimated for students in the industrialized world as a whole (as measured by students residing in the 49 participating countries). Such a rate of improvement is plausible, given the increased wealth in the industrialized world and the higher percentages of educated parents than in prior generations.

However, it is possible to construct a gloomier picture of the rate of the actual progress that both the United States and the industrialized world as a whole have made. All estimations are normed against student performances on the National Assessment of Educational Progress in 4th and 8th grades in 2000. Had we estimated gains from student performance in 8th grade only, on the grounds that 4th-grade gains are meaningless unless they are observed for the same cohort four years later, our results would have shown annual gains in the United States of only 1 percent of a standard deviation. The relative ranking of the United States remains essentially unchanged, however, as the estimated growth rates for 8th graders in other countries are also lower than estimates that include students in 4th grade (see the unabridged report, Appendix B, Figure B1).

A much reduced rate of progress for the United States emerges when we norm the trends on the PISA 2003 test rather than the 2000 NAEP test. In this case, we would have estimated an annual growth rate for the United States of only one-half of 1 percent of a standard deviation. A lower annual growth rate for other countries would also have been estimated, and again the relative ranking of the United States would remain unchanged (see the unabridged report, Appendix B, Figure B2).

An even darker picture emerges if one turns to the results for U.S. students at age 17, for whom only minimal gains can be detected over the past two decades. We have not reported the results for 17-year-old students, because the test administered to them does not provide information on the performance of students within individual states, and no international comparisons are possible for this age group.

Students themselves and the United States as a whole benefit from improved performance in the early grades only if that translates into measurably higher skills at the end of school. The fact that none of the gains observed in earlier years translate into improved high-school performance leads one to wonder whether high schools are effectively building on the gains achieved in earlier years. And while some scholars dismiss the results for 17-year-old students on the grounds that high-school students do not take the test seriously, others believe that the data indicate that the American high school has become a highly problematic educational institution. Amid the uncertainties, however, one fact remains clear: the measurable gains in achievement accomplished by more recent cohorts of students within the United States are being outstripped by gains made by students in about half of the other 48 participating countries.

Methodology

Our international results are based on 28 administrations of comparable math, science, and reading tests between 1995 and 2009 to jurisdictionally representative samples of students in 49 countries. Our state-by-state results come from 36 administrations of math, reading, and science tests between 1992 and 2011 to representative samples of students in 41 of the U.S. states. These tests are part of four ongoing series: 1) National Assessment of Educational Progress (NAEP), administered by the U.S. Department of Education; 2) Programme for International Student Assessment (PISA), administered by the Organisation for Economic Co-operation and Development (OECD); 3) Trends in International Mathematics and Science Study (TIMSS), administered by the International Association for the Evaluation of Educational Achievement (IEA); and 4) Progress in International Reading Literacy Study (PIRLS), also administered by IEA.

To equate the tests, we first express each testing cycle (of grade by subject) of the NAEP test in terms of standard deviations of the U.S. population on the 2000 wave. That is, we create a new scale benchmarked to U.S. performance in 2000, which is set to have a standard deviation of 100 and a mean of 500. All other NAEP results are a simple linear transformation of the NAEP scale on each testing cycle. Next, we express each international test on this transformed NAEP scale by performing a simple linear transformation of each international test based on the U.S. performance on the respective test. Specifically, we adjust both the mean and the standard deviation of each international test so that the U.S. performance on the tests is the same as the U.S. NAEP performance, as expressed on the transformed NAEP scale. This allows us to estimate trends on the international tests on a common scale, whose property is that in the year 2000 it has a mean of 500 and a standard deviation of 100 for the United States.
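In code form, the two-step equating might look like the following sketch; the function and variable names are ours, and the inputs would be the U.S. means and standard deviations on each test:

```python
# Sketch of the two-step equating described above (names are ours, not the report's).

def to_naep_2000_scale(score: float, us_mean_2000: float, us_sd_2000: float) -> float:
    """Step 1: express a NAEP score on a scale where the 2000 U.S. population
    has mean 500 and standard deviation 100."""
    return 500.0 + 100.0 * (score - us_mean_2000) / us_sd_2000

def equate_international(score: float, us_mean_intl: float, us_sd_intl: float,
                         us_naep_mean: float, us_naep_sd: float) -> float:
    """Step 2: linearly map an international test score onto the transformed
    NAEP scale, so that the U.S. mean and SD on the international test match
    U.S. NAEP performance (already expressed on the transformed scale)."""
    z = (score - us_mean_intl) / us_sd_intl  # standardize within the international test
    return us_naep_mean + us_naep_sd * z     # re-express on the NAEP-2000 scale
```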

Expressed on this transformed scale, estimates of overall trends for each country are based on all available data from all international tests administered between 1995 and 2009 for that country. Since a state or country may have specific strengths or weaknesses in certain subjects, at specific grade levels, or on particular international testing series, our trend estimations use the following procedure to hold such differences constant. For each state and country, we regress the available test scores on a year variable, indicators for the international testing series (PISA, TIMSS, PIRLS), a grade indicator (4th vs. 8th grade), and subject indicators (mathematics, reading, science). This way, only the trends within each of these domains are used to estimate the overall time trend of the state or country, which is captured by the coefficient on the year variable.
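A sketch of that trend regression, assuming the equated scores for one country sit in a long-format table; the column names and rows are our assumptions, and the statsmodels formula API stands in for whatever estimation software was actually used:

```python
# Sketch of the trend regression for one country, using placeholder rows.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "year":    [1995, 1999, 2003, 2007, 2000, 2003, 2006, 2009, 2001, 2006],
    "series":  ["TIMSS", "TIMSS", "TIMSS", "TIMSS",
                "PISA", "PISA", "PISA", "PISA", "PIRLS", "PIRLS"],
    "grade":   [4, 8, 4, 8, 8, 8, 8, 8, 4, 4],
    "subject": ["math", "math", "science", "science",
                "math", "reading", "math", "science", "reading", "reading"],
    "score":   [490.0, 494.0, 499.0, 507.0, 492.0,
                500.0, 503.0, 506.0, 498.0, 509.0],
})

# The coefficient on `year` is the country's annual trend, holding constant
# the testing series, the grade tested, and the subject.
model = smf.ols("score ~ year + C(series) + C(grade) + C(subject)", data=df).fit()
print(model.params["year"])
```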

A country’s performance on any given test cycle (for example, PIRLS 4th-grade reading, TIMSS 8th-grade math) is only considered if the country participated at least twice within that respective cycle. To be included in the analysis, the time span between a country’s first and last participation in any international test must be at least seven years. A country must have participated prior to 2003 and more recently than 2006. Finally, for a country to be included there must be at least nine test observations available.
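These inclusion rules translate directly into a filter; here is a minimal sketch, assuming each country’s record is a list of (year, testing-cycle) pairs (the helper name and data shape are ours):

```python
# Sketch of the country-inclusion rules, assuming (year, cycle_id) observations.
from collections import Counter

def include_country(observations: list[tuple[int, str]]) -> bool:
    cycle_counts = Counter(cycle for _, cycle in observations)
    # Keep only cycles (e.g., "TIMSS-8-math") in which the country took part twice.
    usable = [(year, cycle) for year, cycle in observations
              if cycle_counts[cycle] >= 2]
    if len(usable) < 9:                   # at least nine test observations
        return False
    years = [year for year, _ in usable]
    return (max(years) - min(years) >= 7  # first-to-last span of at least seven years
            and min(years) < 2003         # participated prior to 2003...
            and max(years) > 2006)        # ...and more recently than 2006
```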

For the analysis of U.S. states, observations are available for only 41 states. The remaining states did not participate in NAEP tests until 2002. As mentioned, annual gains for states are calculated for a 19-year period (1992 to 2011), the longest interval that could be observed for the 41 states. International comparisons are for a 14-year period (1995 to 2009), the longest time span that could be observed with an adequate number of international tests. To facilitate a comparison between the United States as a whole and other nations, the aggregate U.S. trend is estimated from that same 14-year period and each U.S. test is weighted to take into account the specific years that international tests were administered. Because of the difference in length and because international tests are not administered in exactly the same years as the NAEP tests, the results for each state are not perfectly calibrated to the international tests, and each state appears to be doing slightly better internationally than would be the case if the calibration were exact. The differences are marginal, however, and the comparative ranking of states is not affected by this discrepancy.

A more complete description of the methodology is available in the unabridged version of this report.

Politics and Results

The failure of the United States to close the international test-score gap, despite assiduous public assertions that every effort would be undertaken to achieve that objective, raises questions about the nation’s overall reform strategy. Education goal setting in the United States has often been utopian rather than realistic. In 1990, the president and the nation’s governors announced the goal that all American students should graduate from high school, but two decades later only 75 percent of 9th graders received their diploma within four years after entering high school. In 2002, Congress passed a law that declared that all students in all grades shall be proficient in math, reading, and science by 2014, but in 2012 most observers found that goal utterly beyond reach. Currently, the U.S. Department of Education has committed itself to ensuring that all students shall be college- or career-ready as they cross the stage on their high-school graduation day, another overly ambitious goal. Perhaps the least realistic goal was that of the governors in 1990 when they called for the U.S. to be first in the world in math and science by 2000. As this study shows, the United States is neither first nor catching up.

Consider a more realistic set of objectives for education policymakers, one that is based on experiences from within the United States itself. If all U.S. states could increase their performance at the same rate as the highest-growth states—Maryland, Florida, Delaware, and Massachusetts—the U.S. improvement rate would be lifted by 1.5 percentage points of a standard deviation annually above the current trend line. Since student performance can improve at that rate in some countries and in some states, such gains can, in principle, be made more generally. Those gains might seem small, but when viewed over two decades they accumulate to 30 percent of a standard deviation, enough to bring the United States within the range of, or to at least keep pace with, the world’s leaders.

Eric A. Hanushek is senior fellow at the Hoover Institution at Stanford University. Paul E. Peterson is director of the Harvard Program on Education Policy and Governance. Ludger Woessmann is head of the Department of Human Capital and Innovation at the Ifo Institute at the University of Munich. An unabridged version of this report is available at hks.harvard.edu/pepg/




Comments

  • Richard Innes says:

    Ranking relative state progress on the NAEP by only looking at the “all student” scores can produce very misleading conclusions.

    Racial demographics have shifted in very uneven ways from state to state since NAEP began state assessments in the early 1990s. Thanks to continuing racial achievement gaps in all states in the NAEP, a state like California, where the percentage of whites dropped from about 50% when state NAEP began to only about 25% today, is at a serious disadvantage in comparison to a state like Kentucky, where whites comprised about 90% of the enrollment in the early 1990s and still account for 84% of the enrollment today.

    Uneven exclusion rates, especially for students with learning disabilities — and uneven changes in those rates over time — also may impact state rankings, as well.

    I suspect that if the analysis in the report was conducted separately by race, some very different impressions would result.

    I would challenge the authors to go deeper than simplistic “all student” analysis.

    For more, including interesting information that implies this new report may rank Kentucky’s progress too highly:

    http://www.freedomkentucky.org/index.php?title=The_National_Assessment_of_Educational_Progress

  • Paul E. Peterson says:

    Dear Mr. Innes: The states that do the best job of lifting average student performance also do the best job of lifting the performance of higher achieving students and lower achieving students. If a state has more rapidly improving schools, it benefits students across the board. For details on this question, see our full report: Achievement Growth: International and U.S. State Trends. http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG12-03_CatchingUp.pdf


  • Richard Innes says:

    Prof. Peterson,

    The comments you made about high performing states lifting scores at both the bottom and the top are interesting, but that does not really deal with the issue of demographic differences between the states placing some at a disadvantage in simple comparisons of only overall “all student” scores.

    For example, when you use the NAEP Data Explorer to disaggregate the NAEP Grade 8 Math Scale Score data by race, it turns out that among the 33 jurisdictions that have scores for blacks for both 1992 and 2011, Kentucky does not rank anywhere near the top for score improvement. In fact, disregarding an additional problem of statistical sampling errors in NAEP scores that muddies the picture even more, Kentucky’s blacks only tie for 20th place for score improvement out of those 33 jurisdictions. At least for this subject and grade, that’s certainly a VERY different picture from the one your report portrayed.

    And, that leads to another issue. When I looked at the science data to do a similar analysis, I ran into some real problems.

    First of all, NAEP State Science didn’t start until 1996, not 1992.

    Furthermore, the NAEP Science Frameworks were radically altered for the 2009 assessment to the point where the NAEP Science Report Card for 2009 says the results are not comparable to earlier years.

    Complicating this even more, NAEP Science was only given in the 8th grade in 2011.

    So, there currently is no science trend line from the new framework science NAEP for 4th grade and only the thinnest possible, two-year trend for the 8th grade. There can be no consistent NAEP science analysis from 1992, or 1996 for that matter, to the present. The data simply does not exist.

    I have another concern. I cannot look at the 8th grade reading trend from 1992 to 2011, either. NAEP Grade 8 Reading didn’t start for the states until 1998.

    At present I am quite confused about how your team created the math, reading and science composite results shown in Figure 2 in the report and in other related areas of the document. At the very least, the labeling of Figure 2 looks problematic.

    Is there anything like a technical appendix planned for the report?

  • Scott McLeod says:

    Historically speaking, it seems as if there is little to no direct relationship between state/national test scores and economic performance. So why should we care about these state rankings?

  • Karen says:

    Florida showed gains because we were so low. The only place to go was up after being 49th. Also, most schools in Europe, Asia and the Caribbean are based on the British system of placing students in schools by IQ levels after a countrywide test.

    Only those smart kids are doing the kind of testing that the US is now comparing itself to, with kids who are in classes with IQs below 70 but are still expected to pass state tests. Imagine comparing the poorest school with the worst grade to the magnet school where only handpicked students with straight A’s attend. That’s the international school system. The kids who don’t plan on going to college attend different high schools. I know I had to go to the one my IQ indicated in my country. What I don’t understand is why all these top officials in education are not taking this into account, unless they are making more money by blasting the school system; therefore, effectively ensuring job security with shoddy claims?

    Why are all these Asian students coming here in droves if our education system is so ghastly? They value education more than gold, but they keep coming. Alas, they are not allowed to freely move to other areas to attend better schools in their country, because their IQ level did not make it into those top schools. Someone needs to do a better job knowing the dynamics of those countries before comparing US students (some in wheelchairs and feeding through tubes) to countries where you never see a handicapped child in public, much less school.

    Also, Florida changed the way it is being graded by including graduation rates and percent of standardized tests and classes attempted ... this whole thing is flawed.

  • Mr. Innes

    Thank you for your comments. You raise a good point that demographics have changed over the past 20 years. Therefore, being from California and also having looked at Kentucky in the past, I looked at the simple adjustment of grade 8 mathematics on NAEP. I only looked at scores for whites, blacks, Hispanics, and Asians (excluding Native Americans and the growing category of Other/multiple races). Because of these exclusions, the national and the state comparisons differ a little from the “all students” scores.

    Nationally, I calculate that 8th grade math scores would have risen 18.5 points instead of 16.4 for the 1992-2011 period if we weight by 1993 demographics. For California, this adjustment as you point out is a little larger: a projected 18.1 point rise instead of 14.4. And, both adjustments are slightly larger than for Kentucky which is 21.0 instead of 19.4. This does affect our calculations of average annual changes which would differ by 0.1 points per year for the nation, 0.2 points per year for California, and 0.1 points per year for Kentucky if based just on 8th grade math.

    At the same time, it does not seem to distort the overall picture that we present by very much. We do indicate in our report that demographic changes may be part of the change we observe across states. The 8th grade mathematics changes suggest that demographics might account for 10-20 percent of the change in some states. At the same time I do not believe that this would change the overall picture of highly variable rates of achievement change across the states.

    Eric Hanushek

  • Richard Innes says:

    Prof. Hanushek,

    As we discussed in separate communications, even if the racial demographics are held constant to the 1992 values, I think the California changes are a bit bigger than you calculate.

    However, a key point is that averaging all student scores together still hides some serious problems, and that appears to be true in the case of Kentucky.

    I took a quick look at the NAEP 4th and 8th grade math scores for blacks for 1992 and 2011 for all the states that had scores published for both years. Kentucky’s blacks did MUCH worse in both grades than the rankings you gave the state overall. Kentucky’s blacks missed that rising boat your report talks about.

    Bottom line: Reports that want to examine state to state performance with NAEP simply must look at disaggregated data. It’s no secret. All the recent NAEP report cards say this.

    You also should at least note differences in exclusion rates. Here is why.

    Kentucky’s very good performance in NAEP reading starts to look questionable once you realize the state led the nation in 2011 for its exclusion rate for learning disabled students.

    There is a reason for this situation, and it raises serious concerns about the reading ability of the excluded students.

    Since the inception of Kentucky’s reform assessments in 1992, it has been legal for learning disabled students to have the so-called reading tests read to them.

    As a consequence, Kentucky has absolutely no idea if these students have any ability whatsoever to read printed text.

    Worse, the testing system actually created inducements for educators to simply carry these students through the entire K to 12 system as non-readers. Quite possibly, the vast majority of Kentucky’s NAEP-excluded students are total non-readers.

    As an experiment, try to estimate the reading scores for Kentucky assuming that all of the excluded students scored a zero on NAEP. I think your impressions of the state’s improvement will go through a rapid change if you try that.

    On a different methodology note, can you tell me which years and grades of NAEP science data you used for your analyses? The total number of NAEP assessments for math, reading and science your report mentions does not agree with the total number of administrations between 1992 and 2011, so I cannot determine what you used.


  • Karl Wheatley says:

    These data are interesting but it’s entirely unclear what they mean.

    Test scores can be raised for individuals and nations in ways that are helpful or harmful for our long-term goals.

    Many subjects and many of the goals that parents and employers value most are not in these data, and pursuing test score gains in a few subjects may directly undermine those goals (e.g., initiative, creativity, motivation, health). Thus, we can raise scores in ways that undermine kids learning and development and the nation’s future.

    States or nations may have seen these test-score gains and losses for reasons entirely unrelated to educational quality, and since scores in a few subjects do not represent the range of our goals for children or our nation, there is no way to tell from these data which systems are effective or ineffective, nor who is spending money wisely. Absent data on many other variables, there’s no way to tell who is getting more educational bang for the buck.

    People who study the details of teaching and learning for a living would note that the testing mania found in states such as Florida may raise scores while deforming learning itself. The meaning of test scores depends in part upon the conditions under which the learning took place, and so Florida’s scores may overestimate real learning while scores may underestimate real learning in states that have resisted the pressure to turn over education to high-stakes testing. We see this pattern in reading research, with decoding skills gained through intensive phonics instruction sometimes overestimating reading comprehension by three full grade levels.

    Unfortunately, the greater gains in the early years followed by moderate gains in 8th grade followed by virtually no gain at age 17 is a trajectory well predicted by critics of test-driven education. For example, in reading, intensive phonics boosts test scores much faster at first, but any comprehension gains wash out in a few years (when compared to more holistic, child-centered approaches), and then there are no advantages in reading comprehension in the long term. Meanwhile, the phonics approach–which is essentially test-driven reading instruction for the early grades–creates all sorts of collateral damage, both for other child outcomes and for overall curriculum structure.

    If you nag your spouse into bringing you orange juice every day to prove they love you, the orange juice means something different than if no nagging was involved.

    The test scores have only very modest utility for telling us what we should be studying.


  • Clay Forsberg says:

    You guys make a very detailed analysis of how the United States is faring in the race of standardized testing. It appears America is making some inroads here.

    I ask you though: make the connection between these test scores and the future success of a student once they leave school. Yes, they may learn and retain enough facts to perform on a test. But of what relevance are these facts they are learning … and in most cases not retaining for any length of time?

    The assumption here is that an increased knowledge of science, for example, will make them succeed in the real world. But will it help them master technology, the current and future lifeblood of an increasingly connected world? Will it help them start the small businesses that drive our economy?

    We prepare our children for professional careers similar in culture to those we grew up with. We prepare them for jobs and career paths that no longer exist. The world of corporate security is gone. The jobs are different these days, but we forget the cultures are also.

    We should be teaching creativity and the ability to change and adapt. These are the most critical skills in demand right now. But instead, our schools are going backward … focusing on conformity through the obsession with standardized tests and “keeping up with the Joneses” (i.e. China and India).

    Our schools have chosen to abandon the arts, the fields that promote extensive synaptic development. We have abandoned physical education, and in turn get a nation of obese children. We have chosen to emulate countries that embody jobs that focus on low salaries and robotic behavior instead.

    While your analysis is excellent for what it is, that’s about it. Ask any above-average student in high school in America, and they will say the same things I’m saying here. They see no relevance in the material being taught. And because of this, there’s no engagement. And with no engagement there’s no learning … at least long-term.

    When are we going to listen to the people who have the biggest stake in the education process – the people being taught? We listen to our customers if we have a business, don’t we?

    Why is education any different?

  • John Merrifield says:

    My greatest concern about the outcomes of our school systems is diminished capacity to make sound political decisions. Bad public policies are a much bigger threat to our economic prosperity than reduced skill levels, and on top of that, bad policymaking threatens our liberty.

    And we need the NCES (or ??) to commit to better data for 17-year-olds, including those no longer in a school, and better data for aggregate outcomes for the system as a whole (public and private combined). That becomes more important as choice erodes the public share of the system.

  • Derrick Henderson says:

    It does not give the graphics for the year 2012. I would like to know how we are doing as a whole, as well as individual states in the United States of America…


  • Jason says:

    And yet American students created Apple, Facebook, and Microsoft. All three are among the most powerful and influential companies in the world. Huh.

  • Kevin Small says:

    Stop pressuring kids to take so many one-size-fits-all national standardized tests. Getting back to what this great nation used to be, more free and less regulated, might help.

  • Cynthia Mann says:

    I have been a licensed clinical social worker for 35 years, 22 of those in public education in California as a school counselor. One-fifth of our kids live in poverty, and 30+% drop out of high school. California is not spending enough; we are 48th out of 50 in per-pupil spending. For each dollar we send to the Feds we get 2 cents. Instead of denigrating public education, how about helping? The tests are testing the WRONG things. This is not science; testing parroting of sounds is not comprehension, and arithmetic is not thinking mathematically. I would rather have second-language learners with social-emotional intelligence launched into living life with the ability to think for themselves than have robot parrots!
