Quality Counts Should Stick to Education



By Guest Blogger Margaret Raymond 01/14/2010


The release today of Education Week’s annual report card for states, Quality Counts 2010, promises to receive extraordinary scrutiny as states gallop to the finish line with their Race to the Top applications, due January 19.  Quality Counts gathers a wide range of statistics about each state’s demographics, labor force, educational attainment, and education inputs to describe “the condition of education” in the fifty states.  With the intense competition to win Race to the Top funds, states will no doubt be poring over the rankings for confirmation of their worthiness.

CREDO assessed Quality Counts 2009 for Education Next, focusing specifically on the Chance-for-Success Index.  We found that the rankings for states closely mirrored their demography, which says nothing about the value-add of schools.  By bloating the various indicator groups with factors that lie outside the control of the education system, at least in the short term, the index muddies the signal about education quality with irrelevant information.  Worse, states and districts could be led in wrong directions when crafting policies and allocating resources.

When we constructed a more limited Chance-for-Success Index that included only those indicators that signal education quality – pre-school and kindergarten enrollment, 4th- and 8th-grade proficiency scores, and high school graduation rates – we learned that the rankings of states changed a good deal.  Simply put, when the focus is placed on the real effects of state and district efforts, a goodly number of states have a lot less to crow about.

We’ve taken a look at the same Chance-for-Success Index in Quality Counts 2010.  No surprise, this year’s index employs the same set of indicators, updated with more recent statistics from the U.S. Census Bureau’s American Community Survey.  But since those statistics have changed only modestly since 2009, the scores on the Chance-for-Success Index change little:  the highest-scoring state, Massachusetts, went from 94.6 to 93.3, and Nevada’s last-place score went from 67.3 to 67.0.

More importantly, the shifts were largely driven by changes in household characteristics and employment.  So states whose Chance-for-Success ranks changed between 2009 and 2010 did so as a result of changes in family income or parental employment (factors related to the economic downturn), not as a result of efforts to improve education.

Until the measures that are incorporated into the Quality Counts ratings are more clearly tied to education outcomes, we are likely to see continued shifts in rankings that bear little resemblance to actual changes in education quality.




Comment on this article
  • Evelyn Terry says:

    The fox never guards the hen house well.

  • George Mitchell says:

    What is the source of the 4th- and 8th-grade proficiency scores? NAEP or state-by-state determinations?

  • [...] Margaret Raymond over at Ed Next writes that variation in the report’s Chance-for-Success Index can almost entirely be explained by state demographic changes rather than changes in education quality. Here is the money quote. [...]

