Let’s Hear It For Proficiency



By Chester E. Finn, Jr. | 08/19/2013


Last week, Mike Petrilli proposed doing away with “proficiency” as the fulcrum of school-level accountability. He declared—as have many others over the years—that proficiency levels per se say more about a school’s demographics than its educational effectiveness, and that what really matters—he came close to saying all that matters—is the academic growth taking place within that school.

He’s surely right about the demographics. We know that suburban schools awash in upper-middle-class kids look good on proficiency-only measures even if those kids aren’t learning much from their teachers. We also know that some schools that produce impressive gains in their students—Mike’s example was Democracy Prep—still don’t look very good on the proficiency bar because those kids started so far behind.

All true—but not reason enough to abandon proficiency. Not, at least, so long as it matters greatly in the real world. Do you want the pilot of your plane to be proficient at take-offs and landings or simply to demonstrate improvement in those skills? (Do you want to fly on an airline that uses only “growth measures” when hiring pilots?) How about dining in restaurants that use only growth measures when selecting chefs? Having your chest cut open by thoracic surgeons who showed “gains” on their surgical boards but didn’t actually “pass” them?

Kids can show plenty of “growth” in school—and yes, we should laud schools that accomplish this—but still not be ready for college because they aren’t actually proficient. This is why absolute levels matter, too, and why schools should be judged in part by how many of the students emerging from them have reached true proficiency or, in today’s parlance, are truly college and career ready.

I first tangled with concepts of “proficiency” almost a quarter century ago when the newly formed National Assessment Governing Board (NAGB) set out to report NAEP results according to “performance levels” (a new statutory mandate) rather than simply on a numerical scale score. We intended, in essence, to draw a line across that scale that would respond to the question, “How good is good enough?”

The Board was also responding to an implicit mandate from the 1989 Charlottesville education “summit,” where the governors set “national goals” for U.S. education for the year 2000. The most significant of those goals declared that “American students will leave grades four, eight, and twelve having demonstrated competence in challenging subject matter including English, mathematics, science, history, and geography….”

Yes, they used the word “competence,” not “proficiency,” but they didn’t define it, much less say how to measure it. So NAGB set out to provide a means, at the national level and, in time, also at the state level, by which to determine how many youngsters at those key grade levels were indeed “competent” in various subjects.

Only we used the word “proficient.” That became the second of three “performance levels” (the others being “basic” and “advanced”) by which NAEP results have since been reported. (Why three levels? The late Albert Shanker had much to do with that decision.)

Critics had a field day, insisting that those lines on the scale were arbitrary, not scientific, that they weren’t validated by anything in the real world, and, especially, that “proficient”—which NAGB declared was the level that all students should attain even though it was evident that only about one in three was actually getting there—was too difficult, too high a bar.

Although that argument has never subsided, such critics faced a major setback the other day when NAGB itself, based on much research, determined that NAEP’s “proficient” level in reading in the twelfth grade actually correlates with preparedness to succeed in college-level academic work. (The level that corresponds to college preparedness in math is lower than “proficient” but considerably higher than “basic.”)

That looks much like validation—and, for me personally, vindication—of NAEP’s performance levels, at least as regards twelfth graders headed for college and not wanting to be sent into remediation when they get there.

At day’s end, that’s also why proficiency matters to students, teachers, and schools: because it matters to the world.

One more point: Mike began his argument with the assumption that many schools have scads of entering pupils who are already far below “proficiency” when they arrive. He had in mind middle and high schools—and there is no doubt that many such schools do indeed face a large remediation challenge with incoming eleven- through fourteen-year-olds who have already been gypped educationally in the early grades.

But is the remedy for that serious problem doing away with proficiency—or starting younger? Why would we persist in running an education system that allows this to happen in the early grades? Why isn’t it the solemn obligation of every school or CMO or school system to do right by kids from the get-go—from Kindergarten, for sure? If they come from disadvantaged circumstances, then add Kindergarten-readiness preschool to the package.

And isn’t that really the message of the Common Core standards—a vertically integrated set of academic expectations for what kids will learn from Kindergarten through twelfth grade? Properly implemented—and worked into modern-style accountability systems for students, educators, schools, districts, even states—that seems to me the right formula for turning “proficiency” into exactly what it should be: evidence of preparedness for what follows in life.

-Chester E. Finn, Jr.

This blog entry first appeared on the Fordham Institute’s Flypaper blog.




Comments
  • diane watson says:

    Makes perfect sense. My only concern is that we now seem to have gone overboard with testing students for proficiency. Before, students were tested in key grades—4th, 6th, 8th, 11th—as mentioned in the article. Now students are tested every year. There are too many tests too many times. It’s overkill! Teachers and students now have to spend much of the year preparing for the tests. Students don’t have time to learn what they need for college and career, or to do well on any tests. You need those in-between years to learn, make mistakes, and improve, without your performance being examined under a microscope all the time!
