Hewlett Assessment Competition Comes at Critical Time



By Michael B. Horn | 01/11/2012


As online learning gains share and transforms our education system, I have argued for some time that foundations and philanthropists would be wise to spend their dollars on moving public policy, creating proof points, and similar efforts that build smarter demand, rather than investing on the supply side in the technology products and solutions themselves.

The market is plenty motivated to create disruptive products and services to serve the public education system, but today’s policies and regulations don’t incentivize and reward those products and services that best serve students. As a result, philanthropic dollars are critical to help create the correct conditions such that those products that are efficacious and serve a higher end—student learning—are the ones that gain share.

As we’ve argued, public policy should reward those providers that best deliver student outcomes—and punish those providers that do not serve the public good.

There is one area, however, where I think philanthropic dollars should probably fund products and services directly: assessments. If we are going to have a system that pays providers based on how students do on outcome measures, we need robust assessments that are authentic and that people trust. The political incentives to create high-quality assessments, for a variety of reasons, aren't particularly strong, so having philanthropists invest dollars to create these assessments and continue to push innovation is critical.

This is why yesterday's announcement is so important: The William and Flora Hewlett Foundation will award a $100,000 prize to the designers of software that can reliably automate essay grading for state tests, with the aim of driving the testing of deeper learning. Open Education Solutions and The Common Pool designed the competition and will manage it.

The Hewlett Foundation’s leadership in creating better assessments to measure critical reasoning and writing is a big step forward—and its use of Kaggle, a platform for predictive modeling competitions, to host the competition is clever.

According to the press release, "The automated scoring competition intends to solve the longstanding problem of high cost and low turnaround of current testing [of] deeper learning such as student essays. The goal is to shift testing away from standardized bubble tests to tests that evaluate critical thinking, problem solving and other 21st century skills."
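To give a sense of how systems like this work: automated essay scorers typically extract numeric features from an essay (length, vocabulary diversity, and so on) and fit a statistical model to human-assigned scores, then use that model to score new essays. The sketch below is a deliberately minimal illustration of that idea in Python, using toy essays, toy features, and a one-variable least-squares fit; it is not the method of any actual competitor or scoring vendor.

```python
def features(essay: str):
    """Extract simple numeric features: word count and type/token ratio."""
    words = essay.lower().split()
    return [len(words), len(set(words)) / max(len(words), 1)]

def fit_1d(xs, ys):
    """Ordinary least squares for y = a*x + b on a single feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Toy training set of (essay, human score); longer essays happen to
# have earned higher scores here, so word count is a usable signal.
train = [
    ("the cat sat", 1.0),
    ("the cat sat on the mat and looked around", 2.0),
    ("the quick brown fox jumps over the lazy dog near the river bank today", 3.0),
]
xs = [features(essay)[0] for essay, _ in train]  # word count only
ys = [score for _, score in train]
a, b = fit_1d(xs, ys)

def predict(essay: str) -> float:
    """Score a new essay with the fitted model."""
    return a * features(essay)[0] + b

print(predict("a short essay about cats and dogs"))
```

Real systems use far richer features (spelling, syntax, topical vocabulary) and stronger models, and the hard research problem the competition targets is making such scores agree reliably with human graders on open-ended writing.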

In addition, the competition is being conducted with the support of the two state testing consortia that are currently designing the next-generation assessments for the Common Core. Having this buy-in and collaboration gives the competition serious validity and the potential to have real impact.

-Michael Horn




Comment on this article
  • Gary Cheng says:

    Interesting article, Michael. Have you seen this recent post in EdWeek’s Curriculum Matters Blog? http://blogs.edweek.org/edweek/curriculum/2012/01/curriculum_the_missing_ingredi.html. Author Beverlee Jobrack just published a book making what seems like a strong case for exactly what you’re saying at the beginning of your post—we need to create a smarter demand in the marketplace for more *effective* products. Once effectiveness (on student outcomes) is the competitive issue, then the organizations that create products and services will quickly follow with more and more effective products/services. It looks like you two are very much on the same page. (I’m in complete agreement, too, by the way.) Thanks for your article.

  • Michael B. Horn says:

    I hadn’t seen it. This is quite helpful. Thanks for pointing me to it.
