Top Ten Takeaways: Common Assessments (Part 1 of 2)

The three-part series of interviews on the nation’s move to Common Core–aligned assessments was as edifying as I could’ve hoped. USED, PARCC, and Smarter Balanced offered meaningful information on the current state of play and clear indications of what’s on the horizon.

I’ve pulled out a “Top Ten Takeaways” from the exchange. Today, we’re posting #10–#6 (they’re in rank order, so #1 is most important). Next, we’ll post #5–#1.

10.   Competition with the testing industry is GAME ON!

In ways subtle and not, the responses sought to differentiate the consortia’s efforts from the testing industry.

It seemed like the Department’s interest was in drawing a line between the old and the new. Why’d they spend $330 million on new tests? Because, USED says, governors and state chiefs asked them to do so.

Why did states make that ask? “Because the market was not meeting their needs.”

According to the feds, the consortia are building “next-generation” tests that “will offer significant improvements directly responsive to the wishes of teachers and other practitioners: they will offer better assessment of critical thinking, through writing and real-world problem solving, and offer more accurate and rapid scoring.”

“We expect the consortia to develop assessment systems that are markedly better than current assessments.”

The implication is that the testing industry had come up short.

SB said its new tests would offer a “quality benefit”; SB’s transparency is “antithetical to the competitive nature of commercial test publishing”; states will now have more control over tests; and tests will have more high-quality items.

PARCC argued that “through the consortium, states are able to ensure a higher-quality assessment than any individual state could by itself.” If states drop out, they “will likely use lower quality tests to assess the CCSS.”

PARCC was also clear that the consortia had the ability to make the testing industry better: “The power of states working together is going to move and improve the entire testing industry” and “The consortia assessments are our best chance to move the testing industry towards innovation and quality.”

What does PARCC think about testing companies trying to steal away its members with big promises? “The state chiefs have been hearing this sales pitch for years, and they are wise to the ways of the traditional testing industry.”

Bam!

Have no doubt: This is true-blue competition.

I’m sure the consortia believe everything they’re saying, but make no mistake: they’re also talking down their competitors.

They’d probably had an inkling that the testing companies were quietly looking to pick off states. But last week’s hugely revealing Ed Week piece on ACT removed all doubt: the company declared, “We are Plan B.”

Ka-pow!

The testing companies’ capturing states would be a coup: beating out two consortia of states that were buoyed by federal money and given several years of lead time to get their offerings right. The consortia don’t want to lose members, and they certainly don’t want to have to explain why they got $330 million of taxpayer funds if the market was going to produce something states wanted without government money.

It seems the consortia are working overtime to maintain their respective market shares. When asked about the possibility that Florida and other states might leave, PARCC summed things up extremely well.

Yes, we think they’ll stay with us, “[b]ut we know there are no guarantees. That is why we are working hard to produce the highest-quality assessment that reflects the needs of PARCC states…Our job is to make sure that PARCC remains ‘Plan A’ for Florida and every other member state.”

Game on.

9.   Technology as a major issue

Both consortia concede that while some states are ready to administer online tests, others are not. Both are confident states will get there. (As a precaution, both are providing a paper-and-pencil option.)

To help the cause, SB announced that older operating systems, older processors, and limited memory will be sufficient to administer its assessments. Together with PARCC, it developed a “technology readiness tool” that allows schools and districts to track their progress.

Making the necessary infrastructure upgrades and procuring the needed devices is a huge lift for states. Moreover, online tests come with their own challenges.

As states list the pros and cons of staying in the consortia, tech issues will be front and center.

8.   The complexity and consequences of coordination and consensus

Both PARCC and SB discussed the difficulty of having so many cooks in the kitchen. Completing tasks requires many different actors across many different states, and countless stakeholders want to play a role. In PARCC’s words, “There are thousands of state leaders, local educators and postsecondary leaders, administrators and faculty who are engaged in developing the PARCC assessment system.”

SB wrote about the challenge of “responding to the intense—and legitimate—interest of so many diverse parties in this work.” Moreover, “Keeping this diverse array of interested parties informed about the complex and often highly technical work of building an assessment system has been more challenging than we originally imagined.”

This is noteworthy because these challenges have arisen before the high-stakes, state-level decisions just ahead. As states approach crucial go/no-go calls (budgeting for the new tests, ending contracts for existing tests), the gaps between the consortia’s decisions and each state’s preferred path will be magnified.

In other words, a number of states might opt out partly because they didn’t get their way on certain issues.

7.   Smarter Balanced has its act together

I follow the assessment transition closely, and even I didn’t realize how far SB had come. Perhaps they are just really good at telling their story, but I walked away from their submission convinced they are running a pretty tight ship.

They’ve hit their project milestones for delivering summative, interim, and formative assessments. The estimated cost for their full formative-interim-summative package is less than what most of their members currently pay.

This coming year, they’ll pilot 5,000 items and tasks with about a million students. This month, they plan to release a complete set of practice tests for each subject and grade level. They’ve done small trials already.

Finally, they believe all of their governing-board members are fully committed (that is, not flight risks). In a recent survey, all but one of their states indicated plans to use the full suite of tests; the other plans to use only the summative assessment.

Not too shabby.

6.   But what will be the quality of SB’s act?

In so many ways, Smarter Balanced gave the impression that their process has been (and therefore their final product may be) business-as-usual. Time and time again, I found myself saying, “Wow, this sounds traditional.”

Consider SB’s verbatim language:

  • “The process Smarter Balanced is using is very similar to the processes that states have been using for over a decade to create assessments for NCLB accountability”
  • “To date, our work has been supported through contracts with every one of the country’s large testing companies”
  • “The test-development process Smarter Balanced is using follows a sequence of steps that is familiar to all experienced assessment professionals.”

More than just using familiar processes, SB seems to have done everything possible to generate consensus among its countless stakeholders; such language runs throughout their response. Of course, there’s nothing wrong with this in principle (in fact, it’s laudable), but it does raise the specter of lowest-common-denominator-itis.

Take, for example, SB’s “innovative approach” to setting cut scores. Concerned that member states might not feel “adequately represented” by just participating in workshops, SB has created a crowd-sourcing mechanism so just about everyone can weigh in on what proficiency means. Will that lead to tough cut scores or widely accepted cut scores (BIG difference)?

Perhaps SB’s most revealing response along these lines is the following: “While the process that is being used to develop the Smarter Balanced assessment system would be familiar to anyone who has ever built a test, what is unique about Smarter Balanced is the bringing together of a large and diverse array of talent committed to making each element of the system ‘best in breed.’”

That sounds to me like things aren’t going to be different so much as consensus-based and better (though my colleague Kathleen Porter-Magee might question the “better” part).

Maybe this is the right course of action. It’ll keep people together and keep everything on pace. The product will probably be evolutionary, not revolutionary.

I guess my overall take is this: I’m certainly more confident than before that SB will successfully deliver something reputable and on-time. For that, they deserve a tip of the hat.

However, I just find it hard to make the case, based on what I read from them, that it will be “next generation.”

But I may be wrong. Time will tell.

-Andy Smarick

This blog entry first appeared on the Fordham Institute’s Common Core Watch blog.

Part 2 of Top Ten Takeaways appears here.
