A few months back, I lamented the disconnect between academe and the nation’s educational leaders and policymakers, a disconnect with unfortunate consequences. I recently had a front-row seat for an illuminating display of it. Earlier this month, Paul T. von Hippel penned a terrific piece for Education Next explaining how exaggerated claims about the miraculous powers of tutoring can be traced to a dubious “two-sigma” effect postulated by psychologist Benjamin Bloom 40 years back.
Now, I’ve been writing about the promise of tutoring since at least 2013, so I’m no naysayer. But I’ve spoken in recent years to a lot of well-meaning educators and officials who’ve launched dramatic efforts based on grand assurances. Indeed, it’s all too evident that many educational policymakers, advocates, leaders, and techies got the “two-sigma” spiel stuck in their heads and that this has yielded dubious decisions and insufficient attention to implementation.
I saw in von Hippel’s piece a prime opportunity to help recalibrate expectations. So, last week, I penned a Forbes column observing that the post-pandemic tutoring boom may have been boosted by “suspect science.” I noted that rigorous research suggests the likely benefits are only a fraction (less than one-fifth) of what many educators and public officials imagined they’d been promised.
Well, a number of academics took umbrage at the Forbes column. Their complaint? The gist seems to have been that the “two-sigma” Bloom stuff is 40 years old, there’s more recent research that shows real (if substantially smaller) benefits from tutoring, and therefore it was misleading or mean-spirited to harp on von Hippel’s critique of Bloom. (Again, they made it seem like I was trying to besmirch tutoring, which I found odd given how much I’ve tried to look at the potential benefits of tutoring and what it takes to deliver it effectively here, here, here, and here.) The thinking seemed to be that, since there’s a good 2020 meta-analysis from the National Bureau of Economic Research that finds real benefits of tutoring (as I noted in the column), everyone already knows what the latest credible research has to say.
Talk about a charmingly detached view of things. While I get why academics might view a four-year-old NBER paper as something that should be common knowledge, it isn’t. And the various experimental studies that the meta-analysis incorporated? They’re not common knowledge either. Indeed, it’s safe to say that few (if any) educational leaders, advocates, or policymakers have read any of the stuff that my irate correspondents deemed widely known.
More to the point, there is still broad familiarity with the “two-sigma” claim, even if those who’ve absorbed it don’t know where it comes from. They just tend to assume it reflects state-of-the-art research. Von Hippel noted how Sal Khan used Bloom’s finding as the title of his 2023 TED Talk announcing the launch of his AI-driven tutor-bot, Khanmigo.
In a February episode of the wildly popular “Ben and Marc Show” podcast, tech investor Marc Andreessen asserted that “there’s one educational intervention technique that reliably yields what are called two-sigma better outcomes.” What was it? “One-on-one tutoring,” of course.
A few years back, the Bill &amp; Melinda Gates Foundation and the Chan Zuckerberg Initiative teamed up to pursue personalized-learning strategies that could boost math achievement. Drawing on Bloom, they wrote that their goal was, yep, “at least a two standard deviation improvement” in “mathematical proficiency.”
Where have these influencers and decision-makers gotten their two-sigma conviction? At conferences, in conversations with vendors, or on webinars with grantors and enthusiasts. Basically, it’s in the tap water.
There are a few useful lessons in all this.
First, academics need to acknowledge that having some expertise on a topic (like tutoring) doesn’t mean they know everything related to that topic. You think that everyone knows the effect size of tutoring may be more like 0.35 standard deviations than 2.0? Okay. I think you’re sorely mistaken, but I’d be curious to hear why you think that. Such an exchange, though, would require academics to concede that their familiarity with research design doesn’t necessarily mean they’re familiar with things like the reading habits of educators or public officials.
Second, it would be swell if more people in education were versed in the findings of valid, reliable research. But it’s also good to recognize that, unlike academics or think tankers, superintendents, tech developers, and legislators have jobs that may not allow a lot of time for perusing scholarly journals or NBER working papers. And they may not have a lot of training in research or statistics, making it tough for them to decipher the jargon. That’s why it’s useful for researchers to explain things again, and again, and again in accessible language. Indeed, it’d be nice if the academics irate about the attention I devoted to von Hippel and Bloom were to turn that same energy to educating the public about what tutoring can and can’t do.
Third, academics hoping to impact kids and schools would do well to focus more on what’s understood and less on what’s been reported. Look, I wholly get the desire to imagine that once something has been written, it’s known. But that’s not the way the world works. Several scholars seemed genuinely offended that I’d focused on Bloom’s “suspect science” when there’s newer, better, more relevant research. Well, they know that newer research. I like to think I’m reasonably familiar with it. But you know who doesn’t know it? Most of the people who make the decisions that matter for kids and schools.
I found the whole episode to be an illuminating little moment.
Frederick Hess is an executive editor of Education Next and the author of the blog “Old School with Rick Hess.”