What Can We Learn about Learning?

Bror Saxberg, the chief learning officer of Kaplan, Inc., is a man for whom I have great respect. Whenever I have a question about the science behind learning, he is the first person I turn to. He keeps himself versed in the latest cognitive and neuroscience research and puts his multiple degrees to great use.

When he forwarded me his recent blog post, “What to learn from a learning grant process,” I dove in with some excitement. In it, he talks about his work helping to review science and math grant applications for the Institute of Education Sciences within the Department of Education, and he poses some bigger questions and comparisons with health care. It’s worth the read.

Given that he knows so much more than I do about these topics (this is not my area of expertise, even though I read about and am fascinated by the science behind learning), it is with some trepidation that I wade in to respond to his post. I’m not sure he’ll disagree with anything I say here, but I figure that I should tread carefully.

What struck me first is that among the five different types of projects that IES funds, none specifically focuses on the anomalies to “what works.” There are exploration grants, development grants, efficacy grants (think randomized control trials), scale-up grants, and assessment grants (all important, as we need more data!). But there are no grants per se that focus on the anomalies within randomized control trials or scale-up grants (although Bror tells me there is some scope within the IES grants to explore such moderating and mediating factors). That is, when we conduct a research trial and the “active ingredient” works here but not there, it would be useful to explore what it was about the circumstances of those two places that caused the results to differ. Or if the active ingredient on average produces, say, 70 percent better results than the control, could we run an enriched trial focused on those individuals for whom it didn’t work to learn what was different (and, ideally, move beyond simple correlations to understanding actual causation)? Although one active ingredient might be the most efficient way to learn something, perhaps certain students aren’t motivated to pursue it and therefore do not do it, whereas a less efficient pathway to learning the same thing is one those students will actually commit to, and it therefore proves more efficacious on the ground.

This is how breakthroughs in theory in general—not just learning theory—occur. Anomalies aren’t something to be explained away through statistics or claims of lack of fidelity in the implementation, but instead are to be welcomed, as, over time, they allow us to create circumstance-based theory—so we know what actions will lead to what outcomes in different circumstances and progress beyond the flawed notion of “best practices.”

The lack of this in education research worries me when it comes to things like digital learning and the What Works Clearinghouse. Why? Although the notion behind the What Works Clearinghouse is important and a step forward, because its methodology focuses on gold-standard randomized control trials, I worry that it masks the customized promise of digital learning. Just because a given intervention doesn’t work better on average than the control doesn’t mean that it isn’t working significantly better for a given subset of the population or under a particular circumstance, and vice versa for the control as well.* Randomized control trials are important (and an improvement over where we have been), but they are not enough and, if seen as such, can sometimes lead us astray in education, or in health care for that matter, as we move away from empirical medicine toward personalized medicine.
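To make that point concrete, here is a minimal sketch with entirely hypothetical numbers, not drawn from any study discussed here, of how an intervention whose average effect in a trial is roughly zero can still help one subgroup substantially while leaving another worse off:

```python
# Toy illustration (hypothetical numbers): an average-only readout of a trial
# can hide large, opposite effects in different subgroups of students.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000                        # simulated students (hypothetical)
subgroup = rng.integers(0, 2, n)  # 0 = subgroup A, 1 = subgroup B

# Simplification: each simulated student gets both a "control" and a
# "treated" outcome, rather than being randomized to one arm.
control = rng.normal(70, 10, n)

# The intervention adds 5 points for subgroup A and subtracts 5 for subgroup B.
effect = np.where(subgroup == 0, 5.0, -5.0)
treated = rng.normal(70, 10, n) + effect

print(f"Average treatment effect: {treated.mean() - control.mean():+.2f}")
for g, name in [(0, "subgroup A"), (1, "subgroup B")]:
    diff = treated[subgroup == g].mean() - control[subgroup == g].mean()
    print(f"Effect for {name}: {diff:+.2f}")
```

Run as written, the overall difference comes out near zero even though each subgroup shifts by roughly five points in opposite directions, which is exactly the kind of anomaly an averages-only verdict of “doesn’t work” would hide.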

Which brings me to Bror’s next series of questions: there is actually a lot of good research on the science of learning out there already, so why aren’t there organizations dedicated to doing the larger and vastly more complicated trials needed to scale it up into something meaningful? To answer this, he turns to the world of health care, where such organizations do exist. He points out that they exist because there is a vast market desperate to pay for the successful products that emerge from such research trials, which makes the research profitable, whereas no such market exists in education.

People who have read my writing before know that I broadly agree with this assertion, although there is one subtlety worth pointing to. The health care industry today profits from sickness, not from health, as my colleagues Clayton Christensen and Jason Hwang wrote in The Innovator’s Prescription. Therefore, as Bror points out, there is a huge market for blockbuster drugs, which has historically financed the large-scale research trials to which he refers. But there is a considerably smaller market for those profiting from wellness, which is where many think we need to head to improve our health care system and make it more affordable. The system as it is constructed today doesn’t reward this. What’s interesting is that integrated health providers, those that integrate the payer and provider functions, seem to have created a system where their incentives in fact align around benefiting from people being healthy, as Dr. Hwang recently wrote. These players aggressively prioritize preventive medicine and the like, because doing so keeps costs down and improves outcomes. What this suggests is that incentives do matter, and if we can get them right in education, we might see a sea change.

Instead, consider the system of public education today, where there is no real mechanism that concerns the industry with outcomes and productivity and therefore encourages it to demand things that will help it improve. We have written about moving the system from a focus on inputs to outputs and finally to outcomes so as to fix this demand side. But we must admit that even as there have been changes over the past couple of decades and people have become more concerned with outcomes, as Tom Vander Ark writes in his new book Getting Smart, all we have really done is layer these new outcome expectations on top of a system laden with prescriptive commands about how to do things on the ground. And this prevents the findings from research that do seem to work from getting the attention they deserve on the ground.

Education providers don’t really benefit from being better, and they have largely not gone out of business for being worse. At the end of his post, Bror suggests (not in our language, and perhaps without evidence, so I’m using my own words here) that Kaplan has now assembled such an integrated system, whereby it will benefit from applying evidence-based principles to improve outcomes and drive down costs. I’m not sure the incentives are truly aligned properly for this to happen, but it’s worth staying tuned to see if, and hope that, he is right.

-Michael Horn

*To take it a step further, I worry even more about researchers who suggest we need to do more research on blanket things like “online learning.” That framing ignores the incredible diversity within online learning, and the causal mechanism in most circumstances is not whether the learning is online or not. So, for starters, (1) the question is virtually meaningless; (2) we know that online learning already serves some students just fine, as they have graduated and are doing well in life; and (3) by paying based in part on outcomes, we can provide proper opportunities based on different needs.

This post originally appeared at Forbes.com
