The Trouble With Economists

Some of my best friends are education economists. That’s right. Economists have added a whole lot to the education discourse in the past decade. They’ve shed light on dubious assumptions and frequently brought a healthy rigor, one that was too often missing in the ’80s and ’90s.

But economists bring pathologies of their own. These were highlighted in a letter that the Washington Post published earlier this month from Nobel Prize-winning economist James Heckman. Irate that columnist George Will had questioned the benefits of early childhood education in a throwaway line in paragraph 15 of an op-ed, Heckman wrote, “Those benefits, quoted by President Obama this year, come from my evidence-based analysis of more than 30 years of data from the Perry Preschool program… It is as good a trial for effectiveness as those we currently rely on to evaluate prescription and over-the-counter drugs.” Heckman went on to specify that the return on early childhood spending is “7 percent to 10 percent per child per year” and to dismiss Will as “indifferent to information.”

The thing is, unlike, say, over-the-counter drugs, preschool programs are hard to replicate with fidelity, or in such a way that each additional preschool student gets the anticipated benefit. The benefits of a therapeutic regimen are fairly straightforward to replicate when it’s a matter of administering a given drug to identifiable patients under specified conditions. It’s not similarly clear just what the Perry Preschool “intervention” was. Is it just going to any preschool? (Nah. Even Heckman doesn’t think so.) He thinks it’s going to a “high quality” preschool. Okay… but what exactly does that mean, and how readily can we ensure that the preschools being funded are high quality and not low quality? That’s where things get sticky.

After all, as with any number of heralded pilots that have disappointed at scale, it’s possible that the benefits of the boutique Perry program were due to an enthusiastic staff, high-touch attention from experts hoping to see positive results, community buy-in, the Hawthorne effect, or any number of tough-to-scale factors. Decades of evaluations of Head Start have found nothing like the benefits of the tiny Perry program. Nonetheless, seeking a universal truth, Heckman asserts that positive results from a small-scale trial, conducted half a century ago, “prove” what policymakers should do with hundreds of billions today, never mind the political, organizational, or practical complexities of trying to replicate that success in the real world.

As the number and sway of edu-economists have grown in recent years, so has my concern with the pathologies they can bring. I’ve always been particularly sensitive to three.

First, they need outcomes they can measure clearly and precisely. This limits things pretty severely, and has fueled the reification of test scores and earnings data (although, in the case of Perry, the passage of decades made it possible to track a set of more interesting indicators). In their write-ups, some economists toss in a paragraph conceding that we care about other stuff besides test scores and earnings, but (lacking data points) those disclaimers tend to fizzle away without a trace.

Second, many economists have a funny habit of assuming the numbers they’re playing with are unimpeachable representations of reality. Now, some terrific scholars deserve special commendation for getting their hands dirty collecting data and acknowledging all the inevitable messiness. But plenty of their peers find it easier to ignore the complexities, or to simply reanalyze existing data sets without ever poking into the reliability and validity of the numbers themselves. Thus, results get reported as compelling because the statistical relationships are strong, even if the underlying numbers deserve further scrutiny.

Third, economists are trying to distill universal truths, like physicists or chemists. For their data to unearth such relationships, they need the effect of A on B to be steady over time or across programs. In the real world, however, I’m not at all sure that those effects are static. So, the effects of this vocational program may not tell us very much about the effects of that program, and the benefits of completing a course labeled Algebra I in 1978 may not tell us much about the benefits of doing so in 2013. Yet, the entire “science” of economics is predicated on the stability of these relationships.

Economists have a valuable role to play, and it’s one that they’re playing much more effectively today than used to be the case. But, as with Heckman, it’s easy for them to overestimate the finality of their findings and to extrapolate blithely without acknowledging the limits of their wisdom. It’s equally tempting for policymakers or advocates who like a given finding to play the same game, treating complex econometric analyses as proof positive. We’d do better to greet these analyses with less deference, more skepticism, and more questions about the practical implications.

-Frederick Hess

This blog entry first appeared on Rick Hess Straight Up.
