I’ve long had a sneaking suspicion that DARPA-produced AI bots are editing education research journals. Crazy, you say? Well, it would certainly explain the esteemed Review of Research in Education’s annual call for proposals on “Equitable Educational Systems That Cultivate Thriving.” (They’re due this week, if you were wondering.) I mean, the call is pretty clearly the handiwork of a poorly trained AI, with the “editors” explaining that they’re seeking . . .
Scholarly work that provides critical perspectives on educational equity, wrestling with the ambiguities, paradoxes, and tensions associated with its conceptualization and its historical and everyday applications . . . [and] the different ways in which we conceptualize equity to formulate a robust multifaceted definition and advance policies and practices that build capacities of the institutions, families, and communities in which children and youth are located.
“Why would DARPA test AI bots on education journals?” you ask. C’mon, now. The publications are already incomprehensible, and nobody reads them. What a brilliantly innocuous place to pilot faulty tech. In this case, though, the “editors” of Review of Research in Education stumbled. They penned an extended seven-page project description that gave the game away. That much material lets us pull verbatim excerpts and see whether they write like humans . . . or glitchy AI. (To quote Dave Barry, “I’m not making this up.” Really.)
This is primitive AI, right? Maybe a defective early version of ChatGPT? You needn’t be a conspiracy theorist to doubt that living, breathing human beings produced this stuff.
Look, while I get the argument that a barely read publication is a harmless place to pilot defective AI, I think that’s naïve. Education, truth, and the search for understanding matter. And we honor those things through clarity of word and thought, by enabling readers to grasp our meaning, weigh our claims, and appraise our assumptions. Transparency fosters healthy discourse and constructive debate.
An unintelligible word salad does something very different. It sows confusion. It obscures dubious claims. It treats words as tribal markers. And, along the way, it divides the world into those who have and haven’t learned the shibboleths buried in the garbled jargon. You know, I may be wrong that we’re dealing with low-grade AI here. After all, this is exactly the sort of thing that sophisticated, malicious AI would seek to do.
Uh-oh.
Frederick Hess is an executive editor of Education Next and the author of the blog “Old School with Rick Hess.”