The Five-Tool Policy Scholar

This post also appears at Rick Hess Straight Up.

Later today I’ll be publishing the first annual RHSU Edu-Scholar Public Presence Rankings. First, I want to take a few moments to explain what those rankings are about and how they were generated.

I start from a simple premise: recognition matters. I think people tend to devote more time and energy to those activities that are acknowledged and lauded. The academy today does a passable job of recognizing good disciplinary scholarship but a pretty mediocre job of recognizing scholars with the full range of skills that enable them to really contribute to the policy debate. This may have minimal import for scholars of Renaissance poetry, but it’s a real problem when trying to encourage accomplished researchers or promising young academics to wade responsibly into public debates.

In baseball, the ideal is the “five-tool” ballplayer. This is a player who can run, field, throw, hit, and hit with power. Ballplayers can be great if they excel at just one or a few of these, but there’s a special appreciation for those with a full suite of skills.

Among scholars who do policy-relevant research, there’s a crying need for something similar. The engaged policy scholar is a “five-tooler” in her own right. As I see it, the extraordinary policy scholar excels in five areas: disciplinary scholarship, policy analysis and popular writing, convening and quarterbacking collaborations, providing incisive commentary on topical questions, and public speaking. After all, it’s the scholars who are skilled in most or all of these areas who can cross boundaries, foster crucial collaborations, and bring research into the world of policy in smart and useful ways. The academy, though, treats many of these skills as an afterthought–if not an outright blemish on a scholarly record! And while foundations fund evaluation, convening, policy analysis, and dissemination, very few show any inclination to make any particular effort to develop multi-skilled scholars or support the whole panoply of activity.

Today, there are substantial professional rewards for scholars who do hyper-sophisticated, narrowly conceived research, but little institutional recognition, acknowledgment, or support for scholars who carry their efforts into the public discourse. One result is that the public square is filled by impassioned advocates, while silence reigns among those who may be more versed in the research or more likely to recognize complexities and hard truths. Indeed, as I know only too well, wading into the public debate can anger friends and call forth vituperative personal attacks. One can hardly blame academics for avoiding all this unpleasantness by remaining swaddled in the pleasant irrelevance of the ivory tower. One small way to encourage academics to step into the fray and to push back on the academic norms fueling the status quo is, I think, to try harder to recognize the value of engaging in public discourse and the scholars who do so.

With that aim, the Public Presence rankings will offer one way to gauge the broad impact of scholars who focus on education and education policy, and how they are shaping the public discourse. Public Presence scores reflect three things: articles and academic scholarship, books of a more or less academic bent, and visibility in the new and old media. Broadly speaking, the scores draw about 40% on scholarly influence in terms of bodies of work and citation counts, 25% on book authorship and current book success (which frequently overlaps with scholarly work), and about 35% on presence in new and old media, as well as in the Congressional Record.

Readers will note that these rankings ignore things like teaching, mentoring, and community service in order to focus on a scholar’s public impact. Such is the nature of things. These scores are not imagined as a summative measure of a scholar’s contribution to teaching and knowledge. Rather, they are a counterpart to traditional publication-heavy measures of research productivity–or scores on RateMyProfessor.com–which also tell us something, but don’t offer much insight into how scholars in a field of public concern are contributing to the public discourse.

The RHSU Public Presence Scoring Rubric

I opted to employ metrics that are publicly available, readily comparable, and replicable by third parties. This obviously limited the nuance and sophistication of the measures. More specifically, the scoring is determined as follows:

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, commonly used technique for this is to rank the scholar’s works from most cited to least cited and find the largest number h such that the h most-cited works have each been cited at least h times. For instance, a scholar who had 10 works that were each cited at least 10 times, but whose 11th most frequently cited work was cited just 9 times, would score a 10. A scholar who had 27 works cited at least 27 times, but whose 28th work was cited 27 times or fewer, would receive a 27. A new assistant professor will typically have a number like zero or one, while the top scorer in the rankings posted a 68. The search was conducted using the scholar’s name under the “author” filter in an advanced search in Google Scholar, with the search limited to the “Business, Administration, Finance, and Economics” and “Social Sciences, Arts, and Humanities” categories. A hand-search culled out works by other, similarly named individuals. While Google Scholar has its flaws and is less precise than more specialized citation databases for such a search, it has the virtues of being multidisciplinary and publicly accessible. This category ultimately counted the most–amounting to between 25% and 60% of the score for most scholars–as it’s a quick way to gauge both the expanse and influence of a scholar’s body of work.
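The citation tally described above can be sketched in a few lines (a minimal illustration; the function name is my own, and the actual scoring also involved the Google Scholar search and hand-culling described in this section):

```python
def google_scholar_score(citation_counts):
    """Largest h such that the scholar's h most-cited works have each
    been cited at least h times (the familiar h-index)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` works each have at least `rank` citations
        else:
            break
    return h

# The post's first example: 10 works cited at least 10 times,
# with an 11th cited just 9 times, scores a 10.
print(google_scholar_score([10] * 10 + [9]))  # -> 10
```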

Book Points: An author search on Amazon was used to tally the number of books a scholar had authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, and a half-point for co-authored books where they were not the lead author or for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial was used to avoid duplications with authors who had the same name, e.g. “David Cohen” became “David K. Cohen,” and “Deborah Ball” became “Deborah Loewenberg Ball.”) The “format” was “Printed Books” so as to avoid double-counting books which are also available as e-books–and, as in each category, a hand-search sought to guard against double-counting and to ensure an accurate score. Amazon-available reports and articles were excluded, as was any source listed as “out of print”–only published, available books were included. The high score in this category was 35.5, but more than 90% of scholars scored between zero and 20.
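The book-scoring arithmetic amounts to a simple weighted tally (a sketch; the role labels are my own shorthand, not categories from the Amazon search itself):

```python
def book_points(roles):
    """2 points per single-authored book, 1 per co-authored book as
    lead author, and half a point per non-lead co-authored book or
    edited volume, per the rubric."""
    points_per_role = {
        "solo": 2.0,      # single-authored
        "lead": 1.0,      # lead author on a co-authored book
        "coauthor": 0.5,  # co-authored, not lead
        "edited": 0.5,    # edited volume
    }
    return sum(points_per_role[role] for role in roles)

# One solo book, one lead-authored book, and one edited volume:
print(book_points(["solo", "lead", "edited"]))  # -> 3.5
```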

Highest Amazon Ranking: This category scores the author’s highest-ranked book on Amazon, as of December 15. That book’s sales rank was subtracted from 200,000, and the result was divided by 10,000 to derive a point total of somewhere between zero and twenty. This score, due to the nature of Amazon’s ranking algorithm, is highly volatile (a book can rise and fall with substantial rapidity) and necessarily biased the results in favor of more recent works. Books published in late 2010 had a decided advantage over books published earlier in the year, for instance. Similarly, a book may have been very influential in the 1990s, and while that influence might show up in citations and the likelihood that a scholar is quoted in newspapers, the book would likely fetch its author no points on this front in 2010. The result was a decidedly imperfect way to readily gauge the impact of recent books, but one that appeared to add value and convey relevant information. About a third of the scholars examined, and eight out of ten top finishers, scored points in this category.
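The rank-to-points conversion works out to the following (a sketch of the stated formula; consistent with the zero-to-twenty range, I have assumed books ranked worse than 200,000 simply score zero):

```python
def amazon_rank_points(best_sales_rank):
    """(200,000 minus the book's best Amazon sales rank) / 10,000,
    clamped to the 0-20 range described in the rubric."""
    return max(0.0, min(20.0, (200_000 - best_sales_rank) / 10_000))

print(amazon_rank_points(100_000))  # a mid-pack book -> 10.0
print(amazon_rank_points(250_000))  # outside the top 200,000 -> 0.0
```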

Education Press Mentions: The total number of times the scholar was quoted or mentioned in Education Week or the Chronicle of Higher Education between January 1 and December 15, 2010. The search was conducted using each scholar’s first and last name. To norm the value of this category, this figure was divided by 2 to calculate Ed Press points. Scores in this category ranged from zero to 19, with most falling between zero and ten.

Blog Mentions: Based on a search using Google Blogs, the number of times a scholar was quoted, mentioned, or otherwise discussed in blogs between January 1 and December 15, 2010. The search was conducted using each scholar’s name, plus their affiliation (e.g. “Bill Smith” and “Rutgers”), so as to avoid accidentally counting other individuals with similar names. Because blogging can tend towards the informal, the blog search also included the most common diminutive for a given scholar (e.g., “Rick Hanushek” as well as “Eric Hanushek”; “Pat McGuinn” as well as “Patrick McGuinn”). To norm the value of this category, points were calculated by dividing this figure by four. The high score in this category was 51.3, but more than 90% of scholars scored between zero and 20.

Newspaper Mentions: Based on a search using Lexis Nexis, the number of times a scholar was quoted or mentioned in U.S. newspapers between January 1 and December 15, 2010. Like Blog Mentions, the search was conducted using each scholar’s name plus their affiliation. To norm the value of this category, points were calculated by dividing this figure by four. Scores ranged from zero to 14, with most falling between zero and ten.

Congressional Record Mentions: We conducted a simple name search in the Congressional Record for 2010 to determine whether a given scholar was called to testify or if their work was referenced by a member of Congress. The reference or testimony had to have occurred on or before December 15. If a scholar was included in either capacity, they received five points in this category.
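Assuming the six categories are simply summed (which the rough 40/25/35 weighting described earlier suggests), the whole rubric can be sketched as follows; the function and parameter names are my own shorthand:

```python
def public_presence_score(citation_score, book_points, amazon_points,
                          ed_press_mentions, blog_mentions,
                          newspaper_mentions, in_congressional_record):
    """Total of all six rubric categories, applying the divisors given
    above: education press mentions / 2, blog and newspaper
    mentions / 4, and a flat 5 points for any appearance in the
    Congressional Record."""
    return (citation_score
            + book_points
            + amazon_points
            + ed_press_mentions / 2
            + blog_mentions / 4
            + newspaper_mentions / 4
            + (5 if in_congressional_record else 0))

# A hypothetical scholar: citation score 30, 8 book points, a book
# ranked well on Amazon (12 points), 6 Ed Week/Chronicle mentions,
# 20 blog mentions, 8 newspaper mentions, and a Congressional mention.
print(public_presence_score(30, 8, 12, 6, 20, 8, True))  # -> 65.0
```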

There are obviously lots of provisos in making sense of the results. Different disciplines approach books and articles differently. Scholars of K-12 and higher education may have differing opportunities to engage in the public square. Senior scholars have obviously had more opportunities to build bodies of cited work and to author books.

More generally, some readers may have more use for some of these categories than for others. That’s fine by me. The whole point is to encourage discussion and debate about the nature of responsible public engagement, how different folks fare, how much these things matter, and how to accurately measure a policy scholar’s value.

Two questions that are sure to come up: First, given that the metrics are public, can somebody game this rubric in future years? Second, am I concerned that this exercise will encourage academics to be publicity hounds? As far as gaming, I’m not at all concerned. If scholars (against all odds) are motivated to write more relevant articles, pen more books that might sell, and to be more aggressive about communicating findings and insights in a fashion that attracts interest from new and old media, I think that’s great. That’s not “gaming” the system, it’s just good public scholarship. As far as encouraging academics to work harder at communicating outside the academy–there’s obviously a tipping point where public engagement becomes sleazy P.R., but most scholars are so unbelievably far from that point that it’s not currently a substantial concern.

A final note. The rankings will feature 54 university-based edu-scholars who are widely regarded as having some public presence. However, this list is in no way exhaustive. There is a wealth of other faculty addressing public questions of education or education policy, and many of them would grade out quite highly on this rubric. The list is nothing more than a cross-section of faculty from various disciplines, institutions, generations, and areas of inquiry. For those interested in scoring additional scholars, it should be straightforward to do so using the metrics sketched above. Indeed, by design, anyone with an Internet connection should be able to generate a comparative rating for a given scholar in no more than 15-20 minutes. (At this end, for his careful work and invaluable advice on how to pull this together, I owe a big shout-out to my indefatigable and eagle-eyed research assistant, Daniel Lautzenheiser.)

Later we’ll run the rankings and you can make of the results what you will.

-Frederick Hess
