The 2013 RHSU Edu-Scholar Public Presence Scoring Rubric

Tomorrow, I’ll be unveiling the 2013 Rick Hess Straight Up Edu-Scholar Public Presence rankings. Today, I want to run through the scoring rubric. The Edu-Scholar rankings employ metrics that are publicly available, readily comparable, and replicable by third parties. This obviously limits the nuance and sophistication of the measures, but such is life.

There were a few modifications this year. The most significant were the addition of the “Klout score” and the decision to cap the total points available in each category, yielding a maximum possible score of 200. No one achieved a 200. Surveying this year’s results: a score of around 100 qualified for top-ten status; an 85 cracked the top twenty; and a 60 sufficed to make the top fifty.

Scores are calculated as follows:

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, common way to measure the breadth and impact of a scholar’s work is to order works by how often each is cited and then find the largest number h such that h works have each been cited at least h times (in essence, the familiar h-index). For instance, a scholar who had 10 works that were each cited at least 10 times, but whose 11th most-frequently cited work was cited just 9 times, would score a ten. A scholar who had 27 works each cited at least 27 times, but whose 28th work was cited 27 times or fewer, would receive a 27. The measure reflects the fact that bodies of work matter for influencing what others think and how issues are understood. An assistant professor will typically have a number in the low single digits, while veteran scholars may score a 40 or higher. The search was conducted on December 11, using the advanced search “author” filter in Google Scholar within the “Business, Administration, Finance, and Economics” and “Social Sciences, Arts, and Humanities” categories. A hand search culled out works by other, similarly named individuals. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. Points were capped at 50; if a scholar’s score exceeded that, they received a 50. This score offers a quick way to gauge both the expanse and influence of a scholar’s body of work.
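For anyone replicating the tally, here is a minimal sketch in Python; the function name and the sample citation counts are my own, purely illustrative:

```python
def google_scholar_score(citation_counts, cap=50):
    """Compute the capped citation score described above: the largest
    number h such that h works have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return min(h, cap)

# The first example from the text: 10 works cited at least 10 times,
# with the 11th cited just 9 times, yields a score of 10.
example = [40, 33, 28, 25, 22, 18, 15, 13, 11, 10, 9, 4, 2]
print(google_scholar_score(example))  # -> 10
```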

Book Points: An author search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, a half-point for a co-authored book where they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., “David Cohen” became “David K. Cohen,” and “Deborah Ball” became “Deborah Loewenberg Ball.”) The “format” field was set to “Printed Books” so as to avoid double-counting books that are also available as e-books. This obviously means that books released only as e-books are omitted. However, circa 2012, that seemed a modest price to pay to avoid double-counting and reduce confusion, given that very few relevant books are, as yet, released solely as e-books (though this is likely to change before long). In each case, a hand search helped ensure an accurate score. Amazon-available reports were excluded, as was any “out of print” volume; only published, currently available books were included. The search was conducted December 11-12. We capped this score at 25 (only five scholars reached this limit). Most scholars scored between zero and 20.
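As a rough sketch of the arithmetic (the role labels and the sample scholar are hypothetical, not part of the rubric itself):

```python
# Points per book, per the rubric: 2 for single-authored, 1 for
# lead-authored, 0.5 for co-authored (not lead), 0.5 for edited volumes.
BOOK_POINTS = {
    "single_authored": 2.0,
    "lead_coauthored": 1.0,
    "coauthored": 0.5,
    "edited": 0.5,
}

def book_points(books, cap=25):
    """Sum points for a list of authorship roles, capped at 25."""
    total = sum(BOOK_POINTS[role] for role in books)
    return min(total, cap)

# A hypothetical scholar with two solo books, one lead-authored
# collaboration, and one edited volume: 2 + 2 + 1 + 0.5 = 5.5 points.
print(book_points(["single_authored", "single_authored",
                   "lead_coauthored", "edited"]))  # -> 5.5
```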

Highest Amazon Ranking: The author’s highest-ranked book on Amazon, as of December 18-19, was used to calculate this score. The highest-ranked book’s “Amazon best sellers rank” was subtracted from 400,000, and that figure was divided by 20,000. This yielded a point total between zero and 20. Given the nature of Amazon’s ranking algorithm, this score is somewhat volatile and biased in favor of more recent works. For instance, a book may have been very influential a decade ago, and may continue to boost citation counts and a scholar’s larger profile, but produce few or no ranking points this year. The results are decidedly imperfect, but they convey real information about whether a scholar has penned a book that is shaping the conversation. About a third of the scholars examined, and sixteen of the top twenty, scored points in this category.
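The conversion is simple enough to express directly (a sketch; the sample ranks are invented for illustration):

```python
def amazon_points(best_sellers_rank):
    """Convert the highest-ranked book's Amazon best sellers rank into
    points: (400,000 - rank) / 20,000, floored at zero to match the
    stated zero-to-20 range."""
    return max(0.0, (400_000 - best_sellers_rank) / 20_000)

# A book ranked 50,000th: (400,000 - 50,000) / 20,000 = 17.5 points.
print(amazon_points(50_000))   # -> 17.5
print(amazon_points(500_000))  # -> 0.0 (ranks past 400,000 earn nothing)
```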

Education Press Mentions: This reflects the total number of times the scholar was quoted or mentioned in Education Week or the Chronicle of Higher Education between January 1 and December 13-14. The search was conducted using each scholar’s first and last name. The number of appearances was divided by 2 to calculate Ed Press points. Ed Press points were capped at 30 (only one scholar reached this limit), though most scholars scored between zero and ten.

Blog Mentions: This reflects the number of times a scholar was quoted, mentioned, or otherwise discussed in blogs between January 1 and December 27. The search employed Google Blogs. The search terms were each scholar’s name and affiliation (e.g., “Bill Smith” and “Rutgers,” not “Rutgers University,” since bloggers often use the shorthand form of an affiliation). Using university affiliation serves a dual purpose: it avoids confusion due to common names and ensures that scores aren’t padded by a scholar’s own posts (which generally don’t include affiliation). If a scholar is provoking discussion in the blogosphere, these figures will reflect that. If a scholar is mentioned sans affiliation, that mention is omitted here. (If anything, this may tamp down the scores of well-known scholars for whom university affiliation may seem unnecessary. However, since the Darling-Hammonds, Ravitches, and Hanusheks still fare just fine, I’m good with that.) Because blogging is often informal, the search also included common diminutives (e.g., “Rick Hanushek” as well as “Eric Hanushek”). Points were calculated by dividing total mentions by four. Scores were capped at 30. Eight scholars hit the 30-point cap, while the vast majority scored between zero and 20.

Newspaper Mentions: A LexisNexis Academic search was used to determine the number of times a scholar was quoted or mentioned in English-language newspapers between January 1 and December 26-27. As with Blog Mentions, the search was conducted using each scholar’s name and affiliation. Points were calculated by dividing the total number of mentions by two. Scores were capped at 30. Six scholars hit the limit, while most scored between zero and 10.
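The three mention-based categories above all follow the same pattern: divide raw mentions by a category-specific divisor and cap the result at 30. A minimal sketch, with invented sample counts:

```python
def mention_points(mentions, divisor, cap=30):
    """Convert a raw mention count into capped category points."""
    return min(mentions / divisor, cap)

# Per the rubric: Ed Press mentions are divided by 2, blog mentions by 4,
# and newspaper mentions by 2, each capped at 30 points.
print(mention_points(18, divisor=2))   # Ed Press: 18 mentions -> 9.0
print(mention_points(100, divisor=4))  # Blogs: 100 mentions -> 25.0
print(mention_points(90, divisor=2))   # Newspapers: 45 pre-cap -> 30
```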

Congressional Record Mentions: We conducted a simple name search in the Congressional Record for 2012 to determine whether a scholar had testified before Congress or had their work referenced by a member of Congress. The tally was conducted on December 14. Qualifying scholars received five points.

Klout Score: A Twitter search determined whether a given scholar had a Twitter profile, with a hand search ruling out similarly named individuals. The score was then based on a scholar’s Klout score as of December 14. The Klout score is a number between 0 and 100 that reflects how often an individual is retweeted, mentioned, followed, listed, and answered. That total was divided by 10 to calculate the score, yielding a maximum of 10 points. If a scholar was on Twitter but did not have a Klout score, they scored a zero. Most scholars earned between zero and five points in this category.
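Putting it all together: the category caps (50, 25, 20, 30, 30, 30, 5, and 10) sum to the 200-point ceiling noted above. Here is a sketch of the overall tally; the category names are my own shorthand, and the sample scholar’s numbers are hypothetical:

```python
# Per-category caps from the rubric; they sum to the 200-point maximum.
CAPS = {
    "google_scholar": 50, "books": 25, "amazon": 20, "ed_press": 30,
    "blogs": 30, "newspapers": 30, "congressional": 5, "klout": 10,
}

def total_score(raw):
    """Sum capped category points. `raw` maps category names to pre-cap
    point totals (e.g., the Klout score already divided by 10, and
    congressional points as 0 or 5)."""
    return sum(min(raw.get(cat, 0), cap) for cat, cap in CAPS.items())

assert sum(CAPS.values()) == 200  # the stated maximum possible score

# A hypothetical profile landing near the ~100-point top-ten threshold:
print(total_score({"google_scholar": 42, "books": 12, "amazon": 17.5,
                   "ed_press": 9, "blogs": 8, "newspapers": 6,
                   "congressional": 5, "klout": 4.1}))  # -> 103.6
```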

The scoring is designed to recognize multidimensional scholars, with due respect for a broadly influential body of work. The metrics discount publications that have not been much cited and books that are no longer read or are out of print. The biggest change this year, capping each category, ensures that scholars who run up the score in a single category (like Google Scholar or Book Points) must also have an impact in other categories if they’re really to crack the top twenty.

There are clearly lots of provisos in making sense of the results. Different disciplines approach books and articles differently. Scholars of K-12 and higher education may have different opportunities to engage in the public square. Senior scholars have obviously had more opportunity to build a substantial body of work and thus to extend their influence (which is why the results unapologetically favor sustained accomplishment).

Moreover, readers may have more use for some categories than for others. That’s fine. The whole point is to encourage discussion and debate about the nature of responsible public engagement: who’s doing a particularly good job of it, how much these things matter, and how to accurately measure a scholar’s contribution.

Two questions sure to arise: Can somebody game this rubric? And am I concerned that this exercise will encourage academics to chase publicity? As for gaming, I’m not at all concerned. If scholars (against all odds) are motivated to write more relevant articles, pen more books that might sell, or be more aggressive about communicating their ideas and research in an accessible fashion, I think that’s great. That’s not “gaming,” it’s just good public scholarship. As for academics working harder to communicate beyond the academy: well, there’s obviously a point where public engagement becomes sleazy PR, but most academics are so immensely far from that point that I’m not unduly concerned.

A final note. Tomorrow’s rankings will feature 168 university-based edu-scholars who are widely regarded as having some public presence. However, this list is not intended to be exhaustive. There are many other faculty tackling education or education policy. Wednesday’s scores are for a prominent cross-section of faculty, from various disciplines, institutions, generations, and areas of inquiry, but they are not comprehensive. For those interested in scoring additional scholars, it should be straightforward to do so using the rubric above. Indeed, the exercise was designed so that anyone with an Internet connection can generate a comparative rating for a given scholar in no more than 15-20 minutes. (At this end, I owe a big shout-out to my indefatigable and eagle-eyed research assistant, Allie Kimmel, for her assiduous labor and invaluable advice, and to her talented colleague Chelsea Straus.)

– Rick Hess

This blog entry also appears on Rick Hess Straight Up.
