The 2018 RHSU Edu-Scholar Public Influence Scoring Rubric

Tomorrow, I’ll be unveiling the 2018 RHSU Edu-Scholar Public Influence Rankings, honoring the 200 university-based scholars who had the biggest influence on educational practice and policy last year. Today, I want to run through the methodology used to generate those rankings.

Given that there are well over 20,000 university-based faculty in the U.S. who are tackling education, simply making the Edu-Scholar list should be deemed an accomplishment in its own right. So, who made the list? Scholars are eligible if they are university-based (meaning they hold a formal university affiliation) and focus primarily on educational questions. The rankings include the top 150 finishers from last year, augmented by 50 “at-large” additions named by a selection committee of 31 disciplinarily and intellectually diverse scholars. The selection committee (composed of members already assured a bid by dint of finishing in last year’s top 150) nominated and then selected the final 50. (In those cases where an automatic qualifier is no longer affiliated with an American university, typically due to retirement, the committee selects additional “at-large” names.)

I’m indebted to the committee members for their assistance and would like to take a moment to acknowledge the members of the 2018 RHSU Selection Committee. They are: Deborah Ball (U. Michigan), Camilla Benbow (Vanderbilt), Linda Darling-Hammond (Stanford), Susan Dynarski (U. Michigan), Susan Fuhrman (Columbia), Dan Goldhaber (U. Washington), Sara Goldrick-Rab (Temple), Jay Greene (U. Arkansas), Eric Hanushek (Stanford), Andy Hargreaves (Boston College), Shaun Harper (USC), Doug Harris (Tulane), Jeff Henig (Columbia), Tom Kane (Harvard), Sunny Ladd (Duke), Gloria Ladson-Billings (U. Wisconsin), Marc Lamont Hill (Morehouse), Susanna Loeb (Stanford), Pedro Noguera (UCLA), Robert Pianta (U. Virginia), Jonathan Plucker (Johns Hopkins), Morgan Polikoff (USC), Stephen Raudenbush (U. Chicago), Jim Ryan (Harvard), Marcelo Suarez-Orozco (UCLA), Jacob Vigdor (U. Washington), Kevin Welner (CU Boulder), Marty West (Harvard), Daniel Willingham (U. Virginia), Yong Zhao (Kansas), and Jonathan Zimmerman (U. Penn).

Okay, so that’s how the list was compiled. How were the scholars ranked? Each scholar was scored in nine categories, yielding a maximum possible score of 200—although only a handful of scholars actually cracked 100.

Scores are calculated as follows:

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, common way to measure the breadth and impact of a scholar’s work is to rank their works in descending order of how often each is cited and then identify the largest number of works that have each been cited at least that many times. (This is known in the field as a scholar’s “h-index.”) For instance, a scholar who had 20 works that were each cited at least 20 times, but whose 21st most-frequently cited work was cited just 10 times, would score a 20. The measure recognizes that bodies of scholarship matter greatly for influencing how important questions are understood and discussed. The search was conducted using the advanced search “author” filter in Google Scholar. A hand search culled out works by other, similarly named individuals. For those scholars who have created a Google Scholar account, their h-index was available at a glance. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. Points were capped at 50. This measure offers a quick way to gauge the expanse and influence of a scholar’s work. (This search was conducted on December 13.)
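For readers who want to reproduce this tally, here is a minimal Python sketch of the h-index arithmetic and the 50-point cap. The function names and citation counts are illustrative, not part of the rubric; the actual figures came from hand-checked Google Scholar searches.

```python
def h_index(citation_counts):
    """Largest h such that h works have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def google_scholar_points(citation_counts, cap=50):
    """Google Scholar score: the scholar's h-index, capped at 50 points."""
    return min(h_index(citation_counts), cap)

# The example from the text: 20 works cited at least 20 times each,
# with the 21st most-cited work cited only 10 times, scores a 20.
example_counts = [25] * 20 + [10]
print(google_scholar_points(example_counts))  # 20
```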

Book Points: An author search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, a half-point for co-authored books in which they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., “David Cohen” became “David K. Cohen.”) We searched only for “Printed Books” (one of several searchable formats) so as to avoid double-counting books that are also available as e-books. This obviously means that books released only as e-books are omitted. However, as matters stand, few relevant books are released solely as e-books (this will likely change before long, but we’ll cross that bridge when we come to it). “Out of print” volumes were excluded, as were reports, commissioned studies, and special editions of magazines or journals. This measure reflects the conviction that books can influence public discussion in an outsized fashion. Book points were capped at 20. (This search was conducted on December 14.)
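A minimal sketch of the book-point arithmetic, with hypothetical counts; the actual counts came from the Amazon “Advanced Books Search” described above.

```python
def book_points(single_authored, coauthored_lead, coauthored_other, edited, cap=20):
    """Book points: 2 per single-authored book, 1 per co-authored book as lead
    author, 0.5 per other co-authored book, 0.5 per edited volume, capped at 20."""
    raw = (2 * single_authored
           + 1 * coauthored_lead
           + 0.5 * coauthored_other
           + 0.5 * edited)
    return min(raw, cap)

# A hypothetical scholar with 4 single-authored books, 2 co-authored books as
# lead author, 3 as a non-lead co-author, and 2 edited volumes:
print(book_points(4, 2, 3, 2))  # 8 + 2 + 1.5 + 1 = 12.5
```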

Highest Amazon Ranking: This reflects the author’s highest-ranked book on Amazon. That book’s Amazon rank was subtracted from 400,000, and the result was divided by 20,000, yielding a maximum score of 20. The nature of Amazon’s ranking algorithm means that this score can be volatile and favors more recent sales. For instance, a book may have been very influential a decade ago and may continue to influence citation counts and a scholar’s visibility, but no longer sell many copies. Such a book will typically have a poor Amazon ranking and earn few points here. The result is an imperfect measure, but one that conveys real information about whether a scholar has penned a book that is influencing contemporary discussion. (This search was conducted on December 15.)
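The formula itself is simple. Here is a sketch, which assumes (since the rubric doesn’t say) that ranks worse than 400,000 are floored at zero rather than scored negatively.

```python
def amazon_points(best_rank):
    """Amazon points: (400,000 - best rank) / 20,000, for a maximum near 20.
    Assumption: books ranked worse than 400,000 score zero."""
    return max((400_000 - best_rank) / 20_000, 0)

print(amazon_points(15_000))   # 19.25
print(amazon_points(300_000))  # 5.0
```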

Syllabus Points: This seeks to measure long-term academic impact on what is being read by the rising generation of university students. The Open Syllabus Project (OpenSyllabusProject.org, a website that collects over one million syllabi from American, British, Canadian, and Australian universities) was used to gauge how widely various authors’ works are assigned. A search of the “Open Syllabus Explorer,” using the scholar’s name, identified their top-ranked text. The score reflects the number of times that text appeared on syllabi, with the tally then divided by 5. The score was capped at 10 points. (This search was conducted on December 14.)
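In code, the syllabus calculation looks like this; the appearance counts shown are made up for illustration.

```python
def syllabus_points(top_text_appearances, cap=10):
    """Syllabus points: appearances of the scholar's top-ranked text on
    Open Syllabus, divided by 5 and capped at 10."""
    return min(top_text_appearances / 5, cap)

print(syllabus_points(37))   # 7.4
print(syllabus_points(120))  # capped at 10
```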

Education Press Mentions: This measures the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2017. Searches were conducted using each scholar’s first and last name. If applicable, we also searched names using a common diminutive and both with and without middle initials. In each instance, the highest result was recorded. The numbers of appearances in the Chronicle and Inside Higher Ed were averaged, and that average was added to the number of times a scholar appeared in Education Week. (This was done to give equal weight to K-12 and higher education.) The resulting figure was multiplied by two, with total Ed Press points then capped at 30. (This search was conducted on December 15.)
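A sketch of the education-press arithmetic, using made-up mention counts:

```python
def ed_press_points(ed_week, chronicle, inside_higher_ed, cap=30):
    """Ed press points: average the Chronicle and Inside Higher Ed counts,
    add Education Week, double the result, and cap at 30."""
    raw = 2 * (ed_week + (chronicle + inside_higher_ed) / 2)
    return min(raw, cap)

# A hypothetical scholar quoted 6 times in Education Week, 4 times in the
# Chronicle, and 2 times in Inside Higher Ed:
print(ed_press_points(6, 4, 2))  # 2 * (6 + 3) = 18
```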

Web Mentions: This reflects the number of times a scholar was referenced, quoted, or otherwise mentioned online in 2017. The intent is to use a “wisdom of crowds” metric to gauge a scholar’s influence on the public discourse last year. The search was conducted using Google. The search terms were each scholar’s name and university affiliation (e.g., “Bill Smith” and “Rutgers University”). Using affiliation served a dual purpose: It avoided confusion due to common names and increased the likelihood that mentions were related to a scholar’s university-affiliated role, rather than their activity in some other capacity. If a scholar was mentioned sans affiliation, that mention was omitted. As with the Education Press category, searches included common diminutives and were run with and without middle initials. For each scholar, we used the single highest score from among these various configurations. (We didn’t sum them, as that produces complications and potential duplication.) Points were calculated by dividing total mentions by 30. Scores were capped at 25. (This search was conducted on December 15.)
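The web-mention arithmetic, sketched with illustrative counts:

```python
def web_mention_points(mentions, cap=25):
    """Web mention points: the highest Google hit count among the
    name-plus-affiliation search variants, divided by 30 and capped at 25."""
    return min(mentions / 30, cap)

print(web_mention_points(450))    # 15.0
print(web_mention_points(1_200))  # capped at 25
```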

Newspaper Mentions: A LexisNexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. Again, searches used a scholar’s name and affiliation, included common diminutives, and were run with and without middle initials. In each instance, the highest result was recorded. Points were calculated by dividing the total number of mentions by two, and were capped at 30. (The search was conducted on December 15.)
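And the newspaper-mention arithmetic, again with illustrative counts:

```python
def newspaper_points(mentions, cap=30):
    """Newspaper points: the highest LexisNexis mention count among the
    search variants, divided by 2 and capped at 30."""
    return min(mentions / 2, cap)

print(newspaper_points(38))   # 19.0
print(newspaper_points(100))  # capped at 30
```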

Congressional Record Mentions: We conducted a simple name search in the Congressional Record for 2017 to determine whether a scholar had testified before Congress or had their work referenced by a member of Congress. Qualifying scholars received five points. (This search was conducted on December 15.)

Klout Score: We first determined whether a given scholar had a Twitter account, with a hand search ruling out similarly named individuals. For scholars who did, we then obtained their Klout score. Klout is a number between zero and 100 that reflects online presence and influence across several information-sharing platforms. The Klout score was divided by 10, yielding a maximum score of 10. (This search was conducted on December 14.)
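Pulling the nine categories together, the per-category caps sum to the 200-point maximum noted above. A sketch of the overall tally, with category names of my own choosing for illustration:

```python
# Per-category point caps as described above; they sum to the 200-point maximum.
CAPS = {
    "google_scholar": 50,
    "book_points": 20,
    "amazon_ranking": 20,
    "syllabus": 10,
    "ed_press": 30,
    "web_mentions": 25,
    "newspapers": 30,
    "congressional_record": 5,   # flat 5 points for a 2017 mention
    "klout": 10,                 # Klout score divided by 10
}

def total_score(category_points):
    """Sum a scholar's points across the nine categories, enforcing each cap."""
    return sum(min(category_points.get(name, 0), cap)
               for name, cap in CAPS.items())

print(sum(CAPS.values()))  # 200
print(total_score({"google_scholar": 32, "book_points": 9, "ed_press": 41}))  # 32 + 9 + 30 = 71
```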

Scores are designed to acknowledge scholars who are actively engaged in public discourse and whose work has an impact on practice and policy. That’s why the scoring discounts, for instance, rarely cited academic publications or books that are unread or out of print. Generally speaking, the scholars who rank highest are those who are both influential researchers and influential public voices.

There are obviously lots of provisos when perusing the results. Different disciplines approach books and articles differently. Senior scholars have had more opportunity to build a substantial body of work and influence (and the results unapologetically favor sustained accomplishment). And readers may care more for some categories than others. That’s all well and good. The whole point is to spur discussion about the nature of constructive public influence: who’s doing it, how valuable it is, and how to gauge a scholar’s contribution.

A couple of notes regarding questions that come up annually. First, there are some academics who dabble (quite successfully) in education, but for whom education is only a sideline. Such individuals are not eligible (I’m sure they’ll understand). For a scholar to be included, education must constitute a substantial slice of their scholarship. This policy helps ensure that the rankings serve as something of an apples-to-apples comparison. Otherwise, Nobel laureates who’ve dabbled in education would play havoc with the rankings. Second, scholars sometimes change institutions in the course of a year. My policy is straightforward: For the categories where affiliation is used, the searches are conducted using a scholar’s year-end affiliation. The alternative creates concerns about double-counting and places an undue burden on my RAs. So, scholars get dinged a bit in the year in which they move. But that’s life.

Tomorrow’s list obviously represents only a sliver of the faculty across the nation who are tackling education or education policy. For those interested in scoring additional scholars, it’s a straightforward task to do so using the scoring rubric. Indeed, the exercise was designed so that anyone can generate a comparable rating for a given scholar in a half-hour or less.

And a final note of thanks: For the hard work of coordinating the selection committee, finalizing the 2018 list, and then spending dozens of hours crunching and double-checking all of this data for 200 scholars, I owe a big shout-out to my gifted, diligent, and wholly remarkable research assistants Amy Cummings, Grant Addison, and Sofia Gallo.

— Frederick Hess

Frederick Hess is director of education policy studies at AEI and an executive editor at Education Next.

This post originally appeared on Rick Hess Straight Up.
