Glenn and Mark recently published a list of the most-cited health law scholars, using the methods generally applied to such studies in legal academia. Like any academic who steadfastly denigrates the importance of lists, I naturally checked right away to see where I ranked, which was somewhere so far down the list that only an outbreak of smallpox at the AALS meeting could ever get me into the top twenty. Since I was still completely uninterested in this whole ranking issue, my next move was to look at the methods. And this is where I did have a thought worth sharing.
The source of the data is the JLR database on Westlaw, which I believe primarily covers law reviews and other legal publications. How often one is cited in law reviews is certainly a good measure of impact within legal scholarship, but it does not capture (or support) health law as an interdisciplinary field. Indeed, I think it is arguable that, for many of us, our most important impact will be on research and scholarship in other fields. Does our top-twenty list look different if we draw on a broader database of citations?
I can’t tell you. That would be a lot of work. But there is a way to do it “collectively.” Google tracks citations that appear anywhere in the googleverse and reports them in a Google Scholar profile, but only if you create one. Most of the people in the Hall/Cohen top 20 do not have Google Scholar profiles, but a few do, and the results suggest we might see some differences in impact ranking if we went beyond law reviews:
| Name | Hall/Cohen cites (rank) | Google cites since 2012 (rank) |
| --- | --- | --- |
| Larry Gostin | 510 (1) | 7150 (1) |
| I. Glenn Cohen | 320 (4) | 1143 (3) |
| Frank Pasquale | 300 (6) | 1081 (4) |
| Lars Noah | 280 (9) | 586 (5) |
| David Studdert | 190 (19) | 7129 (2) |
Everyone gets many more cites from Google than from Westlaw, which reflects some methodological differences but also shows a lot of extra-legal impact. Larry Gostin is still on top, by quite a distance, but David Studdert, at the bottom of the law review top 20, comes close to catching him. (I may as well admit that the Google ranking puts yours truly well above Cohen but nowhere near Studdert and Gostin.)
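To make the “does the list look different?” question concrete, here is a minimal sketch, in Python, of one way to compare the two orderings. It is my illustration, not part of the Hall/Cohen methodology: the citation counts come straight from the table above, the ranks are recomputed within this five-person sample only, and the disagreement is summarized with Spearman’s rank correlation.

```python
# Minimal sketch (illustrative only): recompute ranks within this
# five-person sample and measure disagreement between the two
# orderings with Spearman's rank correlation. Counts are from the
# table above; ranks here are within-sample, not full-list ranks.

hall_cohen = {"Gostin": 510, "Cohen": 320, "Pasquale": 300,
              "Noah": 280, "Studdert": 190}   # Westlaw law review cites
google = {"Gostin": 7150, "Cohen": 1143, "Pasquale": 1081,
          "Noah": 586, "Studdert": 7129}      # Google Scholar cites

def ranks(counts):
    """Map each name to its rank within this sample (1 = most cited)."""
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

r_law, r_goog = ranks(hall_cohen), ranks(google)
n = len(r_law)
# Spearman's rho via the rank-difference formula (no ties here).
rho = 1 - 6 * sum((r_law[k] - r_goog[k]) ** 2 for k in r_law) / (n * (n ** 2 - 1))

for name in sorted(r_law, key=r_law.get):
    print(f"{name}: law review rank {r_law[name]}, Google rank {r_goog[name]}")
print(f"Spearman rho = {rho:.2f}")  # 0.40: far from perfect agreement
```

A rho of 1 would mean the two sources order scholars identically; 0.40, driven almost entirely by Studdert’s jump, suggests a broader citation database really would reshuffle the list.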
Why does this matter? The obvious point is the one I have already made: health law scholars should be aiming to make a difference in health policy, and that is not measured by law review citations alone. For us to thrive as a field, we need more than ever to be engaged with non-lawyers, as my colleagues and I argue here for public health law. Recognizing non-legal citations is also, in my experience, extremely important for supporting young scholars. If all we recognize and seem to value are law review citations, then junior scholars will write only law review articles. That is not how we build a field of engaged, cross-disciplinary scholars and researchers. I encourage junior scholars to create Google Scholar profiles, and I use them when I am doing promotion and tenure reviews in this list-mad age.
One last point: Google Scholar profiles take about two minutes to create and very minimal effort to curate (if you care to, you need to eliminate some dupes and misattributions). Whether you care about rankings or about transdisciplinary impact, you can help the field at minimal cost by signing up.
End of commercial.
If we really want to have an impact as health law scholars, we should be testifying before legislators and being cited by courts.
Thanks, Scott! I am in total agreement, which is one of the reasons Mark H and I flagged a similar issue in my original post: “In the context of health law, specifically, one additional limitation is worth emphasizing: this ranking is based on citations in legal periodicals (as defined by the Sisk-Leiter approach) but much of our field’s work is cited in medical, public health, bioethics, or other journals. Publications in those journals that are cited in legal periodicals are captured, but not the citation of our work in those journals.”
I (and in this I am pretty sure I can speak for Mark H too) would love to get you and others involved in figuring out whether alternative ranking structures (including the h-index and alt-metrics) make more sense for our field than the Sisk-Leiter scores, and, if we repeat this exercise when new data becomes available, whether we might spread out some of the work and present rankings using a series of alternative measures or an average of several measures.
I think it would help a lot to have a broader metric. The easy way is to encourage people in our field to create a Scholar profile. Google then does all the work. Well, most of the work. Google sweeps up multiple versions of the same piece (e.g., the SSRN preprint and the published version), so each scholar’s page might have to be cleaned, or we would note that limitation. Google also occasionally misattributes. Then there are the issues of multi-authored pieces and self-citation, which would also require manual analysis. I frankly would be happy enough just using the Scholar rankings (and the h- and i10-indices they create), warts and all.
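Since the h-index and i10-index keep coming up, a quick illustration of what Google Scholar is computing may help. This is a minimal sketch with made-up citation counts, not anyone’s actual record:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def i10_index(citations):
    """i10-index: the number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical per-paper citation counts, for illustration only.
papers = [112, 48, 33, 12, 9, 9, 4, 0]
print(h_index(papers))    # 5: five papers each cited at least 5 times
print(i10_index(papers))  # 4: four papers cited at least 10 times
```

Note that both indices reward a body of consistently cited work rather than a single blockbuster piece, which is one reason they can order scholars differently than raw citation counts do.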