The great Russian physicist Lev Landau used to rank physicists on a scale from 0 to 5. The better you were, the smaller your number. Newton alone was a 0, Einstein scraped in at 0.5, and founders of quantum mechanics like Bohr and Planck were 1s. Landau rated himself a 2.5, which he bumped up to a 2 after winning the Nobel Prize. Grade inflation was not a concern with Landau -- for him, the vast majority of professional physicists were mundane 5s.

Landau constructed his list for his personal enjoyment, but scientists must frequently judge their colleagues: for hiring, promotions, awarding prizes and fellowships, parceling out grants, and for "Research Assessments" of one sort or another. And since scientists are scientists, many algorithmic ranking schemes have been proposed, spinning off the sub-discipline of bibliometrics.

The basic coin of science is the published paper. These are easy to count, but that doesn't help much on its own: the brutal truth is that many research papers are of little interest to anyone but their author and possibly the author's proud parents. The next level is tallying citations: how often one article is referred to by other articles. Good scientists have an impact on the work of other scientists, and citations reflect that influence. But there are caveats: good papers may be overlooked, while papers that are annoyingly wrong gather multiple mentions in the articles refuting them. And not all citations are equal: you can spend a career churning out "fast followups", looking busy while doing little to advance the field.

Beyond counting papers and citations, the h-index, proposed by Jorge Hirsch in 2005, is by far the most popular ranking tool. The h-index reduces the output of a scientist (or several scientists) to a single number: "h" is the largest number of papers you have written with h or more citations each. Clearly, the bigger your h, the better you look.
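
To make the definition concrete, here is a minimal sketch in Python of how an h-index is computed; the citation counts are invented for illustration.

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank        # the paper at this rank still has at least `rank` citations
        else:
            break
    return h

# Five hypothetical papers: three of them have at least 3 citations, so h = 3.
print(h_index([25, 8, 5, 3, 0]))   # -> 3
```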

In the decade following its invention the h-index has become ubiquitous, and Google Scholar and the major academic databases will compute it for you. The h-index is simple and yet feels "science-y", so it is tempting to use it as a stand-in for nuanced judgement.

However, the h-index is only meaningful if it contains information you couldn't learn from simply counting citations and papers. Right off the bat, your h-index can't be larger than the total number of papers you have written. Nor can it exceed the square root of your total citation count, since having h papers with at least h citations each means at least h² citations in all. But you can get fancier. Alexander Yong at UIUC did a lovely analysis of the h-index, deriving a rule of thumb: an h-index is likely to be around half the square root of the total citation count. (Yong models the h-index via partitions -- the different ways a number can be chopped into a sum of integers -- and it is an elegant piece of work.)
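
As a rough numerical illustration -- this is not Yong's partition argument, just a sanity check on an invented publication record, with a compact h_index helper equivalent to the sketch above:

```python
import math

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# A made-up publication record: citation counts for each paper.
citations = [120, 60, 40, 33, 25, 20, 15, 12, 9, 7, 5, 4, 3, 2, 1, 0, 0]

h = h_index(citations)
total = sum(citations)                                        # 356 citations, 17 papers

print("h-index:", h)                                          # 9
print("paper-count bound:", len(citations))                   # 17
print("sqrt(citations) bound:", round(math.sqrt(total), 1))   # 18.9
print("rule of thumb, 0.5*sqrt(citations):",
      round(0.5 * math.sqrt(total), 1))                       # 9.4 -- close to the actual h of 9
```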

Yong plots his guesstimated h-index against the actual value for a group of mathematicians, and the match is pretty good:

From Yong, Notices of the AMS 61, 1040 (2014). Used with permission.

Ranking scientists by their citation count seems simplistic -- if we were happy doing that, there would have been no real need to invent the h-index. However, if the h-index is typically close to half the square root of the citation count, comparing h-indices is not too different from comparing citation counts, and we are back to square one.

The h-index has been on my mind because I seem to be seeing it more and more frequently on academic CVs. Given that the h-index is easily computed from public data there is no profound reason not to provide it yourself, but the other day I found myself asking on Twitter

It was an offhand comment, but it prompted a discussion that lasted most of the weekend. The conversation made several points I hadn't considered -- in particular, some databases are more thorough than others (and their coverage varies by subfield), and Clare Burrage pointed out that providing the number yourself lets you put your best foot forward.

In some professions and countries it is common to include a photo on your CV, although this practice is fortunately rare in the sciences. That said, most scientists have a significant Google footprint and you will be googled, so when you are ranking applicants you are likely to see their photos. Like photos, h-indices are easy to find, but my feeling is that neither has any place on a CV. The h-index is a very blunt instrument, and by volunteering yours you are tacitly acceding to its use in the hiring process.

And if we are using h-indices to rank colleagues and potential colleagues, perhaps we should also think again and make sure that we understand what the h-index is actually measuring.


Postscript: The corollary of Yong's result is that the h-index is most interesting when the rule of thumb fails. The h-index was designed to penalise people with a single highly cited paper, but a one-hit wonder can still win the Nobel Prize -- for example, Peter Higgs of Higgs boson fame was quiescent for the vast majority of his career, and his h-index is smaller than that of some PhD students.