When I recently wrote and tweeted about the increasing ubiquity of the h-index, almost no-one rose to its defence. It turns out that we all seem to know that the h-index has only two real purposes. The first is penalising one-hit wonders (some of them Nobelists and Fields medallists) relative to reliable sloggers; the second is putting a fig leaf on crass comparisons of citation totals. And if we are honest with ourselves, we all know that the h-index is largely useless for anything else.

Despite being a widely used measure of academic impact, the h-index tells you little more than the square root of a scientist's overall citation count. The math behind this was nailed down by Alexander Yong, who showed that for a typical citation record h ≈ 0.54√N, where N is the total number of citations; the phenomenon had already been described empirically in several places, including Michael Nielsen's 2008 blog post, a 2010 paper by Redner and a 2012 analysis of astrophysicists by Spruit.
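To make the relationship concrete, here is a minimal Python sketch (the two citation records are invented for illustration) that computes an h-index directly and compares it with Yong's 0.54√N estimate:

```python
import math

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Two invented citation records with the same total, N = 343.
steady  = [80, 55, 40, 30, 24, 19, 16, 13, 11, 10, 9, 8, 7, 5, 4, 3, 3, 2, 2, 1, 1]
one_hit = [330, 5, 3, 2, 1, 1, 1]

for name, record in [("steady", steady), ("one-hit wonder", one_hit)]:
    n = sum(record)
    print(f"{name}: N = {n}, h = {h_index(record)}, "
          f"0.54 * sqrt(N) = {0.54 * math.sqrt(n):.1f}")
# steady: N = 343, h = 10, 0.54 * sqrt(N) = 10.0
# one-hit wonder: N = 343, h = 3, 0.54 * sqrt(N) = 10.0
```

For the typical record the estimate lands on the actual h almost exactly; the one-hit wonder has the same N but a much lower h, which is precisely the penalising behaviour described above.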

So my resolution for 2015 is this: I am going to go h-free, wherever possible. I won't use it in recommendations I write, I won't cite my own h-index in my annual performance appraisals, and I will discourage comparisons of h-indices when considering candidates for promotions, appointments and prizes.

Good scientists help set the direction for their fields, and that contribution is reflected in the citations they receive. But beyond the huge differences in collaboration styles and citation etiquette between subfields, there are many kinds of citations. A mention like "Following the work of Guth [REF 1]" means far more than an appearance in a long list like "Recent work includes [REF 2 - REF 27]", yet citation counting treats the two equally. Scientists are creative, and I am sure it is possible to design metrics that measure the worth of citations. But I am equally sure that these metrics won't be created just by adding epicycles to h.


Postscript: I am grateful to Alexander Yong for pointing me to the previous discussions of the correlation between the h-index and citation counts. And if I had to guess how a useful metric of scientific impact might be built, I would look to sophisticated analyses of citation graphs (in the technical sense) and machine-based natural language parsing, but both approaches are intrinsically more nuanced than the h-index.
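To give a flavour of the first approach, here is a toy sketch (Python with the networkx library; the citation graph and paper names are invented). It applies PageRank, one well-known graph analysis and certainly not the only candidate, so that a citation from an influential paper counts for more than one from an obscure paper:

```python
import networkx as nx

# Invented toy citation graph: an edge A -> B means paper A cites paper B.
G = nx.DiGraph()
G.add_edges_from([
    ("review",    "seminal"),
    ("followup1", "seminal"),
    ("followup2", "seminal"),
    ("followup2", "followup1"),
    ("survey",    "seminal"),
    ("survey",    "followup1"),
    ("survey",    "followup2"),
])

# PageRank weights each citation by the standing of the citing paper,
# so two papers with equal citation counts can end up with different scores.
scores = nx.pagerank(G, alpha=0.85)
for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```

Unlike a raw tally or h, a score like this depends on who does the citing, not just on how many do.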