See also Chapter 3.6 of Smashing Physics.
I noticed the other day that suddenly more people were looking at my blog. So it goes – WordPress gives you lots of cool graphs to obsess over. However, that particular demon is a mere spotty imp compared to the citation-count ogre.
A citation is when one scientific publication refers to another. Counting how often your papers are cited is one way to see how much influence your work is having, and it’s very tempting to keep watching them.
In particle physics this is trickier than in some other fields. My most highly cited papers are measurements of proton structure from ZEUS. These were important measurements, and I helped build and run the experiment, but I made no direct contribution to those papers. This is common practice in particle physics, for good reasons which I alluded to here. But still I have my babies: the papers in which I recognize my own words, plots and ideas, as well as the results of my experiment.
Even amongst these, the current top two are funny ones.
The top “paper” which I actually partly wrote and edited contains no real data and no original theoretical ideas, and is not even published in a journal. It’s an 1852-page tome containing preparatory studies for using the ATLAS detector. It is useful, and the fact that it is cited a lot shows the level of interest in ATLAS, so it’s fair in that sense.
Next down is a real paper from ZEUS. We reported the mass we measured when two types of particles produced in our collisions – neutral kaons and protons – were combined. We did this because some other experiments had seen a bump in the neutron/charged-kaon mass distribution, which might have been the first observation of a hadron made of five quarks. In the Standard Model, all the hadrons we know of are made of either one quark and one antiquark, or three quarks. If it really was a five-quark thing – a pentaquark – this would mean
- big physics news, and
- there should be a similar bump in our mass distribution.
We indeed saw a bump, though it was not completely compelling statistically and not necessarily in exactly the right place. Anyway, we did our job, we reported what we saw, and since this was during a flurry of excitement we got cited lots. Sadly it looks like the pentaquark thing was a false alarm; our bump may have been real, but if so it was something else, less interesting. Anyway.
Further down are lots of papers I’m very pleased with which I may bang on about in future (I already did about one of them).
This illustrates how dangerous citation counts can be as an indicator of merit. I’d happily lose the top two papers before most of the next ten, because the next ten contain more data or more original ideas. They advance knowledge more.
I went to a talk at UCL by Andrew Gregory last week, where I was surprised to hear that the idea that the planets orbit the Sun dates back at least to ancient Greece (Aristarchus). In fact I am now reading Simon Singh’s “Big Bang”, which points out that even Copernicus’s work was ignored for many years. Copernicus and Aristarchus would have struggled for tenure and grants based on citation counts during their lifetimes.
Before reading Big Bang (and with Terry Pratchett‘s Nation in between – renaissance geek that I am) I read Lee Smolin’s “The Trouble with Physics”. This describes the failure of string theory to come up with verifiable (or even falsifiable) breakthroughs in our understanding of the universe. This is despite decades of effort and investment, to the extent that string theorists are apparently the dominant grouping of theoretical physicists in terms of faculty positions in the US. I’m no theorist, so I have no direct way to judge many of Smolin’s claims. But even I can see that because of the size of this community, mutual citation means that a good string theory paper will get cited far more than a good paper on some other, perhaps more promising but less well-followed, tack. And citations make careers. Who knows how many breakthroughs have been lost through this feedback effect?
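The community-size effect is easy to sketch. Here is a deliberately crude toy model (all the numbers are invented for illustration, not real bibliometric data): a single notable paper sits in a field of a given size, and every paper subsequently written in that field cites it independently with some fixed probability. A field ten times larger then produces roughly ten times the citations for a paper of identical quality.

```python
import random

def citations_of_notable_paper(field_size, cite_prob=0.1,
                               papers_per_author_per_year=1,
                               years=10, seed=42):
    """Toy model: one notable paper in a field of `field_size` researchers.

    Every later paper in the same field cites it independently with
    probability `cite_prob`. All parameters are invented for illustration.
    """
    random.seed(seed)
    later_papers = field_size * papers_per_author_per_year * years
    # Count how many of the later papers happen to cite our paper.
    return sum(random.random() < cite_prob for _ in range(later_papers))

# Same paper quality (same cite_prob), different community sizes:
small = citations_of_notable_paper(field_size=50)   # ~50 citations expected
large = citations_of_notable_paper(field_size=500)  # ~500 citations expected
```

The model ignores everything interesting (paper quality varies, citation habits differ between fields, citations beget citations), but it makes the bare point: citation counts scale with the size of the community doing the citing, before merit enters at all.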
None of this is to say citation counts (or string theory!) are worthless, and of course good scientists cite good work. The counts are useful, used carefully. But in the short term they can underplay breakthroughs, which may take years to build up a new field of people around them to provide citations. I suppose it may be better to work in a patent office for a while if the alternative is to redirect your research into a citation-hunt to secure grants and a faculty position.
Oh, and the jump in my blog traffic was due to a very perceptive choice by the Guardian’s Alok Jha. And, so far, a large majority of the visitors only looked at the front page and didn’t read a single complete post. Thanks for making it this far 🙂