As of Friday morning, the LHC is accelerating protons to an energy three-and-a-half times higher than the previous record. Within a couple of weeks we expect a steady stream of head-on proton-proton collisions at this energy, allowing us to search for new particles, forces and dimensions in completely new territory.
The LHC really became a collider just before Christmas. The collisions recorded back then were not at particularly high energy, but three experiments have now published results, with my experiment, ATLAS, being the latest; its paper became available on Tuesday. Before storming onward, it’s a good moment to see what these results actually tell us.
The detectors (ALICE, CMS, ATLAS in this case) are basically huge digital cameras designed to record what happens when protons smash together. The first thing you do with a new collider and detector is measure the particles produced in a typical collision.
These measurements tell us various things. We know the proton is full of quarks, stuck together by the strong nuclear force. How it behaves at high energies is not very well known, and these measurements will help. They also help us understand backgrounds to rarer events (e.g. those where a Higgs might be produced) and inform models of the massive air showers that occur when cosmic rays hit the upper atmosphere. You can see in the ATLAS paper that the models don’t get the data quite right. The model builders are already tuning up to improve this.
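That “tuning up” amounts to adjusting the free parameters of a model until its predictions best match the measured distributions. Here is a deliberately minimal sketch of the idea in Python: one invented parameter `k` setting the height of a toy pseudorapidity plateau, fitted to invented pseudo-data by a simple chi-square scan (real generator tuning uses many parameters and dedicated tools, not this):

```python
import numpy as np

# Pseudo-data: an invented charged-particle density vs. eta,
# loosely plateau-shaped. These are NOT real measurements.
eta = np.linspace(-2.25, 2.25, 10)
data = 1.3 * (1.0 - 0.02 * eta**2)   # "measured" values
err = np.full_like(data, 0.05)       # "measurement" uncertainties

def model(eta, k):
    """Toy model: the plateau height scales with one tunable parameter k."""
    return k * (1.0 - 0.02 * eta**2)

# Scan the parameter and keep the value with the smallest chi-square.
ks = np.linspace(0.5, 2.0, 301)
chi2 = [np.sum(((model(eta, k) - data) / err) ** 2) for k in ks]
best = ks[int(np.argmin(chi2))]
print(f"best-fit k = {best:.2f}")  # recovers k ~ 1.3, the value in the pseudo-data
```

The scan correctly pulls the parameter to the value that generated the pseudo-data; with real data the best-fit value would instead encode what the collisions are telling us about the model.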
The papers all have strengths. The ALICE paper was first, although they did not wait for all the data so it’s less precise than the others. The CMS paper was next, and is the only one to include collisions at 2.36 TeV – the highest energy ever recorded.
The ATLAS paper was significantly slower to appear – I really hope CMS don’t beat us to the Higgs that way! The main strength of the ATLAS paper is a bit subtle but very important. To understand it you have to go back and ask – what is a collision?
What is a Collision?
It’s when two protons interact. But most of the interactions are just glancing blows. In most of these collisions the protons aren’t even smashed up; or they are broken into very few particles. These are called “diffractive” collisions (and they are interesting for several technical reasons, as my more famous colleague informed the Guardian).
This matters because we’re trying to measure the average particle distributions in so-called “minimum bias” events. On the face of it this means you try for an unbiased selection of collisions. But you can’t possibly be truly unbiased; the vast majority of glancing collisions don’t leave any trace in the detector – the protons just zip on down the LHC beampipe. In practice you’ll see most of the “non-diffractive” events – where the protons are smashed up and the bits hit the detector – and only a few of the diffractive events. Historically, experiments have used theoretical models either to remove the small remaining diffractive contamination, producing measurements of what they call non-diffractive events, or to add in the missing diffractive events which they couldn’t see.
This means that what you are measuring is only defined within a theory. “Diffractive” and “non-diffractive” are really just words. If you use a model to correct for them, or model them in regions where you have no acceptance, you buy into a particular definition of them, and hence a particular view of nature. You are no longer just reporting what happens. It is very important that, having gone to the enormous trouble of building the LHC and the detectors, we first just say what happens. The next step is of course to confront that data with theory as part of the process of exploration and understanding. But the first, reporting, step is essential.
Instead of measuring the particle distributions in “non-diffractive” events, or averaging over all events, the ATLAS paper measures them for all events which have at least one track in a given region, regardless of whether a given model would call the event “diffractive” or not. This is a physical criterion which can be reproduced regardless of any model. The difference is significant (up to about 20%). But the difference in principle is huge, and comparisons to models become much less ambiguous. In my view this approach should be carried forward into all future measurements at the LHC.
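The selection can be stated entirely in terms of what the detector can see. A toy sketch (not the ATLAS analysis code; the events and cut values here are invented for illustration) makes the point: an event is kept if it has at least one charged track above some transverse momentum in some pseudorapidity range, with no reference to whether any model labels the event “diffractive”:

```python
import random

random.seed(1)

def toy_event():
    """Generate a fake event: a list of (pT in GeV, eta) tracks.
    Purely illustrative - not from any real generator or detector."""
    n = random.randint(0, 8)  # some events leave no tracks at all
    return [(random.expovariate(1.0) + 0.1, random.uniform(-4.0, 4.0))
            for _ in range(n)]

def in_fiducial(track):
    """A physical, model-independent criterion on a single track."""
    pt, eta = track
    return pt > 0.5 and abs(eta) < 2.5

events = [toy_event() for _ in range(10000)]

# Keep every event with at least one track passing the fiducial cuts.
selected = [ev for ev in events if any(in_fiducial(t) for t in ev)]

# Measure only within the selected, visible sample.
mean_mult = (sum(sum(1 for t in ev if in_fiducial(t)) for ev in selected)
             / len(selected))
print(f"selected {len(selected)} of {len(events)} events; "
      f"mean fiducial multiplicity = {mean_mult:.2f}")
```

Because both the selection and the measurement refer only to observable tracks, anyone with any model can reproduce the same definition exactly and compare to it.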
Anyway, the first results are in. The detectors work well. The theoretical models are reasonable, but can be improved. It’s not the really exciting new physics yet, but it’s interesting, and important for firming up our knowledge as we step into the unknown. And as we go there, we should all be sure to first report what we see, and only then have a go at interpreting it.
[29/3/2010 An article based on this, benefiting from Ian Sample's excellent editorial help, has appeared now on the Guardian site.]
Notes and further reading for physicists:
The CMS paper has another strength – they measure tracks right down to 100 MeV, whereas ATLAS stops at 500 MeV.
The amount of work required to make these measurements is huge, and as a member of ATLAS I’d like to acknowledge the great efforts of those concerned in ATLAS and on the LHC; similar efforts of course were required in CMS and ALICE.
Peter Skands gave a great presentation on the minimum bias data here.
Actually in the papers you’ll see that the most common measurement is “non-single-diffractive”, which typically means a sample with the elastic and single-diffractive events removed. ALICE and some earlier papers also measure “inelastic” samples, which are physically well defined, but involve correcting for a large cross-section outside the detector acceptance.
A good criterion for knowing whether your measurement is defined in a model-independent fashion is whether you can write a Rivet routine for it. Seriously.