Safety First?

Material from this post features in Chapter 2.4 of Smashing Physics.

Hopefully this is a dead issue for most people and the last thing I want to do is reignite a debate, but there are still people out there who should know better (you know who you are), still frightening people who don’t know better.

So: imagine doing something new, for the first time. Say a new experiment. Say, oh I don’t know, the LHC. Or RHIC, or the Tevatron, or one of the previous machines some people have got agitated about. Then take the worst consequence imaginable, even if it is in contradiction to all experimental evidence, theory and even logic. Science has a hard time proving a negative, so you might conclude that there is an infinitesimally small chance of the bad thing happening and be inclined not to go ahead. But before deciding you have a duty to consider also the risk of not doing it.

Scenario 1

The year is 2125 AD and the Earth has a serious problem. A rogue planet, drifting through the spiral arm of our galaxy, has just been detected via several innovative astronomical techniques involving gravitational wave detectors and observations from our deep-space telescope system. The planet is on course to enter the solar system and is massive enough to seriously disrupt the orbits of the planets. Many-body quantum gravity calculations, using the detailed knowledge we have accumulated of the masses and trajectories of our “local” environment indicate a near certainty that one result of this disruption will be to send the Earth crashing into the Sun within two decades.

Fortunately these observations and calculations have given the inhabitants of Earth enough warning. Using the latest antimatter fuel cells, a small robot ship is sent to the planet. Once on the planet, nanobots assemble a mini-black-hole factory, which is used to provide a small but steady and powerful warp drive, diverting the planet away from the solar system into a cosmological near-miss. Party time.

Scenario 2

The year is 2145 AD and the Earth has a serious problem. There’s no one to help the Earth because its most intelligent species wiped itself out in a nuclear armageddon/global climate catastrophe/whatever the next one is. The Earth mostly bounced back from this and is still a marvellous cradle of life, until a rogue planet sends it spiralling into the Sun in the mother of all environmental catastrophes.

Scenario 3

The year is 2135 AD and the Earth has a serious problem. A rogue planet, drifting through the spiral arm of our galaxy, has just been detected. The planet is on course to enter the solar system and is massive enough to seriously disrupt the orbits of the planets. Calculations using all the knowledge we have of the masses and trajectories of our “local” environment indicate a possibility that one result of this disruption will be to send the Earth crashing into the Sun within a few years.

Unfortunately there’s not a lot we can do about it. The warning came a bit late, and anyway we still do not even know whether there is a Higgs or extra dimensions and mini-black holes, so we have no suitable power sources to get us to the planet and no way of dealing with the threat even if we did. We spend five years moping about the foolishness of the “safety first” legal ruling which made us turn off the LHC back in 2010, as well as the similar rulings which followed on new experimentation across many fields of physical and life sciences. Then we crash into the Sun.

The Risk of Inaction

The above scenarios are of course three amongst infinitely many tiny possibilities. However, anyone who advocates stopping research because of imagined doomsday scenarios should also be made to estimate the risks associated with stopping the research, and the doomsday scenarios we might thus be exposed to. Otherwise they might be suspected of being a twat™.
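To make the balancing argument concrete, here is a minimal sketch of an expected-loss comparison between running and not running an experiment. All the numbers are illustrative placeholders invented for the example, not estimates from the post; the point is only the shape of the calculation, which requires putting *both* sides on the scale:

```python
# Toy expected-loss comparison: run the experiment vs. don't run it.
# Every probability and loss below is a made-up placeholder for illustration.

def expected_loss(scenarios):
    """Sum of probability * loss over a list of (probability, loss) pairs."""
    return sum(p * loss for p, loss in scenarios)

# Imagined doomsday risk attributed to running the experiment
# (loss normalised so 1.0 = destruction of Earth).
run = [(1e-30, 1.0)]

# Risks of NOT running it: e.g. lacking the physics and technology needed
# to avert some future catastrophe, as in the scenarios above.
dont_run = [(1e-9, 1.0)]

print(expected_loss(run) < expected_loss(dont_run))  # → True
```

The precise numbers are unknowable, of course; the argument is that once the risk of inaction is included at all, even a generous estimate of the experiment's risk is swamped by it.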

Oh, and anyone who thinks they can pick research winners in terms of “impact”, or thinks that trying to understand how the universe works is navel gazing might also want to think about it.

[Note added 4/4/2010: Al Jazeera still give them airtime *sigh*. Although the phrase “enough rope” springs to mind.]

[Note added 5/4/2010 (but should have been added earlier!): This post helped inspire this Discovery Science article by Ian O’Neill.]



About Jon Butterworth

UCL Physics Prof working on LHC, dad, dodgy guitarist, Man City fan in exile.
This entry was posted in Particle Physics, Philosophy, Physics, Science, Science Policy. Bookmark the permalink.

6 Responses to Safety First?

  1. jonolan says:

    Yes, but it’s well proven by history that scientists are not to be trusted in these matters. Oppenheimer and his boys estimated that their atomic bombs had a 2% chance of causing a runaway chain reaction that would incinerate our atmosphere and destroy all terrestrial life, but they set it off anyway…

    So what are the odds involving the LHC….?

  2. Some experiments are risky. I agree with Butterworth that shutting down science is also a risk, and should be put in the balance. Some risks may be worth taking because they are overwhelmed by the potential benefits of the risky experiment. But proceeding with the experiment does not always win this balance. Some risks are not worth taking. Anything but a minuscule probability of destroying Earth seems not worth taking, unless there is a balancing probability of saving Earth. Also, science ethicists agree that we should not harm experimental subjects, etc. Most relevant probabilities are subjective, so finding a balance is partly a judgment call. But it does not help judgment if some folks have their finger on the scale.

    I suggest this is true of collider advocates. They said that black holes required energy beyond the reach of any collider; then new physics predicted production of black holes by colliders (if the relevant theories are true). They said that black holes would dissipate via Hawking radiation, despite papers that questioned the fundamental theory behind Hawking radiation. They offered an analogy between cosmic rays and colliders, but that analogy is inadequate, and has been accepted as such in recent papers by collider advocates. The neutron star argument by Mangano is good, but is questioned by several scientists. Pointing these things out is a contribution to finding balance. Butterworth is premature in implying that the issue is solved.

    Few collider advocates have argued Butterworth’s point. They didn’t think they needed to. The official position of CERN is that the risk is zero. It is not necessary to balance a zero risk. If collider advocates had made a successful attempt to justify their case via balance, I would not be a collider critic.

    Finding a balance will not be easy. I conceptualize the risk as the sum of all possible theories that enable trouble, weighted by their plausibility, divided by the sum of all possible theories about relevant outcomes, weighted by their plausibility. To find a balance we need to subtract that from a similar calculation about benefits. Parts of this calculation are obviously impossible, others highly subjective. However, we can roughly estimate it, and doing so does tell us some things. One thing it tells us is that CERN’s policy that its employees say the risk is zero is not correct.

    If finding balance is not easy, it at least requires appropriate procedures and appropriate review. A rubber stamp review by CERN employees is far from risk management best practices. (But I have to give credit to Mangano; he did a fairly good job.)

  3. Hi both,

    Thanks for the comments.

    First, I do think the risk is zero; I think the arguments are sound. But of course jonolan won’t trust me, and anyway in science one has to trust data, not people.

    My point here was that since zero may be impossible to prove to everyone’s satisfaction, perhaps people can accept that the “background level” of very, very low risks is greater than anything you can make up on the other side, even in your worst nightmares. This background especially includes the risk of not doing the experiment, of course.

    People (including some vulnerable people, e.g. children) are poor at understanding risks, and the media are often poor at reporting them. So we all have to be rather sensitive.


  4. paul says:

    of course jon you are just a believer. you know perfectly well Hawking’s evaporation is not proved and breaks all the laws of physics, from thermodynamics (hot black holes getting hotter in a cold environment, information lost) to relativity (size doesn’t matter, is relative, so BH don’t follow quantum laws). Now the point of risking life for beliefs is that you can risk yours. When you risk the life of every human being you are a ‘trou noir’, a word that in French means black hole and as*hole.

  5. Tempted to delete the last comment since it doesn’t address anything I wrote in the post, but I guess it is worth leaving there for illustrative purposes.

  6. Cormac says:

    I’m no scientist here, but I can’t help but agree with Jon. The dangers of not having technology in the future far outweigh the dangers of the experiments themselves. What if Fleming had thought penicillium might be a deadly new super-fungus (just saying, however unlikely that is) and destroyed it instead of testing it and discovering penicillin? All the people saved by those medical advances would be dead. What if a scared caveman had decided not to touch the glowing stick that had been struck by lightning in front of him? Where would we be then? Just food for thought.

Comments are closed.