Asteroids and comets as space weapons

A video version of this is available here.

Introduction

Approximately 66 million years ago, a body roughly 10 km across struck Earth, and was likely one of the main contributors to the mass extinction of many species at the time. Bodies 5 km or larger impact Earth on average every 20 million years (one might say we are overdue for one, but then one wouldn’t understand statistics). Asteroids 1 km or larger impact Earth every 500,000 years on average. Impacts from smaller bodies, which can still do considerable local damage, occur much more frequently (10 m wide bodies strike Earth on average every 10 years). It seems reasonable to say that only the first category (>~5 km) poses an existential threat, though many smaller bodies pose major catastrophic threats*.
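To make these average intervals concrete, we can model impacts as a Poisson process and ask how likely at least one impact of each size class is over, say, a century. This is a rough sketch using only the order-of-magnitude intervals quoted above; the rates themselves are approximations, not precise figures.

```python
import math

# Average impact intervals from the figures above (years per impact).
# These are order-of-magnitude estimates, not precise rates.
intervals = {">=5 km": 20_000_000, ">=1 km": 500_000, ">=10 m": 10}

horizon = 100  # years

for size, interval in intervals.items():
    rate = 1 / interval  # expected impacts per year
    # For a Poisson process, the probability of at least one event
    # over the horizon is 1 - exp(-rate * horizon).
    p = 1 - math.exp(-rate * horizon)
    print(f"{size}: P(at least one impact in {horizon} yr) = {p:.4%}")
```

The ≥1 km case comes out to roughly 0.02% per century – small, but non-negligible once the scale of the potential damage is factored in.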

Given the likelihood of an asteroid impact (from here I use the word asteroid instead of ‘asteroid and/or comet’ for brevity), some argue that further improving detection and deflection technology is critical. Matheny (2007) estimates that, even though asteroid extinction events are improbable, because of the loss of future human generations if one were to occur, asteroid detection/deflection research and development could save a human life-year for US$2.50. Asteroid impacts are not thought to be the most pressing existential threat (compare, e.g., artificial intelligence or global pandemics), and yet mitigation already seems to have a better return on investment than the best now-centric human charities (though not non-human charities – I am largely ignoring non-humans here for simplicity and the sake of argument).

The purpose of this article is to explore a depressing cautionary note in the field of asteroid impact mitigation. As we improve our ability to detect and (especially) deflect asteroids with an Earth-intersecting orbit away from Earth, we also improve our ability to deflect asteroids that are not on an Earth-intersecting orbit into Earth. This idea was first explored by Steven Ostro and Carl Sagan, and I will summarise their argument below.

Asteroid deflection as a DURC

Dual use research of concern (DURC) refers to research in the life sciences that, while intended for public benefit, could also be repurposed to cause public harm. One prominent example is disease and contagion research, which can improve disease control but can also be used to spread disease more effectively, whether accidentally or maliciously. I will argue here that the DURC concept can and should apply to any technology with this kind of dual-use potential.

Ostro and Sagan (1998) proposed that asteroid impacts could act as a double-edged explanation for the Fermi paradox (why don’t we see any evidence of extraterrestrial civilisations?). The argument goes as follows: those species that don’t develop asteroid deflection technology eventually go extinct due to some large impact, while those that do eventually go extinct because they accidentally or maliciously deflect a large asteroid into their planet. This has since been termed the ‘deflection dilemma’.

The question arises: does the likelihood of a large impact increase as asteroid deflection technology is developed, rather than decrease? The most pressing existential and catastrophic threats today seem to be those created by technology (artificial intelligence, nuclear weapons, global health pandemics, anthropogenic global warming) rather than natural events (asteroid impacts, supervolcanoes, gamma ray bursts). Humanity has survived for millions of years (depending on how you define humanity), yet the last 70 years alone have seen the advent of nuclear weapons and other technologies that could cause a catastrophe at any time. It therefore seems possible that the bigger risk is the one created by technology, not the natural one.

Ostro and Sagan (1994) argue that the development of asteroid deflection technology was, at the time of writing (and presumably still is), premature, given the track record of global politics.

Who would maliciously deflect an asteroid?

Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or lunar orbit for research or mining purposes (see this now scrapped proposal to bring a small asteroid into lunar orbit), there are two categories of actors that might maliciously deflect such a body: state actors and terrorist groups.

A state actor might be incentivised to authorise an asteroid strike on an enemy or potential enemy in situations where it wouldn’t necessarily authorise a nuclear strike or conventional invasion. For example, consider an asteroid of around 20 m in diameter. Near-Earth asteroids of around this size are often only detected several hours or days before passing between Earth and the Moon. If a state actor identifies, in secret, an asteroid that will pass near Earth before the global community does, it could feasibly send a mission to alter the asteroid’s orbit to intersect Earth in a way that would not be detected until much too late. Assuming the state actor did its job well enough, it would be impossible to lay blame on it, or even to guess that the impact was caused by malicious intent.

An asteroid of this size would be expected to carry enough energy to cause an explosion roughly 30 times the strength of the nuclear bomb dropped on Hiroshima in WWII.
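That figure can be sanity-checked with a back-of-the-envelope kinetic energy calculation. The density and velocity below are my own assumed typical values for a stony near-Earth asteroid, not figures from any specific source:

```python
import math

# Assumed parameters for a hypothetical 20 m stony asteroid.
diameter = 20.0      # m
density = 3000.0     # kg/m^3, typical for a stony body (assumption)
velocity = 17_000.0  # m/s, typical Earth-impact speed (assumption)

radius = diameter / 2
mass = density * (4 / 3) * math.pi * radius**3  # ~1.3e7 kg
energy_j = 0.5 * mass * velocity**2             # kinetic energy, joules

KT_TNT = 4.184e12    # joules per kiloton of TNT
HIROSHIMA_KT = 15.0  # approximate Hiroshima yield, kilotons

energy_kt = energy_j / KT_TNT
print(f"~{energy_kt:.0f} kt TNT, ~{energy_kt / HIROSHIMA_KT:.0f}x Hiroshima")
```

Under these assumptions the energy comes out at a few hundred kilotons of TNT, on the order of 30 Hiroshimas, consistent with the claim above. Note that much of this energy may be deposited in the atmosphere rather than at the surface, as happened with the similarly sized Chelyabinsk body.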

We can temper the likelihood of this scenario by noting that it is unlikely for a state actor to covertly discover a new asteroid and track its orbit without any other actor discovering it, considering there are transparent organisations working on tracking them. However, is it possible that a government organisation (e.g. NASA) could be ordered not to share information about a newly discovered asteroid?

What to do about this problem

Even if we don’t directly develop asteroid deflection technology, it will become easier over time anyway as other technologies progress (e.g. launching payloads becomes cheaper, propulsion systems become more efficient). Other space weapons, such as anti-satellite weapons (direct ascent kinetic kill projectiles or directed energy weapons), space-stored nuclear weapons, and kinetic bombardment (‘rods from God’), will likewise become easier with general improvements in relevant technology.

The question arises – even if a small group of people were to decide that developing asteroid deflection technology causes more harm than good, what could they meaningfully do about it? The idea that developing asteroid deflection technology is good is so entrenched in popular opinion that arguing for less or no spending in the area seems unlikely to succeed. This is similar to the situation AI safety researchers find themselves in: advocating for less funding and development of AI seems relatively intractable, so they instead work on solutions to make AI safer. Another parallel is pandemic research – it has obvious benefits in building resilience to natural pandemics, but may also enable a malicious or accidental outbreak of an engineered pathogen.

Final thoughts

I have not considered the possibility of altering the orbit of an extinction-class body (~10 km diameter or greater) into an Earth-intersecting orbit. While the damage would obviously be much greater, even ignoring the loss of future generations, it would be significantly harder to alter the orbit of such a body. Also, we believe we have discovered all bodies of this size in near-Earth orbits (Huebner et al. 2009), so it would be much harder to do this covertly and without risking retaliation (e.g. mutually assured destruction via nuclear weapons). The possibility of altering the orbit of such bodies should still be considered, as they pose an existential risk that smaller bodies do not.

I have also chosen to largely not focus on other types of space weapons (see this book for an overview of space weapons generally) for similar reasons – their potential for dual use is less clear, which in theory makes it harder to justify setting up such technologies in space. It would also be more difficult to make the use of such weapons look like an accident.

Future work

A cost-benefit analysis that examines the pros and cons of developing asteroid deflection technology in a rigorous, quantitative way should be a high priority. Such an analysis would weigh the expected damage from natural asteroid impacts against the increased risk from developing the technology (and possibly examine the opportunity cost of what could otherwise be done with the R&D funding). An example of such an analysis exists in the space of pandemic research, which would be a good starting point. I believe it is unclear at this time whether the benefits outweigh the risks, or vice versa (though at this time I lean towards the risks outweighing the benefits – an unfortunate conclusion for a PhD candidate researching asteroid exploration and deflection to come to).
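The skeleton of such an analysis can be sketched in a few lines. Every number below is entirely illustrative – made up to show the shape of the calculation, not estimates I am defending:

```python
# Illustrative (made-up) inputs for the shape of a cost-benefit analysis.
annual_prob_natural = 1 / 500_000  # >=1 km natural impact per year (from text)
deaths_if_impact = 1e8             # assumed fatalities from such an impact

# Suppose deflection tech halves natural risk but introduces a small new
# probability of an accidental or malicious induced impact.
natural_risk_reduction = 0.5
annual_prob_induced = 1e-7         # assumed, purely hypothetical

ev_without_tech = annual_prob_natural * deaths_if_impact
ev_with_tech = (annual_prob_natural * (1 - natural_risk_reduction)
                + annual_prob_induced) * deaths_if_impact

print(f"Expected annual deaths without tech: {ev_without_tech:.1f}")
print(f"Expected annual deaths with tech:    {ev_with_tech:.1f}")
```

The real work lies in estimating the induced-impact probability and the damage distributions, both of which are deeply uncertain; the point of the sketch is only that the comparison is tractable once those inputs are pinned down.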

Research regarding the technical feasibility of deflecting an asteroid into a specific target (e.g. a city) should also be examined, though such analysis comes with drawbacks (see the section on information hazards).

We should also consider policy and international cooperation solutions that can be set in place today to reduce the likelihood of accidental and malicious asteroid deflection occurring.

Information hazard disclaimer

An information hazard is “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” Much of the research into the risk side of DURCs could be considered an information hazard. For example, a paper that demonstrates how easy it might be to engineer and release an advanced pathogen, written with the intent of raising concern, could make it easier for someone to do just that. It even seems plausible that publishing such a paper could cause more harm than good. Similar research into asteroids as a DURC would have the same issue (indeed, this post itself could be an information hazard).


* An ‘existential threat’ typically refers to an event that could kill either all human life, or all life in general. A ‘catastrophic threat’ refers to an event that would cause substantial damage and suffering, but wouldn’t be expected to kill all human life; humanity would eventually rebuild.
