Existential risk

Existential risk (sometimes abbreviated to X-risk) is the term for scientifically plausible risks that may cause the entire human race to become extinct.

Such risks are best studied so we can identify and avoid them. However, we must be careful not to overemphasize risks that really are implausible, at the expense of addressing other serious problems our civilization faces.

Suggested existential risks

  • Large-scale nuclear war. Although this is obviously not something that most people want to be empirically tested, some have argued that large-scale nuclear war would kill off all of humanity. The extinction would likely not happen via the direct casualties, nor even via radiation poisoning, but via nuclear winter wiping out the food chain. Recent modelling supports the view that the food chain would be degraded for an extended period of time - possibly long enough for billions of people to starve to death.
  • Asteroid. Civilisation-destroying, very large asteroids (or more generally, near-Earth objects). NEOs come in a range of sizes, and only the largest could wipe out the entire human species.
  • Cosmic threats. Events such as a nearby supernova, a gamma ray burst, or a near encounter with a large wandering object (a planet, star or black hole) could directly wipe out all life, or disrupt the solar system so badly that the Earth could no longer sustain life.[1]
  • Unfriendly artificial intelligence. An artificial general intelligence deciding that humanity is an impediment or superfluous to meeting its goals. Mainly claimed by non-computer-science experts, such as Elon Musk and Stephen Hawking, and others who promote the idea of an 'intelligence explosion', but also by some computer scientists, such as Stuart Russell.[2]
  • Pandemic. Theoretically, a worldwide pandemic to which humanity lacks immunity, perhaps exacerbated by the spread of disease vectors via air travel and by denialist politics. Some worry that such a pandemic could be created via synthetic biological technology.
  • Climate change. Climate change is a real problem in its own right, but some have suggested that it represents a threat to the entire human species.
  • Runaway nanotechnology. It is feared that self-replicating nanomachines could consume all of the biosphere, including us. This is known as the grey goo scenario.
  • Cosmological phase transition. A variety of hypothetical mechanisms have been posited, generally rooted in quantum physics, which could destroy the entire universe. One suggested mechanism is vacuum decay: the transition from a metastable 'false' vacuum to a more stable vacuum state, with the resulting phase change propagating through all the matter of the universe. Kurt Vonnegut portrayed a smaller-scale fictional version with "Ice-9", which could theoretically cause all the water in the world to change state and render it unable to support life - but that's just fiction, whereas physicists worry about something affecting the entire universe.[3][4]
  • Religious prophecy. Unlike the previous entries, these are not generally taken seriously by the scientifically minded, but apocalyptic predictions have consistently attracted followers for thousands of years.

Plausibility and potential solutions

Large-scale nuclear war

This is obviously a really, really bad thing, but whether such a nuclear war would actually cause the extinction of every last human is debatable. More information can be found at Wikipedia's page on nuclear holocaust. Whether such a war occurs depends entirely on human factors such as politics and diplomacy. The most effective course of action for individuals is to vote, and to vote for the party or candidate that will promote good international relations rather than national pride and machismo, as well as to persuade their governments to take these risks seriously, since defending the lives of its citizens is a fundamental duty of a state.

Asteroid

Although asteroid risks are relatively easy to get a handle on, it is impossible to come up with an accurate probability because the frequency of asteroid impacts is not accurately known. Nevertheless, the existential risk posed by asteroids is very, very tiny over human time-frames (see the rough sketch below). This is among the best-understood and best-monitored of all existential risks - NASA carefully tracks all known NEOs large enough to cause the worst damage - although more research is needed on deflection techniques. Billionaire Elon Musk, among others, has apparently serious plans to set up a very large self-sustaining space colony on Mars, and to do it entirely with private money (well, if you ignore the shedloads of money his company SpaceX is getting from the US government, which is essentially subsidising his rocket R&D). Such an off-Earth colony might be a very effective measure to prevent a civilisation-destroying asteroid from making humanity go extinct, even if deflection attempts fail and the asteroid does strike the Earth. That said, merely establishing a colony on a possibly sterile Mars would be difficult enough; creating a long-term self-sustaining Martian colony that is independent of Earth is probably at least an order of magnitude harder.
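
To see why the per-century risk is so small, here is a minimal back-of-envelope sketch. It assumes, purely for illustration, that extinction-class impacts arrive as a Poisson process at an average rate of one per 100 million years; the true rate is poorly constrained, which is exactly the point made above.

```python
import math

# Illustrative assumption: extinction-class impacts occur, on average,
# about once per 100 million years. The real rate is poorly constrained.
RATE_PER_YEAR = 1 / 100_000_000

def impact_probability(years: float) -> float:
    """Chance of at least one impact in the given time window,
    treating impacts as a Poisson process with the assumed rate."""
    return 1 - math.exp(-RATE_PER_YEAR * years)

print(f"Next century:      {impact_probability(100):.1e}")     # ~1.0e-06
print(f"Next 10,000 years: {impact_probability(10_000):.1e}")  # ~1.0e-04
```

Even if the assumed rate is off by an order of magnitude in either direction, the per-century probability stays minuscule compared with more mundane threats.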

Cosmic threats

It isn't clear exactly how damaging a nearby supernova would be, and predictions of the frequency of supernovae vary significantly. One estimate puts a supernova within 10 parsecs (33 light years) of Earth - roughly the maximum distance at which one would do serious harm to the biosphere - at once every 240 million years; other estimates range from once every 100 million years to once every 20 billion years. (For the record, the closest known supernova candidate is currently 155 light years away.) Gamma ray bursts are extremely energetic but last only a short time, so the damage would be limited - one might strip half of the ozone layer, which would take a few years to build up again, but it would be unlikely to cause total extinction immediately. There is little clear evidence of such events causing major damage in Earth's past, although some have attributed the Ordovician–Silurian extinction to a supernova or gamma ray burst.[5][6] It would be virtually impossible to prevent such an event: we may be able to predict the behavior of well-studied stars, but an unobserved pair of white dwarfs or neutron stars, or an undiscovered binary system containing a white dwarf, could collide and go supernova close enough to harm us, and current technology would be helpless even with advance warning. Solutions would involve leaving the Earth, and it might be necessary to move a significant distance away from the entire solar system. Alternatively we could sit tight, but without interstellar travel our options are very limited.
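
As with asteroids, it helps to translate those quoted rates into a chance per century. Below is a minimal sketch using the range of mean intervals mentioned above; the figures are the article's, while the Poisson assumption is added purely for illustration.

```python
import math

# Quoted mean intervals between supernovae within ~10 parsecs:
# best guess 240 million years, with estimates spanning 100 Myr to 20 Gyr.
ESTIMATES_YEARS = {"longest": 20e9, "central": 240e6, "shortest": 100e6}

def nearby_supernova_chance(years: float, mean_interval: float) -> float:
    """Probability of at least one nearby supernova in the window,
    assuming such events follow a Poisson process."""
    return 1 - math.exp(-years / mean_interval)

for label, interval in ESTIMATES_YEARS.items():
    print(f"{label:>8} interval: {nearby_supernova_chance(100, interval):.1e} per century")
# Even the shortest quoted interval gives only about a one-in-a-million
# chance per century.
```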

Unfriendly artificial intelligence

Aside from the inherent problems in any idea of an 'intelligence explosion', it seems likely that the so-called 'value alignment' problem in AI will be solved well before AI systems reach general intelligence. For example, a cleaning robot must be able to learn what humans consider dirt and garbage, and what they consider valuable property, in order to be useful. A psychopathic AI which cannot learn and internalize human goals is of no use and will not be developed. Only 8% of respondents to a survey of the 100 most-cited authors in the AI field considered AI to present an existential risk.[7] If you're still worried, there are folks like Elon Musk, who is a founder of OpenAI and who donated US$10 million towards AI safety research in 2015.[8] However, donating money to AI safety research may be wasteful or even counterproductive; for example, the effective altruist (EA) charity evaluator GiveWell has in fact recommended against giving to MIRI,[9] much to the consternation of some in the LessWrong camp who believed - and still believe - that the EA movement would be a useful Trojan horse for getting more donations sent MIRI's way.

Pandemic

For much of human history, pandemics were the most plausible existential risk to mankind. However, our increased understanding of disease and sanitation has gone a significant way towards decreasing this risk. Knowledge of the basic mechanisms of disease spread, and of how to identify and minimize them, goes a long way towards preventing large-scale outbreaks, while antibiotics and vaccines make treating many potential pathogens more feasible. Drastic measures to contain a pandemic would doubtless be employed by governments as soon as they became aware of the nature of the threat: after the 9/11 attacks, the US government temporarily shut down all air travel over the continental United States, and an X-risk pandemic would obviously be far more dangerous than a small group of airplane hijackers.

One common concern cited by those fearing pandemics is the over-use of antibiotics, which risks encouraging antibiotic-resistant strains to develop. It would be difficult for a single bacterium to develop resistance to all forms of antibiotics, owing to the large number of different ones available, but were one to do so, humanity would be limited to 'old-fashioned' means of disease prevention, such as quarantine and the mitigation of potential vectors that could spread the disease. However, historical evidence suggests that naturally evolved pathogens, even ones resistant to all our drugs, would be unlikely to reach the level of risking the eradication of human life. The period between the introduction of proper sanitation and the later development of antibiotics shows that proper sanitation alone significantly decreased the loss of life due to disease. With superior medical knowledge and resources available, modern humanity should be better placed to prevent the spread of even a hypothetical drug-resistant disease.

Arguably the most probable cause of a life-threatening disease would be genetic engineering, which could theoretically create a disease that is immune to all known countermeasures, or add traits that increase the disease's lethality or make it harder to contain - such as an extended dormant phase in which an individual is infectious but not obviously ill - traits that are unlikely to develop through standard evolutionary means.[note 1] Luckily for humanity, biology is far more complicated than many sci-fi fans assume, so engineering such a pathogen is very, very difficult and out of reach for a small band of terrorists. Furthermore, most people capable of creating such an engineered pathogen are unlikely to wish to kill themselves, their loved ones, and all of humanity.[citation needed] There are also containment methods to limit the potential spread of engineered organisms, which experienced genetic engineers should already be using to prevent this sort of scenario.

Climate change

Climate change is currently not thought to be an existential risk, at least within the next 100 years - although more research is needed on worst-case climate scenarios. The non-existential reality is bad enough, though. Hypothetically, a runaway greenhouse effect is a situation where a planet gets hotter and hotter through a positive feedback loop until all the oceans boil off and there is no possibility of sustaining life, as happened on Venus. However, this is considered virtually impossible on the Earth.[10]

Runaway nanotechnology

Eric Drexler, who popularized the idea of nanotechnology, points out that grey goo (nanotech that accidentally eats everything on the planet and turns it into goo) is not an existential risk because it is not a realistic risk at all - although this does not rule out more deliberate uses of nanotechnology for military ends. This is aside from the various fundamental problems with nanotechnology itself.

Cosmological phase transition

It isn't clear how likely this is, and the answer depends on a full understanding of subatomic physics.[11] The fact that it hasn't happened in the last 14 billion years suggests it isn't terribly likely, and some estimates put it in the very distant future in cosmological terms, but if it did happen, we'd be shit outta luck.

Religious prophecy

Religious and other mystical claims of the impending end of the world are based on unverifiable visions, "voices from God" which conveniently only one person can hear, or unique interpretations of holy books, and in particular, numerology. Sometimes, they involve believers handing over all their savings to the person making the warning, who then conveniently decides to keep the money after the predicted end of the world fails to materialise.

Stop technological progress?

Some people, such as Sun Microsystems' former chief scientist Bill Joy[12] and MIRI's former Director of Research Ben Goertzel,[13] have argued that in order to avoid existential risks, we ought to halt the march of technological progress, to a greater or lesser extent, either temporarily or permanently. A tiny minority (not necessarily acting out of concern over existential risk) have even decided that it is appropriate to resort to violence to achieve their aims of stopping certain technologies.[14]

However, it is almost impossible to achieve technological relinquishment in any useful (i.e. global) way, even if it were considered desirable. Even if America and Europe both ban a technology, one or more countries facing different cultural, political and economic constraints - China, for example - will probably eventually develop it if it is useful, and quite likely out-compete those who don't adopt it. We are better off exploring other options.

In addition, while it is true that technology itself causes new problems (nuclear proliferation, for instance), it is also the only solution to old problems (such as famine). Of all the existential risks considered here, only asteroids are known for certain to be a real risk capable of wiping out entire species, and the only possible solutions to that risk involve high technology.

While the machines rising up against their masters is a common sci-fi trope, in reality, there is no incentive to build any machine or AI that is not a tool in humanity's hand, with no will of its own other than what humans give it.[15]

Reducing existential risk

Various organizations are committed to reducing existential risk, or to spreading awareness of it. One of them is Nick Bostrom's Future of Humanity Institute (FHI) at Oxford University. In his article 'Astronomical Waste', Bostrom argued that the continued existence of human civilization has immense moral value: vast populations of humans could exist through space colonization, and their lives would be happy because of advanced technology, so reducing existential risk is, on this view, the most valuable cause.[16]

In 2014, an organization known as the Future of Life Institute (FLI) was established, with goals similar to FHI's. Its founders included the physicist Max Tegmark and Skype co-founder Jaan Tallinn; in 2018, its scientific advisory board included Elon Musk and Stephen Hawking.[17] Its primary goal has been to spread awareness about AI 'risk', and it has distributed $2 million to 10 researchers whom it deemed to be carrying out AI-risk-reducing research.[18] On the other hand, the Institute does not believe that the arrival of human-level AI is imminent, saying that it is decades away or might not even happen in the 21st century; it focuses on AI risk because it expects solving the AI control problem to take a long time.[19]

In 2012, the Centre for the Study of Existential Risk (CSER) was established at Cambridge University. Its founders included the above-mentioned Jaan Tallinn and Lord Martin Rees, the Astronomer Royal, and it has collaborated with FHI.[20][21] Other such organizations include the Global Catastrophic Risk Institute,[22] the X-Risks Institute,[23] Saving Humanity from Homo Sapiens,[24] the Lifeboat Foundation,[25] the Foresight Institute,[26] and the Skoll Global Threat Fund.[27]

Many organizations combat specific forms of existential risk. Those devoted solely to combating (or claiming to combat) AI-related X-risk include the Centre for Human-Compatible AI,[28] the Machine Intelligence Research Institute,[29] and the Leverhulme Centre for the Future of Intelligence.[30]

Notes

  1. A mutation can make an existing contagion lethal, but it's unlikely to make it lethal only after x days of being non-lethal. In (overly simplified) terms, if a contagion is successfully spreading itself without harming its host, there is little evolutionary pressure for it to switch to killing its host, since a dead host can no longer spread the contagion.

References

  1. See the Wikipedia article on Global catastrophic risk.
  2. https://people.eecs.berkeley.edu/~russell/research/future/
  3. Vacuum decay: the ultimate catastrophe, Cosmos Magazine, Sep 14, 2015
  4. Q: Could Kurt Vonnegut’s “Ice-9 catastrophe” happen?, Ask A Mathematician: Ask A Physicist, Nov 3, 2012
  5. See the Wikipedia article on Near-Earth supernova.
  6. Gamma Ray Burst Mass Extinction, Cosmos: Study Astronomy Online at Swinburne University, accessed 18 Mar 2019
  7. Muller & Bostrom, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, https://nickbostrom.com/papers/survey.pdf
  8. https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/, Future of Life, Oct 12, 2015
  9. "Thoughts on the Singularity Institute." MIRI was formerly called the Singularity Institute for Artificial Intelligence.
  10. See the Wikipedia article on Tipping points in the climate system.
  12. See the Wikipedia article on Why The Future Doesn't Need Us.
  13. Goertzel, Ben. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Journal of Consciousness Studies, 2012
  14. A luddite link to nano-terrorists, Michele Catanzaro, The Guardian, Fri 8 Nov 2013
  15. Three Arguments Against the Singularity, Charlie Stross, Antipope.org
  16. https://nickbostrom.com/astronomical/waste.html
  17. https://en.wikipedia.org/wiki/Future_of_Life_Institute
  18. https://futureoflife.org/ai-safety-research/
  19. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
  20. https://www.cser.ac.uk/
  21. https://www.lesswrong.com/posts/2idJBvzzj3dP36HSA/update-on-establishment-of-cambridge-s-centre-for-study-of
  22. http://gcrinstitute.org/
  23. https://ieet.org/index.php/IEET2/more/torres20151030
  24. http://shfhs.org/whatarexrisks.html
  25. https://lifeboat.com/ex/programs
  26. https://foresight.org/about-us/our-mission/
  27. http://www.skollglobalthreats.org/about-us/mission-and-approach/
  28. https://humancompatible.ai/
  29. intelligence.org
  30. http://lcfi.ac.uk/