Artificial intelligence arms race

A military artificial intelligence arms race is a competition between two or more states to have their military forces equipped with the best artificial intelligence (AI). Since the mid-2010s, many analysts have argued that such a global arms race for better military AI has already begun.[1][2]

Terminology

More broadly, any competition for superior AI is sometimes framed as an "arms race".[3][4] A quest for military AI dominance overlaps with a quest for dominance in other sectors, especially as a country pursues both economic and military advantage.[5]

Risks

Nick Bostrom and others argue that a race to develop military AI first could cause powers to skimp on safety precautions.[6]

Stephen Cave of the Leverhulme Centre argues that the risks of an AI race are threefold. The first risk potentially has geopolitical implications, while the second two definitely do:

i) The dangers of an AI 'race for technological advantage' framing, regardless of whether the race is seriously pursued;

ii) The dangers of an AI 'race for technological advantage' framing and an actual AI race for technological advantage, regardless of whether the race is won;

iii) The dangers of an AI race for technological advantage being won.[7]

Cave argues the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk.[8]

Arms-race terminology is also sometimes used in the context of competition for economic dominance and "soft power"; for example, the November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence, while stressing the role of diplomacy in engaging with China and Russia, adopts the language of a competitive arms race.[9] It states that US military-technological superiority is vital to the existing world order[10]:11 and stresses that the ongoing US militarization of AI, together with the militarization of AI by China and Russia, serves geopolitical purposes:[10]:1-2

Developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. We are concerned that America’s role as the world’s leading innovator is threatened. We are concerned that strategic competitors and non-state actors will employ AI to threaten Americans, our allies, and our values. We know strategic competitors are investing in research and application. It is only reasonable to conclude that AI-enabled capabilities could be used to threaten our critical infrastructure, amplify disinformation campaigns, and wage war.

In Foreign Policy, Paul Scharre warns that rhetoric about an AI arms race could, itself, amplify into a self-fulfilling prophecy.[11]

Stances toward military artificial intelligence

Russia

Putin (seated, center) at National Knowledge Day, 2017

Russian General Viktor Bondarev, commander-in-chief of the Russian air force, stated that, as early as February 2017, Russia was working on AI-guided missiles that could decide to switch targets mid-flight.[12] Russia's Military Industrial Committee has approved plans to derive 30 percent of Russia's combat power from remote-controlled and AI-enabled robotic platforms by 2030. Reports by state-sponsored Russian media on potential military uses of AI increased in mid-2017.[13] In May 2017, the CEO of Russia's Kronstadt Group, a defense contractor, stated that "there already exist completely autonomous AI operation systems that provide the means for UAV clusters, when they fulfill missions autonomously, sharing tasks between them, and interact", and that it is inevitable that "swarms of drones" will one day fly over combat zones.[14] Russia has been testing several autonomous and semi-autonomous combat systems, such as Kalashnikov's "neural net" combat module, with a machine gun, a camera, and an AI that its makers claim can make its own targeting judgements without human intervention.[15]

In September 2017, during a National Knowledge Day address to over a million students in 16,000 Russian schools, Russian President Vladimir Putin stated "Artificial intelligence is the future, not only for Russia but for all humankind... Whoever becomes the leader in this sphere will become the ruler of the world". Putin also said it would be better to prevent any single actor achieving a monopoly, but that if Russia became the leader in AI, they would share their "technology with the rest of the world, like we are doing now with atomic and nuclear technology".[16][17][18]

Russia is establishing a number of organizations devoted to the development of military AI. In March 2018, the Russian government released a 10-point AI agenda, which calls for the establishment of an AI and Big Data consortium, a Fund for Analytical Algorithms and Programs, a state-backed AI training and education program, a dedicated AI lab, and a National Center for Artificial Intelligence, among other initiatives.[19] In addition, Russia recently created a defense research organization, roughly equivalent to DARPA, dedicated to autonomy and robotics called the Foundation for Advanced Studies, and initiated an annual conference on “Robotization of the Armed Forces of the Russian Federation.”[20][21]

The Russian military has been researching a number of AI applications, with a heavy emphasis on semiautonomous and autonomous vehicles. In an official statement on November 1, 2017, Viktor Bondarev, chairman of the Federation Council’s Defense and Security Committee, stated that “artificial intelligence will be able to replace a soldier on the battlefield and a pilot in an aircraft cockpit” and later noted that “the day is nearing when vehicles will get artificial intelligence.”[22] Bondarev made these remarks in close proximity to the successful test of Nerehta, an uninhabited Russian ground vehicle that reportedly “outperformed existing [inhabited] combat vehicles.” Russia plans to use Nerehta as a research and development platform for AI and may one day deploy the system in combat, intelligence gathering, or logistics roles.[23] Russia has also reportedly built a combat module for uninhabited ground vehicles that is capable of autonomous target identification—and, potentially, target engagement—and plans to develop a suite of AI-enabled autonomous systems.[24][25][21]

In addition, the Russian military plans to incorporate AI into uninhabited aerial, naval, and undersea vehicles and is currently developing swarming capabilities.[20] It is also exploring innovative uses of AI for remote sensing and electronic warfare, including adaptive frequency hopping, waveforms, and countermeasures.[26][27] Russia has also made extensive use of AI technologies for domestic propaganda and surveillance, as well as for information operations directed against the United States and U.S. allies.[28][29][21]

The Russian government has strongly rejected any ban on lethal autonomous weapon systems, suggesting that such a ban could be ignored.[30][31]

China

China is pursuing a strategic policy of 'military-civil fusion' on AI for global technological supremacy.[10][32] According to a February 2019 report by Gregory C. Allen of the Center for a New American Security, China's leadership – including paramount leader Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition.[5] Chinese military officials have said that their goal is to incorporate commercial AI technology to "narrow the gap between the Chinese military and global advanced powers."[5] The close ties between Silicon Valley and China, and the open nature of the American research community, have made the West's most advanced AI technology easily available to China; in addition, Chinese industry has numerous home-grown AI accomplishments of its own, such as Baidu passing a notable Chinese-language speech recognition capability benchmark in 2015.[33] As of 2017, Beijing's roadmap aims to create a $150 billion AI industry by 2030.[34] Before 2013, Chinese defense procurement was mainly restricted to a few conglomerates; however, as of 2017, China often sources sensitive emerging technology such as drones and artificial intelligence from private start-up companies.[35] One Chinese state has pledged to invest $5 billion in AI. Beijing has committed $2 billion to an AI development park.[36] The Japan Times reported in 2018 that annual private Chinese investment in AI is under $7 billion. AI startups in China received nearly half of total global investment in AI startups in 2017; the Chinese filed for nearly five times as many AI patents as did Americans.[37]

China published a position paper in 2016 questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue.[38] In 2018, Xi called for greater international cooperation in basic AI research.[39] Chinese officials have expressed concern that AI such as drones could lead to accidental war, especially in the absence of international norms.[40] In 2019, US Defense Secretary Mark Esper lashed out at China for selling drones capable of taking life with no human oversight.[41]

United States

The Sea Hunter, an autonomous US warship, 2016

In 2014, former Secretary of Defense Chuck Hagel posited the "Third Offset Strategy" that rapid advances in artificial intelligence will define the next generation of warfare.[42] According to data science and analytics firm Govini, the U.S. Department of Defense increased investment in artificial intelligence, big data and cloud computing from $5.6 billion in 2011 to $7.4 billion in 2016.[43] However, the civilian NSF budget for AI saw no increase in 2017.[34] The Japan Times reported in 2018 that annual private US investment in AI is around $70 billion.[37] The November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence confirmed that AI is critical to US technological military superiority.[10]

The U.S. has many military AI combat programs, such as the Sea Hunter autonomous warship, which is designed to operate for extended periods at sea without a single crew member, and even to guide itself in and out of port.[15] Since 2017, a temporary US Department of Defense directive has required a human operator to be kept in the loop when autonomous weapons systems take human life.[44] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[45]

Project Maven is a Pentagon project that uses machine learning and engineering talent to distinguish people and objects in drone videos,[46] apparently giving the government real-time battlefield command and control and the ability to track, tag, and spy on targets without human involvement. Reportedly it stops short of acting as an AI weapons system capable of firing on self-designated targets.[47] The project was established in a memo by the U.S. Deputy Secretary of Defense on 26 April 2017.[48] Also known as the Algorithmic Warfare Cross-Functional Team,[49] it is, according to Lt. Gen. Jack Shanahan of the United States Air Force in November 2017, a project "designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department".[50] Its chief, U.S. Marine Corps Col. Drew Cukor, said: "People and computers will work symbiotically to increase the ability of weapon systems to detect objects."[51] At the second Defense One Tech Summit in July 2017, Cukor also said that the investment in a "deliberate workflow process" was funded by the Department [of Defense] through its "rapid acquisition authorities" for about "the next 36 months".[52]

United Kingdom

In 2015, the UK government opposed a ban on lethal autonomous weapons, stating that "international humanitarian law already provides sufficient regulation for this area", but that all weapons employed by UK armed forces would be "under human oversight and control".[54]

Israel

Israel's Harpy anti-radar "fire and forget" drone is designed to be launched by ground troops, and autonomously fly over an area to find and destroy radar that fits pre-determined criteria.[55]

South Korea

The South Korean Super aEgis II machine gun, unveiled in 2010, sees use both in South Korea and in the Middle East. It can identify, track, and destroy a moving target at a range of 4 km. While the technology can theoretically operate without human intervention, in practice safeguards are installed to require manual input. A South Korean manufacturer states, "Our weapons don't sleep, like humans must. They can see in the dark, like humans can't. Our technology therefore plugs the gaps in human capability", and they want to "get to a place where our software can discern whether a target is friend, foe, civilian or military".[56]

According to Siemens, worldwide military spending on robotics was US$5.1 billion in 2010 and US$7.5 billion in 2015.[57][58]

China became a top player in artificial intelligence research in the 2010s. According to the Financial Times, in 2016, for the first time, China published more AI papers than the entire European Union. When restricted to number of AI papers in the top 5% of cited papers, China overtook the United States in 2016 but lagged behind the European Union.[34] 23% of the researchers presenting at the 2017 American Association for the Advancement of Artificial Intelligence (AAAI) conference were Chinese.[59] Eric Schmidt, the former chairman of Alphabet, has predicted China will be the leading country in AI by 2025.[60]

AAAI presenters:[59]

  Country   2012   2017
  US         41%    34%
  China      10%    23%
  UK          5%     5%

Proposals for international regulation

The international regulation of autonomous weapons is an emerging issue for international law.[61] AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.[62][63] As early as 2007, scholars such as AI professor Noel Sharkey have warned of "an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions".[64][65] In 2014, AI specialist Steve Omohundro warned that "An autonomous weapons arms race is already taking place".[66] Miles Brundage of the University of Oxford has argued an AI arms race might be somewhat mitigated through diplomacy: "We saw in the various historical arms races that collaboration and dialog can pay dividends".[67] Over a hundred experts signed an open letter in 2017 calling on the UN to address the issue of lethal autonomous weapons;[68][69] however, at a November 2017 session of the UN Convention on Certain Conventional Weapons (CCW), diplomats could not agree even on how to define such weapons.[70] The Indian ambassador and chair of the CCW stated that agreement on rules remained a distant prospect. As of 2017, twenty-two countries had called for a full ban on lethal autonomous weapons.[71]

Many experts believe attempts to completely ban killer robots are likely to fail.[72] A 2017 report from Harvard's Belfer Center predicts that AI has the potential to be as transformative as nuclear weapons.[67][73][74] The report further argues that "Preventing expanded military use of AI is likely impossible" and that "the more modest goal of safe and effective technology management must be pursued", such as banning the attaching of an AI dead man's switch to a nuclear arsenal.[74] Part of the impracticality is that detecting treaty violations would be extremely difficult.[75][76]

Other reactions to autonomous weapons

A 2015 open letter calling for a ban on lethal autonomous weapons systems was signed by tens of thousands of citizens, including scholars such as physicist Stephen Hawking, Tesla magnate Elon Musk, and Apple's Steve Wozniak.[70]

Professor Noel Sharkey of the University of Sheffield has warned that autonomous weapons will inevitably fall into the hands of terrorist groups such as the Islamic State.[77]

Disassociation

Many Western tech companies are leery of being associated too closely with the U.S. military, for fear of losing access to China's market.[33] Furthermore, some researchers, such as DeepMind's Demis Hassabis, are ideologically opposed to contributing to military work.[78]

For example, in June 2018, company sources at Google said that top executive Diane Greene told staff that the company would not follow up Project Maven after the current contract expired in March 2019.[46]

See also

References

  1. Geist, Edward Moore (2016-08-15). "It's already too late to stop the AI arms race—We must manage it instead". Bulletin of the Atomic Scientists. 72 (5): 318–321. Bibcode:2016BuAtS..72e.318G. doi:10.1080/00963402.2016.1216672. ISSN 0096-3402.
  2. Maas, Matthijs M. (2019-02-06). "How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons". Contemporary Security Policy. 40 (3): 285–311. doi:10.1080/13523260.2019.1576464. ISSN 1352-3260.
  3. Roff, Heather M. (2019-04-26). "The frame problem: The AI "arms race" isn't one". Bulletin of the Atomic Scientists. 75 (3): 95–98. Bibcode:2019BuAtS..75c..95R. doi:10.1080/00963402.2019.1604836. ISSN 0096-3402.
  4. "For Google, a leg up in the artificial intelligence arms race". Fortune. 2014. Retrieved 11 April 2020.
  5. Allen, Gregory. "Understanding China's AI Strategy". Center for a New American Security. Center for a New American Security. Retrieved 15 March 2019.
  6. Armstrong, Stuart; Bostrom, Nick; Shulman, Carl (2015-08-01). "Racing to the precipice: a model of artificial intelligence development". AI & Society. 31 (2): 201–206. doi:10.1007/s00146-015-0590-y. ISSN 0951-5666.
  7. Cave, Stephen; ÓhÉigeartaigh, Seán S. (2018). "An AI Race for Strategic Advantage". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society - AIES '18. New York, New York, USA: ACM Press: 36–40. doi:10.1145/3278721.3278780. ISBN 978-1-4503-6012-8.
  8. Cave, Stephen; ÓhÉigeartaigh, Seán S. (2018). "An AI Race for Strategic Advantage". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society - AIES '18. New York, New York, USA: ACM Press: 2. doi:10.1145/3278721.3278780. ISBN 978-1-4503-6012-8.
  9. Hwang, Tim; Pascal, Alex. "Artificial Intelligence Isn't an Arms Race". Foreign Policy. Retrieved 2020-04-07.
  10. Interim Report. Washington, DC: National Security Commission on Artificial Intelligence. 2019.
  11. Scharre, Paul (18 February 2020). "Killer Apps: The Real Dangers of an AI Arms Race". Retrieved 15 March 2020.
  12. "Russia is building a missile that can makes its own decisions". Newsweek. 20 July 2017. Retrieved 24 December 2017.
  13. "Why Elon Musk is right about the threat posed by Russian artificial intelligence". The Independent. 6 September 2017. Retrieved 24 December 2017.
  14. "Russia is developing autonomous "swarms of drones" it calls an inevitable part of future warfare". Newsweek. 15 May 2017. Retrieved 24 December 2017.
  15. Smith, Mark (25 August 2017). "Is 'killer robot' warfare closer than we think?". BBC News. Retrieved 24 December 2017.
  16. "Artificial Intelligence Fuels New Global Arms Race". WIRED. Retrieved 24 December 2017.
  17. Clifford, Catherine (29 September 2017). "In the same way there was a nuclear arms race, there will be a race to build A.I., says tech exec". CNBC. Retrieved 24 December 2017.
  18. Radina Gigova (2 September 2017). "Who Vladimir Putin thinks will rule the world". CNN. Retrieved 22 March 2020.
  19. "Here's How the Russian Military Is Organizing to Develop AI". Defense One. Retrieved 2020-05-01.
  20. "Red Robots Rising: Behind the Rapid Development of Russian Unmanned Military Systems". The Strategy Bridge. Retrieved 2020-05-01.
  21. Congressional Research Service (2019). Artificial Intelligence and National Security (PDF). Washington, DC: Artificial Intelligence and National Security. This article incorporates text from this source, which is in the public domain.
  22. Bendett, Samuel (2017-11-08). "Should the U.S. Army Fear Russia's Killer Robots?". The National Interest. Retrieved 2020-05-01.
  23. "Russia Says It Will Field a Robot Tank that Outperforms Humans". Defense One. Retrieved 2020-05-01.
  24. Greene, Tristan (2017-07-27). "Russia is developing AI missiles to dominate the new arms race". The Next Web. Retrieved 2020-05-01.
  25. Mizokami, Kyle (2017-07-19). "Kalashnikov Will Make an A.I.-Powered Killer Robot". Popular Mechanics. Retrieved 2020-05-01.
  26. Dougherty, Jill; Jay, Molly. "Russia Tries to Get Smart about Artificial Intelligence". Wilson Quarterly.
  27. "Russian AI-Enabled Combat: Coming to a City Near You?". War on the Rocks. 2019-07-31. Retrieved 2020-05-01.
  28. Polyakova, Alina (2018-11-15). "Weapons of the weak: Russia and AI-driven asymmetric warfare". Brookings. Retrieved 2020-05-01.
  29. Meserole, Chris; Polyakova, Alina. "Disinformation Wars". Foreign Policy. Retrieved 2020-05-01.
  30. "Russia rejects potential UN 'killer robots' ban, official statement says". Institution of Engineering and Technology. 1 December 2017. Retrieved 12 January 2018.
  31. "Examination of various dimensions of emerging technologies in the area of lethal autonomous weapons systems, Russian Federation, November 2017" (PDF). Retrieved 12 January 2018.
  32. "Technology, Trade, and Military-Civil Fusion: China's Pursuit of Artificial Intelligence, New Materials, and New Energy | U.S.- CHINA | ECONOMIC and SECURITY REVIEW COMMISSION". www.uscc.gov. Retrieved 2020-04-04.
  33. Markoff, John; Rosenberg, Matthew (3 February 2017). "China's Intelligent Weaponry Gets Smarter". The New York Times. Retrieved 24 December 2017.
  34. "China seeks dominance of global AI industry". Financial Times. 15 October 2017. Retrieved 24 December 2017.
  35. "China enlists start-ups in high-tech arms race". Financial Times. 9 July 2017. Retrieved 24 December 2017.
  36. Metz, Cade (12 February 2018). "As China Marches Forward on A.I., the White House Is Silent". The New York Times. Retrieved 14 February 2018.
  37. "The artificial intelligence race heats up". The Japan Times. 1 March 2018. Retrieved 5 March 2018.
  38. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Retrieved 24 December 2017.
  39. Pecotic, Adrian (2019). "Whoever Predicts the Future Will Win the AI Arms Race". Foreign Policy. Retrieved 16 July 2019.
  40. Vincent, James (6 February 2019). "China is worried an AI arms race could lead to accidental war". The Verge. Retrieved 16 July 2019.
  41. "Is China exporting killer robots to Mideast?". Asia Times. 2019-11-28. Retrieved 2019-12-21.
  42. "US risks losing AI arms race to China and Russia". CNN. 29 November 2017. Retrieved 24 December 2017.
  43. Davenport, Christian (3 December 2017). "Future wars may depend as much on algorithms as on ammunition, report says". Washington Post. Retrieved 24 December 2017.
  44. "US general warns of out-of-control killer robots". CNN. 18 July 2017. Retrieved 24 December 2017.
  45. United States. Defense Innovation Board. AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC 1126650738.
  46. "Google 'to end' Pentagon Artificial Intelligence project". BBC News. 2 June 2018. Retrieved 3 June 2018.
  47. "Report: Palantir took over Project Maven, the military AI program too unethical for Google". The Next Web. 11 December 2020. Retrieved 17 January 2020.
  48. Robert O. Work (26 April 2017). "Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven)" (PDF). Retrieved 3 June 2018.
  49. "Google employees resign in protest against Air Force's Project Maven". Fedscoop. 14 May 2018. Retrieved 3 June 2018.
  50. Allen, Gregory C. (21 December 2017). "Project Maven brings AI to the fight against ISIS". Bulletin of the Atomic Scientists. Retrieved 3 June 2018.
  51. Ethan Baron (3 June 2018). "Google Backs Off from Pentagon Project After Uproar: Report". Military.com. Mercury.com. Retrieved 3 June 2018.
  52. Cheryl Pellerin (21 July 2017). "Project Maven to Deploy Computer Algorithms to War Zone by Year's End". DoD News, Defense Media Activity. United States Department of Defense. Retrieved 3 June 2018.
  53. United States. Defense Innovation Board. AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC 1126650738.
  54. Gibbs, Samuel (20 August 2017). "Elon Musk leads 116 experts calling for outright ban of killer robots". The Guardian. Retrieved 24 December 2017.
  55. "'Killer robots': autonomous weapons pose moral dilemma | World| Breakings news and perspectives from around the globe | DW | 14.11.2017". DW.COM. 14 November 2017. Retrieved 12 January 2018.
  56. Parkin, Simon (16 July 2015). "Killer robots: The soldiers that never sleep". BBC. Retrieved 13 January 2018.
  57. "Getting to grips with military robotics". The Economist. 25 January 2018. Retrieved 7 February 2018.
  58. "Autonomous Systems: Infographic". www.siemens.com. Retrieved 7 February 2018.
  59. Kopf, Dan (2018). "China is rapidly closing the US's lead in AI research". Quartz. Retrieved 7 February 2018.
  60. "The battle for digital supremacy". The Economist. 2018. Retrieved 19 March 2018.
  61. Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Retrieved 2019-09-14.
  62. Geist, Edward Moore (2016-08-15). "It's already too late to stop the AI arms race—We must manage it instead". Bulletin of the Atomic Scientists. 72 (5): 318–321. Bibcode:2016BuAtS..72e.318G. doi:10.1080/00963402.2016.1216672. ISSN 0096-3402.
  63. Maas, Matthijs M. (2019-02-06). "How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons". Contemporary Security Policy. 40 (3): 285–311. doi:10.1080/13523260.2019.1576464. ISSN 1352-3260.
  64. "Ban on killer robots urgently needed, say scientists". The Guardian. 13 November 2017. Retrieved 24 December 2017.
  65. Sharkey, Noel (17 August 2007). "Noel Sharkey: Robot wars are a reality". The Guardian. Retrieved 11 January 2018.
  66. Markoff, John (11 November 2014). "Fearing Bombs That Can Pick Whom to Kill". The New York Times. Retrieved 11 January 2018.
  67. "AI Could Revolutionize War as Much as Nukes". WIRED. Retrieved 24 December 2017.
  68. Gibbs, Samuel (20 August 2017). "Elon Musk leads 116 experts calling for outright ban of killer robots". The Guardian. Retrieved 11 January 2018.
  69. "An Open Letter to the United Nations Convention on Certain Conventional Weapons". Future of Life Institute. Retrieved 14 January 2018.
  70. "Rise of the killer machines". Asia Times. 24 November 2017. Retrieved 24 December 2017.
  71. "'Robots are not taking over,' says head of UN body on autonomous weapons". The Guardian. 17 November 2017. Retrieved 14 January 2018.
  72. "Sorry, Banning 'Killer Robots' Just Isn't Practical". WIRED. 22 August 2017. Retrieved 14 January 2018.
  73. McFarland, Matt (14 November 2017). "'Slaughterbots' film shows potential horrors of killer drones". CNNMoney. Retrieved 14 January 2018.
  74. Allen, Greg, and Taniel Chan. "Artificial Intelligence and National Security." Report. Harvard Kennedy School, Harvard University. Boston, MA (2017).
  75. Antebi, Liran. "Who Will Stop the Robots?." Military and Strategic Affairs 5.2 (2013).
  76. Shulman, C., & Armstrong, S. (2009, July). Arms control and intelligence explosions. In 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July (pp. 2-4).
  77. Wheeler, Brian (30 November 2017). "Terrorists 'certain' to get killer robots". BBC News. Retrieved 24 December 2017.
  78. Metz, Cade (15 March 2018). "Pentagon Wants Silicon Valley's Help on A.I." The New York Times. Retrieved 19 March 2018.

Further reading

  • Paul Scharre, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
  • The National Security Commission on Artificial Intelligence. (2019). Interim Report. Washington, DC: Author.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.