Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent entities.[1][2] It can be divided into a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). It also includes the issues of singularity and superintelligence.

Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[3] It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.

Robot rights

"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights.[4] It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society.[5] These could include the right to life and liberty, freedom of thought and expression and equality before the law.[6] The issue has been considered by the Institute for the Future[7] and by the U.K. Department of Trade and Industry.[8]

Experts disagree on whether specific and detailed laws will be required soon, or whether they can safely wait for the distant future.[8] Glenn McGee reports that sufficiently humanoid robots may appear by 2020.[9] Ray Kurzweil sets the date at 2029.[10] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[11]

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[12]

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition.[13] Some saw this gesture as openly denigrating of human rights and the rule of law.[14]

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, imposing a burden both on the AI agents and on human society.[15]

Threat to human dignity

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A therapist (as was proposed by Kenneth Colby in the 1970s)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[16]

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions in which we would prefer to have automated judges and police that have no personal agenda at all.[16] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines: using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and engrained, making them even more difficult to spot and fight against.[17] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique: "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
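A minimal, hypothetical sketch of the "curve-fitting" point: if a model is trained on historical decisions that already disadvantage one group, it reproduces that disadvantage even for otherwise identical cases. The dataset, feature names, and numbers below are invented for illustration and are not drawn from the cited sources.

```python
# Sketch only: synthetic "historical rulings" in which favourable outcomes
# depended partly on a protected attribute, not just on merit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
qualification = rng.normal(size=n)        # merit-related feature (hypothetical)
group = rng.integers(0, 2, size=n)        # protected attribute, 0 or 1 (hypothetical)
past_outcome = (qualification + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

# The model "curve-fits" the historical decisions, protected attribute included.
model = LogisticRegression().fit(np.column_stack([qualification, group]), past_outcome)

# Two applicants identical in every respect except the protected attribute:
applicants = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(applicants)[:, 1])
```

Running the sketch prints two different favourable-outcome probabilities for applicants who differ only in the protected attribute, which is the sense in which a historical bias becomes "formalized" in the learned model.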

Bill Hibbard[18] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Transparency, accountability, and open source

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[19] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[20] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity.[21] There are numerous other open source AI developments.

Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI it implements is not transparent. The IEEE has a standardisation effort on AI transparency,[22] which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it, and has published a blog post on the topic asking for government regulation to help determine the right thing to do.[23]

Not only companies, but many other researchers and citizen advocates, recommend government regulation as a means of ensuring transparency, and through it, human accountability. An updated collection of AI ethics guidelines is maintained by AlgorithmWatch. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability better able to support innovation in the long term.[24] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI and finding appropriate legal frameworks.[25][26][27]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its “Policy and investment recommendations for trustworthy Artificial Intelligence”.[28] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity, and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[29]

Biases in AI systems

AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers, and the data used to train them can itself contain biases.[30][31][32][33] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender:[34] these systems were able to detect the gender of white men more accurately than the gender of darker-skinned men. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they had higher error rates when transcribing black people's voices than white people's.[35] Similarly, Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example showing that AI systems cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones, because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.[36]
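Disparities of this kind are typically surfaced by disaggregated evaluation: computing error rates separately for each demographic group rather than reporting a single overall accuracy. The following sketch uses made-up placeholder records, not data from the cited studies, to show the shape of such an audit.

```python
# Illustrative only: per-group error rates from hypothetical predictions.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label) -- placeholder values
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned men",  "male", "female"),
    ("darker-skinned men",  "male", "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += (truth != pred)

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```

An overall accuracy figure would hide the gap that the per-group breakdown makes visible.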

Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.[37] In a highly influential branch of AI known as "natural language processing," problems can arise from the "text corpus"—the source material the algorithm uses to learn about the relationships between different words.[38]
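As a toy illustration of how a text corpus can encode such relationships, the sketch below compares invented word vectors for occupations against vectors for "he" and "she". Real NLP systems learn much higher-dimensional embeddings from large corpora, but the same cosine-similarity comparison is commonly used to probe them; all vectors here are hypothetical.

```python
# Toy word vectors (3-dimensional, invented) standing in for corpus-learned embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.3]),  # closer to "he" in this toy corpus
    "nurse":    np.array([0.2, 0.8, 0.3]),  # closer to "she" in this toy corpus
}

for word in ("engineer", "nurse"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: he-vs-she association {bias:+.2f}")
```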

Large companies such as IBM and Google have started researching and addressing bias.[39][40][41] One solution for addressing bias is to create documentation for the data used to train AI systems.[42]
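One hedged sketch of what such documentation might look like, loosely following the data statement categories proposed by Bender and Friedman,[42] is shown below; the field names and example contents are illustrative placeholders rather than a normative template.

```python
# Sketch of structured dataset documentation; fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class DataStatement:
    curation_rationale: str
    language_variety: str
    speaker_demographics: dict
    annotator_demographics: dict
    speech_situation: str
    text_characteristics: str

example = DataStatement(
    curation_rationale="Customer-support transcripts sampled for intent labelling.",
    language_variety="en-US, informal spoken register transcribed to text",
    speaker_demographics={"age": "18-65", "region": "mostly North America"},
    annotator_demographics={"count": 12, "native_language": "English"},
    speech_situation="Synchronous phone conversations between customer and agent",
    text_characteristics="Short utterances, domain-specific vocabulary",
)
print(example.curation_rationale)
```

The point of such documentation is to let downstream users judge whether a model trained on the data can be expected to generalise fairly to their population of interest.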

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.[43]

Liability for self-driving cars

The widespread use of partially to fully autonomous cars appears imminent, but fully autonomous technologies present new issues and challenges.[44][45][46] Recently, debate has arisen over legal liability and the responsible party when these cars are involved in accidents.[47][48] In one report,[49] a driverless car hit a pedestrian, raising the dilemma of whom to blame for the accident: even though a driver was inside the car at the time, the controls were fully in the hands of the computer.

In one case, on March 19, 2018, a self-driving Uber car struck and killed a pedestrian, Elaine Herzberg, who was jaywalking in Arizona. Beyond investigating how the pedestrian came to be injured and killed, such a case makes it important to reconsider liability not only for partially or fully automated cars, but also for the other stakeholders who may bear responsibility. In this case, the automated car could detect nearby cars and certain objects in order to drive itself, but it lacked the ability to react to a pedestrian in its path, because under normal conditions no person would be expected to appear on the road. This raises the issue of whether the driver, the pedestrian, the car company, or the government should be held responsible in such a case.

According to one article,[50] the driving functions of current partially or fully automated cars remain immature and still require the driver to pay attention and retain full control of the vehicle, since these features are intended only to make driving less tiring, not to hand over responsibility. Thus, governments bear much of the responsibility for the current situation: they should regulate car companies and drivers who over-rely on self-driving features, and educate the public that these technologies bring convenience but are not a shortcut around attentive driving. Before autonomous cars become widely used, these issues need to be tackled through new policies.[51][52][53]

Weaponization of artificial intelligence

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[54][55] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[56] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.[57][58] One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.

Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[59] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.[60]

There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons, and many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[61] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[62]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[63]

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[62]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[64]

Machine ethics

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[65][66][67][68] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[69]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[70] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[71] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[72]

In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[73]

As noted above, the use of robots for military combat has been questioned, especially where such robots are given some degree of autonomous functions, and the US Navy has funded a report on the implications of increasingly autonomous military robots.[54][74][75] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[76] They point to programs like the Language Acquisition Device, which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity."[77] He suggests that it may be somewhat or possibly very dangerous for humans.[78] This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[79]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[77]

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[80] Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

In Moral Machines: Teaching Robots Right from Wrong,[81] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[82] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[83]
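To illustrate the transparency argument, the sketch below fits a small decision tree on an invented loan-style dataset and prints its decision rule, something that has no direct analogue for the weights of a neural network. The data and feature names are hypothetical, and scikit-learn implements a CART-style tree rather than the ID3 algorithm mentioned in the passage.

```python
# Sketch: an inspectable decision rule learned from a tiny, invented dataset.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy loan-style examples: [income_band, prior_defaults] -> approve (1) or not (0)
X = [[0, 1], [1, 1], [1, 0], [2, 0], [2, 1], [0, 0]]
y = [0, 0, 1, 1, 0, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rule can be printed and audited line by line.
print(export_text(tree, feature_names=["income_band", "prior_defaults"]))
```

The printed rule (a handful of threshold tests) is what proponents mean by predictability and transparency; critics such as Santos-Lang respond that rigid adherence to such legible rules has its own costs.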

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.[84]

Singularity

Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[85] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[86]

However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[87]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[85][86] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would naturally possess such common-sense, human-friendly motivations.[88] AI researchers such as Stuart J. Russell[89]:173 and Bill Hibbard[18] have proposed design strategies for developing beneficial machines.

AI ethics organisations

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as a platform for discussion about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."[90] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[91]

A number of organizations pursue a technical theory of AI goal-system alignment with human values. Among these are the Machine Intelligence Research Institute, the Future of Humanity Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.

In fiction

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with the utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is between sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[96]

Literature

The standard bibliographies on the ethics of AI and on robot ethics are maintained on PhilPapers.

"Ethics of Artificial Intelligence and Robotics" (April 2020) in the Stanford Encyclopedia of Philosophy is a comprehensive exposition of the academic debates.


See also

Notes

  1. Fjeld, Jessica; Achten, Nele; Hilligoss, Hannah; Nagy, Adam; Srikumar, Madhulika (2020). "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI". SSRN Working Paper Series. doi:10.2139/ssrn.3518482. ISSN 1556-5068.
  2. Jobin, Anna; Ienca, Marcello; Vayena, Effy (2019). "The global landscape of AI ethics guidelines". Nature Machine Intelligence. 1 (9): 389–399. doi:10.1038/s42256-019-0088-2. ISSN 2522-5839.
  3. Veruggio, Gianmarco (2007). "The Roboethics Roadmap". Scuola di Robotica: 2. CiteSeerX 10.1.1.466.2810.
  4. Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds". Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
  5. Sheliazhenko, Yurii (2017). "Artificial Personal Autonomy and Concept of Robot Rights". European Journal of Law and Political Sciences. Retrieved 10 May 2017.
  6. The American Heritage Dictionary of the English Language, Fourth Edition
  7. "Robots could demand legal rights". BBC News. December 21, 2006. Retrieved January 3, 2010.
  8. Henderson, Mark (April 24, 2007). "Human rights for robots? We're getting carried away". The Times Online. The Times of London. Retrieved May 2, 2010.
  9. McGee, Glenn. "A Robot Code of Ethics". The Scientist.
  10. Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 978-0-670-03384-3.
  11. The Big Question: Should the human race be worried by the rise of robots?, The Independent.
  12. Loebner Prize Contest Official Rules — Version 2.0 The competition was directed by David Hamill and the rules were developed by members of the Robitron Yahoo group.
  13. Saudi Arabia bestows citizenship on a robot named Sophia
  14. Vincent, James (30 October 2017). "Pretending to give a robot citizenship helps no one". The Verge.
  15. Close engagements with artificial companions: key social, psychological, ethical and design issues. Wilks, Yorick. Amsterdam: John Benjamins Pub. Co. 2010. ISBN 978-9027249944. OCLC 642206106.
  16. Joseph Weizenbaum, quoted in McCorduck 2004, pp. 356, 374–376
  17. "Kaplan Andreas; Michael Haenlein (2018) Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence, Business Horizons, 62(1)". Archived from the original on 2018-11-21. Retrieved 2018-11-27.
  18. Hibbard, Bill (2014): "Ethical Artificial Intelligence".
  19. Open Source AI. Bill Hibbard. 2008 proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel and Stan Franklin.
  20. OpenCog: A Software Framework for Integrative Artificial General Intelligence. David Hart and Ben Goertzel. 2008 proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel and Stan Franklin.
  21. Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free Cade Metz, Wired 27 April 2016.
  22. "P7001 - Transparency of Autonomous Systems". P7001 - Transparency of Autonomous Systems. IEEE. Retrieved 10 January 2019..
  23. Thurm, Scott (July 13, 2018). "MICROSOFT CALLS FOR FEDERAL REGULATION OF FACIAL RECOGNITION". Wired.
  24. Bastin, Roland; Wantz, Georges (June 2017). "The General Data Protection Regulation Cross-industry innovation" (PDF). Inside magazine. Deloitte.
  25. "UN artificial intelligence summit aims to tackle poverty, humanity's 'grand challenges'". UN News. 2017-06-07. Retrieved 2019-07-26.
  26. "Artificial intelligence - Organisation for Economic Co-operation and Development". www.oecd.org. Retrieved 2019-07-26.
  27. Anonymous (2018-06-14). "The European AI Alliance". Digital Single Market - European Commission. Retrieved 2019-07-26.
  28. European Commission High-Level Expert Group on AI (2019-06-26). "Policy and investment recommendations for trustworthy Artificial Intelligence". Shaping Europe’s digital future - European Commission. Retrieved 2020-03-16.
  29. "EU Tech Policy Brief: July 2019 Recap". Center for Democracy & Technology. Retrieved 2019-08-09.
  30. Society, DeepMind Ethics & (2018-03-14). "The case for fairer algorithms - DeepMind Ethics & Society". Medium. Retrieved 2019-07-22.
  31. "5 unexpected sources of bias in artificial intelligence". TechCrunch. Retrieved 2019-07-22.
  32. Knight, Will. "Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead". MIT Technology Review. Retrieved 2019-07-22.
  33. Villasenor, John (2019-01-03). "Artificial intelligence and bias: Four key challenges". Brookings. Retrieved 2019-07-22.
  34. Lohr, Steve (2018-02-09). "Facial Recognition Is Accurate, if You're a White Guy". The New York Times. ISSN 0362-4331. Retrieved 2019-05-29.
  35. Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey, Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky, Dan; Goel, Sharad (2020). "Racial disparities in automated speech recognition". Proceedings of the National Academy of Sciences. 117 (14): 7684–7689. doi:10.1073/pnas.1915768117.
  36. "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. 2018-10-10. Retrieved 2019-05-29.
  37. Friedman, Batya; Nissenbaum, Helen (1996-07-01). "Bias in computer systems". ACM Transactions on Information Systems. 14 (3): 330–347. doi:10.1145/230538.230561. ISSN 1046-8188.
  38. "Eliminating bias in AI". techxplore.com. Retrieved 2019-07-26.
  39. Olson, Parmy. "Google's DeepMind Has An Idea For Stopping Biased AI". Forbes. Retrieved 2019-07-26.
  40. "Machine Learning Fairness | ML Fairness". Google Developers. Retrieved 2019-07-26.
  41. "AI and bias - IBM Research - US". www.research.ibm.com. Retrieved 2019-07-26.
  42. Bender, Emily M.; Friedman, Batya (December 2018). "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science". Transactions of the Association for Computational Linguistics. 6: 587–604. doi:10.1162/tacl_a_00041. ISSN 2307-387X.
  43. Knight, Will. "Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead". MIT Technology Review. Retrieved 2019-07-26.
  44. Davies, Alex (2016-02-29). "Google's Self-Driving Car Caused Its First Crash". Wired. ISSN 1059-1028. Retrieved 2019-07-26.
  45. "List of self-driving car fatalities", Wikipedia, 2019-06-05, retrieved 2019-07-26
  46. Levin, Sam; Wong, Julia Carrie (2018-03-19). "Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian". The Guardian. ISSN 0261-3077. Retrieved 2019-07-26.
  47. "Who is responsible when a self-driving car has an accident?". Futurism. Retrieved 2019-07-26.
  48. Radio, Business; Policy, Law and Public; Podcasts; America, North. "Autonomous Car Crashes: Who - or What - Is to Blame?". Knowledge@Wharton. Retrieved 2019-07-26.
  49. Delbridge, Emily. "Driverless Cars Gone Wild". The Balance. Retrieved 2019-05-29.
  50. Maxmen, Amy (2018-10-24). "Self-driving car dilemmas reveal that moral choices are not universal". Nature. 562 (7728): 469–470. doi:10.1038/d41586-018-07135-0. PMID 30356197.
  51. "Regulations for driverless cars". GOV.UK. Retrieved 2019-07-26.
  52. "Automated Driving: Legislative and Regulatory Action - CyberWiki". cyberlaw.stanford.edu. Retrieved 2019-07-26.
  53. "Autonomous Vehicles | Self-Driving Vehicles Enacted Legislation". www.ncsl.org. Retrieved 2019-07-26.
  54. Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
  55. Robot Three-Way Portends Autonomous Future, By David Axe wired.com, August 13, 2009.
  56. United States. Defense Innovation Board. AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC 1126650738.
  57. New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), dailytech.com, February 17, 2009.
  58. Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley engadget.com, Feb 18th 2009.
  59. https://search.proquest.com/docview/1372020233
  60. Mitra, Ambarish. "We can train AI to identify good and evil, and then use it to teach us morality". Quartz. Retrieved 2019-07-26.
  61. "AI Principles". Future of Life Institute. Retrieved 2019-07-26.
  62. Zach Musgrave and Bryan W. Roberts (2015-08-14). "Why Artificial Intelligence Can Too Easily Be Weaponized - The Atlantic". The Atlantic.
  63. Cat Zakrzewski (2015-07-27). "Musk, Hawking Warn of Artificial Intelligence Weapons". WSJ.
  64. GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015.
  65. Anderson. "Machine Ethics". Retrieved 27 June 2011.
  66. Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
  67. Anderson, Michael; Anderson, Susan Leigh, eds. (July–August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems. 21 (4): 10–63. doi:10.1109/mis.2006.70. ISSN 1541-1672. Archived from the original on 2011-11-26.
  68. Anderson, Michael; Anderson, Susan Leigh (Winter 2007). "Machine Ethics: Creating an Ethical Intelligent Agent". AI Magazine. 28 (4): 15–26. ISSN 0738-4602.
  69. Boyles, Robert James M. (October 2017). "Philosophical Signposts for Artificial Moral Agent Frameworks" (PDF). Suri. 6 (2): 92–109.
  70. Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 978-0-553-38256-3.
  71. Bryson, Joanna; Diamantis, Mihailis; Grant, Thomas (September 2017). "Of, for, and by the people: the legal lacuna of synthetic persons". Artificial Intelligence and Law. 25 (3): 273–291. doi:10.1007/s10506-017-9214-9.
  72. "Principles of robotics". UK's EPSRC. September 2010. Retrieved 10 January 2019.
  73. Evolving Robots Learn To Lie To Each Other, Popular Science, August 18, 2009
  74. New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), dailytech.com, February 17, 2009.
  75. Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley engadget.com, Feb 18th 2009.
  76. AAAI Presidential Panel on Long-Term AI Futures 2008-2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  77. Scientists Worry Machines May Outsmart Man By JOHN MARKOFF, NY Times, July 26, 2009.
  78. The Coming Technological Singularity: How to Survive in the Post-Human Era, by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, (c) 1993 by Vernor Vinge.
  79. Article at Asimovlaws.com Archived May 24, 2012, at the Wayback Machine, July 2004, accessed 7/27/09.
  80. Al-Rodhan, Nayef (2015-08-12). "The Moral Code". Foreign Affairs. ISSN 0015-7120. Retrieved 2018-01-23.
  81. Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN 978-0-19-537404-9.
  82. Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge Press.
  83. Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences".
  84. Howard, Ayanna. "The Regulation of AI – Should Organizations Be Worried? | Ayanna Howard". MIT Sloan Management Review. Retrieved 2019-08-14.
  85. Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics". In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  86. Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence". In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  87. "Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem". WIRED. 2014-12-04. Retrieved 2015-11-04.
  88. Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI". In Schmidhuber, Thórisson, and Looks 2011, 388–393.
  89. Russell, Stuart (October 8, 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  90. "Partnership on Artificial Intelligence to Benefit People and Society". N.p., n.d. 24 October 2016.
  91. Fiegerman, Seth. "Facebook, Google, Amazon Create Group to Ease AI Concerns". CNNMoney. n.d. 4 December 2016.
  92. "Ethics guidelines for trustworthy AI". Shaping Europe’s digital future - European Commission. European Commission. 2019-04-08. Retrieved 2020-02-20.
  93. White Paper on Artificial Intelligence: a European approach to excellence and trust. Brussels: European Commission. 2020.
  94. "CCC Offers Draft 20-Year AI Roadmap; Seeks Comments". HPCwire. 2019-05-14. Retrieved 2019-07-22.
  95. "Request Comments on Draft: A 20-Year Community Roadmap for AI Research in the US » CCC Blog". Retrieved 2019-07-22.
  96. Cave, Stephen; Dihal, Kanta (2020-08-06). "The Whiteness of AI". Philosophy & Technology. doi:10.1007/s13347-020-00415-6. ISSN 2210-5441.