AI Now Institute

The AI Now Institute at NYU (AI Now) is a research institute at New York University studying the social implications of artificial intelligence. AI Now was founded by Kate Crawford and Meredith Whittaker in 2017 after a symposium hosted by the White House under Barack Obama.[2][3] It is partnered with organizations such as the New York University Tandon School of Engineering, the New York University Center for Data Science, the Partnership on AI, and the ACLU.[4] It produces annual reports that examine the social implications of artificial intelligence.[5] AI Now conducts interdisciplinary research that focuses on four themes:

  • Bias and inclusion
  • Labour and automation
  • Rights and liberties
  • Safety and civil infrastructure
The AI Now Institute
Founded: November 15, 2017
Founders: Kate Crawford, Meredith Whittaker
Type: 501(c)(3) nonprofit organization
Coordinates: 40.7350°N, 73.9948°W
Website: www.ainowinstitute.org

Founding and Mission

AI Now grew out of a 2016 symposium spearheaded by the Obama White House Office of Science and Technology Policy. The event was led by Meredith Whittaker, founder of Google's Open Research group, and Kate Crawford, a principal researcher at Microsoft Research.[6] It focused on the near-term implications of AI in social domains: inequality, labor, ethics, and healthcare.[7]

In November 2017, Whittaker and Crawford held a second symposium on AI and social issues, and publicly launched the AI Now Institute in partnership with New York University.[6] It is claimed to be the first university research institute focused on the social implications of AI, and the first AI institute founded and led by women.[1]

In an interview with NPR, Crawford stated that the motivation for founding AI Now was that the application of AI in social domains, such as health care, education, and criminal justice, was being treated as a purely technical problem. The goal of AI Now's research is to treat these as social problems first, and to bring in domain experts in areas like sociology, law, and history to study the implications of AI.[8]

Research

Following each symposium, AI Now has published an annual report on the state of AI and its integration into society. Its 2017 report stated that "current framings of AI ethics are failing" and offered ten strategic recommendations for the field, including pre-release trials of AI systems and increased research into bias and diversity in the field. The report was noted for calling for an end to "black box" systems in core social domains, such as those responsible for criminal justice, healthcare, welfare, and education.[9][10][11]

In April 2018, AI Now released a framework for algorithmic impact assessments (AIA Report) as a way for governments to assess the use of AI in public agencies. According to AI Now, an AIA would be similar to an environmental impact assessment in that it would require public disclosure and access for external experts to evaluate the effects of an AI system and any unintended consequences. This would allow systems to be vetted for issues like biased outcomes or skewed training data, which researchers have already identified in algorithmic systems deployed across the country.[12][13][14]


References

  1. "New Artificial Intelligence Research Institute Launches". NYU Tandon News. 2017-11-25. Retrieved 2018-07-07.
  2. "The field of AI research is about to get way bigger than code". Quartz. 2017-11-15. Retrieved 2018-07-09.
  3. "Biased AI Is A Threat To Civil Liberties. The ACLU Has A Plan To Fix It". Fast Company. 2017-07-25. Retrieved 2018-07-07.
  4. "About". ainowinstitute.org. Retrieved 2018-07-07.
  5. "Research". ainowinstitute.org. Retrieved 2018-07-07.
  6. Ahmed, Salmana. "In Pursuit of Fair and Accountable AI". Omidyar. Retrieved 19 July 2018.
  7. "2016 Symposium". ainowinstitute.org. Archived from the original on 2018-07-20. Retrieved 2018-07-09.
  8. "Studying Artificial Intelligence At New York University". NPR. Retrieved 2018-07-18.
  9. "AI Now 2017 Report" (PDF). AI Now. Retrieved 19 July 2018.
  10. Simonite, Tom (18 October 2017). "AI Experts Want to End 'Black Box' Algorithms in Government". Wired. Retrieved 19 July 2018.
  11. Rosenberg, Scott (1 November 2017). "Why AI Is Still Waiting for Its Ethics Transplant". Wired. Retrieved 19 July 2018.
  12. Gershgorn, Dave (9 April 2018). "AI experts want government algorithms to be studied like environmental hazards". Quartz. Retrieved 19 July 2018.
  13. "AI Now AIA Report" (PDF). AI Now. Retrieved 19 July 2018.
  14. Reisman, Dillon. "Algorithms Are Making Government Decisions. The Public Needs to Have a Say". Medium. ACLU. Retrieved 19 July 2018.
