Timeline of artificial intelligence

Pre-20th century

Date Development
Antiquity Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora).[1]
Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion.[2]
10th century BC Yan Shi presented King Mu of Zhou with mechanical men.[3]
384 BC–322 BC Aristotle described the syllogism, a method of formal, mechanical thought and theory of knowledge in The Organon.[4][5]
1st century Heron of Alexandria created mechanical men and other automatons.[6]
260 Porphyry of Tyros wrote Isagogê which categorized knowledge and logic.[7]
~800 Geber developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[8]
1206 Al-Jazari created a programmable orchestra of mechanical human beings.[9]
1275 Ramon Llull, Spanish theologian, invents the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method would be developed further by Gottfried Leibniz in the 17th century.[10]
~1500 Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[11]
~1580 Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[12]
Early 17th century René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[13]
1620 Sir Francis Bacon developed empirical theory of knowledge and introduced inductive logic in his work The New Organon, a play on Aristotle's title The Organon.[14][15]
1623 Wilhelm Schickard drew a calculating clock in a letter to Kepler. This would be the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet).[16]
1642 Blaise Pascal invented the mechanical calculator,[19] the first digital calculating machine.[20]
1651 Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[17][18]
1672 Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary numeral system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[21]
1726 Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations " by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[22] The machine is a parody of Ars Magna, one of the inspirations of Gottfried Leibniz' mechanism.
1750 Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[23]
1769 Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk.[24] The Turk was later shown to be a hoax, involving a human chess player.
1818 Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[25]
1822–1859 Charles Babbage & Ada Lovelace worked on programmable mechanical calculating machines.[26]
1837 The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.[27]
1854 George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[28] (An illustrative sketch follows at the end of this section.)
1863 Samuel Butler suggested that Darwinian evolution also applies to machines, speculating that they will one day become conscious and eventually supplant humanity.[29]
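Boole's calculus treats reasoning as algebra over the two values true and false. As a brief, purely illustrative sketch of the idea behind the 1854 entry above (not drawn from Boole's own text), the following Python snippet verifies two Boolean identities by enumerating every truth assignment:

```python
# A tiny illustration of Boolean algebra: verify two identities by
# exhaustive enumeration of truth assignments (a truth-table proof).
from itertools import product

def holds(identity):
    """True if the identity holds for every assignment of a and b."""
    return all(identity(a, b) for a, b in product([False, True], repeat=2))

# De Morgan's law: not (a and b)  ==  (not a) or (not b)
print(holds(lambda a, b: (not (a and b)) == ((not a) or (not b))))  # True

# Absorption law: a and (a or b)  ==  a
print(holds(lambda a, b: (a and (a or b)) == a))  # True
```

Exhaustive enumeration suffices here because a two-variable identity has only four cases to check.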

20th century

1901–1950

Date Development
1910–1913 Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915 Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista, and published speculation about thinking and automata.[30]
1923 Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This is the first use of the word "robot" in English.[31]
1920s and 1930s Ludwig Wittgenstein and Rudolf Carnap led philosophy into logical analysis of knowledge. Alonzo Church developed the lambda calculus to investigate computability using recursive functional notation.
1931 Kurt Gödel showed that sufficiently powerful formal systems, if consistent, permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science".
1940 Edward Condon displayed Nimatron, a digital computer that played Nim perfectly.
1941 Konrad Zuse built the first working program-controlled computers.[32]
1943 Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", laying foundations for artificial neural networks.[33] (A minimal threshold-unit sketch follows at the end of this section.)
Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics". Wiener's popular book of that name was published in 1948.
1945 Game theory, which would prove invaluable in the progress of AI, was introduced with the 1944 paper Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities.
1948 John von Neumann (quoted by E.T. Jaynes) in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.
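The 1943 McCulloch and Pitts paper above modeled neurons as simple threshold units. Below is a minimal sketch of such a unit, with weights and thresholds chosen here purely for illustration rather than taken from the paper:

```python
# A McCulloch-Pitts-style threshold unit: it fires (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold.
def threshold_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda x1, x2: threshold_unit([x1, x2], [1, 1], threshold=2)
OR  = lambda x1, x2: threshold_unit([x1, x2], [1, 1], threshold=1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))
```

Feed-forward networks of such units can realize any Boolean function, since AND, OR and NOT are all expressible as threshold units.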

1950s

Date Development
1950 Alan Turing proposes the Turing Test as a measure of machine intelligence.[34]
Claude Shannon published a detailed analysis of chess playing as search. (A brief minimax sketch follows at the end of this section.)
Isaac Asimov published his Three Laws of Robotics.
1951 The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1952–1962 Arthur Samuel (IBM) wrote the first game-playing program,[35] for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[36]
1956 The Dartmouth College summer AI conference is organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon. McCarthy coins the term artificial intelligence for the conference.[37]
The first demonstration of the Logic Theorist (LT) written by Allen Newell, J.C. Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim.
1958 John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.
Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases.
Teddington Conference on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence.
1959 The General Problem Solver (GPS) was created by Newell, Shaw and Simon while at CMU.
John McCarthy and Marvin Minsky founded the MIT AI Lab.
Late 1950s, early 1960s Margaret Masterman and colleagues at University of Cambridge design semantic nets for machine translation.
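Shannon's 1950 analysis, noted above, framed chess as search through a game tree scored by an evaluation function. A minimal minimax sketch over a tiny invented game tree (the tree and leaf scores are illustrative only, not from Shannon's paper):

```python
# Minimax over a toy game tree: internal nodes are lists of children,
# leaves are static evaluation scores from the maximizing player's view.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: return its evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 invented tree: the maximizer picks the branch whose worst-case
# (minimizer's) reply is best.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3  (branch [3, 5]: the minimizer answers 3, the best worst case)
```

Real chess programs add depth limits, an evaluation function at the search frontier, and alpha-beta pruning, but the recursive structure is the same.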

1960s

Date Development
1960s Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960 Man-Computer Symbiosis by J.C.R. Licklider.
1961 James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
In Minds, Machines and Gödel, John Lucas[38] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963 Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons.
1964 Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly.
Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965 Lotfi Zadeh at U.C. Berkeley publishes his first paper introducing fuzzy logic "Fuzzy Sets" (Information and Control 8: 338–353).
J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. (A propositional sketch follows at the end of this section.)
Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966 Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets.
Machine Intelligence[39] workshop at Edinburgh – the first of an influential annual series organized by Donald Michie and others.
A negative report on machine translation (the ALPAC report) killed much work in natural language processing (NLP) for many years.
The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting mass spectra of organic chemical compounds, the first successful knowledge-based program for scientific reasoning.
1968 Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics.
Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor.
1969 Stanford Research Institute (SRI): Shakey the Robot demonstrated the combination of locomotion, perception and problem solving.
Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as those of Bran Boguraev and David Carter at Cambridge.
First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford.
Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below).
McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
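Robinson's resolution rule (1965 entry above) derives a new clause from two clauses containing a complementary pair of literals; deriving the empty clause refutes the input set. A minimal propositional sketch, not Robinson's first-order procedure with unification:

```python
# Propositional resolution by refutation: clauses are frozensets of literals,
# a literal is a string, and negation is written with a '~' prefix.
def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses (one per complementary pair)."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refutes(clauses):
    """Saturate under resolution; True if the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True      # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False                 # saturated without contradiction
        clauses |= new

# Prove q from (p -> q) and p by refuting {~p or q, p, ~q}.
kb = [frozenset({'~p', 'q'}), frozenset({'p'}), frozenset({'~q'})]
print(refutes(kb))  # True
```

First-order resolution adds unification of terms, which is what made the method practical as a basis for logic-based programs.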

1970s

Date Development
Early 1970s Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.
1970 Seppo Linnainmaa publishes the reverse mode of automatic differentiation. The method later became known as backpropagation and is heavily used to train artificial neural networks. (A minimal sketch follows at the end of this section.)
Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer assisted instruction based on semantic nets as the representation of knowledge.
Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding.
Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
1971 Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
Work on the Boyer-Moore theorem prover started in Edinburgh.[40]
1972 Prolog programming language developed by Alain Colmerauer.
Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973 The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.)
The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974 Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975 Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
Austin Tate developed the Nonlin hierarchical planning system able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
Mid-1970s Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing.
David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976 Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures).
Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1978 Tom Mitchell, at Stanford, invented the concept of Version spaces for describing the search space of a concept formation program.
Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing".
The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments.
1979 Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells".
Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming.
The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion (in part via luck).
Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.
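The reverse mode of automatic differentiation (1970 entry above) records the elementary operations of a computation and then propagates derivatives backwards through them; backpropagation is this idea applied to neural-network training. A minimal scalar sketch, a simplified illustration rather than Linnainmaa's formulation:

```python
# Tiny reverse-mode automatic differentiation on scalars: build a graph of
# elementary operations, then sweep it backwards accumulating gradients.
import math

class Var:
    def __init__(self, value, parents=()):
        self.value = value       # forward value
        self.parents = parents   # (parent Var, local derivative) pairs
        self.grad = 0.0          # d(output)/d(this node), set by backward()

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

def backward(output):
    # Topologically order the graph, then propagate from output to inputs.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

# f(x, y) = x*y + sin(x); analytically df/dx = y + cos(x), df/dy = x.
x, y = Var(2.0), Var(3.0)
f = x * y + x.sin()
backward(f)
print(round(x.grad, 4), round(y.grad, 4))  # 2.5839 2.0
```

One backward sweep yields the derivative of the output with respect to every intermediate quantity, which is why the technique scales to networks with millions of parameters.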

1980s

Date Development
1980s Lisp machines developed and marketed. First expert system shells and commercial applications.
1980 First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford.
1981 Danny Hillis designs the Connection Machine, which utilizes parallel computing to bring new power to AI, and to computation in general. (He later founded Thinking Machines Corporation.)
1982 The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, begins, with the goal of creating a "fifth generation computer" (see history of computing hardware) that would perform large amounts of calculation utilizing massive parallelism.
1983 John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar.
James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events. (A small sketch of the interval relations follows at the end of this section.)
Mid-1980s Neural networks become widely used with the backpropagation algorithm, also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos.
1985 The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
1986 The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55 mph on empty streets.
Barbara Grosz and Candace Sidner create the first computational model of discourse, establishing the field of research.[41]
1987 Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (c.f. Doyle 1983).[42]
Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI.
Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.[43]
1989 The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[44]
Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network).
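Allen's interval calculus (1983 entry above) distinguishes thirteen possible temporal relations between two intervals: before, meets, overlaps, starts, during, finishes, their inverses, and equality. The classifier below is a small illustration only; Allen's contribution also includes an algebra and a constraint-propagation algorithm over these relations:

```python
# Classify the Allen relation between two intervals a = (a1, a2) and
# b = (b1, b2), assuming each interval has start < end.
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:               return "before"
    if b2 < a1:               return "after"
    if a2 == b1:              return "meets"
    if b2 == a1:              return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1:              return "starts" if a2 < b2 else "started-by"
    if a2 == b2:              return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2:   return "during"
    if a1 < b1 and b2 < a2:   return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((1, 3), (3, 6)))   # meets
print(allen_relation((1, 4), (2, 6)))   # overlaps
print(allen_relation((2, 3), (1, 6)))   # during
```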

1990s

Date Development
1990s Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
Early 1990s TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program, competing favorably with world-class players. (A minimal temporal-difference sketch follows at the end of this section.)
1991 DART scheduling application deployed in the first Gulf War paid back DARPA's investment of 30 years in AI research.[45]
1992 Carol Stoker and NASA Ames robotics team explore marine life in Antarctica with an undersea robot Telepresence ROV operated from the ice near McMurdo Bay, Antarctica and remotely via satellite link from Moffett Field, California.[46]
1993 Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second).
Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years.
ISX corporation wins "DARPA contractor of the year"[47] for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.[48]
1994 Lotfi Zadeh at U.C. Berkeley creates "soft computing"[49] and builds a world network of research with a fusion of neural science and neural net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems ("Fuzzy Logic, Neural Networks, and Soft Computing," Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77-84).
With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
English draughts (checkers) world champion Tinsley resigned a match against the computer program Chinook. Chinook defeated the 2nd-highest-rated player, Lafferty, and won the USA National Tournament by the widest margin ever.
Cindy Mason at NASA organizes the First AAAI Workshop on AI and the Environment.[50]
1995 Cindy Mason at NASA organizes the First International IJCAI Workshop on AI and the Environment.[51]
"No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver.[52][53]
One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes.
1997 The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov.
First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.
Computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 6–0.
1998 Tiger Electronics' Furby is released, and becomes the first successful attempt to bring a type of AI into a domestic environment.
Tim Berners-Lee published his Semantic Web Road map paper.[54]
Ulises Cortés and Miquel Sànchez-Marrè organize the first Environment and AI Workshop in Europe, at ECAI: "Binding Environmental Sciences and Artificial Intelligence."[55][56]
Leslie P. Kaelbling, Michael Littman, and Anthony Cassandra introduce POMDPs and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and automated planning and scheduling.[57]
1999 Sony introduces the AIBO, an improved domestic robot similar to the Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous.
Late 1990s Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web.
Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab.
Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.
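TD-Gammon (early-1990s entry above) trained its position evaluator with temporal-difference learning. Below is a minimal TD(0) sketch on an invented two-step Markov chain; the chain, rewards and step size are illustrative only, and TD-Gammon itself combined TD(lambda) with a neural network:

```python
# TD(0) value estimation on a toy chain A -> B -> C (C terminal).
# Entering C from B gives reward 1 with probability 0.5, otherwise 0;
# all other rewards are 0.
import random

values = {"A": 0.0, "B": 0.0, "C": 0.0}   # C is terminal, its value stays 0
alpha, gamma = 0.1, 1.0                   # step size and discount factor

def step(state):
    """Invented dynamics for the toy chain."""
    if state == "A":
        return "B", 0.0
    return "C", 1.0 if random.random() < 0.5 else 0.0

for _ in range(5000):                     # many episodes from state A
    s = "A"
    while s != "C":
        s_next, r = step(s)
        # TD(0) update: move V(s) toward the one-step bootstrapped target.
        values[s] += alpha * (r + gamma * values[s_next] - values[s])
        s = s_next

print({k: round(v, 2) for k, v in values.items()})
# roughly {'A': 0.5, 'B': 0.5, 'C': 0.0}
```

The estimates converge toward the expected return from each state (0.5 here), without the program ever being told the transition or reward model explicitly.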

21st century

2000s

Date Development
2000 Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers.
Cynthia Breazeal at MIT publishes her dissertation on Sociable machines, describing Kismet, a robot with a face that expresses emotions.
The Nomad robot explores remote regions of Antarctica looking for meteorite samples.
2002 iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles.
2004 The OWL Web Ontology Language becomes a W3C Recommendation (10 February 2004).
DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money.
NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars.
2005 Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings.
Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions. (A minimal collaborative-filtering sketch follows at the end of this section.)
Blue Brain is born, a project to simulate the brain at molecular detail.[58]
2006 The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) is held (14–16 July 2006).
2007 Philosophical Transactions of the Royal Society, B – Biology, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection[59]
Checkers is solved by a team of researchers at the University of Alberta.
DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment.
2008 Cynthia Mason at Stanford presents her idea on Artificial Compassionate Intelligence, in her paper on "Giving Robots Compassion".[60]
2009 Google builds an autonomous car.[61]
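The 2005 entry on recommendation technology above refers to systems that suggest items by comparing a user's activity with that of similar users. A minimal collaborative-filtering sketch with hypothetical viewing histories (real systems such as TiVo Suggestions use far richer signals and models):

```python
# A minimal user-based collaborative-filtering sketch: recommend items that
# users with similar viewing histories have watched. Data are hypothetical.
from math import sqrt

histories = {                       # user -> set of items watched
    "u1": {"news", "football", "drama"},
    "u2": {"news", "football", "comedy"},
    "u3": {"cartoons", "comedy"},
}

def similarity(a, b):
    """Cosine similarity between two sets of watched items."""
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b))) if a and b else 0.0

def recommend(user):
    scores = {}
    for other, items in histories.items():
        if other == user:
            continue
        sim = similarity(histories[user], items)
        if sim == 0:
            continue
        for item in items - histories[user]:     # only unseen items
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u1"))  # ['comedy'], driven by overlap with u2
```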

2010s

Date Development
2010 Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge.[62][63]
2011 Mary Lou Maher and Doug Fisher organize the First AAAI Workshop on AI and Sustainability.[64]
IBM's Watson computer defeated television game show Jeopardy! champions Brad Rutter and Ken Jennings.
2011–2014 Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) are smartphone apps that use natural language to answer questions, make recommendations and perform actions.
2013 Robot HRP-2 built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points across 8 tasks needed in disaster response: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves and connecting a hose.[65]
NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[66]
2015 An open letter calling for a ban on the development and use of autonomous weapons is signed by Stephen Hawking, Elon Musk, Steve Wozniak and 3,000 AI and robotics researchers.[67]
Google DeepMind's AlphaGo (version: Fan)[68] defeated three-time European Go champion and 2 dan professional Fan Hui by 5 games to 0.[69]
2016 Google DeepMind's AlphaGo (version: Lee)[68] defeated Lee Sedol 4–1. Lee Sedol is a 9 dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[70]
2017 Asilomar Conference on Beneficial AI was held, to discuss AI ethics and how to bring about beneficial AI while avoiding the existential risk from artificial general intelligence.
Deepstack[71] is the first published algorithm to beat human players in imperfect information games, as shown with statistical significance on heads-up no-limit poker. Soon after, the poker AI Libratus, developed by a different research group, individually defeated each of its 4 human opponents, among the best players in the world, at an exceptionally high aggregated winrate over a statistically significant sample.[72] In contrast to chess and Go, poker is an imperfect information game.[73]
Google DeepMind's AlphaGo (version: Master)[68] won 60 straight online games against top professionals on two public Go websites, including 3 wins against world Go champion Ke Jie.[73]
A propositional logic Boolean satisfiability problem (SAT) solver proves a long-standing mathematical conjecture on Pythagorean triples over the set of integers. The initial proof, 200 TB in size, was checked by two independent certified automatic proof checkers.[74] (A brute-force sketch of the underlying question follows at the end of this section.)
An OpenAI machine-learned bot played at The International 2017 Dota 2 tournament in August 2017. It won a 1v1 demonstration game against professional Dota 2 player Dendi.[75]
Google DeepMind revealed that AlphaGo Zero, an improved version of AlphaGo, displayed significant performance gains while using far fewer tensor processing units than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master).[68] Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11.[68] Although unsupervised learning is a step forward, much has yet to be learned about general intelligence.[76] AlphaZero masters chess in 4 hours, defeating the best chess engine, Stockfish 8; AlphaZero won 28 out of 100 games, and the remaining 72 games ended in draws.
2018 Alibaba's language processing AI outscores top humans on a Stanford University reading comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.[77]
The European Lab for Learning and Intelligent Systems (aka Ellis) proposed as a pan-European competitor to American AI efforts, with the aim of staving off a brain drain of talent, along the lines of CERN after World War II.[78]
Announcement of Google Duplex, a service to allow an AI assistant to book appointments over the phone. The LA Times judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech.[79]
2020 DeepSpeed is Microsoft's deep learning optimization library for PyTorch that runs T-NLG.[80]
In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was then the "largest language model ever published at 17 billion parameters."[81]
OpenAI's GPT-3, a state-of-the-art autoregressive language model, uses deep learning to produce computer code, poetry and other text that is exceptionally similar to, and almost indistinguishable from, writing produced by humans. Its capacity was ten times greater than that of T-NLG. It was introduced in May 2020[82] and was in beta testing in June 2020.
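The 2017 SAT result above settled the Boolean Pythagorean triples problem: can the integers 1..n be split into two sets so that neither set contains a Pythagorean triple a^2 + b^2 = c^2? The solver showed the answer becomes no once n reaches 7825. A brute-force sketch of the question for tiny n, not the SAT encoding used in the proof:

```python
# Check by brute force whether {1..n} can be 2-colored with no
# monochromatic Pythagorean triple a^2 + b^2 = c^2 (a < b < c <= n).
# The SAT-solver proof settled n = 7825; brute force works only for
# tiny n and is shown here purely as an illustration.
from itertools import product

def triples(n):
    return [(a, b, c) for a in range(1, n + 1)
                      for b in range(a + 1, n + 1)
                      for c in range(b + 1, n + 1)
                      if a * a + b * b == c * c]

def two_colorable(n):
    ts = triples(n)
    for colors in product((0, 1), repeat=n):   # colors[i-1] is the color of i
        if all(len({colors[a-1], colors[b-1], colors[c-1]}) > 1 for a, b, c in ts):
            return True
    return False

print(triples(13))        # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
print(two_colorable(13))  # True: a valid split exists for such small n
```

The SAT formulation replaces this exponential enumeration with clause-level reasoning, which is what made the n = 7825 case tractable.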

See also

Notes

  1. McCorduck 2004, pp. 4–5
  2. McCorduck (2004, pp. 5–9)
  3. Needham 1986, p. 53
  4. Richard McKeon, ed. (1941). The Organon. Random House with Oxford University Press.
  5. Giles, Timothy (2016). "Aristotle Writing Science: An Application of His Theory". Journal of Technical Writing and Communication. 46: 83–104. doi:10.1177/0047281615600633.
  6. McCorduck 2004, p. 6
  7. Russell & Norvig 2003, p. 366
  8. O'Connor, Kathleen Malone (1994), The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam, University of Pennsylvania, retrieved 10 January 2007.
  9. A Thirteenth Century Programmable Robot Archived 19 December 2007 at the Wayback Machine
  10. McCorduck 2004, pp. 10–12, 37
  11. McCorduck, pp. 13–14
  12. McCorduck, pp. 14–15, Buchanan 2005, p. 50
  13. McCorduck, pp. 36–40
  14. Sir Francis Bacon (1620). The New Organon: Novem Organum Scientiarum.
  15. Sir Francis Bacon (2000). Francis Bacon: The New Organon (Cambridge Texts in the History of Philosophy). Cambridge University Press.
  16. Please see Mechanical calculator#Calculating clocks: unsuccessful mechanical calculators
  17. Hubert Dreyfus, What Computers Can't Do
  18. McCorduck 2004, p. 42
  19. Please see: Pascal's calculator#Pascal versus Schickard
  20. McCorduck 2004, p. 26
  21. McCorduck 2004, pp. 41–42
  22. Quoted in McCorduck 2004, p. 317
  23. McCorduck 2004, pp. 43
  24. McCorduck 2004, p. 17
  25. McCorduck 2004, pp. 19–25
  26. McCorduck, pp. 26–34
  27. Cambier, Hubert (June 2016). "The Evolutionary Meaning of World 3". Philosophy of the Social Sciences. 46 (3): 242–264. doi:10.1177/0048393116641609. ISSN 0048-3931.
  28. McCorduck 2004, pp. 48–51
  29. Project Gutenberg eBook Erewhon by Samuel Butler.
  30. McCorduck 2004, pp. 59–60
  31. McCorduck 2004, p. 25
  32. McCorduck 2004, pp. 61–62 and see also The Life and Work of Konrad Zuse
  33. McCorduck 2004, pp. 55–56
  34. Crevier 1993:22–25
  35. Samuel 1959
  36. Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers, 1997, 2009, Springer, ISBN 978-0-387-76575-4. Chapter 6.
  37. Novet, Jordan (17 June 2017). "Everyone keeps talking about A.I.—here's what it really is and why it's so hot now". CNBC. Retrieved 16 February 2018.
  38. "Minds, Machines and Gödel". Users.ox.ac.uk. Retrieved 24 November 2008.
  39. http://www.cs.york.ac.uk/mlg/MI/mi.html
  40. "The Boyer-Moore Theorem Prover". Retrieved 15 March 2015.
  41. Grosz, Barbara; Sidner, Candace L. (1986). "Attention, Intentions, and the Structure of Discourse". Computational Linguistics. 12 (3): 175–204. Retrieved 5 May 2017.
  42. Harry Henderson (2007). "Chronology". Artificial Intelligence: Mirrors for the Mind. NY: Infobase Publishing. ISBN 978-1-60413-059-1.
  43. "EmeraldInsight". Retrieved 15 March 2015.
  44. Mead, Carver A.; Ismail, Mohammed (8 May 1989). Analog VLSI Implementation of Neural Systems (PDF). The Kluwer International Series in Engineering and Computer Science. 80. Norwell, MA: Kluwer Academic Publishers. doi:10.1007/978-1-4613-1639-8. ISBN 978-1-4613-1639-8.
  45. DART: Revolutionizing Logistics Planning
  46. From Antarctica to space: use of telepresence and virtual reality in control of a remote underwater vehicle
  47. "ISX Corporation". Archived from the original on 5 September 2006. Retrieved 15 March 2015.
  48. "DART overview".
  49. Zadeh, Lotfi A., "Fuzzy Logic, Neural Networks, and Soft Computing," Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77-84.
  50. http://www.aiandenvironment.org/aaai-first-ai-env-workshop.html
  51. http://www.aiandenvironment.org/ijcai-first-ai-env-workshop.html
  52. Jochem, Todd M.; Pomerleau, Dean A. "No Hands Across America Home Page". Retrieved 20 October 2015.
  53. Jochem, Todd. "Back to the Future: Autonomous Driving in 1995". Robotic Trends. Retrieved 20 October 2015.
  54. "Semantic Web roadmap". W3.org. Retrieved 24 November 2008.
  55. Kaelbling, Leslie Pack; Littman, Michael L; Cassandra, Anthony R. (1998). "Planning and acting in partially observable stochastic domains" (PDF). Artificial Intelligence. 101 (1–2): 99–134. doi:10.1016/s0004-3702(98)00023-x. Retrieved 5 May 2017.
  56. "Bluebrain – EPFL". bluebrain.epfl.ch.
  57. "Modelling natural action selection". Pubs.royalsoc.ac.uk. Retrieved 24 November 2008.
  58. "Giving Robots Compassion, C. Mason, Conference on Science and Compassion, Poster Session, Telluride, Colorado, 2012". ResearchGate. Retrieved 17 July 2019.
  59. Fisher, Adam. "Inside Google's Quest To Popularize Self-Driving Cars". Popular Science. Bonnier Corporation. Retrieved 10 October 2013.
  60. "Jamie Shotton at Microsoft Research". Microsoft Research.
  61. "Human Pose Estimation for Kinect – Microsoft Research".
  62. http://dts-web1.it.vanderbilt.edu/~fisherdh//AI-Design-Sustainability.html
  63. "DARPA Robotics Challenge Trials". US Defense Advanced Research Projects Agency. Archived from the original on 11 June 2015. Retrieved 25 December 2013.
  64. "Carnegie Mellon Computer Searches Web 24/7 To Analyze Images and Teach Itself Common Sense".
  65. Tegmark, Max. "Open Letter on Autonomous Weapons". Future of Life Institute. Retrieved 25 April 2016.
  66. Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian; Lillicrap, Timothy; Fan, Hui; Sifre, Laurent; Driessche, George van den; Graepel, Thore; Hassabis, Demis (19 October 2017). "Mastering the game of Go without human knowledge". Nature. 550 (7676): 354–359. doi:10.1038/nature24270. ISSN 0028-0836. PMID 29052630.
  67. Hassabis, Demis. "AlphaGo: using machine learning to master the ancient game of Go". Google Blog. Retrieved 25 April 2016.
  68. Ormerod, David. "AlphaGo defeats Lee Sedol 4–1 in Google DeepMind Challenge Match". Go Game Guru. Retrieved 25 April 2016.
  69. Moravčík, Matej; Schmid, Martin; Burch, Neil; Lisý, Viliam; Morrill, Dustin; Bard, Nolan; Davis, Trevor; Waugh, Kevin; Johanson, Michael; Bowling, Michael (5 May 2017). "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker". Science. 356 (6337): 508–513. arXiv:1701.01724. doi:10.1126/science.aam6960. ISSN 0036-8075. PMID 28254783.
  70. "Libratus Poker AI Beats Humans for $1.76m; Is End Near?". PokerListings. 30 January 2017. Retrieved 16 March 2018.
  71. Solon, Olivia (30 January 2017). "Oh the humanity! Poker computer trounces humans in big step for AI". the Guardian. Retrieved 19 March 2018.
  72. "The Science of Brute Force". ACM Communications. August 2017.
  73. "Dota 2". 11 August 2017.
  74. Greenemeier, Larry (18 October 2017). "AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor". Scientific American.
  75. Alibaba's AI Outguns Humans in Reading Test. 15 January 2018
  76. Sample, Ian (23 April 2018). "Scientists plan huge European AI hub to compete with US". The Guardian (US ed.). Retrieved 23 April 2018.
  77. Pierson, David (2018). "Should people know they're talking to an algorithm? After a controversial debut, Google now says yes". latimes.com. Retrieved 17 May 2018.
  78. "Microsoft Updates Windows, Azure Tools with an Eye on The Future". PCMag UK. 22 May 2020.
  79. Sterling, Bruce (13 February 2020). "Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG)". Wired. ISSN 1059-1028. Retrieved 31 July 2020.
  80. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla (22 July 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165.

References
