Symbolic artificial intelligence: Difference between revisions


Revision as of 02:28, 13 July 2022

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search.[1] Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s.[2][3] Later, starting from about 2012, the spectacular successes of deep learning in handling vision, speech recognition, speech synthesis, image generation, and machine translation were used to argue that the symbolic approach was no longer relevant and both research and commercial funding shifted heavily towards deep learning and away from symbolic AI.[4] Since then, difficulties with bias, explanation, comprehensibility, and robustness have become more apparent with deep learning approaches and there has been a shift to consider combining the best of both the symbolic and neural approaches.[5]

John Haugeland gave the name GOFAI ("Good Old-Fashioned Artificial Intelligence") to symbolic AI in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research. In robotics the analogous term is GOFR ("Good Old-Fashioned Robotics").[6]

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. Symbolic AI was later succeeded by highly mathematical statistical AI, which is largely directed at specific problems with specific goals, rather than general intelligence. Research into general intelligence is now studied in the exploratory sub-field of artificial general intelligence.[citation needed]

Foundational ideas

The symbolic approach was succinctly expressed in the "physical symbol system hypothesis" proposed by Newell and Simon in 1976:

  • "A physical symbol system has the necessary and sufficient means for general intelligent action."

A Short History

We include a short history of symbolic AI to the present day below. Time periods and titles are drawn from Henry Kautz's 2020 AAAI Robert S. Engelmore Memorial Lecture (Garcez, Artur d'Avila; Lamb, Luis C. (2020-12-16), Neurosymbolic AI: The 3rd Wave, arXiv, retrieved 2022-07-06) and the longer Wikipedia article on the History of AI, with dates and titles differing slightly for increased clarity.

The First AI Summer: Irrational Exuberance, 1948–1966

The first symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955–56.

During the 1960s, symbolic approaches achieved great success at simulating intelligent behavior in small demonstration programs. AI research of the period was centered at three institutions: Carnegie Mellon University, Stanford and MIT, with the University of Edinburgh joining later. Each developed its own style of research. Earlier approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.

Cognitive simulation

Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.[7][8] This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[9][10]

Modeling formal reasoning with logic: the “neats"

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[a] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[14] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[15][16]

Modeling implicit common-sense knowledge with frames and scripts: the “scruffies”

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[17][18][19] found that solving difficult problems in vision and natural language processing required ad hoc solutions—they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[20][21] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[22][23][24]

The first AI winter: crushed dreams, 1967–1977

The second AI Summer: knowledge is power, 1978–1987

Knowledge-based systems

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[25][26] The knowledge revolution was driven by the realization that even many simple AI applications would require enormous amounts of knowledge.

Success with expert systems

This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[27][28][29] A key component of the system architecture for all expert systems is the knowledge base, which stores the facts and rules about the problem domain.[30] Expert systems use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols. Because symbolic AI works from explicit rules, growing computing power allowed it to tackle increasingly complex problems. In 1996, this allowed IBM's Deep Blue, with the help of symbolic AI, to win a game of chess against the then world champion, Garry Kasparov.[31]
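
The production-rule cycle described above can be sketched as a minimal forward-chaining loop: rules fire whenever their conditions are all present in the working set of facts, and their conclusions are asserted as new facts. The rules and facts here are invented for illustration, not drawn from any real expert system.

```python
# Minimal sketch of a forward-chaining production system, the inference
# mechanism used by rule-based expert systems.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all of its If-conditions are known facts.
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)  # assert the Then-conclusion
                changed = True
    return facts

# Human-readable If-Then rules: ([conditions...], conclusion)
rules = [
    (["has_fever", "has_rash"], "suspect_measles"),
    (["suspect_measles"], "recommend_lab_test"),
]

print(forward_chain(["has_fever", "has_rash"], rules))
```

Chaining works in both directions in real systems: the loop above reasons forward from facts to conclusions, while backward chaining starts from a goal and asks which facts (questions to the user) would establish it.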

The second AI winter, 1988–1993

Adding in more rigorous foundations, 1993–2011

Uncertain reasoning

Symbols can be used when the input is definite and certain. When uncertainty is involved, for example in formulating predictions, the representation is instead done using artificial neural networks.[32]
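
The contrast can be illustrated with a toy example: a crisp symbolic rule returns a definite category, while a numeric model returns a graded degree of belief that is better suited to uncertain input. The threshold, weights, and domain below are assumptions made for illustration; the logistic unit stands in for the simplest neural model.

```python
import math

def symbolic_rule(temp_c):
    # A definite, human-readable If-Then rule: hard category, no uncertainty.
    return "fever" if temp_c >= 38.0 else "no_fever"

def probabilistic_model(temp_c, w=1.5, b=-57.0):
    # A single logistic unit (assumed weights): outputs a degree of belief
    # rather than a hard category, so near-boundary inputs get graded answers.
    return 1.0 / (1.0 + math.exp(-(w * temp_c + b)))

print(symbolic_rule(38.5))                   # crisp answer
print(round(probabilistic_model(37.9), 2))   # graded answer near the boundary
```

Note how an input just below the rule's threshold is classified flatly as "no_fever" by the rule, while the numeric model reports a probability near 0.5, preserving the uncertainty.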

Deep learning and neurosymbolic AI, 2011–present

Neurosymbolic AI: integrating neural and symbolic approaches

Recently, there have been structured efforts towards integrating the symbolic and connectionist AI approaches under the umbrella of neural-symbolic computing. As argued by Valiant and many others,[33] the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient (machine) learning models.

Techniques

A symbolic AI system can be realized as a microworld, for example blocks world. The microworld represents the real world in the computer memory. It is described with lists containing symbols, and the intelligent agent uses operators to bring the system into a new state.[34] The production system is the software which searches in the state space for the next action of the intelligent agent. The symbols for representing the world are grounded with sensory perception. In contrast to neural networks, the overall system works with heuristics, meaning that domain-specific knowledge is used to improve the state space search.
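
A minimal sketch of such a microworld follows, assuming a two-block blocks world: states are lists of stacks of symbols, operators move the top block of one stack onto another, and a breadth-first state space search finds the operator sequence that reaches the goal. The block names and goal configuration are illustrative choices, and real systems would add heuristics to guide the search.

```python
from collections import deque

def successors(state):
    """Apply every legal move-top-block operator to produce new states."""
    for i, src in enumerate(state):
        if not src:
            continue  # no block to pick up from an empty stack
        for j in range(len(state)):
            if i == j:
                continue
            stacks = [list(s) for s in state]
            block = stacks[i].pop()      # pick up the top block of stack i
            stacks[j].append(block)      # put it down on top of stack j
            yield tuple(tuple(s) for s in stacks)

def search(start, goal):
    """Breadth-first search through the microworld's state space."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path + [state]
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [state]))
    return None  # goal unreachable

# Stacks read bottom-to-top: start with B on A, plus two empty table spots.
start = (("A", "B"), (), ())
goal = (("B", "A"), (), ())   # invert the pile in place
plan = search(start, goal)
print(len(plan) - 1)          # number of operator applications
```

The breadth-first strategy guarantees a shortest operator sequence; a production system would replace the blind frontier expansion with heuristic rules that rank which operator to try next.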

Controversies

Philosophical: critiques from Dreyfus and other philosophers

An early critic of symbolic AI was philosopher Hubert Dreyfus. Beginning in the 1960s, Dreyfus' critique of AI targeted the philosophical foundations of the field in a series of papers and books. He predicted that symbolic AI would only be suitable for toy problems, and thought that building more complex systems or scaling up the idea towards useful software would not be possible.[35]

Situated Robotics

Opponents of the symbolic approach in the 1980s included roboticists such as Rodney Brooks, who aimed to produce autonomous robots without symbolic representation (or with only minimal representation), and computational intelligence researchers, who applied techniques such as neural networks and optimization to solve problems in machine learning and control engineering.[citation needed]

Subsymbolic AI

Subsymbolic artificial intelligence is the set of alternative approaches that do not use explicit high-level symbols, such as mathematical optimization, statistical classifiers and neural networks.[36]

Funding and practicality: AI Winters

Similar arguments were given in the Lighthill report, which started the AI Winter in the mid-1970s.[37]

See also

Notes

  1. ^ McCarthy once said: "This is AI, so we don't care if it's psychologically real".[2] McCarthy reiterated his position in 2006 at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence".[11] Pamela McCorduck writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."[12] Stuart Russell and Peter Norvig wrote "Aeronautical engineering texts do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool even other pigeons.'"[13]

Citations

  1. ^ Garnelo, Marta; Shanahan, Murray (2019-10-01). "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations". Current Opinion in Behavioral Sciences. 29: 17–23. doi:10.1016/j.cobeha.2018.12.010.
  2. ^ a b Kolata 1982.
  3. ^ Russell & Norvig 2003, p. 5.
  4. ^ Rossi, Francesca. "Thinking Fast and Slow in AI". AAAI. Retrieved 5 July 2022.
  5. ^ Selman, Bart. "AAAI Presidential Address: The State of AI". AAAI. Retrieved 5 July 2022.
  6. ^ Haugeland 1985.
  7. ^ McCorduck 2004, pp. 139–179, 245–250, 322–323 (EPAM).
  8. ^ Crevier 1993, pp. 145–149.
  9. ^ McCorduck 2004, pp. 450–451.
  10. ^ Crevier 1993, pp. 258–263.
  11. ^ Maker 2006.
  12. ^ McCorduck 2004, pp. 100–101.
  13. ^ Russell & Norvig 2003, pp. 2–3.
  14. ^ McCorduck 2004, pp. 251–259.
  15. ^ Crevier 1993, pp. 193–196.
  16. ^ Howe 1994.
  17. ^ McCorduck 2004, pp. 259–305.
  18. ^ Crevier 1993, pp. 83–102, 163–176.
  19. ^ Russell & Norvig 2003, p. 19.
  20. ^ McCorduck 2004, pp. 421–424, 486–489.
  21. ^ Crevier 1993, p. 168.
  22. ^ McCorduck 2004, p. 489.
  23. ^ Crevier 1993, pp. 239–243.
  24. ^ Russell & Norvig 2003, pp. 363–365.
  25. ^ McCorduck 2004, pp. 266–276, 298–300, 314, 421.
  26. ^ Russell & Norvig 2003, pp. 22–23.
  27. ^ Russell & Norvig 2003, pp. 22–24.
  28. ^ McCorduck 2004, pp. 327–335, 434–435.
  29. ^ Crevier 1993, pp. 145–62, 197–203.
  30. ^ Hayes-Roth, Murray & Adelman.
  31. ^ "The fascination with AI: what is artificial intelligence?". IONOS Digitalguide. Retrieved 2021-12-02.
  32. ^ Honavar 1995.
  33. ^ Garcez et al. 2015.
  34. ^ Honavar & Uhr 1994, p. 6.
  35. ^ Dreyfus 1981, pp. 161–204.
  36. ^ Nilsson 1998, p. 7.
  37. ^ Yao et al. 2017.

References