J. Laird, A. Newell, and P. Rosenbloom, SOAR: An Architecture for
General Intelligence. Artificial Intelligence, 33(1):1-64, 1987.
@Article{laird1987,
author = "J. E. Laird and A. Newell and P. S. Rosenbloom",
year = "1987",
title = "{SOAR}: an architecture for general intelligence",
journal = "Artificial Intelligence",
volume = "33",
number = "1",
month = sep,
pages = "1--64",
note = "\iindex{Laird, J.}\iindex{Newell,
A.}\iindex{Rosenbloom, P.}\iindex{Newell, A.}",
}
Authors of the summary: J. William Murdock, 1997,
murdock@cc.gatech.edu; Jim R. Davies, 2000, jim@jimdavies.org
Cite this paper for:
- A broad range of reasoning capabilities can be implemented
within a unified framework performing goal-directed search through a
problem space using productions.
- SYSTEMS: SOAR
- production system with lookahead [p34]
- physical symbol system hypothesis: A general
intelligence must be realized with a symbol system.
- goal structure hypothesis: control in a general
intelligence is maintained by a symbolic goal system.
- uniform elementary-representation hypothesis: There is a
single elementary representation for declarative knowledge.
- problem space hypothesis: problem spaces are the
fundamental organizational unit of all goal-directed behavior.
- production system hypothesis: Production systems are the
appropriate organization for encoding all long-term knowledge.
- universal-subgoaling hypothesis: Any decision can be an
object of goal-oriented attention.
- automatic-subgoaling hypothesis: All goals arise
dynamically in response to impasses and are generated
automatically by the architecture.
- control-knowledge hypothesis: Any decision can be
controlled by indefinite amounts of knowledge, both domain
dependent and independent.
- weak-method hypothesis: the weak methods form the basic
methods of intelligence.
- weak-method emergence hypothesis: the weak methods arise
directly from the system responding based on its knowledge of
the task.
- uniform learning hypothesis: goal-based chunking is the
general learning mechanism.
Keywords: Production, Goal, Chunking, Knowledge Compilation
Summary: Introduces the concept of a general architecture for
reasoning and presents SOAR as one such architecture. Discusses the
range of tasks that SOAR has addressed. Presents the core
commitments of the SOAR philosophy: search through states in a problem
space, universal subgoaling, productions as the only long-term memory
elements, the use of explicit preferences to guide processing, the
definition of subgoals as responses to automatically detected
impasses, continuous monitoring of goal termination, emergence of
standard weak methods, and learning as chunking of subgoal
resolutions. Describes the architecture using the eight-puzzle as the
problem space. Details each of the major components: working
(declarative) memory (including the context stack and preferences as
well as normal objects), the processing structure, the subgoaling
mechanism, the built-in default knowledge (i.e., weak-method
strategies), and the chunking mechanism. Discusses general issues:
scale, weak methods, and learning. Concludes by enumerating and
discussing the fundamental hypotheses of SOAR.
[p2] Soar's ancestry: Logic Theorist, GPS, the general theory of human
problem solving, production systems, PSG, PSANLS, OPS.
roots: the cognitive architecture concept, instructable production
systems, problem spaces.
drawbacks: Soar cannot do the following [p4]
- deliberative planning
- automatic acquisition of new tasks
- creation of new task representations
- extension to additional types of learning
- recovery from learning errors (overgeneralization)
Soar uses the
problem space hypothesis: the problem space (operators that can change
the current state to yield a new state) is the fundamental
organization of all goal-directed activity.
Goals can be represented with a test procedure or with a goal state. [p5]
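As a minimal sketch of these two ideas (the function names and the
depth-limited search are illustrative assumptions, not Soar's
machinery), a problem space is just states, operators from state to
state, and a goal given either as a test procedure or as a goal state:

    # A problem space: states plus operators that map a state to a new
    # state. Depth-limited DFS stands in here for goal-directed search.
    def search(state, operators, goal_test, depth=0, limit=10):
        """goal_test is the 'test procedure' form of a goal."""
        if goal_test(state):
            return [state]
        if depth == limit:
            return None
        for op in operators:
            path = search(op(state), operators, goal_test, depth + 1, limit)
            if path is not None:
                return [state] + path
        return None

    # The 'goal state' form reduces to the test-procedure form:
    #   search(start, ops, lambda s: s == goal_state)
    # Tiny example: reach 3 from 0 with a single increment operator.
    print(search(0, [lambda s: s + 1], lambda s: s == 3))  # [0, 1, 2, 3]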
Lack of required knowledge leads to subgoaling. [p7]
universal subgoaling: creating a subgoal at any sign of difficulty. [p7]
All long-term knowledge is represented in a production system.
All matching productions fire in parallel; there is no conflict
resolution. Productions add knowledge to working memory (WM); removal
is handled by the working memory manager (see below). Preferences
favor some candidate objects (e.g. operators) over others, and a
decision procedure uses these preferences to choose which object
fills each slot.
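A toy rendition of this organization (a sketch under assumed
conventions: WM as a set of Python tuples and productions as
condition/addition pairs; real Soar productions match structured
objects, and preferences rank objects such as operators, never the
productions themselves):

    # Working memory is a set of tuples; each production is a pair
    # (condition over WM, set of elements to add). Every production
    # whose condition matches fires; there is no conflict resolution.
    wm = {("state", "s1")}
    productions = [
        # propose two operators whenever the state is present
        (lambda wm: ("state", "s1") in wm,
         {("acceptable", "op-left"), ("acceptable", "op-right")}),
    ]

    def elaborate(wm):
        """Fire all matching productions in parallel; repeat to quiescence."""
        while True:
            new = set()
            for cond, adds in productions:
                if cond(wm):
                    new |= adds - wm
            if not new:
                return wm
            wm |= new

    def decide(wm):
        """The decision procedure: preferences select one object."""
        candidates = {p[1] for p in wm if p[0] == "acceptable"}
        dominated = {p[2] for p in wm if p[0] == "better"}
        live = candidates - dominated
        return live.pop() if len(live) == 1 else None  # None = impasse

    elaborate(wm)
    print(decide(wm))  # -> None: a tie impasse; nothing ranks the two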
[p10] Soar can implement weak methods with different productions.
One way in which Soar learns is by automatically and permanently
caching the results of subgoals as productions. E.g., when deciding
between two actions, a tie impasse arises, a subgoal is created, and
one action is chosen. The next time the same situation arises, the
cached production fires, adding the deciding preference and avoiding
the impasse.
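Continuing the toy system above, a hedged sketch of that caching step
(real chunks are built from the WM elements the subgoal actually
tested; here the whole triggering condition stands in for that trace):

    # Suppose a subgoal resolved the tie above and 'op-left' won.
    # Chunking caches that outcome as a new production that asserts
    # the winning preference directly.
    def chunk(condition, winner, loser):
        """Build a production that pre-empts the impasse next time."""
        return (condition, {("better", winner, loser)})

    productions.append(
        chunk(lambda wm: ("acceptable", "op-left") in wm and
                         ("acceptable", "op-right") in wm,
              "op-left", "op-right"))

    elaborate(wm)      # the chunk fires during elaboration...
    print(decide(wm))  # -> 'op-left': no impasse, no subgoal this time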
There is a context stack [p13] that doubles as a goal stack. Each
level has a context and one goal.
Working memory can be modified in three ways: [p17]
- productions can add elements
- the decision process can modify the context stack
- the working memory manager removes irrelevant elements, e.g.
information that was needed only by a subgoal that has since been
popped off the context stack [p30]
The decision cycle:
- elaboration: adds new objects, augmentations (attribute
links from objects to values), and preferences
- decision procedure: decides which slot in the context stack should
have its value replaced, and by what object; it considers the
contexts from oldest to newest
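Putting the pieces together, a sketch of one decision cycle over the
context stack (the four-slot layout follows the paper's
goal/problem-space/state/operator contexts; decide_slot is a toy
stand-in, and the impasse helper is sketched after the impasse
discussion below):

    # Each context has four slots; the context stack doubles as the
    # goal stack. The decision procedure scans contexts from oldest to
    # newest and changes the first slot for which the preferences
    # dictate a new value, discarding now-irrelevant newer contexts.
    SLOTS = ("goal", "problem-space", "state", "operator")

    def decide_slot(wm, context, slot):
        """Toy stand-in: only the operator slot is decided by decide()."""
        return decide(wm) if slot == "operator" else context[slot]

    def decision_cycle(stack, wm):
        elaborate(wm)                        # phase 1: run to quiescence
        for i, context in enumerate(stack):  # phase 2: oldest to newest
            for slot in SLOTS:
                choice = decide_slot(wm, context, slot)
                if choice is not None and choice != context[slot]:
                    context[slot] = choice
                    del stack[i + 1:]        # pop now-irrelevant contexts
                    return
        impasse(stack, wm)                   # no change anywhere -> subgoal
                                             # (impasse is defined below)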
[p29] There are four kinds of impasses:
- rejection
- tie: resolved by creating a preference (x is better than y, x is
the same quality as y, or x is worse than y)
- no-change
- conflict
"Impasses are resolved by the addition of preferences that change the
results of the decision procedure." [p30] It makes a new context
and subgoal to do this.
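One possible body for the impasse helper assumed in the decision-cycle
sketch above (the classification paraphrases the paper; the tuple
encoding is this summary's toy convention, not Soar's):

    def classify_impasse(candidates, wm):
        """Map a failed decision to one of the four impasse types [p29]."""
        rejected = {p[1] for p in wm if p[0] == "reject"}
        better = {(p[1], p[2]) for p in wm if p[0] == "better"}
        live = candidates - rejected - {worse for _, worse in better}
        if not candidates:
            return "no-change"  # elaboration proposed nothing for the slot
        if any((y, x) in better for (x, y) in better):
            return "conflict"   # x better than y and y better than x
        if not live:
            return "rejection"  # every candidate was ruled out
        return "tie"            # several live candidates, no ordering

    def impasse(stack, wm):
        """Push a new context whose goal is to resolve the impasse."""
        candidates = {p[1] for p in wm if p[0] == "acceptable"}
        kind = classify_impasse(candidates, wm)
        stack.append({"goal": ("resolve", kind), "problem-space": None,
                      "state": None, "operator": None})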
Soar has default, domain-independent knowledge that allows it to use
subgoaling appropriately. This consists of 52 productions, categorized
into these groups:
- common search knowledge: backtracking, which operators to use,
avoiding the same operator twice in a row
- diagnosis of impasses [p32]
- selection of problem space [p33]
- evaluation techniques (like lookahead)
Chunking substitutes efficient productions for complex goal
processing. [p36] Through it, Soar learns to anticipate and avoid the
impasses that lead to subgoals.
conclusions: the paper closes by restating the eleven hypotheses
listed above under "Cite this paper for" (all quoted from p58).
Summary author's notes:
- Part of this summary came from a file that had the following
disclaimer:
"The following summaries are the completely unedited and often
hastily composed interpretations of a single individual without any
sort of systematic or considered review. As such it is very likely
that at least some of the following text is incomplete, inadequate,
misleading, or simply wrong. One might view this as a very
preliminary draft of a survey paper that will probably never be
completed. The author disclaims all responsibility for the accuracy
or use of this document; this is not an official publication of the
Georgia Institute of Technology or the College of Computing thereof,
and the opinions expressed here may not even fully match the fully
considered opinions of the author much less the general opinions of
the aformentioned organizations."