http://www.jimdavies.org/summaries/

Maes, P. (1990), Situated Agents Can Have Goals, Robotics and Autonomous Systems, 6:49-70.

@InProceedings{Maes90,
  author =       "Pattie Maes",
  title =        "Situated Agents Can Have Goals",
  booktitle =    "Designing Autonomous Agents",
  pages =        "49--70",
  year =         "1990",
  editor =       "Pattie Maes",
  publisher =    "MIT Press",
  summary =      "spreading activation networks. runtime arbitration
                 among actions with respect to goals of system in
                 situation.",
}

Author of the summary: Jim Davies, 2000, jim@jimdavies.org

Cite this paper for:

Summary:

This paper argues that existing situated agents suffer because they lack an explicit goal structure and because their action selection must be pre-compiled at design time. A novel approach is presented where "the action selection is modeled as an emergent property of an activation/inhibition dynamics among actions."

Detailed outline:

The big problem: what should a system do, and when? To solve this, the deliberative thinking paradigm was created, built around a symbolic planning system with goals and subgoals. It was found not to work well in complex, dynamic environments because of brittleness, inflexibility, and slow response.

This led to the introduction of reactive systems, situated automata, situated agents, interactional systems, routines, subsumption architectures, behavior-based architectures, universal plans, and action networks, to name a few. [References for these architectures are in the paper.]

These new architectures share several characteristics; emergent functionality is an important one. This means that the functionality of the agent is not explicitly expressed in its behavioral policy, but only becomes apparent when the agent operates in a complex environment.

In the system presented, activities are neither hard-wired nor precompiled. Existing systems have problems handling goals.

Action selection should demonstrate [p52]: "The hypothesis we are testing is whether action selection can be modeled as an emergent property of an activation/inhibition dynamics among the different actions the agent can take." [p53] A competence module has a condition-list, an add-list (facts its execution adds to the state of the world), a delete-list (facts it removes), and an activation level. A competence module is executable if all of its preconditions are true. There is a global activation threshold.
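A minimal sketch of one such module in Python (my own rendering; the class and field names are illustrative, not the paper's notation):

from dataclasses import dataclass

@dataclass
class CompetenceModule:
    name: str
    conditions: set        # condition-list: propositions that must hold
    add_list: set          # propositions made true by executing
    delete_list: set       # propositions made false by executing
    activation: float = 0.0

    def executable(self, situation: set) -> bool:
        # Executable iff every precondition is observed in the situation.
        return self.conditions <= situation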

Competence modules (cms) are linked to one another.

Where activation comes from

Links make cms inhibit and activate each other. The observed situation activates cms with partial matches in the condition slot. Goals are also a source of activation. There are once-only goals and permanent goals (achieved continuously). Goals activate cms that have that goal in their add-list. Goals that have already been achieved inhibit cms that would undo them. Executable cms spread activation forward through successor links; non-executable cms spread activation backward through predecessor links. Conflicters inhibit each other. There is decay as well. [p56] The activation contributed by a proposition is 1/n, where n is the number of propositions in the relevant list. This evens out the amount of activation for cms with longer or shorter condition lists, and spreads the effect of a proposition across everything it is linked to.
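Here is one simplified activation-update step in the spirit of pp. 54-56, building on the CompetenceModule sketch above. The parameter names (phi, gamma, delta, pi, mean_level) and their default values are my own stand-ins; the paper's full model also divides each input by the fan-out of the link and treats conflicter inhibition in more detail, both simplified here.

def spread_once(modules, situation, goals, achieved,
                phi=20.0, gamma=70.0, delta=50.0, pi=0.5, mean_level=20.0):
    new = {m.name: m.activation for m in modules}

    for m in modules:
        # Situation input: partial matches on the condition-list. The 1/n
        # rule: each contribution is divided by the number of propositions
        # in the relevant list.
        if m.conditions:
            new[m.name] += phi * len(m.conditions & situation) / len(m.conditions)
        # Goal input: goals activate modules with that goal in the add-list.
        if m.add_list:
            new[m.name] += gamma * len(m.add_list & goals) / len(m.add_list)
        # Protected goals: achieved goals inhibit modules that would undo them.
        if m.delete_list:
            new[m.name] -= delta * len(m.delete_list & achieved) / len(m.delete_list)

    # Internal spreading: executable modules push activation forward through
    # successor links; non-executable modules push activation backward through
    # predecessor links, toward modules that could achieve their unmet
    # preconditions. (Mutual inhibition among conflicters is omitted.)
    for m in modules:
        for other in modules:
            if other is m:
                continue
            if m.executable(situation):
                shared = m.add_list & other.conditions
                if shared:
                    new[other.name] += pi * m.activation * len(shared) / len(other.conditions)
            else:
                shared = (m.conditions - situation) & other.add_list
                if shared:
                    new[other.name] += pi * m.activation * len(shared) / len(other.add_list)

    # Decay: rescale so the network's total activation stays constant.
    target = mean_level * len(modules)
    total = sum(new.values())
    scale = target / total if total > 0 else 1.0
    for m in modules:
        m.activation = max(new[m.name] * scale, 0.0)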

You can change a parameter to make the system more goal-oriented and less opportunistic. [p59]
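In the sketch above, that trade-off corresponds roughly to the ratio of the goal input to the situation input; these values are purely illustrative:

# More goal-oriented: goal input dominates the observed situation.
spread_once(modules, situation, goals, achieved, phi=10.0, gamma=90.0)
# More opportunistic: the observed situation dominates the goals.
spread_once(modules, situation, goals, achieved, phi=90.0, gamma=10.0)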

The system shows interesting biases. It favors sticking with one goal: acting on one line of solution brings that solution closer to completion, and the way activation works, that line of action will generally win out. [p62]

The "thoughfulness" can be changed by the parameter that determines how long activation spreads before a decision is made. Longer time means looking farther ahead into the future. [p66] A long time makes for a closer-to-optimal solution, but this is no good in a rapidly changing environment. More thoughtfulness is also a speed tradeoff. [p67]

Why won't it run into the same problems as AI planners? [p67]

The system has no variables, which means goals cannot be specified with variables (e.g., go to x). It gets away without them by focusing on the immediate environment, using tokens like the-spray-can-in-my-hand. This is a deictic representation.

deictic representation: describes only the immediate environment, so that you don't need a new symbol for every thing on earth. (Also called indexical-functional representations.)
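For example, every proposition is a ground, agent-centered token, so no variable binding is ever needed. These proposition strings are my own, in the spirit of the paper's the-spray-can-in-my-hand:

situation = {"the-spray-can-in-my-hand", "facing-the-wall"}
paint = CompetenceModule(
    name="spray-paint",
    conditions={"the-spray-can-in-my-hand", "facing-the-wall"},
    add_list={"the-wall-in-front-of-me-is-painted"},
    delete_list=set(),
)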

limitations:

Summary author's notes:


Last modified: Mon Feb 28 15:56:14 EST 2000