Parker, L. E. (1998). ALLIANCE: An architecture for fault tolerant
multirobot cooperation. IEEE Transactions on Robotics and
Automation, 14(2), April 1998, 220-240.
@Article{parker1998alliance,
  author =  {Lynne E. Parker},
  title =   {ALLIANCE: An architecture for fault tolerant
             multirobot cooperation},
  journal = {IEEE Transactions on Robotics and Automation},
  year =    {1998},
  volume =  {14},
  number =  {2},
  pages =   {220--240},
  month =   {April},
}
Author of the summary: Jim R. Davies, 2000, jim@jimdavies.org
Cite this paper for:
- Advantages of robot teams
- What researchers really need to focus on is fault tolerance and
adaptivity. [p221]
- SYSTEM: ALLIANCE
- Cooperative robotics has two sides: swarm type cooperation and
"intentional" cooperation.
- Swarm: many homogeneous, limited-ability robots. Influenced by biology
  and sociology. Good for non-time-critical applications. Depends on
  emergent properties.
- Distributed artificial intelligence (DAI)
- behavior-based framework
- Tasks are performed only so long as
they can positively affect the world. Deals with failures this way.
- two types of internal motivations: impatience and acquiescence.
- negotiation scheme: No central executive. Robots send out tasks,
robots respond with bids. Broadcaster selects someone to work on
it. That selected agent can recruit others if needed. [p226]
- negotiation schemes have not been shown to work in situated
agents in a dynamic environment.
Advantages of robot teams:
- cheaper to make a bunch of simpler robots than to make one big
one.
- The capabilities necessary for a task may be too extensive to design
  into one robot.
- Robustness due to redundancy and parallelism
Challenges:
- problem allocation
- communication
- coherent action
- recognition and reconciliation of conflict
What researchers really need to focus on is fault tolerance and
adaptivity. [p221] In this article, fault tolerance means the
reallocation of tasks. Adaptivity means changing behavior in a dynamic
environment.
Cooperative robotics has two sides: swarm type cooperation and
"intentional" cooperation. This paper deals with the second.
Swarm: many homogeneous, limited-ability robots. Influenced by biology
and sociology. Good for non-time-critical applications. Depends on
emergent properties.
Some swarm systems:
- Deneubourg et al.: distributed sorting simulation
- Theraulaz et al.: gets foraging strategies by studying wasps
- Steels: studies several systems for rock collection on other
  planets
- Drogoul & Ferber: foraging and chain making
- Mataric: dispersion, aggregation, flocking in physical robots
- Beni & Wang: "generation of arbitrary patterns in cyclic
  cellular robotics"
- Kube & Zhang: locating and pushing a box, control strategy
- Stilwell & Bay: collective transport of load using force sensors
- Arkin et al.: sensing, communication, and social organization for foraging.
- CEBOT: swarms using heterogeneous robots
"Intentional" systems: where efficiency is a constraint, cooperation
needs to be directed.
- Noreils: sense-model-plan-act architecture with 3 control layers
  (planner, control, and functional). Box pushing with a leader robot
  and a follower robot.
- Caloud et al.: like the above, but has a task planner, task allocator,
  motion planner, and execution monitor. Goals come from other members
  or the environment. Uses Petri nets for plan decomposition.
- Asama et al.: ACTRESS. Communication, task assignment, path
  planning; negotiation and help recruitment. Box pushing.
- Wang: task allocation when more than one robot can do a given
task. Sign-board for communication. Distributed leader finding.
- Cohen et al.: hierarchical subdivision of authority for fire
  fighting (system: Phoenix). Has a central executive.
Approaches using the sense-model-plan-act architecture fail to achieve
real-time performance in a dynamic world.
ALLIANCE
assumptions [p223]
- detection of action effect
- detection of activities that the robot can do
- robots don't lie
- communication not guaranteed available
- imperfect sensors and operations.
- any subsystem can fail
- failures are not always communicated
- no centralized store of world knowledge, no central executive
competences: lower-level behaviors [p224]
In a behavior-based framework, competences (corresponding to primitive
survival behaviors like obstacle avoidance) are activated more or less
directly by sensory input.
Behavior sets: ALLIANCE uses them. They are grouped and can be
activated or inhibited together. Each set corresponds to a high-level
behavior. Only one is active at a time; low-level competences may be
constantly activated.
Goal selection happens through motivational behaviors. Each
corresponds to a behavior set. Tasks are performed only so long as
they can positively affect the world. [p225] It uses this to deal with
failures.
The highest-activated behavior set is the one used, as long as it is
over threshold. What goes into that activation: sensory feedback,
inter-robot communication, inhibitory feedback (from other active
behaviors), and internal motivations.
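The selection above can be sketched in a few lines of Python. This is a minimal sketch, assuming the four inputs combine multiplicatively so that any zero factor (e.g. inhibition from another active set) gates a set off entirely; the threshold value and all names are invented for illustration, and the paper's formal model gives the exact formulas.

```python
THRESHOLD = 1.0  # assumed activation threshold, not from the paper

def motivation(sensory_feedback, comm_factor, inhibition, internal_motivation):
    """Combine the four inputs named in the summary into one activation
    level. The first three are assumed to lie in [0, 1]; the internal
    motivation grows over time (impatience)."""
    return sensory_feedback * comm_factor * inhibition * internal_motivation

def select_behavior_set(activations):
    """activations: dict of behavior-set name -> activation level.
    Return the highest set over threshold, or None if nothing qualifies."""
    name, level = max(activations.items(), key=lambda kv: kv[1])
    return name if level >= THRESHOLD else None
```

Multiplication (rather than summation) captures the "only one active at a time" property: full inhibitory feedback from another active behavior zeroes a set's activation no matter how impatient the robot is.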
Two types of internal motivations: impatience and acquiescence.
Impatience handles the situation where another robot fails. Motivation
for a behavior increases as others fail to do the job. A robot
*trying* to do the job will satisfy impatience for a time. In
ALLIANCE, robots tell each other what they are doing.
Acquiescence handles the situation where the robot itself fails.
Acquiescence increases as you work on a task but nothing is
happening. It makes you more willing to give up the task.
The system learns as well, finding different levels of impatience and
acquiescence for different contexts. [p226]
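The impatience/acquiescence dynamics can be sketched as follows. This is a hedged sketch with invented rates, names, and thresholds; Parker's formal model is more detailed, but the qualitative behavior (impatience grows slowly while another robot claims the task, acquiescence grows while you work without visible progress) follows the summary above.

```python
class Motivations:
    """Tracks the two internal motivations for one behavior set."""

    def __init__(self, fast_rate=0.2, slow_rate=0.05,
                 acquiescence_rate=0.2, give_up_at=1.0):
        self.impatience = 0.0    # grows while the task stays undone
        self.acquiescence = 0.0  # grows while *we* work without progress
        self.fast_rate = fast_rate
        self.slow_rate = slow_rate
        self.acquiescence_rate = acquiescence_rate
        self.give_up_at = give_up_at

    def step(self, other_robot_on_task, self_on_task, progress_seen):
        # A robot broadcasting that it is trying the task "satisfies
        # impatience for a time": impatience grows slowly instead of fast.
        self.impatience += self.slow_rate if other_robot_on_task else self.fast_rate
        # Acquiescence rises only while we work and nothing changes,
        # making us more willing to hand the task off.
        if self_on_task and not progress_seen:
            self.acquiescence += self.acquiescence_rate
        elif progress_seen:
            self.acquiescence = 0.0

    def should_give_up(self):
        return self.acquiescence >= self.give_up_at
```

The learning mentioned above would then amount to tuning `fast_rate`, `slow_rate`, and `acquiescence_rate` differently per context.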
Comparison to DAI negotiation schemes:
negotiation scheme: No central executive. Robots send out tasks,
robots respond with bids. Broadcaster selects someone to work on
it. That selected agent can recruit others if needed.
Negotiation schemes have not been shown to work in situated agents in
a dynamic environment. They do not take into account failures of
communication or of task execution; they assume an assigned task will
be accomplished.
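The announce/bid/award cycle described above can be sketched in contract-net style. All class and function names here are invented for illustration; real DAI negotiation protocols carry much more state (and, as the summary notes, still assume an awarded task gets done).

```python
class Bidder:
    """A robot that bids a cost for tasks it can do (None = no bid)."""
    def __init__(self, name, costs):
        self.name = name
        self.costs = costs  # dict: task -> cost of doing it

    def bid(self, task):
        return self.costs.get(task)

def announce_and_award(task, robots):
    """Broadcast a task, collect bids, and award it to the cheapest
    bidder. Returns the winner, or None if nobody bids. Note the
    weakness the summary points out: after the award there is no
    failure handling at all."""
    bids = [(r.bid(task), r) for r in robots if r.bid(task) is not None]
    if not bids:
        return None
    return min(bids, key=lambda b: b[0])[1]
```

ALLIANCE avoids this protocol entirely: instead of explicit bids, robots broadcast what they are doing and let impatience/acquiescence reallocate tasks when something fails.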
There is a discussion of ALLIANCE's formal model.
[p228]
To update the parameters, it takes into account observations,
evaluations, and the execution time of a team member performing a
task. Changes in the environment while a robot is performing are
assumed to be caused by that robot.
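One simple way to sketch the execution-time bookkeeping described above: keep a running estimate of how long each teammate takes on each task. The exponential moving average here is an assumed scheme, not Parker's actual update rule; in the architecture, such estimates would feed into the per-context impatience settings.

```python
def update_time_estimate(estimates, robot, task, observed_time, alpha=0.3):
    """estimates: dict mapping (robot, task) -> estimated execution time.
    Blends the new observation into the old estimate (exponential
    moving average, an assumption) and returns the new value."""
    key = (robot, task)
    old = estimates.get(key, observed_time)  # first observation: take as-is
    estimates[key] = (1 - alpha) * old + alpha * observed_time
    return estimates[key]
```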
[p230]
Experiments were run using a laboratory hazardous waste cleanup
situation with three IS Robotics R-2 robots. The robots are told
qualitatively where the spills are (upper right part of the room) and
where to move them. They had behavior sets find-locations-methodical,
find-locations-wander, move-spill(loc), and report-progress.
As an example of learning, the blue robot wouldn't do methodical
finding because it had learned from previous trials that this wouldn't
work (its side sensor doesn't work).
Summary author's notes:
Back to the Cognitive Science Summaries homepage
Cognitive Science Summaries Webmaster:
JimDavies
(jim@jimdavies.org)
Last modified: Mon Mar 27 18:47:59 EST 2000