
T. Dean & S. Kambhampati (1996), Planning and Scheduling. In A. B. Tucker (Ed.), The CRC Handbook of Computer Science and Engineering, CRC Press, 1997, pp. 614-636.

Author of the summary: Jim Davies, 1999, jim@jimdavies.org

The actual paper in ps form: dean1996.ps. Here is the homepage of T. Dean.

Cite this paper for:

This paper is about planning in AI, a field characterized by complex dynamics and rich, multi-faceted knowledge. (p3) It is an overview of the concepts and notation of planning, with brief descriptions of heuristics and algorithms. It is a good introduction to the field.

An algorithmic planning system has three parts (sketched in code after the list):

  1. operators
  2. predictive model
  3. evaluator
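
A minimal Python sketch (mine, not the paper's; the names and the toy evaluator are illustrative assumptions) of how these three parts fit together:

  from dataclasses import dataclass
  from typing import Dict

  State = Dict[str, str]          # state-variable assignments, e.g. {"door": "open"}

  @dataclass
  class Operator:                 # 1. an operator the planner can choose
      name: str
      precond: State              # assignments that must hold before applying it
      effects: State              # assignments it makes true

  def predict(state: State, op: Operator) -> State:
      """2. Predictive model: the state that results from applying op."""
      assert all(state.get(v) == val for v, val in op.precond.items())
      return {**state, **op.effects}

  def evaluate(state: State, goal: State) -> int:
      """3. Evaluator (a toy one): how many goal assignments already hold."""
      return sum(state.get(v) == val for v, val in goal.items())
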
Scheduling can be cast in terms of:

  1. Satisficing: find a feasible solution
  2. Optimizing: find a solution that minimizes the total time required to complete all jobs (p2)

exogenous events: events outside the control of the planner.
state variables: used to describe the state of a system.
state transition function: takes the state and an action and returns the resulting state.

The planner can assume that the plan executor can observe the environment it is in.

If the state transitions involve some randomness, then there are conditional probability distributions over state transitions and outputs. This is computationally expensive in theory, but in practice it is not so bad: not all state variables are important for all transitions, and some state variables are independent of each other. (p5)
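
A hedged sketch of that factoring, with invented state variables: each variable gets its own small conditional distribution, so the transition model never has to enumerate the full (exponential) joint state space:

  import random

  def step_weather(weather: str) -> str:
      # exogenous: evolves independently of the chosen action
      return random.choices(["sunny", "rainy"], weights=[0.8, 0.2])[0]

  def step_door(door: str, action: str) -> str:
      # depends only on its own previous value and the action
      if action == "open-door":
          return random.choices(["open", "closed"], weights=[0.9, 0.1])[0]
      return door

  def transition(state: dict, action: str) -> dict:
      return {"weather": step_weather(state["weather"]),
              "door": step_door(state["door"], action)}

  print(transition({"weather": "sunny", "door": "closed"}, "open-door"))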

If a plan depends on observations of the current state, then that plan is conditional and works in a closed loop. If it depends on time, then the plan is time-varying; otherwise it is stationary. (p7) Any plan that ignores the results of past actions is open loop.
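
An illustrative contrast (names invented): an open-loop plan is a fixed action sequence that ignores observations, while a conditional, closed-loop plan picks the next action from what it observes:

  open_loop_plan = ["move-to-door", "open-door", "move-through"]   # ignores feedback

  def closed_loop_plan(observation: dict) -> str:
      # conditional: the action depends on the observed state
      return "open-door" if observation["door"] == "closed" else "move-through"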

Plans describe state histories. Under a conditional plan there is a probability distribution over these histories. Each history gets a real value. You can calculate the expected value of a plan given all of its histories, their probabilities and values, and the cost of getting from one state to another. (p8)
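
In other words, if each possible history h of the plan has probability p(h) and value v(h) (with any transition costs already netted into the value), the plan's worth is the probability-weighted average. A small sketch under those assumptions:

  def expected_value(histories):
      """histories: iterable of (probability, value) pairs, probabilities summing to 1."""
      return sum(p * v for p, v in histories)

  # e.g. a conditional plan with two possible outcomes
  print(expected_value([(0.7, 10.0), (0.3, -2.0)]))   # 6.4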

refinement: taking an existing plan and adding detail.
repair: modifying an existing plan for better performance

Complexity

Many scheduling problems are NP-hard. Though this can sometimes be avoided by relaxing assumptions, most real scheduling problems are NP-hard. (p10) There are algorithms for closed-loop, deterministic problems that run in polynomial time, but polynomial in the size of the state space, which is often exponential in the number of state variables. (p11) "The literature on planning and scheduling in artificial intelligence generally takes it on faith that any interesting problem is at least NP-hard." (p11) The research emphasis is therefore on heuristics and search algorithms.

The paper defines two operators over state-variable assignments, written as a + and a - inside circles (⊕ and ⊖). In this outline I will use + and - to refer to them.

y - v means the set of assignments in y that are not also in v.
y + v returns the union of consistent assignments.

Actions are defined in terms of assignments. Thus state transitions are defined as:

f(s,a) = s                           if s is inconsistent with the preconditions of a
       = post(a) + (s - post(a))     otherwise
A state s satisfies a goal if the state's assignments are a superset of the goal's assignments. You can make partial plans that do a subset of the work, and you can have constraints that apply to that partial plan. You can work forward through the plan (progression) or backward from the goal (regression). (p17) State-difference heuristics choose a path based on the difference between the resulting state and the goal.
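
A hedged Python reading of these definitions, with states as dicts of state-variable assignments (my representation, not the paper's notation); here y - v is read as dropping from y any assignment whose variable is reassigned in v, so the union in + stays consistent:

  def minus(y: dict, v: dict) -> dict:
      return {var: val for var, val in y.items() if var not in v}

  def plus(y: dict, v: dict) -> dict:
      return {**v, **y}                       # union; assumes the two agree where they overlap

  def consistent(s: dict, precond: dict) -> bool:
      return all(s.get(var) == val for var, val in precond.items())

  def f(s: dict, a: dict) -> dict:
      """State transition: f(s,a) = post(a) + (s - post(a)) when a is applicable."""
      if not consistent(s, a["pre"]):
          return s
      return plus(a["post"], minus(s, a["post"]))

  def satisfies(s: dict, goal: dict) -> bool:
      """s satisfies the goal if its assignments are a superset of the goal's."""
      return all(s.get(var) == val for var, val in goal.items())

  a = {"pre": {"door": "closed"}, "post": {"door": "open"}}
  s1 = f({"door": "closed", "light": "on"}, a)
  print(s1, satisfies(s1, {"door": "open"}))  # the door is now open; True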

means-ends analysis
If the preconditions of an operator you want to apply are not met, then you can make those preconditions a subgoal (means-ends analysis). (p18)

One problem with this is that premature commitment to the order of steps can lead to extensive backtracking.
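
A toy means-ends analysis sketch under those assumptions (invented operators, no cycle detection): an unmet goal condition selects an operator that achieves it, and that operator's unmet preconditions become subgoals first:

  OPS = [
      {"name": "open-door", "pre": {"have-key": "yes"}, "post": {"door": "open"}},
      {"name": "get-key",   "pre": {},                  "post": {"have-key": "yes"}},
  ]

  def achieve(state: dict, goal: dict, plan: list) -> dict:
      for var, val in goal.items():
          if state.get(var) == val:
              continue
          op = next(o for o in OPS if o["post"].get(var) == val)
          state = achieve(state, op["pre"], plan)   # subgoal: the operator's preconditions
          state = {**state, **op["post"]}           # then apply the operator
          plan.append(op["name"])
      return state

  plan = []
  achieve({"door": "closed"}, {"door": "open"}, plan)
  print(plan)   # ['get-key', 'open-door']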

plan space refinement
Satisfy one state-variable requirement at a time. (p19) A potential problem is that in satisfying one requirement you may undo progress already made. This can be helped by protecting conditions that are already satisfied. (p20)
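
One hedged illustration of protecting a condition (names invented): before inserting a new step, check that its effects do not undo an assignment that an earlier step established for a later one:

  def threatens(new_step_effects: dict, protected: dict) -> bool:
      """True if the new step reassigns a protected state variable to a different value."""
      return any(var in new_step_effects and new_step_effects[var] != val
                 for var, val in protected.items())

  protected = {"door": "open"}                        # already satisfied, still needed later
  print(threatens({"door": "closed"}, protected))     # True: this insertion would undo progress
  print(threatens({"light": "off"}, protected))       # False: safe with respect to this condition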

task reduction
Break large tasks into subtasks. This helps because you might discover that subtask x is unattainable, and thus not waste time working out the details of subtask y. Also, breaking a long task into a series of independent shorter tasks reduces the complexity of the search. (p21) Sometimes you can gloss over the details of a plan, make a simple plan, and then test and debug it.
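
A toy task-reduction sketch (the task names and methods are invented): compound tasks expand into subtasks until only primitive actions remain:

  METHODS = {
      "deliver-package": ["get-package", "travel-to-customer", "hand-over"],
      "travel-to-customer": ["plan-route", "drive"],
  }

  def reduce_task(task: str) -> list:
      if task not in METHODS:                 # primitive: nothing left to reduce
          return [task]
      subtasks = []
      for sub in METHODS[task]:
          subtasks.extend(reduce_task(sub))
      return subtasks

  print(reduce_task("deliver-package"))
  # ['get-package', 'plan-route', 'drive', 'hand-over']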

constraint satisfaction in scheduling
A scheduling solution can be described as a set of states. (p26) The order in which variables are considered (for example, starting with the variables with the fewest remaining values) often makes a big difference in efficiency. One way to solve these problems is to start with an inadequate assignment and repair it.
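
A hedged sketch of the fewest-remaining-values ordering on a toy scheduling problem (jobs must get distinct time slots; the domains are invented):

  def backtrack(assignment: dict, domains: dict):
      if len(assignment) == len(domains):
          return assignment
      # choose the unassigned variable with the fewest remaining values
      var = min((v for v in domains if v not in assignment),
                key=lambda v: len(domains[v]))
      for value in domains[var]:
          if value not in assignment.values():        # no two jobs share a slot
              result = backtrack({**assignment, var: value}, domains)
              if result is not None:
                  return result
      return None

  domains = {"job-a": [1, 2, 3], "job-b": [1], "job-c": [2, 3]}
  print(backtrack({}, domains))   # {'job-b': 1, 'job-c': 2, 'job-a': 3}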

Lin and Kernighan's algorithm uses a local repair method. It works like this: start with a configuration such as abcde. Then, for each four-tuple such as abcd, check whether it is faster to go acbd instead, and swap if so. This method actually gets to within 10% of optimality for many practical problems. (p28)
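
A simplified sketch in the spirit of that description (the real Lin-Kernighan algorithm uses more general moves): for each consecutive quadruple a, b, c, d in the tour, swap b and c whenever visiting a, c, b, d is shorter:

  import math

  def dist(p, q):
      return math.hypot(p[0] - q[0], p[1] - q[1])

  def local_repair(tour: list, coords: dict) -> list:
      improved = True
      while improved:
          improved = False
          for i in range(len(tour) - 3):
              a, b, c, d = tour[i:i + 4]
              old = dist(coords[a], coords[b]) + dist(coords[c], coords[d])
              new = dist(coords[a], coords[c]) + dist(coords[b], coords[d])
              if new < old:                   # the b-c leg is the same length either way
                  tour[i + 1], tour[i + 2] = c, b
                  improved = True
      return tour

  coords = {"a": (0, 0), "b": (2, 0), "c": (1, 0), "d": (3, 0)}
  print(local_repair(["a", "b", "c", "d"], coords))   # ['a', 'c', 'b', 'd']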

One problem with local repair methods is that they can get stuck in local optima, perhaps by maximizing only one of the performance variables. Such solutions are often bad. One way to combat this is simulated annealing, which sometimes takes a random repair even if it makes things worse. Doing this can get the algorithm out of a rut.
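
A hedged sketch of that idea: accept every improving repair, but also accept a worsening one with a probability that shrinks as the "temperature" cools:

  import math, random

  def anneal(initial, cost, random_repair, temp=10.0, cooling=0.95, steps=1000):
      current = initial
      for _ in range(steps):
          candidate = random_repair(current)
          delta = cost(candidate) - cost(current)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              current = candidate            # keep improvements, and occasionally worsenings
          temp *= cooling
      return current

  # toy usage: minimize x**2 by random unit steps
  print(anneal(12, lambda x: x * x, lambda x: x + random.choice([-1, 1])))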

In case-based planning, the planner saves previously made plans and their explanations so that later it can reuse parts of them when it encounters similar problems. (p29) By analyzing past experience the planner can also learn new search control rules.

To evaluate plans in stochastic domains, you take a random sample of a plan's possible outcomes. The average of many of these samples gives a fairly accurate estimate of the plan's quality. (p31)
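
A hedged sketch of that sampling idea; simulate_plan here is an invented stand-in for running the plan once in the stochastic domain and scoring the resulting history:

  import random, statistics

  def simulate_plan() -> float:
      # toy stochastic outcome: success (value 10) 80% of the time, failure (-2) otherwise
      return 10.0 if random.random() < 0.8 else -2.0

  def estimate_plan_value(n_samples: int = 1000) -> float:
      return statistics.mean(simulate_plan() for _ in range(n_samples))

  print(estimate_plan_value())   # close to 0.8*10 + 0.2*(-2) = 7.6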

Summary author's notes:

