
Veloso, M. M. & Carbonell, J. G. (1993). Derivational Analogy in PRODIGY: Automating Case Acquisition, Storage, and Utilization. Machine Learning, 10(3):249-278.

@Article{veloso1993derivational,
  author = 	 {Manuela M. Veloso and Jaime G. Carbonell},
  title = 	 {Derivational Analogy in PRODIGY: Automating Case Acquisition, Storage, and Utilization},
  journal = 	 {Machine Learning},
  year = 	 {1993},
  volume = 	 {10},
  number = 	 {3},
  pages = 	 {249--278},
}

Author of the summary: Yaxin Liu, 1999, yxliu@cc.gatech.edu

Cite this paper for:

  * Derivational analogy: replaying and modifying annotated
    problem-solving traces, a form of case-based reasoning
  * Automatic case acquisition from derivational traces in PRODIGY/NoLimit
  * The foot-print similarity metric for case retrieval

Derivational Analogy in PRODIGY: Automating Case Acquisition, Storage, and
Utilization

Manuela M. Veloso (CMU)
Jaime G. Carbonell (CMU)

Machine Learning 10 (1993), 249-278.

Main points:
  * Derivational analogy uses previous problem-solving traces to guide
    later problem solving, making it a kind of case-based method.
  * To do this, the problem solver and the case memory manager are coupled.
    The case memory manager works as in other CBR systems, except that the
    similarity metric can change over time according to how cases fare in
    later problem solving.  The problem solver produces cases annotated
    with justifications and, during replay, checks whether the retrieved
    case can be reused in the new situation.
  * A stored case is the problem-solving trace together with a
    justification at each choice point.  A justification records the other
    alternatives, the reasons for choosing the current one, and/or the
    reasons for rejecting the alternatives.
  * To solve a new problem, the case memory manager provides matched cases.
    The problem solver checks these cases and chooses one as guidance.  The
    case is then replayed; during replay, the problem solver checks whether
    the new situation matches the old one.  If it does, the step is
    duplicated; otherwise the solver either re-establishes the unmatched
    part so the rest of the case can be reused, or starts completely fresh
    problem solving from the current point (see the sketch after this
    list).
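
  A minimal sketch of this coupled loop in Python (names such as
  case_memory.retrieve, solver.replay, and update_metric are illustrative
  assumptions, not PRODIGY's actual Lisp interfaces):

    def solve_with_analogy(problem, case_memory, solver):
        # 1. Retrieval: the case memory manager proposes similar past cases.
        candidates = case_memory.retrieve(problem)
        # 2. The problem solver validates candidates and picks one as guidance.
        case = solver.choose_guidance(candidates, problem)
        if case is None:
            trace = solver.search(problem)        # no usable case: plain search
        else:
            trace = solver.replay(case, problem)  # replay, deviating where needed
        # 3. Acquisition: store the new annotated trace as a case and update
        #    the similarity metric according to how useful the guidance was.
        case_memory.store(trace)
        case_memory.update_metric(case, trace)
        return trace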

1. Introduction
   * "Derivational analogy is a general form of case-based reconstructive
     reasoning that replays and modifies past problem-solving traces to solve
     problems more directly in new but similar situations."
   * Problem solving is a search for a solution in which different
     alternatives are generated and explored, guided by a large amount of
     knowledge.
   * Analogy reuses past experience to guide generation of the solution to
     a new problem, avoiding a completely fresh search effort.
   * "Derivational analogy aims at capturing that extra amount of knowledge
     present at search time, by compiling the justifications at each decision
     point and annotating these at the different steps of the successful
     path."
   * Issues:
     + how to generate cases automatically (the problem solver's job)
     + how to reuse cases by replaying their traces
     + how to modify the similarity metric over time
   * Differences from pure CBR:
     + the underlying problem solver is complete (nonlinear means-ends
       analysis)
     + derivational analogy is domain-independent
     + cases are replayed, not tweaked
     + case memory is dynamically reorganized as the similarity metric
       evolves

2. The Derivational Trace: Case Generation
   * NoLimit is a nonlinear means-ends analysis planner that runs in
     backward-chaining mode.
   * NoLimit must make decisions at the following choice points:
     + what goal to subgoal
     + what operator to choose for the selected goal
     + what bindings to instantiate the chosen operator
     + whether to apply an operator or to continue subgoaling
     + which past choice point to backtrack upon failure
   * Justifications at the choice points can be:
     + user-given guidance
     + preprogrammed control knowledge
     + learned control rules
     + past cases used as guidance
   * At each choice point, the failed alternatives and the reasons for
     their failure are also recorded.  The failure types are:
     + No relevant operators
     + State loop
     + Goal loop
   * In the derivational trace, only three types of choices are relevant:
     + goal choice
     + operator choice
     + application of operators
2.1 Justification structures and decision nodes
    * Goal node: a goal chosen for subgoaling
    * Applied op node: an operator applied to the state
    * Chosen op node: an operator, with bindings, chosen to achieve a goal
      (see the data-structure sketch below)
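
    A possible rendering of these node types as Python dataclasses (the
    field names are assumptions made for this sketch, not PRODIGY's actual
    slot names):

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Justification:
          reason: str                      # why this alternative was chosen
          failed_alternatives: List[str] = field(default_factory=list)
          failure_reasons: List[str] = field(default_factory=list)

      @dataclass
      class GoalNode:                      # a goal chosen for subgoaling
          goal: str
          justification: Justification

      @dataclass
      class ChosenOpNode:                  # operator + bindings chosen for a goal
          goal: str
          operator: str
          bindings: dict
          justification: Justification

      @dataclass
      class AppliedOpNode:                 # operator actually applied to the state
          operator: str
          justification: Justification
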
2.2 An example in a simple transportation domain
    * several objects and one rocket, all to be moved from locality A to
      locality B (the ONE-WAY-ROCKET problem)
    * Load, Unload, and Move operators (encoded below)
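
    A compact STRIPS-style encoding of the three operators (a plausible
    reconstruction for illustration; deleting has-fuel in MOVE is what
    makes the rocket one-way):

      OPERATORS = {
          "LOAD": {
              "pre": [("at", "?obj", "?loc"), ("at", "?rocket", "?loc")],
              "add": [("inside", "?obj", "?rocket")],
              "del": [("at", "?obj", "?loc")],
          },
          "UNLOAD": {
              "pre": [("inside", "?obj", "?rocket"), ("at", "?rocket", "?loc")],
              "add": [("at", "?obj", "?loc")],
              "del": [("inside", "?obj", "?rocket")],
          },
          "MOVE": {
              "pre": [("at", "?rocket", "?from"), ("has-fuel", "?rocket")],
              "add": [("at", "?rocket", "?to")],
              # consuming the fuel means the rocket can never move back
              "del": [("at", "?rocket", "?from"), ("has-fuel", "?rocket")],
          },
      }
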

3. The Derivational Replay: Case Utilization
   * syntactic check: the retrieved step must match the new situation
   * semantic check: the justifications recorded for the step must still
     hold
   * if the semantic check fails, either:
     + replan from the current point, or
     + re-establish the failed condition and continue with the case
   * several cases can be used as guidance, recursively (see the sketch
     after this list)
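
   A sketch of the per-step replay decision (method names such as
   justifications_hold and reestablish are hypothetical stand-ins for the
   checks described above):

     def replay(case, state, solver):
         plan = []
         for step in case.steps:
             if step.matches(state) and step.justifications_hold(state):
                 state = step.apply(state)   # both checks pass: reuse the step
                 plan.append(step)
                 continue
             # Semantic check failed: try to re-establish the broken
             # condition by fresh subgoaling, then continue with the case ...
             repair = solver.reestablish(step, state)
             if repair is None:
                 # ... or abandon the rest of the case and replan from here.
                 return plan + solver.search_from(state)
             plan += repair
             state = solver.result(repair, state)
             state = step.apply(state)
             plan.append(step)
         return plan
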
3.1 Pursuing the ONE-WAY-ROCKET example

4. Case Retrieval: The Similarity Metric
4.1 A direct similarity metric
    * considers predicate matching only (relations of any arity, from
      0-ary upward)
    * matches sets of literals (= predicate + arguments) of the start
      state and goal against those of a candidate case: same predicate,
      argument objects of the same type
    * seeks the object match that maximizes the number of matched literals
      in the start state and goal (a brute-force sketch follows)
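
    A brute-force sketch of the direct metric (it ignores argument types
    and simply tries object mappings; literals are (predicate, arg, ...)
    tuples, and case_objs is assumed to contain at least as many objects
    as new_objs):

      from itertools import permutations

      def direct_similarity(new_lits, case_lits, new_objs, case_objs):
          """Max number of new-problem literals mappable onto case literals."""
          best = 0
          for image in permutations(case_objs, len(new_objs)):
              sub = dict(zip(new_objs, image))       # candidate object mapping
              mapped = {(pred, *[sub.get(a, a) for a in args])
                        for (pred, *args) in new_lits}
              best = max(best, len(mapped & set(case_lits)))
          return best

      # e.g. direct_similarity([("at", "o1", "A")], [("at", "b2", "A")],
      #                        ["o1", "A"], ["b2", "A"]) returns 1
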
4.1.1 Examples in the process-job planning and extended-STRIPS domains
      * speedup factors of about 1.5 and 2.0 in the two domains
4.2 The foot-print similarity metric
    * under the direct metric, irrelevant literals in the start state are
      also matched
    * the foot-print metric traces the relevant literals through the
      derivational trace; only start-state literals that contributed to
      achieving the goal are counted in the match (see the sketch below)
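
    The foot-print variant then differs only in which case literals are
    eligible for matching (reusing direct_similarity from the previous
    sketch; the footprint attribute stands in for walking the trace):

      def footprint_similarity(new_lits, case, new_objs, case_objs):
          # Keep only the start-state literals the derivational trace
          # actually used; irrelevant literals cannot inflate the score.
          relevant = [lit for lit in case.start_lits
                      if lit in case.footprint]
          return direct_similarity(new_lits, relevant, new_objs, case_objs)
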
4.2.1 Further search-reduction examples
      * speedup factors improve to about 2.0 and 2.6
4.3 Trading off retrieval and search costs
    * solving a simple version of the problem first and then applying
      derivational analogy to the complex version can be faster than
      solving the complex problem from scratch

5. Case Storage: Interpreting the Utility of Guidance Provided
   * guidance fully used: generalize the case
   * guidance extended with new steps: differentiate the cases
   * guidance locally/globally divergent: specialize or split the case
     (a sketch of this policy follows)
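
   As a dispatch, the storage policy might look like this (the memory
   manager's method names are hypothetical):

     def update_case_memory(memory, case, usage):
         if usage == "fully-used":
             memory.generalize(case)       # merge with similar cases
         elif usage == "extension":
             memory.differentiate(case)    # refine indices to tell cases apart
         elif usage in ("locally-divergent", "globally-divergent"):
             memory.specialize_or_split(case)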

6. Conclusion

Appendix: The PRODIGY Architecture

A.1 The problem solver
A.2 The learning modules

Summary author's notes:

