http://www.jimdavies.org/summaries/


Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134-1142.

@Article{valiant1984,
  author = 	 "L. G. Valiant",
  title = 	 "A Theory of the Learnable",
  journal = 	 "Communications of the ACM",
  year = 	 "1984",
  volume = 	 "27",
  number = 	 "11",
  pages = 	 "1134-1142"
}

Author of the summary: Samuel R. Collins, 1999, scollins@cc.gatech.edu

Cite this paper for:

    1. The definition of learnability and the bound on the number of examples needed to justify induction
    2. Polynomial-time learnability of bounded-CNF expressions, monotone disjunctive normal form expressions, and expressions in which each variable occurs just once

Summary:

Main idea: How can you trust induction? This paper derives bounds on the number of training examples needed to justify high confidence in what has been learned. If the number of examples exceeds the sample complexity (a quantity introduced in this paper), then the learner can, with high probability, learn to distinguish the positives from the negatives.
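
The paper makes this precise with a function L(h, S): the smallest number of independent Bernoulli trials, each succeeding with probability at least 1/h, such that the probability of seeing fewer than S successes is below 1/h. The paper proves the bound L(h, S) <= 2h(S + log_e h). The following is a minimal Python sketch of the bound alongside a direct computation; the function names are mine, not the paper's.

    import math

    def valiant_bound(h, S):
        """The paper's upper bound on L(h, S): 2h(S + ln h)."""
        return 2 * h * (S + math.log(h))

    def exact_L(h, S):
        """Smallest m such that m Bernoulli(1/h) trials yield fewer than
        S successes with probability below 1/h (worst case p = 1/h)."""
        p = 1.0 / h
        m = S  # at least S trials are needed to see S successes
        while True:
            tail = sum(math.comb(m, i) * p**i * (1 - p)**(m - i)
                       for i in range(S))
            if tail < 1.0 / h:
                return m
            m += 1

    print(valiant_bound(10, 5))  # about 146.05
    print(exact_L(10, 5))        # the exact value, at most the bound above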

 
The learner has access to two routines:

    1. EXAMPLES - produces positive examples that exemplify the concept to be learned
    2. ORACLE - when presented with data, it tells the learner whether or not the data positively exemplify the concept
 

 

A Learning Protocol for Boolean Functions:

    1. EXAMPLES: This has no inputs. It gives as output a vector v such that F(v) = 1. For each such v, the probability that v is output on any single call is D(v).
    2. ORACLE( ): On vector v as input, it outputs 1 or 0 according to whether F(v) = 1 or 0.
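
To make the protocol concrete, here is a minimal Python sketch. The target F (x1 AND x3), the number of variables t, and the uniform choice of D are illustrative assumptions; the paper allows F and D to be arbitrary and unknown to the learner.

    import itertools
    import random

    t = 5                                    # number of Boolean variables
    F = lambda v: v[0] == 1 and v[2] == 1    # hypothetical target concept

    # Enumerate the positive vectors so we can sample from D; here D is
    # taken to be uniform over them, purely for illustration.
    POSITIVES = [v for v in itertools.product((0, 1), repeat=t) if F(v)]

    def EXAMPLES():
        """Takes no input; returns a vector v with F(v) = 1, drawn from D."""
        return random.choice(POSITIVES)

    def ORACLE(v):
        """Given a vector v, returns 1 or 0 according to whether F(v) = 1."""
        return 1 if F(v) else 0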

 

 

Learnability:

A class of programs is learnable with respect to a given protocol if there is an algorithm that, invoking the protocol, deduces with probability at least 1 - 1/h a program g approximating the target F, such that:

    1. The algorithm runs in time polynomial in an adjustable parameter h, in the various parameters that quantify the size of the program to be learned, and in the number of variables t.
    2. For all vectors v, g(v) = 1 implies that F(v) = 1, and the sum of D(v) over all v such that F(v) = 1 but g(v) != 1 is at most 1/h.
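
Condition 2 says that g may never err on negatives, and that the positives it misses must have total weight at most 1/h under D. Below is a brute-force Python check of this condition for explicit F, g, and D; it is a didactic sketch for small t, not part of any algorithm in the paper.

    import itertools

    def meets_condition_2(F, g, D, t, h):
        """Check condition 2 by enumerating all 2^t vectors.

        D maps each length-t 0/1 tuple to its probability; vectors
        absent from D are taken to have probability zero.
        """
        missed_weight = 0.0
        for v in itertools.product((0, 1), repeat=t):
            if g(v) == 1 and F(v) != 1:
                return False                 # a false positive is forbidden outright
            if F(v) == 1 and g(v) != 1:
                missed_weight += D.get(v, 0.0)
        return missed_weight <= 1.0 / h     # missed positives weigh at most 1/h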

 

Conclusions:

The paper proves that the following classes of Boolean programs are learnable in polynomial time:

    1. Conjunctive normal form expressions with a bounded number of literals in each clause (k-CNF)
    2. Monotone disjunctive normal form expressions
    3. Arbitrary expressions in which each variable occurs just once
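
For the first class, the paper's procedure is a simple elimination algorithm that uses calls to EXAMPLES only. A minimal Python sketch, assuming the EXAMPLES routine from the protocol above; the clause representation and parameter names are mine:

    import itertools

    def learn_kcnf(EXAMPLES, variables, k, num_calls):
        """Elimination learning of CNF with at most k literals per clause.

        Start from the conjunction of every clause with at most k
        literals; each positive example deletes the clauses it falsifies.
        """
        literals = ([(i, True) for i in variables] +    # (i, True) is x_i
                    [(i, False) for i in variables])    # (i, False) is NOT x_i
        clauses = set()
        for size in range(1, k + 1):
            for combo in itertools.combinations(literals, size):
                clauses.add(frozenset(combo))

        def satisfies(v, clause):
            return any((v[i] == 1) == positive for (i, positive) in clause)

        for _ in range(num_calls):
            v = EXAMPLES()
            clauses = {c for c in clauses if satisfies(v, c)}

        # g is the conjunction of the surviving clauses
        return lambda v: all(satisfies(v, c) for c in clauses)

Every clause of the target with at most k literals is satisfied by every positive example and so is never deleted; hence the learned g implies F, giving the one-sided error of condition 2. Taking num_calls on the order of L(h, number of candidate clauses) bounds the weight of missed positives by 1/h.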

 

Summary author's notes:



Cognitive Science Summaries Webmaster:

JimDavies (jim@jimdavies.org)

Last modified: Thu Apr 15 11:07:19 EDT 1999