[ CogSci Summaries home | UP | email ]
http://www.jimdavies.org/summaries/

de Garis, H. (1990). Building Artificial Nervous Systems Using Genetically Programmed Neural Network Modules. Machine Learning: Proceedings of the Seventh International Conference, 132--139.

@InProceedings{deGaris1990,
  author    = {de Garis, Hugo},
  title     = {Building Artificial Nervous Systems Using
               Genetically Programmed Neural Network Modules},
  booktitle = {Machine Learning: Proceedings of the Seventh
               International Conference},
  pages     = {132--139},
  year      = {1990}
}

Author of the summary: Joel A. Pettis, 2006

Cite this paper for:

Genetic Programming uses the Genetic Algorithm (GA) to design neural network modules [de Garis 1990]. The programmer specifies the network's characteristics and input parameters, and the GA is used to find both the signs and the values of the weights that provide the desired functionality. Once the weights are found, the network is deemed a GenNet module and can be used as a component in a more complex structure.
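The idea can be sketched with a toy GA that evolves the weights of a tiny fixed-architecture net toward a desired function (here XOR as a stand-in target; the architecture, population size, and mutation scheme are assumptions for illustration, not the paper's actual GenNet parameters):

```python
import math
import random

random.seed(0)

# Toy target function the evolved net should implement (XOR).
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = [0, 1, 1, 0]

N_WEIGHTS = 9  # 2-2-1 net: 6 weights/biases into hidden, 3 into output

def forward(w, x):
    """Evaluate a fixed 2-2-1 feedforward net with tanh units."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    """Higher is better: negative summed squared error over all cases."""
    return -sum((forward(w, x) - t) ** 2 for x, t in zip(INPUTS, TARGETS))

def mutate(w, rate=0.3, scale=0.5):
    """Perturb each weight (sign and magnitude) with some probability."""
    return [wi + random.gauss(0, scale) if random.random() < rate else wi
            for wi in w]

def crossover(a, b):
    """One-point crossover of two weight vectors."""
    cut = random.randrange(1, N_WEIGHTS)
    return a[:cut] + b[cut:]

def evolve(pop_size=60, generations=200):
    """Truncation-selection GA over weight vectors."""
    pop = [[random.uniform(-2, 2) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # keep the top quarter
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()  # 'best' plays the role of an evolved GenNet's weights
```

Once evolved, the weight vector is frozen and the net can be treated as a black-box component, which is the sense in which a GenNet becomes a reusable module.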

GenNets are powerful enough to provide highly time-dependent control [2]. This is shown using a simple pair of stick legs that are taught to walk:

In the first experiment, no constraint was imposed on the motion of the stick legs. The resulting motion was un-lifelike, but the legs still managed to learn to move to the right.

In the second experiment, the hip joint was restricted to remain above the floor. The evolution was slower, and the resulting motion was almost as un-lifelike as in the first experiment.

In the third experiment, a full set of constraints was imposed to ensure a lifelike walking motion. The result was that the legs tried to take the longest steps possible and in turn "did the splits," which halted the evolution. This was a valuable lesson in how carefully the quality measure must be chosen: the quality measure was changed to the product of the number of net positive steps taken to the right and the distance covered.

The resulting GenNet was then used in a third phase, which used the distance covered as the quality measure in order to increase the step size. The result was a definite stepping motion with long strides.
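The effect of the two quality measures can be sketched as follows (a toy illustration: the gait numbers are invented, and the paper's exact formulation may differ):

```python
def quality_distance(distance):
    """Distance-only measure: rewards one giant step ('doing the splits')
    exactly as much as a sustained walk covering the same ground."""
    return distance

def quality_steps_times_distance(net_positive_steps, distance):
    """Revised measure: product of net positive steps taken to the right
    and the distance covered, so repeated stepping is rewarded too."""
    return net_positive_steps * distance

# Two hypothetical gaits covering the same distance (invented numbers):
splits = quality_steps_times_distance(1, 2.0)  # one huge 'step', then stuck
walk = quality_steps_times_distance(8, 2.0)    # eight smaller steps
```

Under the distance-only measure both gaits score identically, which is why the splits behaviour could dominate; the product measure ranks the walking gait strictly higher.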

    The success of these experiments raises the prospect of building artificial nervous systems ("brain building"), i.e. combining functional and control GenNets to build simple brains.

    By combining the actions of many GenNets, it seems likely that it will be possible to build artificial nervous systems or "nanobrains."

    A concrete proposal is presented showing how GenNets can be combined to form a functioning artificial nervous system; however, the implementation has not been completed.

    A lizard-like creature named LIZZY was chosen to illustrate this endeavour.

    The power and fun of GenNets is that one can evolve a motion without specifying in detail how the motion is to be performed.

    Implementing many different GenNets makes one conscious of the need for Genetic Programming tools. Initially this could take the form of a software package, but in the age of VLSI one can readily imagine VLSI chips being designed to perform the same task as the software package, but much faster. De Garis calls this tool a Darwin Machine. [5]

    With suitable Darwin Machines, one can imagine teams of Genetic Programmers undertaking large scale projects to build much more sophisticated nervous systems with many thousands of GenNets. [5]

    There are, however, some limitations to the Genetic Programming approach. They fall into two types: intra-GenNet and inter-GenNet [6], and the paper draws several lessons about the intra-GenNet limitations.

    Inter-GenNet limitations are even more of a challenge. How would one know how to put together a machine with billions of GenNets? De Garis wants to try three things:

    1. Give GenNets the capacity to learn
    2. Attempt to Genetically Program robots
    3. Attempt to get the Genetic Algorithm itself to connect the GenNets
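The third item might look something like evolving a bit-matrix that wires pre-evolved modules together (purely illustrative: the paper gives no concrete encoding, and the target wiring and fitness function here are made-up placeholders):

```python
import random

random.seed(2)

N_MODULES = 4  # pretend we have four pre-evolved GenNet modules

def random_wiring():
    """Chromosome: flat bit list; bit i*N+j == 1 means module i feeds j."""
    return [random.randint(0, 1) for _ in range(N_MODULES * N_MODULES)]

def fitness(wiring):
    """Toy objective: reward the feed-forward chain 0->1->2->3 and
    penalize every spurious connection (placeholder, not the paper's)."""
    wanted = {(0, 1), (1, 2), (2, 3)}
    score = 0
    for i in range(N_MODULES):
        for j in range(N_MODULES):
            on = wiring[i * N_MODULES + j]
            score += on if (i, j) in wanted else -on
    return score

def mutate(wiring, rate=0.1):
    """Flip each connection bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in wiring]

# Elitist GA over wirings: the GA itself decides how modules connect.
pop = [random_wiring() for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(random.choice(elite)) for _ in range(30)]

best = max(pop, key=fitness)
```

The point of the sketch is only that connectivity, like weights, can itself be a chromosome, which is what letting "the Genetic Algorithm itself connect the GenNets" would amount to.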

    With these things in place, one can imagine populations of simulated "creatures" competing in an environment consisting of other creatures, all of which are "constructed" with the GA. The fitness measure would be truly Darwinian. [6]

    With nanotechnology, billions of nanorobots could function in parallel in a molecular environment, and report back to some central molecular processor, which determines the next generation. [6]

    Summary author's notes:


    Back to the Cognitive Science Summaries homepage
    Cognitive Science Summaries Webmaster:
    JimDavies (jim@jimdavies.org)
    Last modified: Fri Feb 10 09:04:20 EDT 2006