
Hampson, P. J. & Morris, P. E. (1996). Chapter 13: Connectionism. In Understanding Cognition. Blackwell, Cambridge, MA.


Author of the summary: Debajyoti Pati, 2000, gte811q@prism.gatech.edu

Neuropsychology and artificial intelligence take different approaches to modeling mental functions. Neuropsychology studies the link between the mind and the brain. Artificial intelligence treats the brain as a general-purpose symbol processor and thus separates the study of the mind from the study of the brain: the mind is examined without concern for the brain's structure. Connectionism blends aspects of neuropsychology with aspects of artificial intelligence and offers a new approach to explaining human cognitive activity. While its primary aim is to simulate normal cognitive activity, it also recognizes that neural mechanisms both support mental life and help shape it. Unlike the symbolic paradigm, connectionism does not use rules or symbols to explain the mind. It relies instead on the combined activity of numerous simple, interlinked, neuron-like processing units. The number, pattern, and strength of the links between these units are the primary variables in explaining various mental activities.

Connectionism, although not a new idea, gained strength from several neurologically plausible models of cognition that appeared in psychology and computer science during the 1940s and 1950s. Hebb (1949) proposed the idea of reverberating circuits, which influenced theories of memory consolidation. He argued that learning involves strengthening the connections between elements of a neural network, and modified versions of Hebb's idea have since been incorporated into the general theory of connectionism. In this modified version, learning involves changing the strengths of the connections among a large number of simple processing units. Lashley (1950) proposed that separate memories do not occupy separate locations in the brain; this became one of the crucial ideas in modern connectionist theories, in which representations are distributed rather than localized. Similarly, Rosenblatt's (1962) theory of neurodynamics, which described a class of simple parallel learning mechanisms called "perceptrons," gave further momentum to the connectionist movement.
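To make the modified Hebbian idea concrete, here is a minimal sketch in Python. The exact form of the update rule, the learning rate, and the activation values are invented for illustration and are not taken from the chapter.

    # Hebbian-style weight change between two simple processing units.
    # All numbers here are illustrative only.

    def hebbian_update(weight, pre_activation, post_activation, learning_rate=0.1):
        """A connection strengthens in proportion to the joint activity
        of the units on either side of it."""
        return weight + learning_rate * pre_activation * post_activation

    # Two units that are repeatedly active together end up strongly connected.
    w = 0.0
    for _ in range(10):
        w = hebbian_update(w, pre_activation=1.0, post_activation=1.0)
    print(w)  # 1.0 after ten co-activations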

Connectionism nonetheless remained on the sidelines until the 1970s and 1980s, when powerful arguments for parallel processing in the brain were revived. The development of computer hardware and of several robust connectionist models was also crucial to this rejuvenation. McClelland's (1981) memory model was one of the more recent models incorporating parallel processing and other important aspects of connectionism; it represents information about several general concepts and about specific instances of them. The model is content addressable: information can be accessed from any attribute of a concept. It can also produce general properties from a set of particular instances, which is close to human skill in similar tasks.
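To illustrate what content addressability and generalization from instances mean, here is a toy Python sketch. It uses a plain dictionary rather than an interactive network, so it shows only the behavior the summary describes, not the mechanism of McClelland's model; the names and attributes below are invented.

    # Illustrative only: content-addressable retrieval and generalization
    # over a handful of made-up instances.

    instances = {
        "Art":  {"group": "A", "age": "20s", "job": "clerk"},
        "Rick": {"group": "B", "age": "30s", "job": "teacher"},
        "Sam":  {"group": "A", "age": "20s", "job": "driver"},
    }

    def retrieve(**cue):
        """Content addressability: any attribute can serve as the retrieval cue."""
        return [name for name, attrs in instances.items()
                if all(attrs.get(k) == v for k, v in cue.items())]

    def typical(group):
        """Generalization: read off the properties shared by the known instances."""
        members = [a for a in instances.values() if a["group"] == group]
        shared = {}
        for key in ("age", "job"):
            values = {m[key] for m in members}
            if len(values) == 1:
                shared[key] = values.pop()
        return shared

    print(retrieve(job="teacher"))  # ['Rick'] -- accessed by an attribute, not by name
    print(typical("A"))             # {'age': '20s'} -- a general property of the group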

All connectionist models share several properties. Each consists of a set of interconnected simple processing units of three types: input units, output units, and hidden units. Input units accept information either from the sensory channels or from other parts of the network. Output units send information out, either controlling behavior directly or passing the information to other parts of the network. Hidden units communicate with the input and output units but not with the external environment, so their behavior cannot be observed from outside the system. In a distributed model the units may represent part features, attributes, or more abstract elements, but rarely whole meaningful entities; the whole entity emerges when the units interact. Most of the important work in a connectionist model occurs in the units and their connections, and, unlike in a symbolic system, there is no central control or executive monitoring the system. Processing and representation are thus spread across the entire network.

Every unit has an activation value, a level of activity that determines how much activation the unit passes on to its neighboring units; a unit must reach a minimum level of activity before it produces an output. The connections between units can support data-driven or concept-driven processing. In data-driven processing, information flows from relatively raw representations toward more highly interpreted ones; the opposite holds for concept-driven processing. Connectionist schemes therefore arrange units in several levels, and in some cases interaction between high-level and low-level units is allowed. Another important concept is the connection weight. Connection weights alter the size of the input arriving at a receiving unit and can change the type of input from excitatory to inhibitory or vice versa. The inputs converging on a unit thus add to, reduce, or leave unchanged the activation value of the receiving unit. The final pattern of activation emerges when a dynamic equilibrium is reached across all the units.

Connectionist models, like human cognitive processes, change continuously while learning at the same time. Learning takes place when the overall pattern of activation alters, which can occur through removing old connections, forming new connections, or changing connection weights. Numerous learning rules have been developed to enable networks to modify their behavior, and connectionist models commonly use some standard (a "teacher") against which to measure learning. Pattern associators are common models that incorporate many of these connectionist principles. They essentially show how people might link patterns together: they can associate two sets of stimuli or ideas as well as two patterns (memory with memory, or stimulus with response).
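Finally, here is a deliberately minimal Python sketch of a pattern associator assembled from the ingredients above: weighted connections, summed activation, an output threshold, and a Hebbian-style weight change. The patterns, learning rate, and threshold are invented for illustration and are not the chapter's own example.

    # A tiny pattern associator: learn to map a stimulus pattern onto a
    # response pattern, then recall the response from a cue.

    def learn(weights, input_pattern, output_pattern, rate=0.5):
        """Hebbian-style learning: each weight grows with the product of the
        activations of the input and output units it connects."""
        for i, x in enumerate(input_pattern):
            for j, y in enumerate(output_pattern):
                weights[i][j] += rate * x * y

    def recall(weights, input_pattern, threshold=0.5):
        """Each output unit sums its weighted inputs and fires only if the
        total activation reaches the threshold."""
        n_out = len(weights[0])
        totals = [sum(weights[i][j] * x for i, x in enumerate(input_pattern))
                  for j in range(n_out)]
        return [1 if t >= threshold else 0 for t in totals]

    stimulus = [1, 0, 1, 0]
    response = [0, 1, 1]
    weights = [[0.0] * len(response) for _ in stimulus]
    learn(weights, stimulus, response)

    print(recall(weights, stimulus))      # [0, 1, 1] -- the learned response
    print(recall(weights, [1, 0, 0, 0]))  # the partial cue also evokes [0, 1, 1]

Because the association is stored across the whole set of weights rather than at a single location, even the partial cue at the end still evokes the learned response.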

