I propose a cognitive theory that models how visual representations are used in analogical problem solving.
Participants have reported that the description of the roads in the fortress problem as ``radiating like the spokes of a wheel'' was helpful in solving the tumor problem (Gick & Holyoak 1980). My theory explains why: the fortress scene, viewed from above, looks like rays hitting a tumor, and this visual resemblance facilitates several steps of the analogy. Other accounts of the fortress/tumor problem cannot explain these data.
My theory shows that visual knowledge alone, with no amodal knowledge, is sufficient to enable analogical transfer. This supports the central hypothesis of my work. It also suggests a computational model of analogy based on dynamic visual knowledge that complements traditional models based on amodal knowledge.
Although Galatea does not yet address retrieval and mapping, taken together with the other work described in the previous section, it allows us to conjecture more confidently that visual knowledge alone can enable retrieval, mapping, and transfer in analogy.
My theory represents visual knowledge symbolically, in the form of symbolic images made of visual elements and transformations. The symbolic representation provides the standard benefits of discreteness, abstraction, ordering, and composition. Although sequences of lower-level bitmap representations also capture ordering, by themselves they neither capture the abstractions that enable noticing visual similarity nor support transformations on the images. My theory provides additional evidence that symbolic representations of visual images are necessary for analogy. This finding is important because visual reasoning is often thought to be a sub-symbolic process; if my theory is correct, even visual reasoning is symbolic in analogical transfer.
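To make the notion of a symbolic image concrete, the minimal Python sketch below shows one way such a representation could be encoded. The class, element, and attribute names are hypothetical illustrations chosen for this example, not Galatea's actual visual vocabulary: a symbolic image is a set of discrete visual elements with abstract shape categories, and a transformation such as decomposition maps one image to the next frame, giving the sequence its ordering.

\begin{verbatim}
from dataclasses import dataclass, field


@dataclass
class VisualElement:
    """A discrete visual symbol, e.g. a circle standing for the fortress."""
    name: str          # identifier within the image, e.g. "army"
    shape: str         # abstract shape category, e.g. "circle", "arrow"
    attributes: dict = field(default_factory=dict)   # e.g. {"thickness": "thick"}


@dataclass
class SymbolicImage:
    """One frame in an ordered sequence of symbolic images."""
    elements: list


def decompose(image, element_name, parts):
    """A transformation: replace one element with several smaller ones.

    Applying a transformation to a whole image yields the next frame, which
    gives the sequence its ordering; the abstract shape categories are what
    let two different scenes look visually similar.
    """
    kept = [e for e in image.elements if e.name != element_name]
    return SymbolicImage(elements=kept + parts)


# The fortress problem as two frames: one large army is decomposed into
# several small groups approaching the fortress along different roads.
frame0 = SymbolicImage([
    VisualElement("fortress", "circle"),
    VisualElement("army", "arrow", {"thickness": "thick"}),
])
frame1 = decompose(
    frame0, "army",
    [VisualElement(f"group-{i}", "arrow", {"thickness": "thin"}) for i in range(4)],
)
\end{verbatim}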
In this dissertation I will expand on my previous work. I will further flesh out the role of visual information in analogical problem solving by creating a theory and an implemented agent that demonstrate how solving problems posed in non-visual representations can be aided by instantiating them visually and reasoning over the resulting visual representations. Because the visual and non-visual systems will use the same analogical problem-solving machinery, the comparison in some sense controls for differences in processing.
I will present further evidence that visual information is useful for problem solving, and that it is especially useful when there are symbolic mismatches for entities and manipulations at the non-visual level. I will also flesh out a visual language and a theory of visual instantiation that shows how to derive a visual representation from a non-visual one. I will show that my theory works for a number of examples, two of which have data from multiple experimental participants. I will evaluate the theory by explaining these data, making psychological predictions, comparing it to non-visual theories, implementing it in a program, and experimenting with that program.
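As an illustration of what visual instantiation could amount to, the sketch below (building on the hypothetical classes above) derives a symbolic image of the tumor problem from a small set of non-visual propositions. The correspondence table and predicate names are assumptions made for the sake of the example, not the proposed visual language itself.

\begin{verbatim}
# Hypothetical correspondence table: non-visual predicate -> visual shape.
VISUAL_LEXICON = {
    "located-at-center": "circle",   # e.g. the tumor, the fortress
    "moves-toward":      "arrow",    # e.g. a ray, an advancing army
    "surrounds":         "ring",     # e.g. healthy tissue, the villages
}


def instantiate(propositions):
    """Build a symbolic image from (predicate, entity) propositions.

    Entities whose predicates have no visual counterpart are simply left
    out of the image; once the problem is re-represented visually, the
    analogy no longer depends on the non-visual symbols matching.
    """
    return SymbolicImage([
        VisualElement(entity, VISUAL_LEXICON[pred])
        for pred, entity in propositions
        if pred in VISUAL_LEXICON
    ])


# The tumor problem, stated non-visually, instantiated as a symbolic image.
tumor_image = instantiate([
    ("located-at-center", "tumor"),
    ("moves-toward", "ray"),
    ("surrounds", "healthy-tissue"),
])
\end{verbatim}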
The expected results of this proposed work are as follows. I will have developed a language of visual primitives for representing physical domains, along with processes for using them to solve problems analogically. The community will have a better idea of the conditions under which visual representations are useful for solving problems, especially in contrast with non-visual SBF representations.
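The following sketch, again using the hypothetical classes above, illustrates what one such process could look like: visual elements in the source and target images are aligned by abstract shape, and the source's decomposition is then replayed on the aligned target element, transferring the fortress solution to the tumor problem. The alignment and transfer procedures here are simplified placeholders, not the processes the dissertation will develop.

\begin{verbatim}
def align(source, target):
    """Map source elements to target elements that share an abstract shape."""
    mapping, used = {}, set()
    for s in source.elements:
        for t in target.elements:
            if t.shape == s.shape and t.name not in used:
                mapping[s.name] = t.name
                used.add(t.name)
                break
    return mapping


def transfer_decomposition(src_before, src_after, target):
    """Replay the source's decomposition on the analogous target element."""
    mapping = align(src_before, target)
    before_names = {e.name for e in src_before.elements}
    after_names = {e.name for e in src_after.elements}
    for s in src_before.elements:
        if s.name not in after_names and s.name in mapping:
            # The new elements introduced by the source transformation...
            parts = [e for e in src_after.elements if e.name not in before_names]
            # ...are re-created around the mapped target element.
            new_parts = [VisualElement(f"{mapping[s.name]}-{i}", p.shape, dict(p.attributes))
                         for i, p in enumerate(parts)]
            return decompose(target, mapping[s.name], new_parts)
    return target


# The fortress solution transferred to the tumor problem: the single strong
# ray becomes several weak rays converging on the tumor.
tumor_solution = transfer_decomposition(frame0, frame1, tumor_image)
\end{verbatim}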