Towards Instructable Connectionist Systems
At least three disparate channels have been used to install new knowledge into artificial intelligence systems. The first of these is the programmer channel, through which the knowledge in the system is simply edited to include the desired new knowledge. While this method is often effective, it may not be as efficient as learning directly from environmental interaction. The second channel may be called the linguistic channel, through which knowledge is added by explicitly telling the system facts or commands encoded as strings of quasi-linguistic instructions in some appropriate form. Finally, there is, for want of a better phrase, the learning channel, through which the system learns new knowledge in an inductive way via environmental observations and simple feedback information. These latter two channels are the ones upon which we wish to focus, as they are the hallmarks of instructable systems. Most instructable systems depend upon, or at least heavily favor, one of these two channels for the bulk of their knowledge acquisition. Specifically, symbolic artificial intelligence systems have generally depended upon the explicit use of sentential logical expressions, rules, or productions for the transmission of new knowledge to the system. In contrast, many connectionist network models have relied solely on inductive generalization mechanisms for knowledge creation. There is no apparent reason to believe that this rough dichotomy of technique is necessary, however. Systems which are capable of receiving detailed instruction and also generalizing from experience are both possible and potentially very useful.
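The three channels can be made concrete with a toy sketch. The following is purely illustrative and not drawn from the paper: a linear threshold unit whose knowledge can arrive by direct weight editing (programmer channel), by compiling a quasi-linguistic rule string into a weight change (linguistic channel), or by a perceptron-style update from observed examples and feedback (learning channel). The feature names, instruction grammar, and update rule are all assumptions chosen for brevity.

```python
# Hypothetical toy contrasting the three knowledge channels described above.
# Feature names, the instruction format, and the learning rule are illustrative.

FEATURES = ["red", "round", "small"]

class ToyClassifier:
    """Linear threshold unit over named binary features."""

    def __init__(self):
        self.weights = {f: 0.0 for f in FEATURES}
        self.bias = 0.0

    def predict(self, example):
        s = sum(self.weights[f] for f in example) + self.bias
        return 1 if s > 0 else 0

    # Channel 1: programmer channel -- weights edited directly.
    def set_weight(self, feature, value):
        self.weights[feature] = value

    # Channel 2: linguistic channel -- an instruction of the form
    # "if <feature> then positive|negative" is compiled into a weight change.
    def instruct(self, instruction):
        words = instruction.split()
        feature, polarity = words[1], words[3]
        self.weights[feature] += 1.0 if polarity == "positive" else -1.0

    # Channel 3: learning channel -- inductive update from an observed
    # example and simple feedback (the desired label).
    def learn(self, example, label, rate=0.5):
        error = label - self.predict(example)
        for f in example:
            self.weights[f] += rate * error
        self.bias += rate * error

clf = ToyClassifier()
clf.instruct("if red then positive")   # told a rule explicitly
clf.learn(["round", "small"], 1)       # generalizes from feedback
print(clf.predict(["red"]))            # → 1
```

The point of the sketch is only that nothing forces a system to use one channel exclusively: the same weight vector is the target of explicit instruction and of inductive adjustment.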
Keywords: Hidden Layer · Processing Element · Activation Space · Instruction Sequence · Instructable System