Towards Instructable Connectionist Systems

  • David C. Noelle
  • Garrison W. Cottrell
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 292)


At least three disparate channels have been used to install new knowledge into artificial intelligence systems. The first of these is the programmer channel, through which the system's knowledge is simply edited to include the desired new knowledge. While this method is often effective, it may not be as efficient as learning directly from environmental interaction. The second may be called the linguistic channel, through which knowledge is added by explicitly telling the system facts or commands, encoded as strings of quasi-linguistic instructions in some appropriate form. Finally, there is, for want of a better phrase, the learning channel, through which the system acquires new knowledge inductively, via environmental observations and simple feedback. These latter two channels are the ones upon which we wish to focus, as they are the hallmarks of instructable systems. Most instructable systems depend upon, or at least heavily favor, one of these two channels for the bulk of their knowledge acquisition. Specifically, symbolic artificial intelligence systems have generally depended upon the explicit use of sentential logical expressions, rules, or productions for the transmission of new knowledge, while many connectionist network models have relied solely on inductive generalization mechanisms for knowledge creation. There is no apparent reason to believe that this rough dichotomy of technique is necessary, however. Systems which are capable of both receiving detailed instruction and generalizing from experience are possible and potentially very useful.
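The three channels can be made concrete with a minimal sketch. The toy class below is purely illustrative (the `Knower` class, its methods, and the instruction format are hypothetical, not from the chapter): the same piece of knowledge can arrive by direct programming, by a quasi-linguistic instruction, or by induction from observed examples.

```python
class Knower:
    """Toy system whose 'knowledge' is a mapping from inputs to outputs.

    Illustrative only: a real connectionist system would store this
    knowledge in network weights rather than an explicit table.
    """

    def __init__(self):
        self.rules = {}

    def program(self, key, value):
        # Programmer channel: knowledge is edited in directly.
        self.rules[key] = value

    def tell(self, instruction):
        # Linguistic channel: parse a quasi-linguistic instruction,
        # assumed here to have the fixed form "map <key> to <value>".
        _, key, _, value = instruction.split()
        self.rules[key] = value

    def learn(self, examples):
        # Learning channel: induce knowledge from (input, output)
        # observations; here, trivially, by memorization.
        for key, value in examples:
            self.rules[key] = value

    def answer(self, key):
        return self.rules.get(key)


k = Knower()
k.program("green", "go")          # programmer channel
k.tell("map red to stop")         # linguistic channel
k.learn([("amber", "slow")])      # learning channel
print(k.answer("red"))            # -> stop
```

An instructable connectionist system, in the chapter's sense, is one in which the latter two channels coexist: instructions and inductive experience both shape the same underlying representation.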







Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • David C. Noelle
  • Garrison W. Cottrell

Department of Computer Science and Engineering, Institute for Neural Computation, University of California, San Diego, La Jolla
