Abstract
We report a series of experiments on connectionist learning that addresses a particularly pressing set of objections to the plausibility of connectionist learning as a model of human learning. Connectionist models have typically suffered from rather severe problems of inadequate generalization (where generalizations are significantly fewer than training inputs) and interference of newly learned items with previously learned items. Taking a cue from the domains in which human learning dramatically overcomes such problems, we show that connectionist learning can indeed escape these problems in combinatorially structured domains. In the simple combinatorial domain of letter sequences, we find that a basic connectionist learning model trained on 50 six-letter sequences can correctly generalize to about 10,000 novel sequences. We also discover that the model exhibits over 1,000,000 virtual memories: new items which, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. We conclude that connectionist learning is not as harmful to the empiricist position as previously reported experiments might suggest.
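The kind of learning setup the abstract describes can be sketched as a small auto-associative network trained by backpropagation on one-hot letter encodings, and tested by whether thresholding its output reproduces the input. This is only an illustrative sketch: the alphabet, network size, learning rate, and five-sequence training set below are assumptions for a runnable toy, not the actual model or the 50-sequence training regime used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

LETTERS = "abcd"           # tiny alphabet, purely for illustration
SEQ_LEN = 6
K = len(LETTERS)
N_IN = K * SEQ_LEN         # one one-hot slot per sequence position

def encode(seq):
    """Flat one-hot encoding: position i, letter c -> unit i*K + index(c)."""
    v = np.zeros(N_IN)
    for i, c in enumerate(seq):
        v[i * K + LETTERS.index(c)] = 1.0
    return v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer of logistic units, trained to auto-associate its input.
H = 12
W1 = rng.normal(0.0, 0.1, (N_IN, H))
W2 = rng.normal(0.0, 0.1, (H, N_IN))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def total_error(patterns):
    return sum(np.sum((forward(x)[1] - x) ** 2) for x in patterns)

train_seqs = ["abcabc", "bcdbcd", "cdacda", "dabdab", "aabbcc"]
patterns = [encode(s) for s in train_seqs]

err_before = total_error(patterns)
for _ in range(2000):               # plain stochastic gradient descent
    for x in patterns:
        h, y = forward(x)
        dy = (y - x) * y * (1 - y)  # output delta for squared-error loss
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= 0.5 * np.outer(h, dy)
        W1 -= 0.5 * np.outer(x, dh)
err_after = total_error(patterns)

# An item counts as recalled if thresholding the output recovers the input;
# generalization would be measured the same way on sequences never trained on.
recalled = [np.array_equal((forward(x)[1] > 0.5).astype(float), x)
            for x in patterns]
print(err_before, err_after, sum(recalled))
```

Counting how many novel letter combinations pass the same recall test, relative to the size of the training set, is the style of measurement behind the generalization and virtual-memory figures reported above.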
© 1990 Springer-Verlag Berlin, Heidelberg
Cite this paper
Brousse, O., Smolensky, P. (1990). Connectionist Generalization and Incremental Learning in Combinatorial Domains. In: Haken, H., Stadler, M. (eds) Synergetics of Cognition. Springer Series in Synergetics, vol 45. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-48779-8_4
Print ISBN: 978-3-642-48781-1
Online ISBN: 978-3-642-48779-8