NLRFLA: A Supervised Learning Algorithm for the Development of Non-Linear Receptive Fields

  • S. L. Funk
  • I. Kumazawa
  • J. M. Kennedy
Conference paper

Abstract

The non-linear receptive field (NLRF) neural network consists of a homogeneous, uniformly distributed series of locally connected non-linear receptive fields. Each receptive field exploits a set of local connections, with weights that are symmetrical around the center of the receptive field. The non-linear behaviour is the result of three properties of the network: first, activation is accumulated in the output-layer units; second, activation is fed back recurrently from the output layer onto itself; and third, the receptive fields overlap. This non-linear nature allows the network to perform relatively complex tasks in spite of its simple architecture. The non-linear receptive field learning algorithm (NLRFLA) provides a way of finding the optimal set of connection weights for a given problem; it is essentially a recurrent backpropagation learning algorithm with some special conditions.
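
The paper itself gives no code, but the architecture described above can be sketched concretely. The Python/NumPy fragment below is a minimal illustration under stated assumptions, not the authors' implementation: it builds a one-dimensional layer of overlapping receptive fields whose weights are shared and mirrored about each field's center, relaxes the recurrent output-to-output feedback toward a fixed point, and trains by backpropagating through the unrolled relaxation. All names (forward, train_step, kernel_grad), the kernel radii, the learning rate, and the toy input and target are invented for the example; the actual NLRFLA is a Pineda/Almeida-style recurrent backpropagation computed at the fixed point, and the paper's accumulation of activation in the output units is simplified here to a conventional recurrent update.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def symmetric_kernel(half):
        # half = [centre, 1-off, 2-off, ...]; mirror it about the centre,
        # e.g. [a, b, c] -> [c, b, a, b, c], matching the weight symmetry
        # the abstract describes.
        return np.concatenate([half[:0:-1], half])

    def local_conv(signal, kernel):
        # 'same'-mode convolution: every unit sees a local patch, and
        # neighbouring units' patches overlap.
        return np.convolve(signal, kernel, mode="same")

    def forward(x, w_half, v_half, n_steps):
        # Relax the recurrent dynamics y <- sigmoid(W*x + V*y) toward a
        # fixed point. W couples each output unit to a local input patch;
        # V feeds the output layer's activation back onto itself, also locally.
        w, v = symmetric_kernel(w_half), symmetric_kernel(v_half)
        y = np.zeros_like(x)
        ys = [y]
        for _ in range(n_steps):
            y = sigmoid(local_conv(x, w) + local_conv(y, v))
            ys.append(y)
        return ys  # keep the trajectory so we can backprop through it

    def kernel_grad(d_net, signal, radius):
        # dE/d kernel[k] = sum_i d_net[i] * signal[i + radius - k] (zero-padded).
        padded = np.pad(signal, radius)
        n = len(signal)
        return np.array([padded[2 * radius - k : 2 * radius - k + n] @ d_net
                         for k in range(2 * radius + 1)])

    def fold_symmetric(grad_full, radius):
        # Collapse a full-kernel gradient onto the shared mirrored parameters.
        g = grad_full[radius:].copy()
        g[1:] += grad_full[radius - 1::-1]
        return g

    def train_step(x, target, w_half, v_half, lr=0.2, n_steps=15):
        r_w, r_v = len(w_half) - 1, len(v_half) - 1
        v = symmetric_kernel(v_half)
        ys = forward(x, w_half, v_half, n_steps)
        g = ys[-1] - target                     # dE/dy at the settled output
        gw = np.zeros(2 * r_w + 1)
        gv = np.zeros(2 * r_v + 1)
        for t in range(n_steps - 1, -1, -1):    # backprop through the unrolling
            d_net = g * ys[t + 1] * (1.0 - ys[t + 1])  # sigmoid'(net) = y(1-y)
            gw += kernel_grad(d_net, x, r_w)
            gv += kernel_grad(d_net, ys[t], r_v)
            # A symmetric kernel equals its own flip, so the backward pass
            # reuses the forward feedback kernel v unchanged.
            g = local_conv(d_net, v)
        w_half -= lr * fold_symmetric(gw, r_w)  # gradient-descent update
        v_half -= lr * fold_symmetric(gv, r_v)
        return 0.5 * np.sum((ys[-1] - target) ** 2)

    # Toy usage (all values invented): a 16-unit layer, a "bar" input, and a
    # target asking the network to respond at the bar's central axis.
    rng = np.random.default_rng(0)
    w_half = rng.normal(scale=0.1, size=3)      # radius-2 input receptive field
    v_half = rng.normal(scale=0.1, size=2)      # radius-1 recurrent feedback
    x = np.zeros(16); x[5:11] = 1.0
    target = np.zeros(16); target[8] = 1.0
    for epoch in range(200):
        loss = train_step(x, target, w_half, v_half)
    print(f"final loss: {loss:.4f}")

A fixed-point recurrent backpropagation of the kind the paper builds on would instead relax an adjoint system at the settled state, avoiding the stored trajectory; the unrolled form above trades that efficiency for a gradient that is easy to verify in a few lines.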

Keywords

Receptive field, Input pattern, Connection weight, Supervised learning algorithm, Local connection



Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • S. L. Funk (1)
  • I. Kumazawa (1)
  • J. M. Kennedy (2)
  1. Kumazawa Laboratory, Department of Computer Science, Tokyo Institute of Technology, Tokyo 152, Japan
  2. Department of Psychology, University of Toronto, Scarborough College, Canada
