
Weighted Nonlinear Line Attractor for Complex Manifold Learning

  • Conference paper in Computational Intelligence (IJCCI 2015)
  • Part of the book series: Studies in Computational Intelligence (SCI, volume 669)


Abstract

An artificial neural network models synaptic connections by weighting the links between neurons. The nonlinear line attractor (NLA) network models this weighting architecture with a polynomial weight set, which provides stronger connections between neurons. We initially sought to weight these connections by proximity, using a Gaussian weighting over neighboring neurons, which should reduce computational time significantly. We found, however, that weighting the connections by the error in estimating the output neurons yields better results than proximity-based weighting. The polynomial weights trained into the network are then reduced by a nonlinear dimensionality reduction that preserves the locality of the weights, since the weights are Gaussian weighted, and a distance measure is used to compare test and training data. Experiments show that the proposed weighted NLA algorithm provides better recognition than both the Gaussian NLA (GNLA) algorithm and the original NLA algorithm.
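The two Gaussian weighting strategies contrasted in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names, the one-dimensional neuron indexing used as "proximity", and the bandwidth parameter `sigma` are all assumptions.

```python
import numpy as np

def proximity_weights(n_neurons, sigma=2.0):
    """Gaussian weighting by proximity: connections between nearby
    neurons (here, nearby in index) get weights near 1, distant ones near 0."""
    idx = np.arange(n_neurons)
    dist = idx[:, None] - idx[None, :]          # pairwise index distances
    return np.exp(-dist**2 / (2.0 * sigma**2))  # (n_neurons, n_neurons) matrix

def error_weights(est_error, sigma=1.0):
    """Gaussian weighting by estimation error: connections to output
    neurons estimated well (small error) are weighted strongly."""
    err = np.asarray(est_error, dtype=float)
    return np.exp(-err**2 / (2.0 * sigma**2))

# Example: 5 neurons, proximity weighting
W = proximity_weights(5)
print(W.shape)   # (5, 5)
print(W[0, 0])   # 1.0 (self-connection, zero distance)
```

Either matrix can then modulate the polynomial weight set, emphasizing some neuron-to-neuron connections over others; per the abstract, the error-based variant is the one that performed best.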



Author information

Correspondence to Theus H. Aspiras.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Aspiras, T.H., Asari, V.K., Sakla, W. (2017). Weighted Nonlinear Line Attractor for Complex Manifold Learning. In: Merelo, J.J., et al. Computational Intelligence. IJCCI 2015. Studies in Computational Intelligence, vol 669. Springer, Cham. https://doi.org/10.1007/978-3-319-48506-5_19


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-48504-1

  • Online ISBN: 978-3-319-48506-5

  • eBook Packages: Engineering (R0)
