
Perceptrons

  • Ke-Lin Du
  • M. N. S. Swamy
Chapter

Abstract

This chapter introduces the simplest form of neural network, the perceptron. The perceptron holds a historical position in the disciplines of neural networks and machine learning. The one-neuron perceptron and the single-layer perceptron are described, together with various training methods.
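As a concrete illustration of the training methods the chapter surveys, the following is a minimal sketch of the classic perceptron learning rule (Rosenblatt, 1958) for a one-neuron perceptron. The dataset (the logical AND function), function names, and parameter defaults are illustrative choices, not taken from the chapter.

```python
def train_perceptron(samples, labels, lr=1.0, epochs=100):
    """Learn weights w and bias b so that sign(w.x + b) matches each +1/-1 label.

    The weight vector is updated only on misclassified samples:
        w <- w + lr * y * x,   b <- b + lr * y
    For linearly separable data this converges in finitely many steps.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation > 0 else -1
            if pred != y:  # update only on a misclassification
                for i in range(n):
                    w[i] += lr * y * x[i]
                b += lr * y
                errors += 1
        if errors == 0:  # a full pass with no errors: training has converged
            break
    return w, b


# The AND function with +1/-1 labels is linearly separable,
# so the rule is guaranteed to converge.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
```

A nonseparable problem such as XOR would instead exhaust the epoch budget without converging, which is the limitation that motivates the multilayer networks treated later in the book.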


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
  2. Xonlink Inc., Hangzhou, China
