Perceptrons

Chapter in Neural Networks and Statistical Learning

Abstract

This chapter introduces the simplest form of neural network, the perceptron. The perceptron holds a historic position in the disciplines of neural networks and machine learning. The one-neuron perceptron and the single-layer perceptron are described, together with various training methods.
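To make the chapter's topic concrete, here is a minimal sketch of the classical perceptron training rule (Rosenblatt's error-correction rule) for a one-neuron perceptron. The function name, toy dataset, learning rate, and epoch limit are illustrative assumptions, not details taken from the chapter.

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Train a one-neuron perceptron with the error-correction rule.

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    Returns the weight vector w and bias b.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            # Hard-threshold (sign) activation.
            pred = 1 if np.dot(w, xi) + b >= 0 else -1
            if pred != yi:
                # Update only on misclassified samples:
                # w <- w + lr * y * x,  b <- b + lr * y.
                w += lr * yi * xi
                b += lr * yi
                mistakes += 1
        if mistakes == 0:  # every sample classified correctly
            break
    return w, b

# Toy linearly separable problem: logical AND with {-1, +1} labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
```

By the perceptron convergence theorem, this mistake-driven loop terminates after finitely many updates whenever the training data are linearly separable.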

Author information

Corresponding author

Correspondence to Ke-Lin Du.

Copyright information

© 2019 Springer-Verlag London Ltd., part of Springer Nature

About this chapter

Cite this chapter

Du, K.-L., & Swamy, M. N. S. (2019). Perceptrons. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-7452-3_4
