Abstract
This chapter introduces the simplest form of neural network—the perceptron. The perceptron occupies an important historical position in the disciplines of neural networks and machine learning. One-neuron perceptrons and single-layer perceptrons are described, together with various training methods.
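As a minimal illustration of the kind of training method the chapter covers, the sketch below implements the classic Rosenblatt perceptron learning rule for a one-neuron perceptron with a threshold activation. The data set (the AND function), learning rate, and epoch count are illustrative choices, not taken from the chapter itself.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Train a one-neuron perceptron with a hard-threshold activation.

    samples: list of (input_tuple, target) pairs with targets in {0, 1}.
    Returns the learned weights and bias.
    """
    n = len(samples[0][0])
    w = [0.0] * n  # weight vector
    b = 0.0        # bias (threshold)
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y  # -1, 0, or +1; nonzero only on misclassification
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Threshold activation: fire iff the weighted sum exceeds zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# AND gate: linearly separable, so the perceptron convergence
# theorem guarantees the rule finds a separating hyperplane.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, x)` reproduces the AND function on all four inputs. The update fires only on misclassified samples, which is the defining property of the perceptron rule as opposed to gradient-based rules such as LMS.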
© 2019 Springer-Verlag London Ltd., part of Springer Nature
Du, KL., Swamy, M.N.S. (2019). Perceptrons. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-7452-3_4
Publisher Name: Springer, London
Print ISBN: 978-1-4471-7451-6
Online ISBN: 978-1-4471-7452-3
eBook Packages: Mathematics and Statistics