Abstract
Structural risk minimisation (SRM) is a general complexity-regularisation method which automatically selects the model complexity that approximately minimises the misclassification error probability of the empirical risk minimiser. It does so by adding a complexity penalty term \(\epsilon(m,k)\) to the empirical risk of the candidate hypotheses and then, for any fixed sample size \(m\), minimising the sum with respect to the model complexity variable \(k\).
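As a minimal sketch of this principle, the Python fragment below selects \(k\) by minimising the penalised empirical risk for a fixed \(m\). The names (srm_select, vc_penalty) are hypothetical, and the penalty shown is a typical VC-style term of order \(\sqrt{k \log m / m}\), not necessarily the paper's exact \(\epsilon(m,k)\).

```python
import math

def srm_select(emp_risk, m, penalty, k_max):
    """Standard SRM: choose the complexity k that minimises
    empirical risk + penalty, for a fixed sample size m.

    emp_risk(k)   -- empirical risk of the ERM hypothesis in model class k
    penalty(m, k) -- the complexity term eps(m, k)
    """
    return min(range(1, k_max + 1),
               key=lambda k: emp_risk(k) + penalty(m, k))

def vc_penalty(m, k):
    # A typical VC-style penalty of order sqrt(k log m / m);
    # illustrative only, not the paper's exact eps(m, k).
    return math.sqrt(k * math.log(m) / m)
```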
When learning multicategory classification there are \(M\) subsamples of sizes \(m_i\), corresponding to the \(M\) pattern classes with a priori probabilities \(p_i\), \(1 \le i \le M\). Using the usual representation of a multicategory classifier as \(M\) individual boolean classifiers, the penalty becomes \(\sum_{i=1}^{M} p_i \epsilon(m_i, k_i)\). If the \(m_i\) are given then standard SRM trivially applies here, by minimising the penalised empirical risk with respect to the \(k_i\), \(i = 1, \ldots, M\). A sketch of this per-class selection follows.
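Because the penalised risk is a sum over the \(M\) boolean classifiers, each \(k_i\) can be chosen independently once the \(m_i\) are fixed. The sketch below (hypothetical names, reusing srm_select from above) assumes the empirical risk also decomposes per class, as the \(M\)-boolean-classifier representation suggests.

```python
def multicategory_srm(emp_risks, ms, ps, penalty, k_max):
    """Apply standard SRM per class when the subsample sizes m_i are given.

    emp_risks[i](k) -- empirical risk of the i-th boolean classifier
                       at complexity k, computed on its m_i-subsample
    Returns the chosen k_i and the total penalised empirical risk
    sum_i p_i * (emp_risks[i](k_i) + penalty(m_i, k_i)).
    """
    ks = [srm_select(emp_risks[i], ms[i], penalty, k_max)
          for i in range(len(ms))]
    total = sum(p * (emp_risks[i](k) + penalty(m, k))
                for i, (p, m, k) in enumerate(zip(ps, ms, ks)))
    return ks, total
```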
However, in situations where the total sample size \(\sum_{i=1}^{M} m_i\) needs to be minimal, one must also minimise the penalised empirical risk with respect to the variables \(m_i\), \(i = 1, \ldots, M\). The obvious problem is that the empirical risk is defined only after the subsamples (and hence their sizes) are known.
Utilising an on-line stochastic gradient descent approach, this paper overcomes this difficulty and introduces a sample-querying algorithm which extends the standard SRM principle: it minimises the penalised empirical risk not only with respect to the \(k_i\), as standard SRM does, but also with respect to the subsample sizes \(m_i\), \(i = 1, \ldots, M\).
The challenge here is to define a stochastic empirical criterion which, when minimised, yields a sequence of subsample-size vectors that asymptotically achieves the Bayes-optimal error convergence rate.
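The paper's exact stochastic criterion is not reproduced in this abstract. As a rough, purely illustrative picture of the allocation loop, the sketch below (hypothetical names: query_allocation, query) greedily queries the class whose weighted penalty term would drop the most from one extra sample, a finite-difference surrogate for a gradient step in \(m_i\); the actual algorithm also tracks the empirical risk and re-selects the \(k_i\).

```python
def query_allocation(budget, ps, ks, penalty, query, M):
    """Hypothetical sketch of a sample-querying loop over the m_i.

    At each step, one more sample is queried from the class i whose
    weighted penalty p_i * eps(m_i, k_i) is estimated to decrease the
    most, i.e. a finite-difference surrogate for a stochastic gradient
    step in m_i. This only illustrates the allocation loop, not the
    paper's full criterion.
    """
    ms = [1] * M
    samples = [[query(i)] for i in range(M)]  # query(i): one labelled
                                              # example from class i
    for _ in range(budget - M):
        drop = [ps[i] * (penalty(ms[i], ks[i]) - penalty(ms[i] + 1, ks[i]))
                for i in range(M)]
        i_star = max(range(M), key=drop.__getitem__)
        samples[i_star].append(query(i_star))
        ms[i_star] += 1
    return ms, samples
```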
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ratsaby, J. (2003). A Stochastic Gradient Descent Algorithm for Structural Risk Minimisation. In: Gavaldá, R., Jantke, K.P., Takimoto, E. (eds) Algorithmic Learning Theory. ALT 2003. Lecture Notes in Computer Science, vol. 2842. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39624-6_17
DOI: https://doi.org/10.1007/978-3-540-39624-6_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20291-2
Online ISBN: 978-3-540-39624-6