On the Stability and Bias-Variance Analysis of Kernel Matrix Learning

  • Conference paper
Advances in Artificial Intelligence (Canadian AI 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4509)


Abstract

Stability and bias-variance analysis are two powerful tools for understanding learning algorithms. We use these tools to analyze the learning-the-kernel-matrix (LKM) algorithm. The motivation is twofold: (i) LKM works in the transductive setting, where both training and test data points must be given a priori, so it is worth knowing how stable LKM is under small variations in the data set; and (ii) it has been argued that LKMs overfit the given data set. In particular, we are interested in answering the following questions: (a) Is LKM a stable algorithm? (b) Does it overfit? (c) What is its bias behavior with different optimal kernels? Our experimental results show that LKMs do not overfit the given data set. The stability analysis reveals that LKMs are unstable algorithms.
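The stability question above can be made concrete with a small experiment: learn a kernel matrix from a data set, perturb a single point, relearn, and measure how much the learned kernel changes. The Python/NumPy sketch below is illustrative only; it uses a simple kernel-target-alignment heuristic to weight a few base RBF kernels as a stand-in for the semidefinite-programming LKM formulation analyzed in the paper, and all function and variable names are hypothetical. A large change in the learned kernel under a one-point perturbation is the signature of an unstable learner.

    # Hypothetical sketch: probing the stability of a simple kernel-matrix
    # learner under a one-point perturbation of the data set. The weighting
    # rule is an alignment heuristic, NOT the paper's SDP-based LKM.
    import numpy as np

    def rbf_kernel(X, gamma):
        # Gram matrix of the Gaussian (RBF) kernel over all points.
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * d2)

    def learn_kernel(X, y, gammas):
        # Learn K = sum_i mu_i K_i with mu_i >= 0 and sum_i mu_i = 1,
        # weighting each base kernel by its (uncentered) alignment
        # with the label matrix y y^T.
        yyT = np.outer(y, y)
        mus, Ks = [], []
        for g in gammas:
            K = rbf_kernel(X, g)
            a = np.sum(K * yyT) / (np.linalg.norm(K, 'fro') *
                                   np.linalg.norm(yyT, 'fro'))
            mus.append(max(a, 0.0))
            Ks.append(K)
        mu = np.array(mus)
        mu = mu / mu.sum()  # assumes at least one kernel aligns positively
        return sum(m * K for m, K in zip(mu, Ks)), mu

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2))
    y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))

    gammas = [0.1, 1.0, 10.0]
    K_full, mu_full = learn_kernel(X, y, gammas)

    # Perturb the data set by replacing one point, then relearn the kernel.
    X2, y2 = X.copy(), y.copy()
    X2[0] = rng.normal(size=2)
    y2[0] = np.sign(X2[0, 0])
    K_pert, mu_pert = learn_kernel(X2, y2, gammas)

    # An unstable learner shows a large change for a one-point perturbation.
    print("weight change:", np.abs(mu_full - mu_pert))
    print("relative kernel change:",
          np.linalg.norm(K_full[1:, 1:] - K_pert[1:, 1:], 'fro')
          / np.linalg.norm(K_full[1:, 1:], 'fro'))

Comparing the Gram submatrix over the unperturbed points isolates the effect of the relearned weights: each base-kernel entry there is unchanged, so any difference comes entirely from the learned combination.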



Editor information

Ziad Kobti, Dan Wu

Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Saradhi, V.V., Karnick, H. (2007). On the Stability and Bias-Variance Analysis of Kernel Matrix Learning. In: Kobti, Z., Wu, D. (eds) Advances in Artificial Intelligence. Canadian AI 2007. Lecture Notes in Computer Science (LNAI), vol. 4509. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72665-4_38

  • DOI: https://doi.org/10.1007/978-3-540-72665-4_38

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72664-7

  • Online ISBN: 978-3-540-72665-4
