
Relaxation of Hard Classification Targets for LSE Minimization

  • Conference paper
Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3757)


Abstract

To stabilize the solution against over-fitting, which is especially common for high-order models, we propose a relaxed-target training method for regression models that are linear in their parameters. Relaxing the training targets from the conventional binary values to disjoint classification regions preserves classification fidelity under the threshold treatment applied during the decision process. A particular relaxation design is provided under practical considerations. The formulation is extended to multi-class problems, and the method is then applied to a plug-in full multivariate polynomial model and a reduced model on synthetic data sets to illustrate the idea. Additional experiments on real-world data from the UCI data repository [1] provide empirical evidence.
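The core idea of the abstract — fitting a linear-in-parameters model by least squares while letting each class's targets range over a disjoint region rather than a hard binary value — can be sketched as follows. This is a minimal illustration only, not the paper's exact relaxation design: the bivariate polynomial expansion, the regularization constant, and the Ho-Kashyap-style iterative target update are assumptions chosen for concreteness.

```python
# Minimal sketch of least-squares training with relaxed classification
# targets (an assumed stand-in for the paper's design). The model is
# linear in its parameters: a full bivariate polynomial. Instead of the
# hard targets {0, 1}, each class is given a half-open target region
# (t <= 0 for class 0, t >= 1 for class 1); targets are re-estimated
# iteratively, and the final decision thresholds the output at 0.5.
import numpy as np

def poly_features(X, order=3):
    """Full bivariate polynomial expansion up to the given order."""
    x1, x2 = X[:, 0], X[:, 1]
    cols = [x1**i * x2**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.stack(cols, axis=1)

def fit_relaxed_lse(P, y, reg=1e-4, n_iter=50):
    """Alternate between a regularized LSE fit and relaxing each
    target into its class's admissible region."""
    t = y.astype(float)                      # start from hard 0/1 targets
    for _ in range(n_iter):
        # least-squares solution; the model is linear in parameters w
        w = np.linalg.solve(P.T @ P + reg * np.eye(P.shape[1]), P.T @ t)
        g = P @ w
        # relax targets: an output already inside its class region
        # becomes its own target and so contributes zero error
        t = np.where(y == 1, np.maximum(g, 1.0), np.minimum(g, 0.0))
    return w

# synthetic two-class data, in the spirit of the paper's synthetic tests
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.repeat([0, 1], 100)

P = poly_features(X)
w = fit_relaxed_lse(P, y)
pred = (P @ w >= 0.5).astype(int)            # threshold decision
print("training accuracy:", (pred == y).mean())
```

The 0.5 threshold sits at the midpoint between the two target regions, matching the threshold treatment the abstract describes: any output beyond its class's boundary is accepted rather than pulled back toward a fixed binary value.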


References

  1. Blake, C.L., Merz, C.J.: UCI Repository of Machine Learning Databases. University of California, Irvine, Dept. of Information and Computer Sciences (1998), http://www.ics.uci.edu/~mlearn/MLRepository.html

  2. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. John Wiley & Sons, Inc., New York (2001)

  3. Schürmann, J.: Pattern Classification: A Unified View of Statistical and Neural Approaches. John Wiley & Sons, Inc., New York (1996)

  4. Poggio, T., Rifkin, R., Mukherjee, S., Niyogi, P.: General Conditions for Predictivity in Learning Theory. Nature 428, 419–422 (2004)

  5. Baram, Y.: Soft Nearest Neighbor Classification. In: International Conference on Neural Networks (ICNN), vol. 3, pp. 1469–1473 (1997)

  6. Baram, Y.: Partial Classification: The Benefit of Deferred Decision. IEEE Trans. Pattern Analysis and Machine Intelligence 20(8), 769–776 (1998)

  7. Toh, K.-A., Yau, W.-Y., Jiang, X.: A Reduced Multivariate Polynomial Model for Multimodal Biometrics and Classifiers Fusion. IEEE Trans. Circuits and Systems for Video Technology (Special Issue on Image- and Video-Based Biometrics) 14(2), 224–233 (2004)

  8. Toh, K.-A., Tran, Q.-L., Srinivasan, D.: Benchmarking a Reduced Multivariate Polynomial Pattern Classifier. IEEE Trans. Pattern Analysis and Machine Intelligence 26(6), 740–755 (2004)

  9. Tipping, M.E.: Sparse Bayesian Learning and the Relevance Vector Machine. Journal of Machine Learning Research 1, 211–244 (2001)

  10. Tipping, M.E.: The Relevance Vector Machine. In: Solla, S.A., Leen, T.K., Müller, K.-R. (eds.) Advances in Neural Information Processing Systems, vol. 12, pp. 652–658 (2000)

  11. Figueiredo, M.A.T.: Adaptive Sparseness for Supervised Learning. IEEE Trans. Pattern Analysis and Machine Intelligence 25(9), 1150–1159 (2003)

  12. The MathWorks: Matlab and Simulink (2003), http://www.mathworks.com/

  13. Ma, J., Zhao, Y., Ahalt, S.: OSU SVM Classifier Matlab Toolbox (ver 3.00), The Ohio State University (2002), http://eewww.eng.ohio-state.edu/~maj/osu_svm/

  14. Tipping, M.: Sparse Bayesian Learning and the Relevance Vector Machine, Microsoft Research (2004), http://research.microsoft.com/mlp/RVM

  15. Vapnik, V.N.: Statistical Learning Theory. Wiley-Interscience Pub., Hoboken (1998)

  16. Soares, C., Brazdil, P.B., Kuba, P.: A Meta-Learning Method to Select the Kernel Width in Support Vector Regression. Machine Learning 54(3), 195–209 (2004)



Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Toh, K.-A., Jiang, X., Yau, W.-Y. (2005). Relaxation of Hard Classification Targets for LSE Minimization. In: Rangarajan, A., Vemuri, B., Yuille, A.L. (eds.) Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2005. Lecture Notes in Computer Science, vol. 3757. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11585978_13


  • DOI: https://doi.org/10.1007/11585978_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30287-2

  • Online ISBN: 978-3-540-32098-2

  • eBook Packages: Computer Science (R0)
