
Robust Support Vector Regression in Primal with Asymmetric Huber Loss

Published in: Neural Processing Letters

Abstract

Since real-world data sets generally contain noise, constructing robust regression models that fit noisy data is an important and challenging research problem. It is all the more difficult to learn a regression function with good generalization performance when the input samples are corrupted by asymmetric noise and outliers. In this work, we propose novel robust regularized support vector regression models with asymmetric Huber and ε-insensitive Huber loss functions. These lead to strongly convex minimization problems of a simpler form whose solutions are obtained by a simple functional iterative method. Numerical experiments performed on (1) synthetic data sets with different noise models and outliers, and (2) real-world data sets, clearly show the effectiveness and applicability of the proposed support vector regression models with asymmetric Huber loss.
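The loss described in the abstract can be sketched as follows. This is an illustrative formulation only, not necessarily the paper's exact definition: it keeps the standard Huber structure (quadratic near zero, linear tails beyond a threshold `delta`) and introduces asymmetry by weighting positive residuals by a parameter `tau` and negative residuals by `1 - tau`, so over- and under-estimation errors are penalized unequally. All parameter names are assumptions made for this sketch.

```python
import numpy as np

def asymmetric_huber(r, delta=1.0, tau=0.7):
    """Illustrative asymmetric Huber loss (a sketch, not the paper's exact form).

    Quadratic for |r| <= delta, linear outside; positive residuals are
    weighted by tau and negative ones by (1 - tau), skewing the penalty
    to handle asymmetric noise.
    """
    r = np.asarray(r, dtype=float)
    weight = np.where(r >= 0, tau, 1.0 - tau)       # asymmetry weights
    quadratic = 0.5 * r ** 2                        # smooth region near zero
    linear = delta * (np.abs(r) - 0.5 * delta)      # linear tails bound outlier influence
    return weight * np.where(np.abs(r) <= delta, quadratic, linear)
```

With `tau = 0.5` the weights are equal and the loss reduces to half the ordinary Huber loss; because the tails grow linearly rather than quadratically, large outlier residuals contribute far less than they would under a squared loss, which is the source of the robustness.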




Acknowledgements

The authors are extremely thankful to the anonymous reviewers for their constructive comments. Mr. Yogendra Meena acknowledges the financial assistance awarded by the Rajiv Gandhi National Fellowship, Government of India.

Author information

Corresponding author: S. Balasundaram.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Balasundaram, S., Meena, Y. Robust Support Vector Regression in Primal with Asymmetric Huber Loss. Neural Process Lett 49, 1399–1431 (2019). https://doi.org/10.1007/s11063-018-9875-8
