Regression with Support Vector Machines and VGG Neural Networks

  • Su Wu
  • Chang Liu
  • Ziheng Wang
  • Shaozhi Wu
  • Kai Xiao
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 921)

Abstract

In the area of machine learning, classification tasks have been studied extensively, while the equally important application of regression has received far less attention. In this paper, we propose parameterizations and settings, obtained from multiple experiments, for the traditional supervised learning method of Support Vector Machines (SVM) and for the widely used deep learning technology of Convolutional Neural Networks (CNN) based on the Visual Geometry Group network (VGGNet). Several datasets are adopted for the regression task: six datasets obtained from the UCI Machine Learning Repository and one handwritten-digit image dataset converted from MNIST. The accuracy of the regression results generated by the proposed models is validated with the statistical measures of Mean Absolute Error (MAE) and R-squared, i.e. the coefficient of determination. Experimental results demonstrate that VGG has clear advantages over SVM in the cases of image recognition and of attributes with strong correlation, while SVM performs better than VGG on discrete, irregular, and weakly correlated data. A comparison of the three SVM kernel functions shows that, in most cases, the RBF kernel performs more effectively than the linear and polynomial ones.
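
The following minimal sketch (not from the paper) illustrates the SVM side of this comparison with scikit-learn: Support Vector Regression is fitted with the three kernels the abstract compares and scored with the same MAE and R-squared measures. The synthetic dataset and hyperparameters are illustrative assumptions; the paper instead uses six UCI datasets with experimentally tuned settings.

```python
# Sketch: comparing linear, polynomial, and RBF kernels for SVM regression,
# evaluated with MAE and R-squared. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 4))               # synthetic stand-in for a UCI dataset
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)              # SVR is sensitive to feature scale
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for kernel in ("linear", "poly", "rbf"):            # the three kernels compared
    model = SVR(kernel=kernel, C=1.0, epsilon=0.1).fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{kernel:>6}: MAE={mean_absolute_error(y_test, pred):.3f}, "
          f"R^2={r2_score(y_test, pred):.3f}")
```

Because SVR is scale-sensitive, the inputs are standardized before fitting; on real UCI data the same pipeline applies, with C, epsilon, and the polynomial degree tuned per dataset.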

Keywords

Regression · VGG · SVM · R-squared · Kernel function · Supervised and unsupervised learning
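
For the VGG side of the comparison described in the abstract, a common way to adapt a classification CNN to regression is to replace the softmax output with a single linear unit and train with a squared-error loss. The PyTorch sketch below illustrates that idea on MNIST-sized inputs; the block structure, layer widths, and loss are illustrative assumptions, not the paper's reported VGG configuration.

```python
# Sketch: a small VGG-style CNN with a single linear output for regression
# on 28x28 grayscale images. Architecture details are illustrative.
import torch
import torch.nn as nn

class VGGRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(              # two VGG-style conv blocks
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 28x28 -> 14x14
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 14x14 -> 7x7
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Dropout(0.5),                        # dropout regularization
            nn.Linear(128, 1),                      # single regression output
        )

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)

model = VGGRegressor()
x = torch.randn(8, 1, 28, 28)                       # batch of MNIST-sized images
target = torch.randint(0, 10, (8,)).float()         # digit labels as real values
loss = nn.MSELoss()(model(x), target)               # regression objective
loss.backward()
```

Treating digit labels as real-valued targets is what converting MNIST into a regression dataset amounts to; the predictions can then be scored with MAE and R-squared just as for the SVM models.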

References

  1. Clancey, W.J.: Classification Problem Solving. Stanford University, Stanford (1984)
  2. Kotsiantis, S.B., Zaharakis, I., Pintelas, P.: Supervised machine learning: a review of classification techniques. Emerg. Artif. Intell. Appl. Comput. Eng. 160, 3–24 (2007)
  3. Joachims, T.: Text categorization with support vector machines: learning with many relevant features. In: European Conference on Machine Learning, pp. 137–142. Springer, Heidelberg (1998)
  4. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
  5. Meehl, P.E.: Bootstraps taxometrics: solving the classification problem in psychopathology. Am. Psychol. 50(4), 266–275 (1995)
  6. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)
  7. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition (2015). arXiv preprint: arXiv:1409.1556v6
  8. UCI Machine Learning Repository. http://www.ics.uci.edu/~mlearn/MLRepository.html
  9. The MNIST Database of Handwritten Digits. http://yann.lecun.com/exdb/mnist/
  10. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
  11. Tsanas, A., Xifara, A.: Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools. Energy Build. 49, 560–567 (2012)
  12. Yeh, I.-C.: Modeling of strength of high performance concrete using artificial neural networks. Cem. Concr. Res. 28(12), 1797–1808 (1998)
  13. Candanedo, L.M., Feldheim, V., Deramaix, D.: Data driven prediction models of energy use of appliances in a low-energy house. Energy Build. 140, 81–97 (2017)
  14. Liang, X., Zou, T., Guo, B., Li, S., Zhang, H., Zhang, S., Huang, H., Chen, S.X.: Assessing Beijing’s PM2.5 pollution: severity, weather impact, APEC and winter heating. Proc. R. Soc. A 471, 20150257 (2015)
  15. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
  16. Zhou, F., Claire, Q., King, R.D.: Predicting the geographical origin of music. In: IEEE International Conference on Data Mining (ICDM), pp. 1115–1120 (2014)
  17. Tsanas, A., Little, M.A., McSharry, P.E., Ramig, L.O.: Accurate telemonitoring of Parkinson’s disease progression by non-invasive speech tests. IEEE Trans. Biomed. Eng. 57(4), 884–893 (2010)
  18. Little, M.A., McSharry, P.E., Hunter, E.J., Spielman, J., Ramig, L.O.: Suitability of dysphonia measurements for telemonitoring of Parkinson’s disease. IEEE Trans. Biomed. Eng. 56(4), 1015–1022 (2009)
  19. Little, M.A., McSharry, P.E., Roberts, S.J., Costello, D.A.E., Moroz, I.M.: Exploiting nonlinear recurrence and fractal scaling properties for voice disorder detection. BioMed. Eng. OnLine 6, 23 (2007)
  20. Frost, J.: Regression analysis: how do I interpret R-squared and assess the goodness-of-fit. The Minitab Blog (2013)
  21. Gentleman, R., Carey, V.J.: Unsupervised machine learning. In: Bioconductor Case Studies, pp. 137–157. Springer, New York (2008)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Su Wu (1)
  • Chang Liu (1)
  • Ziheng Wang (2)
  • Shaozhi Wu (3)
  • Kai Xiao (4)
  1. School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
  2. School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai, China
  3. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
  4. School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China