Abstract
In machine learning, classification tasks have been studied extensively, while the equally important task of regression has not received the same level of attention. In this paper, we propose parameter settings, obtained from multiple experiments, for regression with a traditional supervised learning method, the Support Vector Machine (SVM), and a widely used deep learning architecture, the Convolutional Neural Network (CNN) based on the Visual Geometry Group network (VGGNet). Several datasets are adopted for the regression task: six datasets obtained from the UCI Machine Learning Repository, and one handwritten-image dataset converted from MNIST. The accuracy of the regression results produced by the proposed models is evaluated with two statistical measures: Mean Absolute Error (MAE) and the coefficient of determination (R-squared). Experimental results demonstrate that VGG has a clear advantage over SVM for image recognition and for attributes with strong correlation, whereas SVM performs better on discrete, irregular, and weakly correlated data. A comparison of three SVM kernel functions shows that, in most cases, the RBF kernel performs more effectively than the linear and polynomial ones.
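The kernel comparison described in the abstract can be illustrated with a short sketch. This is not the authors' code: it uses scikit-learn's `SVR` on a synthetic dataset (all dataset parameters here are illustrative assumptions), fits the three kernels mentioned in the paper, and scores each with the same two measures, MAE and R-squared.

```python
# Illustrative sketch (not the paper's experiments): comparing SVR kernels
# with MAE and R-squared, on a synthetic regression dataset.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, r2_score

# Synthetic data standing in for a UCI-style tabular regression task.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVMs are sensitive to feature scale, so standardize the inputs.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

results = {}
for kernel in ("linear", "poly", "rbf"):
    model = SVR(kernel=kernel, C=100.0).fit(X_train, y_train)
    pred = model.predict(X_test)
    results[kernel] = (mean_absolute_error(y_test, pred),
                       r2_score(y_test, pred))

for kernel, (mae, r2) in results.items():
    print(f"{kernel:>6}: MAE={mae:.2f}  R^2={r2:.3f}")
```

Which kernel wins depends on the data, as the paper reports: on this linear synthetic task the linear kernel tends to do well, while on irregular or strongly nonlinear data the RBF kernel usually dominates. The value `C=100.0` is an assumed setting for illustration, not a tuned parameter from the paper.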
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Wu, S., Liu, C., Wang, Z., Wu, S., Xiao, K. (2020). Regression with Support Vector Machines and VGG Neural Networks. In: Hassanien, A., Azar, A., Gaber, T., Bhatnagar, R., F. Tolba, M. (eds) The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2019). AMLTA 2019. Advances in Intelligent Systems and Computing, vol 921. Springer, Cham. https://doi.org/10.1007/978-3-030-14118-9_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-14117-2
Online ISBN: 978-3-030-14118-9
eBook Packages: Intelligent Technologies and Robotics (R0)