A Stacked Denoising Autoencoder Based on Supervised Pre-training

  • Xiumei Wang
  • Shaomin Mu (corresponding author)
  • Aiju Shi
  • Zhongqi Lin
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 670)


Deep learning has attracted much attention because of its ability to extract complex features automatically. Unsupervised pre-training plays an important role in deep learning, but the supervisory information provided by labeled samples remains very important for feature extraction. For regression forecasting problems with only a small amount of data, the advantage of unsupervised pre-training is not obvious. In this paper, the pre-training phase of the stacked denoising autoencoder is changed from unsupervised to supervised learning, which improves accuracy on small-sample prediction problems. Experiments on UCI regression datasets show that the improved stacked denoising autoencoder outperforms the traditional stacked denoising autoencoder.
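The paper's core idea is to make the layer-wise pre-training of a stacked denoising autoencoder supervised: each layer's hidden code is trained not only to reconstruct the (corrupted) input but also to predict the regression target. The paper does not include code; the following NumPy sketch illustrates that idea for a single layer. The class and parameter names, the linear decoder and regression heads, and the loss weighting `alpha` are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, noise=0.3):
    # Masking noise: randomly zero a fraction of inputs (the "denoising" corruption).
    mask = rng.random(x.shape) > noise
    return x * mask

class DenoisingAELayer:
    """One layer of a stacked denoising autoencoder with supervised pre-training.

    The pre-training loss is a weighted sum of input reconstruction error and
    regression prediction error, so the labels guide feature extraction
    (weighting scheme assumed here, not taken from the paper).
    """
    def __init__(self, n_in, n_hidden, lr=0.1, alpha=0.5):
        self.W = rng.normal(0, 0.1, (n_in, n_hidden))       # encoder weights
        self.b = np.zeros(n_hidden)
        self.W_dec = rng.normal(0, 0.1, (n_hidden, n_in))   # linear decoder
        self.b_dec = np.zeros(n_in)
        self.W_out = rng.normal(0, 0.1, (n_hidden, 1))      # supervised head
        self.b_out = np.zeros(1)
        self.lr, self.alpha = lr, alpha

    def encode(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.W + self.b)))  # sigmoid units

    def pretrain_step(self, x, y):
        """One gradient step on alpha*reconstruction + (1-alpha)*prediction loss."""
        x_tilde = corrupt(x)
        h = self.encode(x_tilde)
        x_hat = h @ self.W_dec + self.b_dec   # reconstruct the clean input
        y_hat = h @ self.W_out + self.b_out   # predict the regression target
        n = x.shape[0]
        d_rec = (x_hat - x) / n               # gradient of reconstruction loss
        d_sup = (y_hat - y) / n               # gradient of supervised loss
        dh = (self.alpha * d_rec @ self.W_dec.T
              + (1 - self.alpha) * d_sup @ self.W_out.T)
        dz = dh * h * (1 - h)                 # back through the sigmoid
        self.W_dec -= self.lr * self.alpha * (h.T @ d_rec)
        self.b_dec -= self.lr * self.alpha * d_rec.sum(0)
        self.W_out -= self.lr * (1 - self.alpha) * (h.T @ d_sup)
        self.b_out -= self.lr * (1 - self.alpha) * d_sup.sum(0)
        self.W -= self.lr * (x_tilde.T @ dz)
        self.b -= self.lr * dz.sum(0)
        return (self.alpha * np.mean((x_hat - x) ** 2)
                + (1 - self.alpha) * np.mean((y_hat - y) ** 2))
```

In a full stack, each layer would be pre-trained this way in turn, with the previous layer's (clean) hidden code as the next layer's input, followed by fine-tuning of the whole network, as in the standard stacked-denoising-autoencoder procedure.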


Keywords: Deep learning · Stacked denoising autoencoder · Supervised learning · Regression forecast



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Xiumei Wang (1)
  • Shaomin Mu (1, corresponding author)
  • Aiju Shi (2)
  • Zhongqi Lin (1)
  1. College of Information Science and Engineering, Shandong Agricultural University, Taian, China
  2. College of Chemistry and Material Science, Shandong Agricultural University, Taian, China
