Dropout-Based Active Learning for Regression

  • Evgenii Tsymbalov
  • Maxim Panov
  • Alexander Shapeev
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11179)

Abstract

Active learning is relevant and challenging for high-dimensional regression models when the annotation of samples is expensive. Yet most existing sampling methods do not scale to large problems, as they require too much time for data processing. In this paper, we propose a fast active learning algorithm for regression, tailored to neural network models. It is based on uncertainty estimates computed from the stochastic dropout output of the network. Experiments on both synthetic and real-world datasets show performance comparable to or better than the baselines, depending on the accuracy metric. The approach generalizes to other deep learning architectures and can be used to systematically improve a machine learning model, as it offers a computationally efficient way of sampling additional data.
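
For concreteness, here is a minimal sketch of the kind of dropout-based query selection the abstract describes: candidate pool points are scored by the variance of predictions over several stochastic forward passes with dropout kept active, and the highest-variance points are queried for annotation. Everything specific in the sketch is an assumption made for illustration, not taken from the paper: the PyTorch framework, the network architecture and dropout rate, the number of stochastic passes, and the helper names MLP, mc_dropout_variance, and select_queries are all hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative regression network with dropout layers; the paper's exact
# architecture and hyperparameters are not given here and are assumed.
class MLP(nn.Module):
    def __init__(self, in_dim, hidden=64, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_variance(model, x_pool, n_passes=25):
    """Per-point predictive variance over stochastic forward passes
    with dropout left active (Monte Carlo dropout)."""
    model.train()  # keep dropout stochastic at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x_pool).squeeze(-1) for _ in range(n_passes)])
    return preds.var(dim=0)  # shape: (n_pool,)

def select_queries(model, x_pool, n_queries=10, n_passes=25):
    """Pick the pool points with the highest dropout-based uncertainty,
    to be annotated and added to the training set."""
    scores = mc_dropout_variance(model, x_pool, n_passes)
    return torch.topk(scores, k=n_queries).indices

# Usage sketch: score the pool, label the selected points (e.g. by running
# the expensive experiment or simulation), retrain the model, and repeat.
# x_pool = torch.randn(1000, 8); model = MLP(in_dim=8)
# idx = select_queries(model, x_pool)
```

The one design point the sketch relies on is keeping dropout stochastic at prediction time (model.train() in this illustration): averaging the passes gives a point prediction, while their spread serves as the uncertainty score used to rank candidate points.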

Keywords

Regression · Active learning · Uncertainty quantification · Neural networks · Dropout

Notes

Acknowledgements

The work was supported by the Skoltech NGP Program No. 2016-7/NGP (a Skoltech-MIT joint project).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Evgenii Tsymbalov¹
  • Maxim Panov¹
  • Alexander Shapeev¹

  1. Skolkovo Institute of Science and Technology (Skoltech), Moscow, Russia