Let’s Open the Black Box of Deep Learning!
Deep learning is one of the fastest-growing areas of machine learning and a hot topic in both academia and industry. This tutorial examines the mechanisms that make the technique a genuine breakthrough over earlier approaches. To this end, we review what a neural network is, how its parameters can be learned from observational data, some of the most common architectures (CNNs, LSTMs, etc.), and some of the tricks developed in recent years.
Keywords: Deep learning · Automatic differentiation · Optimization
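As a minimal sketch of the ideas the abstract lists, the snippet below learns a small neural network's parameters from observational data using automatic differentiation and gradient-based optimization in PyTorch. The toy data, architecture, and hyperparameters are illustrative assumptions, not material from the tutorial itself.

```python
# Minimal sketch: fit a small network to toy data with autodiff + Adam.
# Everything here (data, layer sizes, learning rate) is assumed for illustration.
import torch
import torch.nn as nn

# Toy observational data: noisy samples of y = 2x + 1.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.1 * torch.randn_like(x)

# One hidden layer with a ReLU nonlinearity.
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()        # reset accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass and loss
    loss.backward()              # automatic differentiation (backpropagation)
    optimizer.step()             # gradient-based parameter update
```

The `backward()` call is where automatic differentiation does its work, computing the gradient of the loss with respect to every parameter; `optimizer.step()` then applies the gradient-based update.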
This work was partially supported by grants TIN2015-66951-C2 and SGR 1219. I thank the anonymous reviewers for their careful reading of the manuscript and their many insightful comments and suggestions. I also acknowledge the support of NVIDIA Corporation through the donation of a Titan X Pascal GPU. Finally, I would like to express my sincere appreciation to the organizers of the Seventh European Business Intelligence & Big Data Summer School.