Evolutionary Algorithms for Convolutional Neural Network Visualisation
Deep Learning is based on deep neural networks trained on huge sets of examples. It has enabled computers to compete with, or even outperform, humans at many tasks, from playing Go to driving vehicles.
Still, it remains hard to understand how these networks actually operate. While an observer can see any individual local behaviour, they gain little insight into the network's global decision-making process.
However, convolutional networks, a class of neural networks widely used for image processing, contain in each layer a set of features that work in parallel. By construction, these features preserve some spatial information across the network's layers. Visualising this spatial information at different locations in a network, notably by finding input data that maximise the activation of a given feature, can give insights into how the model works.
This paper investigates the use of Evolutionary Algorithms to evolve such input images that maximise feature activation. Compared with pre-existing approaches, ours is currently computationally heavier but more widely applicable.
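The core idea, evolving an input image so that a chosen feature's activation grows over generations, can be sketched as a minimal evolutionary loop. The sketch below is not the paper's implementation: the `activation` function is a hypothetical stand-in (a toy vertical-edge template on an 8x8 greyscale image) for the activation of a real CNN feature map, and the population size, mutation strength, and truncation selection scheme are illustrative choices.

```python
import random

SIZE = 8  # side length of the toy greyscale "image"

def activation(img):
    # Hypothetical stand-in for a CNN feature activation: rewards a
    # bright left half and a dark right half (a toy vertical edge).
    score = 0.0
    for y in range(SIZE):
        for x in range(SIZE):
            v = img[y * SIZE + x]
            score += v if x < SIZE // 2 else -v
    return score

def mutate(img, sigma=0.1):
    # Gaussian perturbation of every pixel, clipped to [0, 1].
    return [min(1.0, max(0.0, v + random.gauss(0.0, sigma))) for v in img]

def evolve(generations=200, pop_size=16, seed=0):
    random.seed(seed)
    # Start from random-noise images, as activation-maximisation
    # visualisations typically do.
    pop = [[random.random() for _ in range(SIZE * SIZE)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=activation, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        pop = parents + [mutate(p) for p in parents]
    return max(pop, key=activation)

best = evolve()
```

Because the loop only queries `activation` as a black box, the same scheme applies to any network and any layer, without needing gradients; this is the wider applicability (at a higher computational cost) that the paper trades against gradient-based visualisation methods.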