Training Deep Autoencoder via VLC-Genetic Algorithm

  • Qazi Sami Ullah Khan
  • Jianwu Li
  • Shuyang Zhao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10635)

Abstract

Recently, both supervised and unsupervised deep learning techniques have achieved notable results in various fields. However, neural networks trained with back-propagation are prone to becoming trapped in local minima. Genetic algorithms are a popular class of optimization techniques that can explore a large, complex search space intelligently to find values close to the global optimum.

In this paper, a deep autoencoder assisted by a variable-length chromosome genetic algorithm is proposed. First, the autoencoder is trained with the variable-length chromosome genetic algorithm. Second, a classifier is applied to the encoded data, and its classification accuracy is compared with that of other state-of-the-art methods. The experimental results show that the proposed method achieves competitive accuracy and produces sparser networks.
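To make the two-step idea concrete, the sketch below shows one plausible way a variable-length chromosome genetic algorithm could train a one-hidden-layer autoencoder without back-propagation: each chromosome is a list of hidden units (encoder/decoder weight pairs), so chromosome length varies with network size and pruning units yields sparser networks. The paper's actual encoding, operators, and hyperparameters are not reproduced here; every function name, rate, and size below is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

def new_individual(d_in, max_hidden=32):
    """A chromosome is a variable-length list of hidden units: (encoder weights, decoder weights)."""
    h = rng.integers(2, max_hidden + 1)
    return [(rng.normal(0, 0.1, d_in), rng.normal(0, 0.1, d_in)) for _ in range(h)]

def reconstruct(ind, X):
    """Encode X with the chromosome's units (tanh activation) and decode it again."""
    W_enc = np.stack([u[0] for u in ind], axis=1)   # (d_in, h)
    W_dec = np.stack([u[1] for u in ind], axis=0)   # (h, d_in)
    return np.tanh(X @ W_enc) @ W_dec

def fitness(ind, X):
    """Negative mean squared reconstruction error; higher is better."""
    return -np.mean((reconstruct(ind, X) - X) ** 2)

def crossover(a, b):
    """Variable-length one-point crossover: swap tails of the two unit lists."""
    ca, cb = rng.integers(1, len(a) + 1), rng.integers(1, len(b) + 1)
    return a[:ca] + b[cb:], b[:cb] + a[ca:]

def mutate(ind, d_in, sigma=0.05, p_struct=0.1):
    """Perturb weights; occasionally add or drop a hidden unit, changing chromosome length."""
    ind = [(w1 + rng.normal(0, sigma, d_in), w2 + rng.normal(0, sigma, d_in)) for w1, w2 in ind]
    if rng.random() < p_struct and len(ind) > 1:
        ind.pop(rng.integers(len(ind)))             # prune a unit -> sparser network
    elif rng.random() < p_struct:
        ind.append((rng.normal(0, 0.1, d_in), rng.normal(0, 0.1, d_in)))
    return ind

def evolve(X, pop_size=30, generations=50):
    """Evolve a population of autoencoders; return the fittest individual."""
    d_in = X.shape[1]
    pop = [new_individual(d_in) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, X), reverse=True)
        parents = pop[: pop_size // 2]              # simple truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            i, j = rng.choice(len(parents), 2, replace=False)
            c1, c2 = crossover(parents[i], parents[j])
            children += [mutate(c1, d_in), mutate(c2, d_in)]
        pop = parents + children[: pop_size - len(parents)]
    return max(pop, key=lambda ind: fitness(ind, X))

if __name__ == "__main__":
    X = rng.normal(size=(200, 16))                  # toy data in place of a real dataset
    best = evolve(X)
    print("hidden units:", len(best), "reconstruction MSE:", -fitness(best, X))

In a full pipeline, the encoded data np.tanh(X @ W_enc) produced by the best individual would then be passed to an ordinary classifier, mirroring the second step described in the abstract.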

Keywords

Neural networks · Genetic algorithm · Variable length chromosome · Deep autoencoder

Notes

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61271374).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China