
Deep Imitation Learning: The Impact of Depth on Policy Performance

  • Parham M. Kebria (corresponding author)
  • Abbas Khosravi
  • Syed Moshfeq Salaken
  • Ibrahim Hossain
  • H. M. Dipu Kabir
  • Afsaneh Koohestani
  • Roohallah Alizadehsani
  • Saeid Nahavandi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11301)

Abstract

This paper investigates the impact of network depth on the performance of imitation learning applied to the development of an end-to-end policy for controlling autonomous cars. The policy generates optimal steering commands from raw images taken by cameras attached to the car in a simulated environment. A convolutional neural network (CNN) is used to learn the mapping between inputs (car camera images) and the desired steering angle. The CNN architecture is modified by changing the number of convolutional layers as well as the filter size. It is observed that the learned policy is capable of driving the car in autonomous mode purely from visual information. In addition, simulation results indicate that deeper CNNs outperform shallower ones in learning and mimicking the human driver's behavior. Surprisingly, the best performance is not achieved by the most complex CNN.
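
As a minimal illustration of the kind of model described above, the sketch below (Keras-style Python) builds a CNN that regresses a steering angle from a raw camera image, with the number of convolutional layers and the filter size exposed as parameters so shallower and deeper variants can be compared. The specific layer widths, input resolution, and training setup are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
# A minimal sketch (not the authors' exact network) of an end-to-end steering
# policy: a CNN mapping a raw camera image to a single steering angle, with
# depth and filter size as parameters so variants can be compared.
from tensorflow.keras import layers, models

def build_steering_cnn(num_conv_layers=4, filter_size=5, input_shape=(66, 200, 3)):
    """CNN regressor from an RGB camera image to a steering angle."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Normalise raw pixel values from [0, 255] to [-1, 1].
    model.add(layers.Rescaling(scale=1.0 / 127.5, offset=-1.0))
    filters = 24
    for _ in range(num_conv_layers):
        model.add(layers.Conv2D(filters, filter_size, strides=2,
                                padding='same', activation='relu'))
        filters = min(filters * 2, 96)  # widen feature maps with depth, capped
    model.add(layers.Flatten())
    model.add(layers.Dense(100, activation='relu'))
    model.add(layers.Dense(50, activation='relu'))
    model.add(layers.Dense(1))  # single linear output: the steering angle
    # Behaviour cloning: regress the recorded human steering angle with MSE.
    model.compile(optimizer='adam', loss='mse')
    return model

# Compare a shallower and a deeper variant of the same policy network.
shallow_policy = build_steering_cnn(num_conv_layers=3)
deep_policy = build_steering_cnn(num_conv_layers=5)
# Training would fit each model on (image, steering angle) pairs recorded
# from a human driver in the simulator, e.g.:
# deep_policy.fit(images, steering_angles, epochs=10, validation_split=0.2)
```

In this sketch, varying num_conv_layers and filter_size mirrors the depth and filter-size comparison the paper performs; the two resulting models would then be evaluated on how well they keep the simulated car on the road when driven from their predictions alone.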

Keywords

Autonomous vehicle · Imitation learning · Simulation · Depth

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Parham M. Kebria (1) (corresponding author)
  • Abbas Khosravi (1)
  • Syed Moshfeq Salaken (1)
  • Ibrahim Hossain (1)
  • H. M. Dipu Kabir (1)
  • Afsaneh Koohestani (1)
  • Roohallah Alizadehsani (1)
  • Saeid Nahavandi (1)

  1. Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
