Imitation Learning of Path-Planned Driving Using Disparity-Depth Images

  • Sascha Hornauer
  • Karl Zipser
  • Stella Yu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11133)

Abstract

Sensor data representation in autonomous driving is a defining factor for the final performance and convergence of End-to-End trained driving systems. While in theory a perfectly trained network should be able to extract the most useful information from camera data for a given task, in practice this is challenging. Therefore, many approaches leverage human-designed intermediate representations such as segmented images. We continue work in the field of depth-image-based steering angle prediction and compare networks trained purely on either RGB stereo images or depth-from-stereo (disparity) images. Since no dedicated depth sensor is used, we consider this a pixel-grouping method in which pixels are labeled by their stereo disparity instead of relying on human segmentation annotations.
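The depth-from-stereo representation described in the abstract can be illustrated with a minimal block-matching sketch in NumPy: each pixel of the left image is labeled with the horizontal shift (disparity) at which the right image best matches a local block. This is a simplified stand-in for the more robust semi-global matching used in real stereo pipelines; the function name, window size, and disparity range here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=16, block=5):
    """Naive SSD block matching: for each left-image pixel, find the
    horizontal shift d of the right image that best matches a local
    block, i.e. the stereo disparity label at that pixel."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            # Cost of matching the left patch against right patches
            # shifted by each candidate disparity d.
            costs = [np.sum((patch - right[y - half:y + half + 1,
                                           x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic stereo pair: the right view is the left view shifted by a
# known disparity of 4 pixels (wrap-around fill at the image border).
rng = np.random.default_rng(0)
left = rng.random((40, 60)).astype(np.float32)
right = np.roll(left, -4, axis=1)

disp = block_matching_disparity(left, right, max_disp=8, block=5)
```

On well-textured interior pixels, the recovered disparity equals the true 4-pixel shift; near objects yield large disparities and far objects small ones, which is what lets the resulting disparity image serve as a depth-like input to the driving network.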

Keywords

End-to-End training · Autonomous driving · Path planning · Collision avoidance · Depth images · Transfer learning

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. International Computer Science Institute, University of California, Berkeley, USA