
Dynamic Risk Assessment for Vehicles of Higher Automation Levels by Deep Learning

  • Patrik Feth
  • Mohammed Naveed Akram
  • René Schuster
  • Oliver Wasenmüller
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)

Abstract

Vehicles of higher automation levels require the creation of situation awareness. One important aspect of this situation awareness is an understanding of the current risk of a driving situation. In this work, we present a novel approach for the dynamic risk assessment of driving situations that applies deep learning to the images of a front-facing stereo camera. To this end, we trained a deep neural network on recorded monocular images, disparity maps and a risk metric for diverse traffic scenes. Our approach can be used to create the aforementioned situation awareness of vehicles of higher automation levels and can serve as a heterogeneous channel to systems based on radar or lidar sensors, which are traditionally used for the calculation of risk metrics.
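
The paper itself contains no code; the following is a minimal PyTorch sketch of the kind of model the abstract describes, written under our own assumptions. The network name (RiskNet), the layer sizes, the loss and the optimizer are illustrative choices, not the authors' architecture: the sketch stacks a monocular RGB image with its disparity map into a four-channel input and regresses a single scalar risk value, which could be supervised by a precomputed risk metric such as a time-to-collision-based label.

    # Minimal sketch (assumption, not the authors' architecture): a CNN that
    # regresses a scalar risk value from an RGB image stacked with its
    # disparity map (4 input channels in total).
    import torch
    import torch.nn as nn

    class RiskNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2),  # RGB + disparity
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, 128, 1, 1)
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128, 64),
                nn.ReLU(inplace=True),
                nn.Linear(64, 1),  # scalar risk estimate per frame
            )

        def forward(self, rgb, disparity):
            # rgb: (N, 3, H, W), disparity: (N, 1, H, W)
            x = torch.cat([rgb, disparity], dim=1)
            return self.regressor(self.features(x))

    # Training against a precomputed per-frame risk metric (e.g. a
    # time-to-collision-based label) could use a plain L2 regression loss:
    model = RiskNet()
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

In this sketch the disparity map is treated as an extra input channel alongside the RGB image; whether the authors fuse the two modalities at the input or in later layers is not stated in the abstract.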


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Patrik Feth (1)
  • Mohammed Naveed Akram (1)
  • René Schuster (2)
  • Oliver Wasenmüller (2)
  1. Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany
  2. DFKI - German Research Center for Artificial Intelligence, Kaiserslautern, Germany
