Characterizing Perception Module Performance and Robustness in Production-Scale Autonomous Driving System

  • Alessandro Toschi
  • Mustafa Sanic
  • Jingwen Leng
  • Quan Chen
  • Chunlin Wang
  • Minyi Guo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11783)


Autonomous driving has attracted broad interest from academia and industry and represents one of the most important challenges of the coming years. Although the individual algorithms used in autonomous driving have been studied and are well understood, these tasks remain understudied in the context of a production-scale system. In this work, we profile and analyze the perception module of Apollo, the open-source autonomous driving system developed by Baidu, in terms of response time and robustness against sensor errors. The perception module is fundamental to the proper functioning and safety of autonomous driving: it relies on several sensors, such as LIDARs and cameras, to detect obstacles and perceive the surrounding environment. We identify the computational characteristics and potential bottlenecks of the perception module. Furthermore, we design multiple noise models for camera frames and LIDAR point clouds to test the robustness of the whole module, measured as the accuracy drop against a noise-free baseline. Our insights are useful for future performance and robustness optimization of autonomous driving systems.
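The noise-injection approach described in the abstract can be illustrated with a minimal sketch. The paper's actual noise models are not specified here, so the two corruptions below (additive Gaussian noise on camera frames, random dropout of LIDAR points) and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def add_gaussian_noise(frame, sigma=10.0, rng=None):
    """Corrupt an 8-bit camera frame with additive Gaussian noise (illustrative model)."""
    rng = rng or np.random.default_rng(0)
    noisy = frame.astype(np.float32) + rng.normal(0.0, sigma, frame.shape)
    # Clip back into the valid pixel range before converting to uint8.
    return np.clip(noisy, 0, 255).astype(np.uint8)

def drop_lidar_points(points, drop_ratio=0.1, rng=None):
    """Simulate sensor dropout by randomly discarding a fraction of LIDAR points."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(points)) >= drop_ratio
    return points[keep]

# Synthetic stand-ins for real sensor data.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)        # tiny RGB camera frame
points = np.zeros((1000, 4), dtype=np.float32)         # x, y, z, intensity per point

noisy_frame = add_gaussian_noise(frame)
sparse_points = drop_lidar_points(points, drop_ratio=0.1)
```

In an evaluation like the one described, the corrupted frames and point clouds would be fed through the perception pipeline and the resulting detection accuracy compared against the noise-free baseline.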


Keywords: Autonomous driving · Robustness analysis · Performance profiling · Deep neural networks



This work is supported by the National Key R&D Program of China (2018YFB1305900), the National Natural Science Foundation of China (Grants 61702328 and 61602301), and a Microsoft Research Asia research grant.



Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Alessandro Toschi (1)
  • Mustafa Sanic (1)
  • Jingwen Leng (1), corresponding author
  • Quan Chen (1)
  • Chunlin Wang (2)
  • Minyi Guo (1)
  1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
  2. Chuxiong Normal University, Chuxiong City, China
