Quantitative Projection Coverage for Testing ML-enabled Autonomous Systems

  • Chih-Hong Cheng
  • Chung-Hao Huang
  • Hirotoshi Yasuoka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11138)


Systematically testing models learned by neural networks remains a crucial, unsolved barrier to justifying the safety of autonomous vehicles engineered with data-driven approaches. We propose quantitative k-projection coverage as a metric that mitigates combinatorial explosion while guiding the data sampling process. Assuming that domain experts propose largely independent environment conditions and associate the elements of each condition with weights, the product of these conditions forms scenarios, and the weight attached to each equivalence class can be interpreted as its relative importance. Achieving full k-projection coverage requires that the data set, when projected onto the hyperplane formed by any k selected conditions, covers each equivalence class with at least as many data points as the associated weight. For the general case, where scenario composition is constrained by rules, precisely computing k-projection coverage remains in NP. For finding a minimum set of test cases that achieves full coverage, we present the theoretical complexity of important sub-cases and an encoding to 0-1 integer programming. We have implemented a research prototype that generates test cases for a visual object detection unit in automated driving, demonstrating the technological feasibility of the proposed coverage criterion.
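The coverage notion described above can be sketched in code: for every k-subset of the environment conditions, count how many scenarios in the data set fall into each projected equivalence class, cap each count at that class's weight, and report the ratio of achieved to required coverage. The sketch below is illustrative only and makes simplifying assumptions (unconstrained scenario composition, a default weight of 1 for unlisted classes); the function name `k_projection_coverage` and the example conditions are hypothetical, not taken from the paper's prototype.

```python
from itertools import combinations, product

def k_projection_coverage(data, conditions, weights, k):
    """Quantitative k-projection coverage (illustrative sketch).

    data:       list of scenarios, each a dict {condition name: value}
    conditions: dict {condition name: list of possible values}
    weights:    dict {(condition tuple, value tuple): required count};
                classes absent from `weights` default to weight 1
    k:          projection dimension
    Returns achieved/required coverage as a fraction in [0, 1].
    """
    achieved = required = 0
    for proj in combinations(sorted(conditions), k):
        # Every equivalence class on the hyperplane formed by `proj`.
        for cls in product(*(conditions[c] for c in proj)):
            w = weights.get((proj, cls), 1)
            hits = sum(all(s[c] == v for c, v in zip(proj, cls))
                       for s in data)
            required += w
            achieved += min(hits, w)  # surplus data points add no coverage
    return achieved / required

# Hypothetical environment conditions for a vision unit under test.
conditions = {"weather": ["sunny", "rainy"], "time": ["day", "night"]}
data = [{"weather": "sunny", "time": "day"},
        {"weather": "rainy", "time": "night"}]
print(k_projection_coverage(data, conditions, {}, 2))  # 2 of 4 classes -> 0.5
print(k_projection_coverage(data, conditions, {}, 1))  # full 1-projection -> 1.0
```

Note how the same two scenarios achieve full 1-projection coverage but only half of the 2-projection classes, which is exactly the combinatorial gap the metric is designed to expose.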



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. fortiss - Landesforschungsinstitut des Freistaats Bayern, Munich, Germany