Improving Image Classification Robustness Using Predictive Data Augmentation

  • Subramani Palanisamy Harisubramanyabalaji (email author)
  • Shafiq ur Réhman
  • Mattias Nyberg
  • Joakim Gustavsson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)


Abstract

Safe autonomous navigation may be compromised by a failure in the sensing system. A classifier that is robust to camera position, viewing angle, and environmental conditions, across vehicles of different sizes and types (car, bus, truck, etc.), can help regulate vehicle control safely. Because training data play a crucial role in robust traffic-sign classification, an effective augmentation technique is required that enriches the model's capacity to withstand the variations of urban environments. In this paper, a framework for identifying model weaknesses, together with a targeted augmentation methodology, is presented. Based on off-line behavior identification, the exact limitations of a Convolutional Neural Network (CNN) model are estimated so that only those challenge levels necessary for improved classifier robustness are augmented. Predictive Augmentation (PA) and Predictive Multiple Augmentation (PMA) methods are proposed to adapt the model to the acquired challenges with high confidence values. We validated our framework on two different training datasets and on 5 generated test groups containing varying levels of challenge (simple to extreme). The results show an improvement of ≈5–20% in overall classification accuracy while retaining high confidence.
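The targeted-augmentation idea described in the abstract — evaluate the classifier off-line per challenge type, then augment the training set only with the challenges it handles poorly — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the challenge names, toy transforms, and accuracy threshold are all assumptions, and images are simplified to flat lists of grayscale pixel values in [0, 1].

```python
import random

# Toy stand-ins for two real-time challenge types (illustrative only).
def darken(img):
    """Simulate a low-illumination challenge."""
    return [p * 0.5 for p in img]

def add_noise(img, seed=0):
    """Simulate a sensor-noise challenge."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, 0.05))) for p in img]

CHALLENGES = {"darkening": darken, "noise": add_noise}

def predictive_augmentation(train_images, challenge_accuracies, threshold=0.90):
    """Extend the training set only with challenge types on which
    off-line evaluation showed the classifier to be weak."""
    weak = [c for c, acc in challenge_accuracies.items() if acc < threshold]
    augmented = list(train_images)
    for challenge in weak:
        transform = CHALLENGES[challenge]
        augmented.extend(transform(img) for img in train_images)
    return augmented, weak

images = [[0.2, 0.8, 0.5], [0.9, 0.1, 0.4]]
# Suppose off-line testing found the model robust to noise but weak in the dark:
accuracies = {"darkening": 0.62, "noise": 0.95}
augmented, weak = predictive_augmentation(images, accuracies)
print(weak)            # ['darkening']
print(len(augmented))  # 4: two originals plus two darkened copies
```

The key design point is that augmentation cost grows only with the number of identified weaknesses, rather than applying every transform to every image indiscriminately.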


Keywords

Safety-risk assessment · Predictive augmentation · Convolutional neural network · Traffic sign classification · Real-time challenges



Acknowledgements

The authors would like to thank Nazre Batool and Christopher Norén for the heavy-vehicle data, and Sribalaji CA, Ashokan Arumugam, and Abhishek S for their constructive comments.



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Subramani Palanisamy Harisubramanyabalaji (1, 3), email author
  • Shafiq ur Réhman (1)
  • Mattias Nyberg (2, 3)
  • Joakim Gustavsson (2, 3)
  1. i2lab, Umeå University, Umeå, Sweden
  2. KTH Royal Institute of Technology, Stockholm, Sweden
  3. Scania CV AB, Södertälje, Sweden
