
Traffic Sign Recognition Using Visual Attributes and Bayesian Network

  • Hamed Habibi Aghdam
  • Elnaz Jahani Heravi
  • Domenec Puig
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 598)

Abstract

Recognizing traffic signs is a crucial task in Advanced Driver Assistance Systems. Current methods for this problem fall mainly into two groups: traditional classification approaches based on hand-crafted features such as HOG, and end-to-end learning approaches based on Convolutional Neural Networks (ConvNets). Despite the high accuracy achieved by ConvNets, they suffer from high computational complexity, which restricts their application to GPU-enabled devices. In contrast, traditional classification approaches can run in real time on CPU-based devices. However, their main issue is that hand-crafted features have limited representation power, so they are not able to discriminate a large number of traffic signs and are consequently less accurate than ConvNets. Moreover, neither approach scales well: adding a new sign to the system requires retraining the whole system. In addition, neither is able to deal with novel inputs such as the false-positive results produced by the detection module; if the input is a non-traffic-sign image, these methods will still classify it into one of the traffic sign classes. In this paper, we propose a coarse-to-fine method using visual attributes that is easily scalable and, importantly, is able to detect novel inputs and transfer its knowledge to a newly observed sample. To correct misclassified attributes, we build a Bayesian network that models the dependencies between the attributes and find their most probable explanation given the observations. Experimental results on a benchmark dataset indicate that our method outperforms state-of-the-art methods and also possesses three important properties: novelty detection, scalability, and the ability to provide semantic information.
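The pipeline summarized above can be illustrated with a minimal sketch: per-attribute classifier scores are treated as noisy observations, a prior over attribute vectors (a stand-in for the learned Bayesian network) is used to find the most probable explanation by brute-force enumeration, and the corrected attribute vector is matched against class signatures, with distant matches rejected as novel inputs. The attribute vocabulary, class signatures, noise model, and all function names below are hypothetical and purely illustrative; the paper learns the attribute dependencies and performs proper MPE inference rather than exhaustive enumeration.

```python
import itertools
import numpy as np

# Toy binary attribute vocabulary (hypothetical names, not the paper's list).
ATTRIBUTES = ["red_border", "triangular", "blue_background", "contains_digits"]

# Hypothetical class-attribute table: each sign class is described by a binary
# attribute signature (1 = attribute present).
CLASS_SIGNATURES = {
    "speed_limit":    np.array([1, 0, 0, 1]),
    "yield":          np.array([1, 1, 0, 0]),
    "mandatory_turn": np.array([0, 0, 1, 0]),
}

def mpe_correction(scores, prior, noise=0.1):
    """Brute-force most probable explanation over binary attribute vectors.

    scores : per-attribute classifier confidences in [0, 1] (noisy observations).
    prior  : dict mapping attribute tuples to prior probability (a toy stand-in
             for the factorised Bayesian network over attribute dependencies).
    noise  : assumed probability that an attribute classifier flips its output.
    """
    best, best_p = None, -1.0
    for assignment in itertools.product([0, 1], repeat=len(scores)):
        p = prior.get(assignment, 1e-6)           # prior over attribute vectors
        for a, s in zip(assignment, scores):      # noisy-observation likelihood
            p *= (1 - noise) * (s if a else 1 - s) + noise * 0.5
        if p > best_p:
            best, best_p = np.array(assignment), p
    return best, best_p

def classify(attr_vec, reject_thresh=1):
    """Map a corrected attribute vector to the nearest class signature;
    reject as a novel input if even the best match is too far (Hamming)."""
    dists = {c: int(np.abs(sig - attr_vec).sum()) for c, sig in CLASS_SIGNATURES.items()}
    cls = min(dists, key=dists.get)
    return cls if dists[cls] <= reject_thresh else "novel_input"

if __name__ == "__main__":
    # Uniform prior used here purely for illustration; the paper instead models
    # the dependencies between attributes with a Bayesian network.
    prior = {a: 1 / 16 for a in itertools.product([0, 1], repeat=4)}
    scores = [0.9, 0.2, 0.1, 0.8]                 # noisy attribute-classifier outputs
    attrs, _ = mpe_correction(scores, prior)
    print(classify(attrs))                        # -> "speed_limit"
```

Adding a new sign class in this toy setup only requires appending one attribute signature, and inputs whose corrected attribute vector is far from every signature are rejected, which mirrors the scalability and novelty-detection properties claimed in the abstract.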

Keywords

Traffic sign recognition · Visual attributes · Bayesian network · Most probable explanation · Sparse coding

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Hamed Habibi Aghdam¹
  • Elnaz Jahani Heravi¹
  • Domenec Puig¹

  1. Department of Computer Engineering and Mathematics, University Rovira i Virgili, Tarragona, Spain
